Llama 2: Open Foundation and Fine-Tuned Chat Models. H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, et al. arXiv preprint arXiv:2307.09288, 2023. Cited by 8384.
Linformer: Self-Attention with Linear Complexity. S Wang, BZ Li, M Khabsa, H Fang, H Ma. arXiv preprint arXiv:2006.04768, 2020. Cited by 1642.
The Number of Scholarly Documents on the Public Web. M Khabsa, CL Giles. PLoS ONE 9(5), e93949, 2014. Cited by 466.
The CHEMDNER Corpus of Chemicals and Drugs and Its Annotation Principles. M Krallinger, O Rabal, F Leitner, M Vazquez, D Salgado, Z Lu, R Leaman, et al. Journal of Cheminformatics 7, 1-17, 2015. Cited by 362.
CLEAR: Contrastive Learning for Sentence Representation. Z Wu, S Wang, J Gu, M Khabsa, F Sun, H Ma. arXiv preprint arXiv:2012.15466, 2020. Cited by 354.
The Llama 3 Herd of Models. A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, et al. arXiv preprint arXiv:2407.21783, 2024. Cited by 220.
Entailment as Few-Shot Learner. S Wang, H Fang, M Khabsa, H Mao, H Ma. arXiv preprint arXiv:2104.14690, 2021. Cited by 195.
CiteSeerX: AI in a Digital Library Search Engine. J Wu, KM Williams, HH Chen, M Khabsa, C Caragea, S Tuarob, et al. AI Magazine 36(3), 35-48, 2015. Cited by 159.
UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning. Y Mao, L Mathias, R Hou, A Almahairi, H Ma, J Han, W Yih, M Khabsa. arXiv preprint arXiv:2110.07577, 2021. Cited by 145.
Llama Guard: LLM-Based Input-Output Safeguard for Human-AI Conversations. H Inan, K Upasani, J Chi, R Rungta, K Iyer, Y Mao, M Tontchev, Q Hu, et al. arXiv preprint arXiv:2312.06674, 2023. Cited by 135.
Rayyan: A Systematic Reviews Web App for Exploring and Filtering Searches for Eligible Studies for Cochrane Reviews. A Elmagarmid, Z Fedorowicz, H Hammady, I Ilyas, M Khabsa, M Ouzzani. Evidence-Informed Public Health: Opportunities and Challenges. Abstracts of …, 2014. Cited by 106.
Learning to Identify Relevant Studies for Systematic Reviews Using Random Forest and External Information. M Khabsa, A Elmagarmid, I Ilyas, H Hammady, M Ouzzani. Machine Learning 102, 465-482, 2016. Cited by 104.
Effective Long-Context Scaling of Foundation Models. W Xiong, J Liu, I Molybog, H Zhang, P Bhargava, R Hou, L Martin, et al. arXiv preprint arXiv:2309.16039, 2023. Cited by 103.
Progressive Prompts: Continual Learning for Language Models. A Razdaibiedina, Y Mao, R Hou, M Khabsa, M Lewis, A Almahairi. arXiv preprint arXiv:2301.12314, 2023. Cited by 94.
Language Models as Fact Checkers? N Lee, BZ Li, S Wang, W Yih, H Ma, M Khabsa. arXiv preprint arXiv:2006.04102, 2020. Cited by 80.
Building Natural Language Interfaces to Web APIs. Y Su, AH Awadallah, M Khabsa, P Pantel, M Gamon, M Encarnacion. Proceedings of the 2017 ACM on Conference on Information and Knowledge …, 2017. Cited by 78.
Towards Building a Scholarly Big Data Platform: Challenges, Lessons and Opportunities. Z Wu, J Wu, M Khabsa, K Williams, HH Chen, W Huang, S Tuarob, et al. IEEE/ACM Joint Conference on Digital Libraries, 117-126, 2014. Cited by 73.
Scholarly Big Data Information Extraction and Integration in the CiteSeerχ Digital Library. K Williams, J Wu, SR Choudhury, M Khabsa, CL Giles. 2014 IEEE 30th International Conference on Data Engineering Workshops, 68-73, 2014. Cited by 72.