Ta-Chung Chi
Verified email at andrew.cmu.edu
Just ask: An interactive learning framework for vision and language navigation
TC Chi, M Shen, M Eric, S Kim, D Hakkani-Tur
Proceedings of the AAAI Conference on Artificial Intelligence 34 (03), 2459–2466, 2020
Cited by: 65
Dynamic time-aware attention to speaker roles and contexts for spoken language understanding
PC Chen, TC Chi, SY Su, YN Chen
2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 2017
Cited by: 41
KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation
TC Chi, TH Fan, PJ Ramadge, AI Rudnicky
NeurIPS, 2022
Cited by: 31
Speaker role contextual modeling for language understanding and dialogue policy learning
TC Chi, PC Chen, SY Su, YN Chen
IJCNLP, 2017
Cited by: 31
xSense: Learning sense-separated sparse representations and textual definitions for explainable word sense networks
TY Chang, TC Chi, SC Tsai, YN Chen
arXiv preprint arXiv:1809.03348, 2018
Cited by: 25
Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis
TC Chi, TH Fan, AI Rudnicky, PJ Ramadge
ACL, 2023
Cited by: 23*
Structured dialogue discourse parsing
TC Chi, AI Rudnicky
arXiv preprint arXiv:2306.15103, 2023
Cited by: 13
PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification
YS Wang, TC Chi, R Zhang, Y Yang
ACL, 2023
Cited by: 9
CLUSE: Cross-lingual unsupervised sense embeddings
TC Chi, YN Chen
EMNLP, 2018
Cited by: 8
Training discrete deep generative models via gapped straight-through estimator
TH Fan, TC Chi, AI Rudnicky, PJ Ramadge
International Conference on Machine Learning, 6059-6073, 2022
Cited by: 6
Tartan: A two-tiered dialog framework for multi-domain social chitchat
F Chen, TC Chi, S Lyu, J Gong, T Parekh, R Joshi, A Kaushik, A Rudnicky
Alexa prize proceedings, 2020
Cited by: 5
Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation
TC Chi, TH Fan, AI Rudnicky, PJ Ramadge
Findings of EMNLP, 2023
Cited by: 4
Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
TC Chi, TH Fan, LW Chen, AI Rudnicky, PJ Ramadge
ACL, 2023
Cited by: 3
Zero-Shot Dialogue Disentanglement by Self-Supervised Entangled Response Selection
TC Chi, AI Rudnicky
EMNLP, 2021
Cited by: 3
BCWS: Bilingual contextual word similarity
TC Chi, CY Shih, YN Chen
arXiv preprint arXiv:1810.08951, 2018
Cited by: 2
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation
TC Chi, TH Fan, AI Rudnicky
arXiv preprint arXiv:2311.00684, 2023
Cited by: 1
On Task-Adaptive Pretraining for Dialogue Response Selection
TH Lin, TC Chi, A Rumshisky
arXiv preprint arXiv:2210.04073, 2022
Cited by: 1
Are you doing what I say? On modalities alignment in ALFRED
TR Chiang, YT Yeh, TC Chi, YS Wang
arXiv preprint arXiv:2110.05665, 2021
Cited by: 1
Automatic Speech Verification Spoofing Detection
S Mo, H Wang, P Ren, TC Chi
arXiv preprint arXiv:2012.08095, 2020
Cited by: 1
Tartan: an LLM Driven SocialBot
L Li, Z Liu, LW Chen, TC Chi, AI Rudnicky
Alexa Prize SocialBot Grand Challenge 5
Cited by: 1