Andre Xian Ming Chang
Chaos Industries
Verified email at chaosinc.com - Homepage
Title · Cited by · Year
Recurrent neural networks hardware implementation on FPGA
AXM Chang, B Martini, E Culurciello
arXiv preprint arXiv:1511.05552, 2015
Cited by 199 · 2015
Hardware accelerators for recurrent neural networks on FPGA
AXM Chang, E Culurciello
2017 IEEE International symposium on circuits and systems (ISCAS), 1-4, 2017
Cited by 143 · 2017
Snowflake: An efficient hardware accelerator for convolutional neural networks
V Gokhale, A Zaidy, AXM Chang, E Culurciello
2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1-4, 2017
Cited by 116 · 2017
Snowflake: A model agnostic accelerator for deep convolutional neural networks
V Gokhale, A Zaidy, AXM Chang, E Culurciello
arXiv preprint arXiv:1708.02579, 2017
Cited by 25 · 2017
Compiling deep learning models for custom hardware accelerators
AXM Chang, A Zaidy, V Gokhale, E Culurciello
arXiv preprint arXiv:1708.00117, 2017
Cited by 14 · 2017
Reinforcement learning approach for mapping applications to dataflow-based coarse-grained reconfigurable array
AXM Chang, P Khopkar, B Romanous, A Chaurasia, P Estep, S Windh, ...
arXiv preprint arXiv:2205.13675, 2022
Cited by 7 · 2022
Recurrent neural networks hardware implementation on FPGA. CoRR abs/1511.05552 (2015)
AXM Chang, B Martini, E Culurciello
arXiv preprint arXiv:1511.05552, 2015
Cited by 6 · 2015
Efficient compiler code generation for deep learning snowflake co-processor
AXM Chang, A Zaidy, E Culurciello
2018 1st Workshop on Energy Efficient Machine Learning and Cognitive …, 2018
Cited by 4 · 2018
Inference engine circuit architecture
A Zaidy, AXM Chang, E Culurciello
US Patent 11,675,624, 2023
Cited by 3 · 2023
Deep neural networks compiler for a trace-based accelerator
AXM Chang, A Zaidy, M Vitez, L Burzawa, E Culurciello
Journal of Systems Architecture 102, 101659, 2020
Cited by 3 · 2020
A high efficiency accelerator for deep neural networks
A Zaidy, AXM Chang, V Gokhale, E Culurciello
2018 1st Workshop on Energy Efficient Machine Learning and Cognitive …, 2018
Cited by 3 · 2018
Evolutionary imitation learning
E Culurciello, AXM Chang
US Patent 12,045,718, 2024
Cited by 2 · 2024
Compiler with an artificial neural network to optimize instructions generated for execution on a deep learning accelerator of artificial neural networks
AXM Chang, AT Zaidy, M Vitez, MC Glapa, A Chaurasia, E Culurciello
US Patent App. 17/092,040, 2022
Cited by 2 · 2022
Deep neural networks compiler for a trace-based accelerator
AXM Chang, A Zaidy, E Culurciello, M Vitez
US Patent 11,861,337, 2024
Cited by 1 · 2024
Secure Artificial Neural Network Models in Outsourcing Deep Learning Computation
AXM Chang
US Patent App. 17/715,835, 2023
Cited by 1 · 2023
Hardware accelerator for convolutional neural networks and method of operation thereof
E Culurciello, V Gokhale, A Zaidy, A Chang
US Patent 11,775,313, 2023
Cited by 1 · 2023
Compiler configurable to generate instructions executable by different deep learning accelerators from a description of an artificial neural network
AXM Chang, AT Zaidy, E Culurciello, J Cummins, M Vitez
US Patent App. 17/092,013, 2022
Cited by 1 · 2022
Deep neural networks compiler for a trace-based accelerator (short WIP paper)
AXM Chang, A Zaidy, L Burzawa, E Culurciello
Proceedings of the 19th ACM SIGPLAN/SIGBED International Conference on …, 2018
Cited by 1 · 2018
Techniques to implement transformers with multi-task neural networks
P Khopkar, SN Wadekar, A Chaurasia, AXM Chang
US Patent App. 18/420,489, 2024
Year: 2024
Neural network model definition code generation and optimization
A Chaurasia, AXM Chang
US Patent App. 18/513,232, 2024
Year: 2024
Articles 1–20