AXM Chang, B Martini, E Culurciello. Recurrent neural networks hardware implementation on FPGA. arXiv preprint arXiv:1511.05552, 2015. (Cited by 199)
AXM Chang, E Culurciello. Hardware accelerators for recurrent neural networks on FPGA. 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1-4, 2017. (Cited by 143)
V Gokhale, A Zaidy, AXM Chang, E Culurciello. Snowflake: An efficient hardware accelerator for convolutional neural networks. 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1-4, 2017. (Cited by 116)
V Gokhale, A Zaidy, AXM Chang, E Culurciello. Snowflake: A model agnostic accelerator for deep convolutional neural networks. arXiv preprint arXiv:1708.02579, 2017. (Cited by 25)
AXM Chang, A Zaidy, V Gokhale, E Culurciello. Compiling deep learning models for custom hardware accelerators. arXiv preprint arXiv:1708.00117, 2017. (Cited by 14)
AXM Chang, P Khopkar, B Romanous, A Chaurasia, P Estep, S Windh, ... Reinforcement learning approach for mapping applications to dataflow-based coarse-grained reconfigurable array. arXiv preprint arXiv:2205.13675, 2022. (Cited by 7)
AXM Chang, A Zaidy, E Culurciello. Efficient compiler code generation for deep learning snowflake co-processor. 2018 1st Workshop on Energy Efficient Machine Learning and Cognitive …, 2018. (Cited by 4)
A Zaidy, AXM Chang, E Culurciello. Inference engine circuit architecture. US Patent 11,675,624, 2023. (Cited by 3)
AXM Chang, A Zaidy, M Vitez, L Burzawa, E Culurciello. Deep neural networks compiler for a trace-based accelerator. Journal of Systems Architecture 102, 101659, 2020. (Cited by 3)
A Zaidy, AXM Chang, V Gokhale, E Culurciello. A high efficiency accelerator for deep neural networks. 2018 1st Workshop on Energy Efficient Machine Learning and Cognitive …, 2018. (Cited by 3)
E Culurciello, AXM Chang. Evolutionary imitation learning. US Patent 12,045,718, 2024. (Cited by 2)
AXM Chang, AT Zaidy, M Vitez, MC Glapa, A Chaurasia, E Culurciello. Compiler with an artificial neural network to optimize instructions generated for execution on a deep learning accelerator of artificial neural networks. US Patent App. 17/092,040, 2022. (Cited by 2)
AXM Chang, A Zaidy, E Culurciello, M Vitez. Deep neural networks compiler for a trace-based accelerator. US Patent 11,861,337, 2024. (Cited by 1)
AXM Chang. Secure artificial neural network models in outsourcing deep learning computation. US Patent App. 17/715,835, 2023. (Cited by 1)
E Culurciello, V Gokhale, A Zaidy, A Chang. Hardware accelerator for convolutional neural networks and method of operation thereof. US Patent 11,775,313, 2023. (Cited by 1)
AXM Chang, AT Zaidy, E Culurciello, J Cummins, M Vitez. Compiler configurable to generate instructions executable by different deep learning accelerators from a description of an artificial neural network. US Patent App. 17/092,013, 2022. (Cited by 1)
AXM Chang, A Zaidy, L Burzawa, E Culurciello. Deep neural networks compiler for a trace-based accelerator (short WIP paper). Proceedings of the 19th ACM SIGPLAN/SIGBED International Conference on …, 2018. (Cited by 1)
P Khopkar, SN Wadekar, A Chaurasia, AXM Chang. Techniques to implement transformers with multi-task neural networks. US Patent App. 18/420,489, 2024.
A Chaurasia, AXM Chang. Neural network model definition code generation and optimization. US Patent App. 18/513,232, 2024.