ViTCoD: Vision transformer acceleration via dedicated algorithm and accelerator co-design H You, Z Sun, H Shi, Z Yu, Y Zhao, Y Zhang, C Li, B Li, Y Lin
2023 IEEE International Symposium on High-Performance Computer Architecture …, 2023
ViTALiTy: Unifying low-rank and sparse approximation for vision transformer acceleration with a linear Taylor attention J Dass, S Wu, H Shi, C Li, Z Ye, Z Wang, Y Lin
2023 IEEE International Symposium on High-Performance Computer Architecture …, 2023
Instant-3D: Instant neural radiance field training towards on-device AR/VR 3D reconstruction S Li, C Li, W Zhu, B Yu, Y Zhao, C Wan, H You, H Shi, Y Lin
Proceedings of the 50th Annual International Symposium on Computer …, 2023
ShiftAddNAS: Hardware-inspired search for more accurate and efficient neural networks H You, B Li, H Shi, Y Fu, Y Lin
International Conference on Machine Learning, 25566-25580, 2022
Intelligent typography: Artistic text style transfer for complex texture and structure W Mao, S Yang, H Shi, J Liu, Z Wang
IEEE Transactions on Multimedia, 2022
Max-affine spline insights into deep network pruning H You, R Balestriero, Z Lu, Y Kou, H Shi, S Zhang, S Wu, Y Lin, ...
arXiv preprint arXiv:2101.02338, 2021
ShiftAddViT: Mixture of multiplication primitives towards efficient vision transformer H You, H Shi, Y Guo, Y Lin
Advances in Neural Information Processing Systems 36, 2024
NASA+: Neural Architecture Search and Acceleration for Multiplication-Reduced Hybrid Networks H Shi, H You, Z Wang, Y Lin
IEEE Transactions on Circuits and Systems I: Regular Papers, 2023
NASA: Neural architecture search and acceleration for hardware inspired hybrid networks H Shi, H You, Y Zhao, Z Wang, Y Lin
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided …, 2022
Max-affine spline insights into deep network pruning R Balestriero, H You, Z Lu, Y Kou, H Shi, Y Lin, R Baraniuk
2021
An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT H Shao, H Shi, W Mao, Z Wang
arXiv preprint arXiv:2403.20230, 2024
A Computationally Efficient Neural Video Compression Accelerator Based on a Sparse CNN-Transformer Hybrid Network S Zhang, W Mao, H Shi, Z Wang
arXiv preprint arXiv:2312.10716, 2023
NASA-F: FPGA-Oriented Search and Acceleration for Multiplication-Reduced Hybrid Networks H Shi, Y Xu, Y Wang, W Mao, Z Wang
IEEE Transactions on Circuits and Systems I: Regular Papers, 2023
S²R: Exploring a Double-Win Transformer-Based Framework for Ideal and Blind Super-Resolution M She, W Mao, H Shi, Z Wang
International Conference on Artificial Neural Networks, 522-537, 2023
LITNet: A Light-weight Image Transform Net for Image Style Transfer H Shi, W Mao, Z Wang
2021 International Joint Conference on Neural Networks (IJCNN), 1-8, 2021