SLIM-Net: Rethinking How Neural Networks Use Systolic Arrays

2023 IEEE 5th International Conference on Artificial Intelligence Circuits and Systems (AICAS), 2023

Abstract
Systolic arrays of processing elements are widely used to massively parallelise neural network layers. However, executing traditional convolutional and fully-connected layers on such hardware typically incurs a non-negligible latency to distribute data over the array before each operation - the data is not immediately in place. This arises from a fundamental incompatibility between the physical, spatial nature of a systolic array and the non-physical form of existing neural networks. We propose the systolic lateral mixer network (SLIM-Net) in an effort to reconcile this mismatch. The architecture of SLIM-Net maps directly onto the physical structure of a systolic array so that, after evaluating one layer, data immediately finds itself where it needs to be to begin the next. To evaluate the potential of SLIM-Net we compare it to a UNet model on a COCO segmentation task and find that, for models of equivalent size, SLIM-Net not only achieves slightly better performance but requires almost an order of magnitude fewer MAC operations. Furthermore, we implement a lateral mixing layer on a systolic smart imager chip, where it executes seven times faster than similar convolutional layers on the same hardware and provides encouraging initial insights into the practicality of this new neuromorphic approach.
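To make the data-locality idea concrete, below is a minimal sketch of what a lateral mixing step could look like on a 2D grid of processing elements, where each element combines its own activation with those of its immediate lateral neighbours. This is not the authors' implementation: the per-direction weights, the boundary handling (wrap-around here for brevity), and the ReLU nonlinearity are all illustrative assumptions, since the abstract does not specify SLIM-Net's exact formulation.

```python
import numpy as np

def lateral_mix(x, w_self, w_up, w_down, w_left, w_right):
    """Hypothetical lateral mixing step on an (H, W) grid of PE activations.

    Each PE mixes its own value with the values held by its four lateral
    neighbours, so the result is already resident in the PE that needs it
    for the next layer - no redistribution pass over the array is required.
    Boundary PEs wrap around here purely for simplicity of the sketch.
    """
    up    = np.roll(x,  1, axis=0)   # value arriving from the PE above
    down  = np.roll(x, -1, axis=0)   # value arriving from the PE below
    left  = np.roll(x,  1, axis=1)   # value arriving from the PE to the left
    right = np.roll(x, -1, axis=1)   # value arriving from the PE to the right
    mixed = (w_self * x + w_up * up + w_down * down
             + w_left * left + w_right * right)
    return np.maximum(mixed, 0.0)    # assumed ReLU nonlinearity

# Example: stack a few lateral mixing layers on a 16x16 array of PEs.
x = np.random.rand(16, 16).astype(np.float32)
for _ in range(4):
    x = lateral_mix(x, w_self=0.5, w_up=0.2, w_down=0.2, w_left=0.1, w_right=0.1)
```

The point of the sketch is the data-flow pattern rather than the arithmetic: each layer's output lands in exactly the PE that consumes it next, which is the locality property the abstract contrasts with conventional convolutional and fully-connected layers.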
Keywords
Neural networks,Neuromorphic,Smart imagers