Abstract
This paper investigates the use of Large Language Models (LLMs) for automating the generation of hardware description code, aiming to explore their potential in supporting and enhancing the development of efficient neuromorphic computing architectures. Building on our prior work, we employ OpenAI's ChatGPT4 and natural language prompts to synthesize an RTL Verilog module of a programmable recurrent spiking neural network, while also generating test benches to assess the system's correctness. The resulting design was validated in three case studies, the exclusive OR, the IRIS flower classification, and the MNIST hand-written digit classification, achieving accuracies of up to 96.6%. To verify its synthesizability and implementability, the design was prototyped on a field-programmable gate array and implemented in SkyWater 130 nm technology using an open-source electronic design automation flow. Additionally, we have submitted it to the Tiny Tapeout 6 chip-fabrication program to further evaluate the system's on-chip performance in the future.
URL
https://arxiv.org/abs/2405.01419