Innervator

Hardware Acceleration for Artificial Neural Networks in FPGA using VHDL.

Artificial Intelligence ("AI") is deployed in various applications, ranging from noise cancellation to image recognition. AI-based products often come at remarkably high hardware and electricity costs, making them inaccessible to consumer devices and small-scale edge electronics. Inspired by biological brains, artificial neural networks are modeled in mathematical formulae and functions. However, brains (i.e., analog systems) deal with continuous values along a spectrum (e.g., variance of voltage) rather than being restricted to the binary on/off states that digital hardware has; this continuous nature of analog logic allows for a smoother and more efficient representation of data. Given how present computers are almost exclusively digital, they emulate analog-based AI algorithms in a space-inefficient and slow manner: a single analog value gets encoded as multitudes of binary digits on digital hardware. In addition, general-purpose computer processors treat otherwise-parallelizable AI algorithms as step-by-step sequential logic. So, in my research, I have explored the possibility of improving the state of AI performance on currently available mainstream digital hardware. A family of digital circuitry known as Programmable Logic Devices ("PLDs") can be customized down to the specific parameters of a trained neural network, thereby ensuring data-tailored computation and algorithmic parallelism. Furthermore, a subgroup of PLDs, the Field-Programmable Gate Arrays ("FPGAs"), are dynamically re-configurable; they are reusable and can have subsequent customized designs swapped out in-the-field. As a proof of concept, I have implemented a sample 8x8-pixel handwritten digit-recognizing neural network, in a low-cost "Xilinx Artix-7" FPGA, using VHDL-2008 (a hardware description language by the U.S. DoD and IEEE). Compared to software-emulated implementations, power consumption and execution speed were shown to have greatly improved; ultimately, this hardware-accelerated approach bridges the inherent mismatch between current AI algorithms and the general-purpose digital hardware they run on.