Project Overview

Deep convolutional neural networks (DCNNs) are increasingly prevalent; however, their high computational load makes them difficult to run in real time. FPGAs can parallelize and accelerate this workload, which makes them well-suited hardware for implementing a convolutional neural network.


Project Goals

This project builds a system for accelerating convolutions used in image processing and filtering, or for implementing a convolutional neural network. Acceleration is hardware based and supports flexible zero padding.
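As a point of reference for the hardware design, the convolution with flexible zero padding can be sketched as a software model in C. Everything below is an illustrative assumption, not a project specification: the image and kernel sizes are fixed small constants, and the kernel is applied without flipping, as is conventional in CNNs.

```c
/* Software reference model of the convolution the FPGA is meant to
   accelerate. Sizes are illustrative assumptions, not project specs. */
#define IMG_H   5
#define IMG_W   5
#define K       3                       /* kernel is K x K          */
#define PAD_MAX 2                       /* largest padding modeled  */
#define OUT_MAX (IMG_H + 2 * PAD_MAX)   /* worst-case output extent */

/* 2D convolution with flexible zero padding: reads outside the image
   contribute zero. Output size is
   (IMG_H + 2*pad - K + 1) x (IMG_W + 2*pad - K + 1). */
static void conv2d(const int img[IMG_H][IMG_W], const int ker[K][K],
                   int pad, int out[OUT_MAX][OUT_MAX],
                   int *out_h, int *out_w)
{
    *out_h = IMG_H + 2 * pad - K + 1;
    *out_w = IMG_W + 2 * pad - K + 1;
    for (int r = 0; r < *out_h; r++)
        for (int c = 0; c < *out_w; c++) {
            int acc = 0;
            for (int i = 0; i < K; i++)
                for (int j = 0; j < K; j++) {
                    int y = r + i - pad;    /* source row    */
                    int x = c + j - pad;    /* source column */
                    if (y >= 0 && y < IMG_H && x >= 0 && x < IMG_W)
                        acc += img[y][x] * ker[i][j];
                }
            out[r][c] = acc;
        }
}
```

With `pad = 1` and a 3x3 kernel the output matches the input size ("same" padding); with `pad = 0` it shrinks to 3x3 ("valid"). A model like this is also useful for checking the Verilog implementation's output pixel by pixel.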

  • Have the FPGA perform an on-demand image convolution when commanded by the PC
  • Build a convolution hardware framework in Verilog, then use low-level C to have the PC send commands and image data to the FPGA
  • Explore the implementation of neural networks in FPGAs
    • Map out resource usage/latency for deep neural network architectures and hyperparameters
      • Latency refers to the total time, typically expressed in clock cycles ("clocks"), required for a single iteration of the algorithm to complete
    • Demonstrate deep learning techniques in FPGA applications
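The latency mapping above can be sketched with a first-order model. Assuming one multiply-accumulate per parallel unit (e.g. DSP slice) per clock and a fully pipelined datapath, both of which are assumptions rather than measured properties of any particular design, the clock count for one convolution layer is roughly the total MAC count divided by the number of parallel units, plus the pipeline fill time:

```c
/* First-order latency estimate in clocks for one convolution layer.
   Assumes one MAC per parallel unit per clock, fully pipelined
   (an illustrative model, not measured data). */
static long conv_latency_clocks(int out_h, int out_w, int k,
                                int parallel_macs, int pipeline_depth)
{
    long total_macs = (long)out_h * out_w * k * k;
    /* ceiling division: clocks needed to issue every MAC */
    long issue = (total_macs + parallel_macs - 1) / parallel_macs;
    return issue + pipeline_depth;   /* plus pipeline fill */
}
```

A model like this makes the resource/latency trade-off explicit: doubling `parallel_macs` roughly halves the issue time at the cost of twice the multiplier resources, which is the kind of mapping the exploration above would chart per architecture and hyperparameter choice.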