
Verilog - NN : Feed Forward
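A feed-forward network with one hidden layer computes, in matrix form,

    h = f(w_ih * x + b_h)
    y = f(w_ho * h + b_o)

where x is the N-element input vector, w_ih is the H x N input-to-hidden weight matrix, w_ho is the M x H hidden-to-output weight matrix, b_h and b_o are bias vectors, and f is an activation function applied element-wise. The module below sketches this in fixed-point hardware; each matrix-vector product is written out as explicit multiply-accumulate loops, since Verilog has no built-in matrix arithmetic.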


// Single-hidden-layer feed-forward network in fixed point.
// Every value (input, weight, bias, activation) is a W-bit signed
// fixed-point number. Weights and biases are left as internal signals,
// as in the original sketch; a real design would load them from a ROM,
// ports, or configuration registers.
module ffnn #(
  parameter int N = 4,   // number of inputs
  parameter int M = 1,   // number of outputs
  parameter int H = 2,   // number of hidden neurons
  parameter int W = 8    // bits per fixed-point value
)(
  input  wire  signed [W-1:0] x [N],
  output logic signed [W-1:0] y [M]
);

  // weights and biases
  logic signed [W-1:0] w_ih [H][N];   // input-to-hidden weights
  logic signed [W-1:0] w_ho [M][H];   // hidden-to-output weights
  logic signed [W-1:0] b_h  [H];      // hidden-layer biases
  logic signed [W-1:0] b_o  [M];      // output-layer biases

  logic signed [W-1:0] h [H];         // hidden-layer activations

  // activation function: ReLU, because it is nearly free in hardware;
  // a sigmoid would normally be a lookup table instead (see below)
  function automatic logic signed [W-1:0] activation(input logic signed [W-1:0] v);
    return (v < 0) ? '0 : v;
  endfunction

  // hidden layer: h[j] = activation(b_h[j] + sum_i w_ih[j][i] * x[i])
  always_comb begin
    logic signed [2*W-1:0] acc;
    for (int j = 0; j < H; j++) begin
      acc = b_h[j];
      for (int i = 0; i < N; i++)
        acc += w_ih[j][i] * x[i];
      h[j] = activation(acc[W-1:0]);  // truncation; a real design would saturate
    end
  end

  // output layer: y[k] = activation(b_o[k] + sum_j w_ho[k][j] * h[j])
  always_comb begin
    logic signed [2*W-1:0] acc;
    for (int k = 0; k < M; k++) begin
      acc = b_o[k];
      for (int j = 0; j < H; j++)
        acc += w_ho[k][j] * h[j];
      y[k] = activation(acc[W-1:0]);
    end
  end

endmodule
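ReLU is used above because it costs only a comparator and a mux. A sigmoid has no cheap combinational form, so in hardware it is usually a lookup table indexed by the pre-activation value. A minimal sketch along those lines, assuming the same W-bit signed format with 4 fraction bits (the module name, Q-format, and output scaling are illustrative assumptions, not something from the original post):

// Hypothetical sigmoid-by-LUT: one table entry per possible input
// value, filled once at initialization (simulation / FPGA init only).
module sigmoid_lut #(parameter int W = 8)(
  input  wire  signed [W-1:0] v,
  output logic signed [W-1:0] s
);
  logic signed [W-1:0] lut [0:(1<<W)-1];

  initial begin
    real xin;
    for (int i = 0; i < (1 << W); i++) begin
      // treat the signed input as Q4.4 (4 fraction bits), then scale
      // sigmoid's [0,1] output into the positive W-bit range
      xin = real'(signed'(W'(i))) / 16.0;
      lut[i] = $rtoi(((1 << (W-1)) - 1) * (1.0 / (1.0 + $exp(-xin))));
    end
  end

  // the full part-select reinterprets the signed input as an
  // unsigned table index, so negative values stay in range
  assign s = lut[v[W-1:0]];
endmodule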

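To sanity-check the module in simulation, the weights can be poked through hierarchical references, which is fine in a testbench even though a synthesizable design would load them properly. The testbench name and the toy weight values below are made up for illustration:

module ffnn_tb;
  localparam int N = 4, M = 1, H = 2, W = 8;

  logic signed [W-1:0] x [N];
  logic signed [W-1:0] y [M];

  ffnn #(.N(N), .M(M), .H(H), .W(W)) dut (.x(x), .y(y));

  initial begin
    // toy weights: hidden neuron j just passes input j through,
    // the single output sums both hidden activations; zero biases
    for (int j = 0; j < H; j++) begin
      dut.b_h[j] = 0;
      for (int i = 0; i < N; i++)
        dut.w_ih[j][i] = (i == j) ? 1 : 0;
    end
    for (int k = 0; k < M; k++) begin
      dut.b_o[k] = 0;
      for (int j = 0; j < H; j++)
        dut.w_ho[k][j] = 1;
    end

    x = '{3, -5, 7, 2};
    // h[0] = relu(3) = 3, h[1] = relu(-5) = 0, so y[0] = relu(3 + 0) = 3
    #1 $display("y[0] = %0d", y[0]);
    $finish;
  end
endmodule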

References:

The mostly complete chart of Neural Networks, explained
"The zoo of neural network types grows exponentially. One needs a map to navigate between many emerging architectures and approaches."
https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464

DIY AI: an old school matrix NN
https://towardsdatascience.com/diy-ai-an-old-school-matrix-nn-401a00021a55
