

Hardware/Verilog-NN (9)
Verilog-NN : Variational Auto Encoder : VAE module VAE ( input clk, input [15:0] x, output [15:0] y ); // Import required libraries and modules // Define inputs and outputs // Create encoding layer // Create decoding layer // Create loss function // Train the model // Test the model endmodule https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464 The mostly complete chart of Neural Networks, exp..
Verilog-NN : Auto Encoder : AE module Autoencoder(input clk, input [2:0] inputs, output reg [2:0] outputs); reg [2:0] hidden_layer; reg [2:0] output_layer; // weights for inputs to hidden layer parameter w0 = 0.3; parameter w1 = 0.4; parameter w2 = 0.2; // weights for hidden layer to output parameter w3 = 0.5; parameter w4 = 0.7; parameter w5 = 0.6; always @(posedge clk) begin hidden_layer[0]
Verilog-NN : Gated Recurrent Unit : GRU module GRU(input clk, input [3:0] inputs, output reg [2:0] hidden_state, output reg [2:0] out); reg [2:0] reset_gate; reg [2:0] update_gate; reg [2:0] candidate; // weights for inputs to reset and update gates parameter w_reset_0 = 0.3; parameter w_reset_1 = 0.4; parameter w_reset_2 = 0.2; parameter w_reset_3 = 0.1; parameter w_update_0 = 0.6; parameter w_update_1 = 0.5; parameter w_update_2 ..
Verilog-NN : Long Short Term Memory : LSTM module LSTMCell(input clk, input [3:0] input_data, input [3:0] prev_hidden_state, input [3:0] prev_cell_state, output reg [3:0] next_hidden_state, output reg [3:0] next_cell_state); reg [3:0] forget_gate, input_gate, output_gate, cell_input; // weights for inputs parameter w0_f = 0.3; parameter w1_f = 0.4; parameter w2_f = 0.2; parameter w0_i = 0.5; parameter w1_i = 0.7; parameter w2_i = 0.9; pa..
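The hallmark of the LSTM preview above is the cell-state update (next_cell_state from forget/input gates, next_hidden_state from the output gate). A minimal sketch of just that data path, with the gates reduced to 1-bit enables instead of the post's fractional weights (the widths and logic here are illustrative, not taken from the original code):

```verilog
// LSTM cell-state skeleton: next_c = f*c + i*g, next_h = o * squash(c).
// Gates are 1-bit enables here; a real design would use fixed-point
// multipliers and sigmoid/tanh approximations.
module lstm_cell (
  input            clk,
  input      [3:0] x,          // input data (stands in for the candidate g)
  input            f, i, o,    // forget / input / output gate enables
  output reg [3:0] c,          // cell state
  output reg [3:0] h           // hidden state
);
  always @(posedge clk) begin
    // Forget gate keeps or clears the old state; input gate admits x.
    c <= (f ? c : 4'd0) + (i ? x : 4'd0);
    // Output gate exposes the (un-squashed) cell state as the hidden state.
    h <= o ? c : 4'd0;
  end
endmodule
```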
Verilog-NN : Recurrent Neural Network : RNN module RecurrentNeuralNetwork(input clk, input [2:0] inputs, output reg [0:0] out); reg [1:0] hidden_neurons; reg [0:0] output_neuron; // weights for inputs to hidden layer parameter w0 = 0.3; parameter w1 = 0.4; parameter w2 = 0.2; // weights for hidden layer to output parameter w3 = 0.5; parameter w4 = 0.7; // weights for hidden layer to itself parameter w5 = 0.8; parameter w6 = 0.6; always..
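What makes the RNN preview recurrent is the hidden-to-hidden feedback (the w5/w6 weights). A minimal sketch of that feedback path, with the update reduced to illustrative boolean logic rather than the post's fractional weights:

```verilog
// Recurrent sketch: the hidden register feeds back into its own next
// value, which is what distinguishes an RNN from a feed-forward net.
module rnn_cell (
  input            clk,
  input      [2:0] in,
  output reg [1:0] hidden,
  output reg       out
);
  always @(posedge clk) begin
    // New hidden state mixes current inputs with the previous hidden state.
    hidden[0] <= in[0] ^ hidden[1];
    hidden[1] <= (in[1] & in[2]) | hidden[0];
    out       <= &hidden;   // output fires when the whole state is active
  end
endmodule
```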
Verilog-NN : Deep Feed Forward : DFF 3 inputs : 4-unit 1st hidden layer : 3-unit 2nd hidden layer : 1 output layer module NeuralNetwork(input clk, input [2:0] inputs, output reg [0:0] out); reg [3:0] hidden_neurons_layer1; reg [2:0] hidden_neurons_layer2; reg [0:0] output_neuron; // weights for inputs to first hidden layer parameter w0 = 0.3; parameter w1 = 0.4; parameter w2 = 0.2; parameter w3 = 0.1; // weights for first hidden layer to ..
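The DFF preview declares a 3-4-3-1 topology with registered layers. A self-contained sketch of that shape, using 1-bit activations and fixed boolean "weights" (illustrative, not the post's values) so each layer becomes one pipeline stage:

```verilog
// Deep feed-forward sketch: 3 inputs -> 4 hidden -> 3 hidden -> 1 output.
// Binary activations; the boolean combinations are illustrative only.
module dff_net (
  input            clk,
  input      [2:0] in,
  output reg       out
);
  reg [3:0] h1;   // first hidden layer (4 units)
  reg [2:0] h2;   // second hidden layer (3 units)

  always @(posedge clk) begin
    // Layer 1: each unit is a fixed boolean combination of the inputs.
    h1[0] <= in[0] & in[1];
    h1[1] <= in[1] | in[2];
    h1[2] <= in[0] ^ in[2];
    h1[3] <= &in;                 // fires only when all inputs are high
    // Layer 2: mixes first-layer units (non-blocking assigns mean it sees
    // their previous values, so each layer adds one pipeline stage).
    h2[0] <= h1[0] | h1[1];
    h2[1] <= h1[1] & h1[2];
    h2[2] <= h1[2] ^ h1[3];
    // Output layer: fires when any second-layer unit is active.
    out <= |h2;
  end
endmodule
```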
Verilog-NN : Radial Basis Network, RBF module RBFN(input x, output y); parameter N = 8; // number of basis functions parameter M = 2; // number of inputs parameter r = 1; // width of Gaussian function reg [N-1:0] w; // weights reg [M-1:0] mu; // centers of basis functions // Gaussian radial basis function assign y = exp(-r * (x - mu)**2); // Network architecture assign y = w * y; endmodule https://towardsdatascience.com/the-mostly-c..
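The RBF preview calls `exp()`, which is not synthesizable (and drives `y` from two assigns). In hardware the Gaussian is usually precomputed; a minimal sketch with a lookup table on the distance, where the table contents and bit widths are illustrative assumptions:

```verilog
// Gaussian basis function via lookup table: exp() cannot be synthesized,
// so the curve exp(-r*(x-mu)^2) is tabulated over |x - mu|.
module rbf_neuron (
  input      [7:0] x,     // input sample
  input      [7:0] mu,    // center of the basis function
  output reg [7:0] y      // Gaussian response, scaled to 0..255
);
  wire [7:0] d = (x > mu) ? (x - mu) : (mu - x);  // |x - mu|

  always @* begin
    // Coarse Gaussian LUT on the distance (the width r is folded
    // into the bin boundaries).
    case (d[7:4])
      4'd0:    y = 8'd255;
      4'd1:    y = 8'd180;
      4'd2:    y = 8'd90;
      4'd3:    y = 8'd30;
      default: y = 8'd0;
    endcase
  end
endmodule
```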
Verilog - NN : Feed Forward module ffnn(input wire [N-1:0] x, output wire [M-1:0] y); parameter N = 4; // number of inputs parameter M = 1; // number of outputs parameter H = 2; // number of hidden neurons // weights and biases wire [H-1:0] h; wire [M-1:0] o; wire [H-1:0][N-1:0] w_ih; wire [M-1:0][H-1:0] w_ho; wire [H-1:0] b_h; wire [M-1:0] b_o; // hidden layer computation assign h = {b_h[H-1:0]} + w_ih[H-1:0][N-1:0] * x; ..
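Several previews in this list use real-valued parameters such as `parameter w0 = 0.3;`, which simulate but do not synthesize. The standard workaround is fixed-point; a minimal single-neuron sketch along those lines, assuming signed Q1.7 weights (the specific values are illustrative) and a step activation:

```verilog
// Perceptron-style neuron: 3 inputs, signed Q1.7 fixed-point weights.
// Weight values approximate 0.30 / 0.40 / 0.20 and are illustrative.
module neuron #(
  parameter signed [7:0] W0 = 8'sd38,  // ~0.30 in Q1.7 (38/128)
  parameter signed [7:0] W1 = 8'sd51,  // ~0.40 in Q1.7 (51/128)
  parameter signed [7:0] W2 = 8'sd26   // ~0.20 in Q1.7 (26/128)
)(
  input               clk,
  input  signed [7:0] x0, x1, x2,      // inputs, also Q1.7
  output reg          y                // 1-bit threshold output
);
  // Q1.7 * Q1.7 gives Q2.14; three products summed fit in 18 bits.
  wire signed [17:0] acc = x0*W0 + x1*W1 + x2*W2;

  always @(posedge clk)
    y <= (acc > 0);                    // step activation: fire if sum > 0
endmodule
```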
