A 3-3-3 fully connected autoencoder with hard-coded weights. The fractional weights are stored in Q0.8 fixed point (value = parameter / 256) so the arithmetic stays synthesizable, and each layer uses multi-bit accumulators instead of single-bit registers.

module Autoencoder(input clk,
                   input  [2:0] inputs,
                   output [2:0] outputs);
    // Layers as arrays of fixed-point accumulators (one word per neuron)
    reg [9:0]  hidden_layer [0:2];
    reg [19:0] output_layer [0:2];
    // weights for inputs to hidden layer (Q0.8)
    parameter w0 = 8'd77;   // 0.3 * 256
    parameter w1 = 8'd102;  // 0.4 * 256
    parameter w2 = 8'd51;   // 0.2 * 256
    // weights for hidden layer to output (Q0.8)
    parameter w3 = 8'd128;  // 0.5 * 256
    parameter w4 = 8'd179;  // 0.7 * 256
    parameter w5 = 8'd154;  // 0.6 * 256
    always @(posedge clk) begin
        hidden_layer[0] <= inputs[0] * w0 + inputs[1] * w1 + inputs[2] * w2;
        hidden_layer[1] <= inputs[0] * w2 + inputs[1] * w1 + inputs[2] * w0;
        hidden_layer[2] <= inputs[0] * w1 + inputs[1] * w2 + inputs[2] * w0;
        output_layer[0] <= hidden_layer[0] * w3 + hidden_layer[1] * w4 + hidden_layer[2] * w5;
        output_layer[1] <= hidden_layer[0] * w5 + hidden_layer[1] * w3 + hidden_layer[2] * w4;
        output_layer[2] <= hidden_layer[0] * w4 + hidden_layer[1] * w5 + hidden_layer[2] * w3;
    end
    // Two Q0.8 multiplies scale results by 256*256, so 0.5 maps to 32768;
    // threshold each neuron back down to one output bit.
    assign outputs[0] = (output_layer[0] >= 20'd32768);
    assign outputs[1] = (output_layer[1] >= 20'd32768);
    assign outputs[2] = (output_layer[2] >= 20'd32768);
endmodule
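A minimal simulation sketch for the module above. The testbench name Autoencoder_tb and the counting stimulus are my own, for illustration only; it steps a few input codes through the two-cycle pipeline and prints the reconstructed bits.

module Autoencoder_tb;
    reg clk = 0;
    reg [2:0] inputs = 3'b000;
    wire [2:0] outputs;

    // Device under test
    Autoencoder dut (.clk(clk), .inputs(inputs), .outputs(outputs));

    always #5 clk = ~clk;  // free-running clock, period 10 time units

    initial begin
        // Each output lags its input by two clock edges
        // (one for the hidden layer, one for the output layer).
        repeat (8) begin
            @(posedge clk);
            #1 $display("in=%b out=%b", inputs, outputs);
            inputs = inputs + 1;
        end
        $finish;
    end
endmodule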
Reference: "The mostly complete chart of Neural Networks, explained" (Towards Data Science): https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464
A simpler take treats the autoencoder as pure bit-width compression: the encoder keeps only the upper nibble of each byte (lossy 2:1 compression) and the decoder reconstructs a full byte by duplicating that nibble.

module autoencoder (input clk,
                    input  [7:0] data_in,
                    output [7:0] data_out);
    // Encoder: latch the upper nibble, discarding the lower four bits
    reg [3:0] encoded_data;
    always @(posedge clk) begin
        encoded_data <= data_in[7:4];  // non-blocking: this is a register
    end
    // Decoder: expand the nibble back to a full byte
    reg [7:0] decoded_data;
    always @(posedge clk) begin
        decoded_data <= {encoded_data, encoded_data};
    end
    assign data_out = decoded_data;
endmodule
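A similarly minimal testbench sketch (the name autoencoder_tb and the input byte are illustrative) showing the lossy round trip: the lower nibble is discarded, so 8'hA7 comes back as 8'hAA once the two-stage pipeline has settled.

module autoencoder_tb;
    reg clk = 0;
    reg [7:0] data_in = 8'hA7;
    wire [7:0] data_out;

    autoencoder dut (.clk(clk), .data_in(data_in), .data_out(data_out));

    always #5 clk = ~clk;

    initial begin
        repeat (3) @(posedge clk);  // encoder registers, then decoder
        #1 $display("in=%h out=%h", data_in, data_out);  // expect out=aa
        $finish;
    end
endmodule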