struct Ai4cr::NeuralNetwork::Backpropagation

Overview

= Introduction

This is an implementation of a multilayer perceptron network, using the backpropagation algorithm for learning.

Backpropagation is a supervised learning technique, first described by Paul Werbos in 1974 and further developed by David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams in 1986.

= Features

= Parameters

Use the class method get_parameters_info to obtain details on the algorithm's parameters. Use set_parameters to set values for these parameters.

= How to use it

Create a network with 4 inputs, 1 hidden layer with 3 neurons, and 2 outputs:

```crystal
net = Ai4cr::NeuralNetwork::Backpropagation.new([4, 3, 2])
```

Train the network:

```crystal
1000.times do |i|
  net.train(example[i], result[i])
end
```

Use it: evaluate data with the trained network:

```crystal
net.eval([12, 48, 12, 25]) # => [0.86, 0.01]
```

More about multilayer perceptron neural networks and backpropagation:

= About the project

Ported By:: Daniel Huffman
Url:: https://github.com/drhuffman12/ai4cr

Based on:: Ai4r
Author:: Sergio Fierens
License:: MPL 1.1
Url:: http://ai4r.org

Included Modules

Defined in:

ai4cr/neural_network/backpropagation.cr

Constructors

Instance Method Summary

Instance methods inherited from module Ai4cr::Breed::Client

birth_id : Int32, birth_id=(birth_id : Int32), breed_delta : Float64, breed_delta=(breed_delta : Float64), clone, error_hist_stats(in_bw = false), error_stats, error_stats=(error_stats), history_size, name : String, name=(name : String), parent_a_id : Int32, parent_a_id=(parent_a_id : Int32), parent_b_id : Int32, parent_b_id=(parent_b_id : Int32)

Constructor Detail

def self.new(pull : JSON::PullParser) #

[View source]
def self.new(structure : Array(Int32), bias_disabled : Bool | Nil = nil, learning_rate : Float64 | Nil = nil, momentum : Float64 | Nil = nil, history_size : Int32 = 10) #

[View source]

Instance Method Detail

def activation_nodes : Array(Array(Float64)) #

[View source]
def activation_nodes=(activation_nodes : Array(Array(Float64))) #

[View source]
def backpropagate #

Propagate error backwards


[View source]
def bias_disabled : Bool #

[View source]
def bias_disabled=(bias_disabled : Bool) #

[View source]
def calculate_error_distance #

Calculate the quadratic error for an expected output value:

Error = 0.5 * sum( (expected_value[i] - output_value[i])**2 )


[View source]
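The quadratic error above can be computed by hand. A minimal self-contained sketch in plain Crystal (the expected and actual values here are made-up illustration data, not from the library):

```crystal
expected = [1.0, 0.0]
actual   = [0.86, 0.01]

# Error = 0.5 * sum((expected[i] - actual[i])**2)
error = 0.5 * expected.zip(actual).sum { |e, a| (e - a) ** 2 }
# => 0.00985
```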
def calculate_internal_deltas #

Calculate deltas for hidden layers


[View source]
def calculate_output_deltas #

Calculate deltas for output layer


[View source]
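For a sigmoid activation, an output delta is conventionally the activation's derivative times the output error. A hedged, self-contained sketch of that textbook rule (the numbers and the derivative form output * (1 - output) are standard choices, not taken from this library's source):

```crystal
output   = 0.86 # activation of one output node
expected = 1.0  # expected value for that node

# delta = f'(output) * (expected - output), with f'(x) = x * (1 - x) for sigmoid
delta = output * (1.0 - output) * (expected - output)
```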
def check_input_dimension(inputs) #

[View source]
def check_output_dimension #

[View source]
def deltas : Array(Array(Float64)) #

[View source]
def derivative_propagation_function #

[View source]
def eval(input_values) #

Evaluates the input. E.g.:

```crystal
net = Backpropagation.new([4, 3, 2])
net.eval([25, 32.3, 12.8, 1.5]) # => [0.83, 0.03]
```


[View source]
def eval_result(input_values) #

Evaluates the input and returns the most active output node. E.g.:

```crystal
net = Backpropagation.new([4, 3, 2])
net.eval_result([25, 32.3, 12.8, 1.5]) # eval gives [0.83, 0.03]
# => 0
```


[View source]
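Selecting the "most active node" presumably amounts to taking the index of the maximum output value. A self-contained sketch of that selection (using the made-up output values from the example above):

```crystal
outputs = [0.83, 0.03]

# Index of the largest activation
most_active = outputs.index(outputs.max)
# => 0
```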
def expected_outputs : Array(Float64) #

[View source]
def expected_outputs=(expected_outputs : Array(Float64)) #

[View source]
def feedforward(input_values) #

Propagate values forward


[View source]
def guesses_as_is #

Helpers for getting the sorted, top-n, or bottom-n output results.


[View source]
def guesses_best #



[View source]
def guesses_bottom_n(n = @activation_nodes.last.size) #

[View source]
def guesses_ceiled #

[View source]
def guesses_rounded #

[View source]
def guesses_sorted #

[View source]
def guesses_top_n(n = @activation_nodes.last.size) #

[View source]
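The guesses_* helpers are undocumented here, but judging by the name, guesses_top_n presumably returns the n highest-activation outputs. A hypothetical sketch of that idea in plain Crystal (an illustration of the concept, not the library's actual implementation):

```crystal
outputs = [0.10, 0.83, 0.03, 0.55]

# Pair each value with its index, sort by value descending, take the top 2
top_2 = outputs.each_with_index.to_a.sort_by { |(value, _index)| -value }.first(2)
# e.g. the two largest values, each paired with its original index
```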
def height #

[View source]
def hidden_qty #

[View source]
def init_activation_nodes #

Initialize the neurons structure.


[View source]
def init_last_changes #

Momentum usage needs to know how much a weight changed in the previous training step. This method initializes the @last_changes structure with 0 values.


[View source]
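As an illustration of why the previous change matters: a standard momentum update adds a fraction of the last weight change to the current gradient step. A self-contained sketch of the generic textbook rule with made-up numbers (not necessarily this library's exact update):

```crystal
learning_rate = 0.25
momentum      = 0.1

gradient_step = 0.04 # e.g. learning_rate * delta * input for some weight
last_change   = 0.02 # how much this weight changed last time (0.0 initially)

# Momentum-smoothed change, then apply it to the weight
change     = gradient_step + momentum * last_change
new_weight = 0.5 + change
```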
def init_network #

Initialize (or reset) activation nodes and weights, with the provided net structure and parameters.


[View source]
def init_weights #

Initialize the weight arrays using the function specified by the initial_weight_function parameter.


[View source]
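A common choice for an initial weight function is small random values centered on zero. A hypothetical sketch of one such function, uniform in [-1.0, 1.0) (an assumption for illustration, not necessarily this library's default):

```crystal
# A random initial weight, uniform in [-1.0, 1.0)
initial_weight = rand * 2.0 - 1.0
```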
def initial_weight_function #

[View source]
def last_changes : Array(Array(Array(Float64))) #

[View source]
def last_changes=(last_changes : Array(Array(Array(Float64)))) #

[View source]
def learning_rate : Float64 #

[View source]
def learning_rate=(learning_rate : Float64) #

[View source]
def learning_styles #

[View source]
def load_expected_outputs(expected_outputs) #

[View source]
def momentum : Float64 #

[View source]
def momentum=(momentum : Float64) #

[View source]
def propagation_function #

[View source]
def structure : Array(Int32) #

[View source]
def structure=(structure : Array(Int32)) #

[View source]
def train(inputs, outputs) #

This method trains the network using the backpropagation algorithm.

inputs: The network's input values.

outputs: The expected output values for the given inputs.

This method returns the network error:

=> 0.5 * sum( (expected_value[i] - output_value[i])**2 )


[View source]
def update_weights #

Update weights after @deltas have been calculated.


[View source]
def weights : Array(Array(Array(Float64))) #

[View source]
def weights=(weights : Array(Array(Array(Float64)))) #

[View source]
def width #

[View source]