class SHAInet::Network
- SHAInet::Network
- Reference
- Object
Defined in:
shainet/basic/exprimental.cr
shainet/basic/network_run.cr
shainet/basic/network_setup.cr
Constant Summary
-
CONNECTION_TYPES =
["full", "ind_to_ind", "random"]
-
COST_FUNCTIONS =
["mse", "c_ent"]
-
LAYER_TYPES =
["input", "hidden", "output"]
-
Log =
::Log.for(self)
Constructors
-
.new
Creates an empty shell of the entire network; layers are added afterwards with #add_layer
Instance Method Summary
-
#add_layer(l_type : Symbol | String, l_size : Int32, n_type : Symbol | String = "memory", activation_function : ActivationFunction = SHAInet.sigmoid)
Create and populate a layer with neurons. l_type is :input, :hidden or :output; l_size is the number of neurons in the layer; n_type is an advanced option for selecting different neuron types
-
#all_neurons : Array(SHAInet::Neuron)
General network parameters
-
#all_synapses : Array(SHAInet::Synapse)
General network parameters
-
#alpha : Float64
Parameters for Adam
-
#alpha=(alpha : Float64)
Parameters for Adam
- #b_gradient : Array(Float64)
- #beta1 : Float64
- #beta2 : Float64
- #clean_dead_neurons
-
#connect_ltl(src_layer : Layer, dest_layer : Layer, connection_type : Symbol | String)
Connect two specific layers with synapses
-
#delta_max : Float64
Parameters for Rprop
-
#delta_max=(delta_max : Float64)
Parameters for Rprop
-
#delta_min : Float64
Parameters for Rprop
-
#delta_min=(delta_min : Float64)
Parameters for Rprop
- #epsilon : Float64
- #error_signal : Array(Float64)
-
#etah_minus : Float64
Parameters for Rprop
-
#etah_minus=(etah_minus : Float64)
Parameters for Rprop
-
#etah_plus : Float64
Parameters for Rprop
-
#etah_plus=(etah_plus : Float64)
Parameters for Rprop
-
#evaluate(input_data : Array(GenNum), expected_output : Array(GenNum), cost_function : CostFunction = SHAInet.quadratic_cost)
Quantifies how well the network performed for a single input compared to the expected output. Returns the actual output and updates the error gradient for the output layer
- #evaluate_exp(input_data : Array(GenNum), expected_output : Array(GenNum), cost_function : CostFunction = SHAInet.quadratic_cost, stealth : Bool = true)
-
#fully_connect
Connect all the layers in order (input and output don't connect between themselves): input, hidden, output
- #get_cost_proc(function_name : String) : CostFunction
-
#hidden_layers : Array(SHAInet::Layer)
General network parameters
-
#input_layers : Array(SHAInet::Layer)
General network parameters
-
#inspect
Returns an unambiguous and information-rich string representation of this object, typically intended for developers.
-
#learning_rate : Float64
Parameters for SGD + Momentum
-
#learning_rate=(learning_rate : Float64)
Parameters for SGD + Momentum
- #load_from_file(file_path : String)
- #log_summary(e)
-
#momentum : Float64
Parameters for SGD + Momentum
-
#momentum=(momentum : Float64)
Parameters for SGD + Momentum
- #mse : Float64
-
#output_layers : Array(SHAInet::Layer)
General network parameters
- #prev_mse : Float64
- #randomize_all_biases
- #randomize_all_weights
-
#run(input : Array(GenNum), stealth : Bool = false) : Array(Float64)
Run an input through the network to get an output (weights & biases do not change)
- #run_exp(input : Array(GenNum), stealth : Bool = false) : Array(Float64)
- #save_to_file(file_path : String)
-
#test(test_set)
Evaluate the network performance on a test set
- #time_step : Int32
- #total_error : Float64
-
#train(data : Array(Array(Array(GenNum))) | SHAInet::TrainingData, training_type : Symbol | String, cost_function : Symbol | String | CostFunction = :mse, epochs : Int32 = 1, error_threshold : Float64 = 0.00000001, mini_batch_size : Int32 = 1, log_each : Int32 = 1000, show_slice : Bool = false, autosave : NamedTuple(freq: Int32, path: String) | Nil = nil)
Train the model
-
#train_batch(data : Array(Array(Array(GenNum))) | SHAInet::TrainingData, training_type : Symbol | String = :sgdm, cost_function : Symbol | String | CostFunction = :mse, epochs : Int32 = 1, error_threshold : Float64 = 0.00000001, mini_batch_size : Int32 = 1, log_each : Int32 = 1, show_slice : Bool = false, autosave : NamedTuple(freq: Int32, path: String) | Nil = nil)
This method is kept to match the syntax of previous versions.
-
#train_es(data : Array(Array(Array(GenNum))) | SHAInet::TrainingData, pool_size : Int32, learning_rate : Float64, sigma : Float64, cost_function : Symbol | String | CostFunction = :c_ent, epochs : Int32 = 1, mini_batch_size : Int32 = 1, error_threshold : Float64 = 0.0, log_each : Int32 = 1, show_slice : Bool = false, autosave : NamedTuple(freq: Int32, path: String) | Nil = nil)
Use evolutionary strategies for network optimization instead of a gradient-based approach
- #train_es_exp(data : Array(Array(Array(GenNum))) | SHAInet::TrainingData, pool_size : Int32, learning_rate : Float64, sigma : Float64, cost_function : Symbol | String | CostFunction = :c_ent, epochs : Int32 = 1, mini_batch_size : Int32 = 1, error_threshold : Float64 = 0.0, log_each : Int32 = 1, show_slice : Bool = false, autosave : NamedTuple(freq: Int32, path: String) | Nil = nil)
-
#update_biases(learn_type : Symbol | String, batch : Bool = false)
Update biases based on the learning type chosen
-
#update_mse
Calculate MSE from the error signal of the output layer
-
#update_weights(learn_type : Symbol | String, batch : Bool = false)
Update weights based on the learning type chosen
- #validate_values(array : Array(Float64), location : String)
- #verify_data(data : Array(Array(Array(GenNum))))
- #verify_net_before_train
- #w_gradient : Array(Float64)
Constructor Detail
.new
Creates an empty shell of the entire network; layers are added afterwards with #add_layer.
Instance Method Detail
#add_layer
Create and populate a layer with neurons. l_type is :input, :hidden or :output; l_size is the number of neurons in the layer; n_type is an advanced option for selecting different neuron types.
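A minimal sketch of building a network with #add_layer; the layer sizes below are purely illustrative, and n_type / activation_function are left at their defaults ("memory" and SHAInet.sigmoid):

require "shainet"

nn = SHAInet::Network.new

nn.add_layer(:input, 2)    # input layer with 2 neurons
nn.add_layer(:hidden, 3)   # hidden layer with 3 neurons
nn.add_layer(:output, 1)   # output layer with 1 neuron

The layers still have to be wired together, either manually with #connect_ltl or in one call with #fully_connect (see below).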
#connect_ltl
Connect two specific layers with synapses.
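A hedged sketch of wiring two layers by hand, continuing the network from the #add_layer example; the Layer objects are fetched via the #input_layers and #hidden_layers getters listed above, and the connection type must be one of CONNECTION_TYPES:

src  = nn.input_layers.first    # first (and only) input layer
dest = nn.hidden_layers.first   # first hidden layer
nn.connect_ltl(src, dest, :full)   # "ind_to_ind" and "random" are also accepted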
#evaluate
Quantifies how well the network performed for a single input compared to the expected output. Returns the actual output and updates the error gradient for the output layer.
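A short sketch of #evaluate on a trained two-input / one-output network (see the #train example below); the cost function is left at its default, SHAInet.quadratic_cost:

actual = nn.evaluate([0, 1], [1])   # input and expected output
puts actual.inspect                 # the network's actual output for this input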
#fully_connect
Connect all the layers in order (input and output don't connect between themselves): input, hidden, output.
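In the common case #fully_connect replaces the manual #connect_ltl calls; a minimal sketch, assuming the layers from the #add_layer example above:

nn.fully_connect   # wires input -> hidden -> output in order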
#inspect
Returns an unambiguous and information-rich string representation of this object, typically intended for developers.
This method should usually not be overridden. It delegates to #inspect(IO), which can be overridden for custom implementations. Also see #to_s.
#run
Run an input through the network to get an output (weights & biases do not change).
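A sketch of a forward pass, assuming the trained XOR-style network from the #train example below; the stealth flag is simply left at its default:

output = nn.run([0, 1])   # => Array(Float64), one value per output neuron
puts output.first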
#train
Train the model.
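A hedged sketch of a full training call using named arguments that mirror the signature above; the XOR-style data set, the :sgdm training type and the hyper-parameter values are illustrative only:

require "shainet"

nn = SHAInet::Network.new
nn.add_layer(:input, 2)
nn.add_layer(:hidden, 3)
nn.add_layer(:output, 1)
nn.fully_connect

training_data = [
  [[0, 0], [0]],
  [[0, 1], [1]],
  [[1, 0], [1]],
  [[1, 1], [0]],
]

nn.train(
  data: training_data,
  training_type: :sgdm,    # SGD + momentum; other learn types use the Rprop/Adam parameters listed above
  cost_function: :mse,     # "mse" or "c_ent" (see COST_FUNCTIONS), or a CostFunction proc
  epochs: 5000,
  error_threshold: 1e-9,
  log_each: 1000)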
#train_batch
This method is kept to match the syntax of previous versions; the #train method can be used instead.
#train_es
Use evolutionary strategies for network optimization instead of a gradient-based approach.
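A sketch of the evolution-strategies variant with purely illustrative hyper-parameters, continuing the network and training_data from the #train example; pool_size, learning_rate and sigma are required by the signature above, cost_function defaults to :c_ent, and the inline comments reflect the usual meaning of these parameters rather than wording from the library itself:

nn.train_es(
  data: training_data,
  pool_size: 50,        # size of the candidate pool sampled around the current weights
  learning_rate: 0.5,
  sigma: 0.1,           # spread of the random perturbations
  epochs: 500,
  mini_batch_size: 4,
  log_each: 100)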
#update_biases
Update biases based on the learning type chosen.
#update_weights
Update weights based on the learning type chosen.