module Ann_backprop : sig .. end

Forward and backward propagation through the neural network.
The functions computing the outputs of the neural network
(forward propagation) are also provided in this module.
val forward : Ann_func.nn_func ->
  Ann_topology.nn_topology ->
  float array ->
  float array -> Ann_func.vector * float array * float array * float array
forward fn nn w x returns a tuple (y, a_out, z, a), where y is the
output vector of the network with functions fn, connections nn, and
weights w; a_out is the weighted sum of the inputs of the units
belonging to the output layer; z is the output of the hidden units;
and a is the weighted sum of the inputs of the hidden units.
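A minimal usage sketch, assuming fn : Ann_func.nn_func,
nn : Ann_topology.nn_topology, w : float array and x : float array have
been built elsewhere (their constructors belong to the Ann_func and
Ann_topology modules, not to this one):

  (* Forward pass: compute the output [y] and keep the intermediate
     quantities, which are exactly what [backprop] expects later. *)
  let (y, a_out, z, a) = Ann_backprop.forward fn nn w x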
val nn_predict : Ann_func.nn_func ->
  Ann_topology.nn_topology -> float array -> float array -> Ann_func.vector
nn_predict fn nn w x returns the output vector y of the network with
functions fn, connections nn, and weights w.
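When the intermediate quantities are not needed, the output alone can be
obtained as below (same assumptions on fn, nn, w and x as above):

  (* Compute only the network output, without the intermediate values. *)
  let y = Ann_backprop.nn_predict fn nn w x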
val backprop : Ann_func.nn_func ->
  Ann_topology.nn_topology ->
  float array ->
  Ann_func.vector ->
  Ann_func.vector * Ann_func.vector * Ann_func.vector * Ann_func.vector ->
  Ann_func.vector
backprop fn nn w t (y, a_out, z, a) computes the gradient of the error
function, using backpropagation of the error through the network.
It returns a vector of partial derivatives of the error with respect
to each weight of the network. This gradient vector is computed at point
w of the weight space. The network is described by fn (error and
transfer functions) and nn (units and connections). t is the target
vector, y is the output vector, a_out is the weighted sum of the inputs
of the units belonging to the output layer, z is the output of the
hidden units, and a is the weighted sum of the inputs of the hidden
units.
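A sketch of one gradient computation followed by a plain gradient-descent
update. It assumes, as above, that fn, nn, w, x and t are built
elsewhere, and that Ann_func.vector can be indexed like a float array
(as suggested by forward's outputs being passed straight to backprop);
the learning rate eta is illustrative, not part of this module:

  (* One training step: forward pass, backpropagation, then a gradient
     descent update of the weight vector. *)
  let train_step fn nn w x t ~eta =
    let (y, a_out, z, a) = Ann_backprop.forward fn nn w x in
    let grad = Ann_backprop.backprop fn nn w t (y, a_out, z, a) in
    (* New weights: w - eta * grad, component by component.
       Assumes [grad] is indexable like a float array. *)
    Array.mapi (fun i wi -> wi -. eta *. grad.(i)) w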
val central_diff : float ->
  Ann_func.nn_func ->
  Ann_topology.nn_topology ->
  float array -> float array -> Ann_func.vector -> float array
central_diff epsilon fn nn w x t computes the gradient of the error
function using central differences instead of backpropagation. This
method is slower than backpropagation and is used only to test and
debug the backprop function. epsilon is the size of the perturbation
applied to the weights.
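Since central_diff exists to cross-check backprop, a small gradient check
can compare the two estimates. Same assumptions as above (fn, nn, w, x, t
built elsewhere; Ann_func.vector indexable like a float array); the value
of epsilon is illustrative:

  (* Compare the backpropagation gradient with the central-difference
     estimate and report the largest absolute discrepancy. *)
  let max_gradient_error fn nn w x t =
    let epsilon = 1e-6 in  (* illustrative perturbation size *)
    let (y, a_out, z, a) = Ann_backprop.forward fn nn w x in
    let g_bp = Ann_backprop.backprop fn nn w t (y, a_out, z, a) in
    let g_cd = Ann_backprop.central_diff epsilon fn nn w x t in
    (* Assumes [g_bp] can be indexed like a float array. *)
    Array.fold_left max 0.
      (Array.mapi (fun i d -> abs_float (d -. g_bp.(i))) g_cd)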