trainr
Random order incremental training with learning functions.
Syntax
[net,TR,Ac,El] = trainr(net,Pd,Tl,Ai,Q,TS,VV,TV)
Description
trainr is not called directly. Instead it is called by train for networks whose net.trainFcn property is set to 'trainr'.
trainr trains a network with weight and bias learning rules, applying incremental updates after each presentation of an input. Inputs are presented in random order.
trainr(net,Pd,Tl,Ai,Q,TS,VV,TV) takes these inputs,
net - Neural network.
Pd - Delayed inputs.
Tl - Layer targets.
Ai - Initial input conditions.
Q - Batch size.
TS - Time steps.
VV - Ignored.
TV - Ignored.
and returns,
net - Trained network.
TR - Training record of various values over each epoch.
Ac - Collective layer outputs.
El - Layer errors.
Training occurs according to trainr's training parameters shown here with their default values:
net.trainParam.epochs 100 Maximum number of epochs to train
net.trainParam.goal 0 Performance goal
net.trainParam.show 25 Epochs between displays (NaN for no displays)
net.trainParam.time inf Maximum time to train in seconds
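For example, these parameters can be adjusted before training. This is a minimal sketch; the newc call and the specific values are illustrative only:

   net = newc([0 1; 0 1],3);      % newc networks use trainr by default
   net.trainParam.epochs = 200;   % allow up to 200 passes through the data
   net.trainParam.goal   = 1e-3;  % stop early if performance drops below this
   net.trainParam.show   = 50;    % report progress every 50 epochs
   net.trainParam.time   = 60;    % never train longer than 60 seconds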
Dimensions for these variables are:
Pd - Nl x Ni x TS cell array, each element Pd{i,j,ts} is a Dij x Q matrix.
Tl - Nl x TS cell array, each element Tl{i,ts} is a Vi x Q matrix or [].
Ai - Nl x LD cell array, each element Ai{i,k} is an Si x Q matrix.
Ni = net.numInputs
Nl = net.numLayers
LD = net.numLayerDelays
Ri = net.inputs{i}.size
Si = net.layers{i}.size
Vi = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
trainr does not implement validation or test vectors, so arguments VV and TV are ignored.
trainr(code) returns useful information for each supported code string.
Network Use
You can create a standard network that uses trainr by calling newc or newsom.
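For example, a competitive layer created with newc uses trainr as its default training function. This is a minimal sketch; the data and network size are illustrative:

   P   = rand(2,50);            % 50 two-element input vectors
   net = newc([0 1; 0 1],4);    % competitive layer with 4 neurons
   net = train(net,P);          % train dispatches to trainr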
To prepare a custom network to be trained with trainr:
1. Set net.trainFcn to 'trainr'.
2. Set each net.inputWeights{i,j}.learnFcn to a learning function.
3. Set each net.layerWeights{i,j}.learnFcn to a learning function.
4. Set each net.biases{i}.learnFcn to a learning function. (Weight and bias learning parameters are automatically set to default values for the given learning function.)
5. Set net.trainParam properties to desired values.
6. Call train.
See newc and newsom for training examples.
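The following sketch walks through those steps for a simple competitive network. The learning functions shown (learnk for weights, learncon for biases) are illustrative choices, and step 3 is skipped because this network has no layer weights:

   net = newc([0 1; 0 1],4);                    % start from a network object
   net.trainFcn = 'trainr';
   net.inputWeights{1,1}.learnFcn = 'learnk';   % Kohonen weight learning rule
   net.biases{1}.learnFcn = 'learncon';         % conscience bias learning rule
   net.trainParam.epochs = 50;
   net = train(net,rand(2,50));                 % train calls trainr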
Algorithm
For each epoch, all training vectors (or sequences) are presented once in a different random order, and the network's weight and bias values are updated after each individual presentation.
Training stops when any of these conditions is met:
The maximum number of epochs (repetitions) is reached.
Performance has been minimized to the goal.
The maximum amount of time has been exceeded.
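The random-order incremental scheme can be pictured with this minimal sketch. It is not the toolbox implementation; a simple Kohonen-style update stands in for whatever learning functions the network is configured with:

   P  = rand(2,8);                     % Q = 8 training vectors
   W  = rand(3,2);                     % weights of a 3-neuron layer
   lr = 0.1;                           % learning rate
   for epoch = 1:100                   % net.trainParam.epochs
      for q = randperm(size(P,2))      % new random order every epoch
         p = P(:,q);
         [~,i] = min(sum((W - p').^2,2));       % winning neuron
         W(i,:) = W(i,:) + lr*(p' - W(i,:));    % update immediately
      end
   end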
See Also
trainoss, trainrp