Neural Network Toolbox
learngdm
Gradient descent with momentum weight and bias learning function
Syntax
[dW,LS] = learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
[db,LS] = learngdm(b,ones(1,Q),Z,N,A,T,E,gW,gA,D,LP,LS)
Description
learngdm is the gradient descent with momentum weight and bias learning function.
learngdm(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - S x R weight matrix (or S x 1 bias vector).
P - R x Q input vectors (or ones(1,Q)).
Z - S x Q weighted input vectors.
N - S x Q net input vectors.
A - S x Q output vectors.
T - S x Q layer target vectors.
E - S x Q layer error vectors.
gW - S x R gradient with respect to performance.
gA - S x Q output gradient with respect to performance.
D - S x S neuron distances.
LP - Learning parameters, none, LP = [].
LS - Learning state, initially should be = [].
and returns,
dW - S x R weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to learngdm's learning parameters, shown here with their default values.
LP.lr - 0.01 - Learning rate.
LP.mc - 0.9 - Momentum constant.
learngdm(code) returns useful information for each code string:
'pnames' - Names of learning parameters.
'pdefaults' - Default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
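For instance, the learning parameters and their defaults can be queried directly (a minimal sketch of the code-string interface described above):

```matlab
names = learngdm('pnames');   % names of learngdm's learning parameters
lp = learngdm('pdefaults');   % structure of default learning parameters
```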
Examples
Here we define a random gradient G for a weight going to a layer with 3 neurons, from an input with 2 elements. We also define a learning rate of 0.5 and momentum constant of 0.8.
Since learngdm only needs these values to calculate a weight change (see Algorithm below), we will use them to do so. We will use the default initial learning state.
learngdm returns the weight change and a new learning state.
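A sketch of this example, assuming learngdm's standard calling sequence; since learngdm does not use the other inputs, they can be passed as empty matrices:

```matlab
% Random gradient for a weight going to a layer with 3 neurons
% from an input with 2 elements
gW = rand(3,2);

% Learning parameters: learning rate 0.5, momentum constant 0.8
lp.lr = 0.5;
lp.mc = 0.8;

% Default initial learning state
ls = [];

% learngdm returns the weight change and a new learning state
[dW,ls] = learngdm([],[],[],[],[],[],[],gW,[],[],lp,ls)
```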
Network Use
You can create a standard network that uses learngdm with newff, newcf, or newelm.
To prepare the weights and the bias of layer i of a custom network to adapt with learngdm:
1. Set net.adaptFcn to 'trains'. net.adaptParam will automatically become trains's default parameters.
2. Set each net.inputWeights{i,j}.learnFcn to 'learngdm'. Set each net.layerWeights{i,j}.learnFcn to 'learngdm'. Set net.biases{i}.learnFcn to 'learngdm'. Each weight and bias learning parameter property will automatically be set to learngdm's default parameters.
To allow the network to adapt:
1. Set net.adaptParam properties to desired values.
2. Call adapt with the network.
See newff or newcf for examples.
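As a sketch of the steps above, assuming a small two-layer network created with newff (the layer sizes, transfer functions, and parameter values here are chosen only for illustration):

```matlab
% Training data: 2-element inputs, scalar targets
P = rand(2,10);
T = sum(P);

% newff(input ranges, layer sizes, transfer fcns, train fcn, learn fcn)
% learngdm is passed as the weight/bias learning function
net = newff(minmax(P),[3 1],{'tansig','purelin'},'traingd','learngdm');

% Adjust learngdm's parameters for the bias of layer 1
net.biases{1}.learnParam.lr = 0.5;
net.biases{1}.learnParam.mc = 0.8;

% Adapt the network with learngdm as the learning function
[net,Y,E] = adapt(net,P,T);
```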
Algorithm
learngdm calculates the weight change dW for a given neuron from the neuron's input P and error E, the weight (or bias) W, learning rate LR, and momentum constant MC, according to gradient descent with momentum:
dW = mc*dWprev + (1-mc)*lr*gW
The previous weight change dWprev is stored and read from the learning state LS.
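A minimal sketch of the update rule itself, assuming the previous weight change starts at zero on the first call:

```matlab
lr = 0.5;                       % learning rate
mc = 0.8;                       % momentum constant
gW = rand(3,2);                 % gradient with respect to performance
dWprev = zeros(3,2);            % no previous change on the first call

dW = mc*dWprev + (1-mc)*lr*gW;  % gradient descent with momentum
dWprev = dW;                    % carried forward in the learning state LS
```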
See Also
learngd, newff, newcf, adapt, train