learnlv1
Syntax
[dW,LS] = learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
Description
learnlv1 is the LVQ1 weight learning function.
learnlv1(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - S x R weight matrix (or S x 1 bias vector).
P - R x Q input vectors (or ones(1,Q)).
Z - S x Q weighted input vectors.
N - S x Q net input vectors.
A - S x Q output vectors.
T - S x Q layer target vectors.
E - S x Q layer error vectors.
gW - S x R weight gradient with respect to performance.
gA - S x Q output gradient with respect to performance.
D - S x S neuron distances.
LP - Learning parameters, LP.lr shown below.
LS - Learning state, initially should be = [].
and returns
dW - S x R weight (or bias) change matrix.
LS - New learning state.
Learning occurs according to learnlv1's learning parameter shown here with its default value.
LP.lr - 0.01 - Learning rate.
learnlv1(code) returns useful information for each code string:
'pnames' - Names of learning parameters.
'pdefaults' - Default learning parameters.
'needg' - Returns 1 if this function uses gW or gA.
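For example, these calls can be used to inspect the function's metadata (a minimal sketch, assuming the classic code-string interface described above; the variable names are illustrative):
p_names    = learnlv1('pnames')      % names of learnlv1's learning parameters
p_defaults = learnlv1('pdefaults')   % struct of defaults, e.g. p_defaults.lr = 0.01
needs_grad = learnlv1('needg')       % expected to be 1, since learnlv1 uses gA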
Examples
Here we define a random input P, output A, weight matrix W, and output gradient gA for a layer with a two-element input and three neurons.
We also define the learning rate LR.
Since learnlv1 only needs these values to calculate a weight change (see algorithm below), we will use them to do so.
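A minimal sketch along those lines is shown here (the use of rand, negdist, and compet to build the inputs, and the specific gA and lr values, are illustrative assumptions rather than values given on this page):
p = rand(2,1);                 % random two-element input
w = rand(3,2);                 % 3 x 2 weight matrix: three neurons, two-element input
a = compet(negdist(w,p));      % competitive output: the neuron closest to p outputs 1
gA = [-1; 1; 1];               % output gradient: -1 flags a neuron that won incorrectly
lp.lr = 0.5;                   % learning rate
dW = learnlv1(w,p,[],[],a,[],[],[],gA,[],lp,[])   % unused arguments passed as []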
Network Use
You can create a standard network that uses learnlv1 with newlvq. To prepare the weights of layer i of a custom network to learn with learnlv1:
1. Set net.trainFcn to 'trainr'. (net.trainParam will automatically become trainr's default parameters.)
2. Set net.adaptFcn to 'trains'. (net.adaptParam will automatically become trains's default parameters.)
3. Set each net.inputWeights{i,j}.learnFcn to 'learnlv1'. Set each net.layerWeights{i,j}.learnFcn to 'learnlv1'. (Each weight learning parameter property will automatically be set to learnlv1's default parameters.)
To train the network (or enable it to adapt):
1. Set net.trainParam (or net.adaptParam) properties as desired.
2. Call train (or adapt).
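Putting these steps together, a minimal sketch might look as follows (the data, the layer/weight indices, and the newlvq arguments are illustrative assumptions):
P = [1 2 3 4; 5 6 7 8];                 % four 2-element input vectors
T = full(ind2vec([1 1 2 2]));           % 2-class target vectors
net = newlvq(minmax(P), 4, [0.5 0.5]);  % standard LVQ network
net.trainFcn = 'trainr';                % custom-network steps; newlvq typically sets these already
net.adaptFcn = 'trains';
net.inputWeights{1,1}.learnFcn = 'learnlv1';
net.inputWeights{1,1}.learnParam.lr = 0.05;   % optional: change the learning rate
net = train(net, P, T);                 % or: net = adapt(net, P, T)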
Algorithm
learnlv1 calculates the weight change dW for a given neuron from the neuron's input P, output A, output gradient gA, and learning rate LR, according to the LVQ1 rule, where i is the index of the neuron whose output a(i) is 1:
dw(i,:) = +lr*(p-w(i,:)) if gA(i) = 0
        = -lr*(p-w(i,:)) if gA(i) = -1
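As a literal illustration of that rule, a sketch of the update for the winning neuron might look like this (the function and variable names are assumptions, not the toolbox source):
function dW = lvq1_update(w, p, a, gA, lr)
% LVQ1 weight change for the single winning neuron.
% w: S x R weights, p: R x 1 input, a: S x 1 competitive output, gA: S x 1 gradient.
dW = zeros(size(w));
i = find(a == 1, 1);              % index of the neuron whose output is 1
if gA(i) == 0                     % win was correct: move weights toward p
    dW(i,:) = +lr*(p' - w(i,:));
else                              % gA(i) = -1, win was incorrect: move weights away from p
    dW(i,:) = -lr*(p' - w(i,:));
end
end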
See Also
learnk, learnlv2