ADALINE network
    decision boundary
adaptation
    custom function
    function
    parameters
adaptive filter
    example
    noise cancellation example
    prediction application
    prediction example
    training
adaptive linear networks
amplitude detection
applications
    adaptive filtering
    aerospace
    automotive
    banking
    defense
    electronics
    entertainment
    financial
    insurance
    manufacturing
    medical
    oil and gas exploration
    robotics
    speech
    telecommunications
    transportation
architecture
    bias connection
    input connection
    input delays
    layer connection
    layer delays
    number of inputs
    number of layers
    number of outputs
    number of targets
    output connection
    target connection

backpropagation
    algorithm
    example
backtracking search
batch training
    dynamic networks
    static networks
Bayesian framework
benchmark
BFGS quasi-Newton algorithm
bias
    connection
    definition
    initialization function
    learning
    learning function
    learning parameters
    subobject
    value
box distance
Brent's search

cell array
    derivatives
    errors
    initial layer delay states
    input P
    input vectors
    inputs and targets
    inputs property
    layer targets
    layers property
    matrix of concurrent vectors
    matrix of sequential vectors
    sequence of outputs
    sequential inputs
    tap-delayed inputs
    weight matrices and bias vectors
Charalambous' search
classification
    input vectors
    linear
    regions
code
    mathematical equivalents
    perceptron network
    writing
competitive layer
competitive neural network
    example
competitive transfer function
concurrent inputs
conjugate gradient algorithm
    Fletcher-Reeves update
    Polak-Ribière update
    Powell-Beale restarts
    scaled
continuous stirred tank reactor
control
    control design
    electromagnet
    feedback linearization
    model predictive control
    model reference control
    NARMA-L2
    plant
    robot arm
    time horizon
    training data
CSTR
custom neural network

dead neurons
decision boundary
    definition
demonstrations
    appelm1
    applin3
    definition
    demohop1
    demohop2
    demorb4
    nnd10lc
    nnd11gn
    nnd12cg
    nnd12m
    nnd12mo
    nnd12sd1
    nnd12vl
distance
    box
    custom function
    Euclidean
    link
    Manhattan
    tuning phase
dynamic networks
    training

early stopping
electromagnet
Elman network
    recurrent connection
Euclidean distance
export
    networks
    training data

feedback linearization
feedforward network
finite impulse response filter
Fletcher-Reeves update

generalization
    regularization
generalized regression network
golden section search
gradient descent algorithm
    batch
    with momentum
graphical user interface
gridtop topology

Hagan, Martin
hard limit transfer function
heuristic techniques
hidden layer
    definition
home neuron
Hopfield network
    architecture
    design equilibrium point
    solution trajectories
    stable equilibrium point
    target equilibrium points
horizon
hybrid bisection-cubic search

import
    networks
    training data
incremental training
initial step size function
initialization
    additional functions
    custom function
    definition
    function
    parameters
input
    connection
    number
    range
    size
    subobject
input vector
    outlier
input vectors
    classification
    dimension reduction
    distance
    topology
input weights
    definition
inputs
    concurrent
    sequential
installation guide

Jacobian matrix

Kohonen learning rule

lambda parameter
layer
    connection
    dimensions
    distance function
    distances
    initialization function
    net input function
    number
    positions
    size
    subobject
    topology function
    transfer function
layer weights
    definition
learning rate
    adaptive
    maximum stable
    optimal
    ordering phase
    too large
    tuning phase
learning rules
    custom
    Hebb
    Hebb with decay
    instar
    Kohonen
    outstar
    supervised learning
    unsupervised learning
    Widrow-Hoff
learning vector quantization
    creation
    learning rule
    LVQ network
    subclasses
    target classes
    union of two subclasses
least mean square error
Levenberg-Marquardt algorithm
    reduced memory
line search functions
    backtracking search
    Brent's search
    Charalambous' search
    golden section search
    hybrid bisection-cubic search
linear networks
    design
linear transfer function
linearly dependent vectors
link distance
log-sigmoid transfer function

MADALINE
magnet <1> <2>
Manhattan distance
maximum performance increase
maximum step size
mean square error function
    least
memory reduction
model predictive control
model reference control
momentum constant
mu parameter

NARMA-L2
neighborhood
net input function
    custom
network
    definition
    dynamic
    static
network function
network layer
    competitive
    definition
Network/Data Manager window
neural network
    adaptive linear
    competitive
    custom
    definition
    feedforward
    generalized regression
    multiple layer
    one layer
    probabilistic
    radial basis
    self-organizing
    self-organizing feature map
Neural Network Design
    Instructor's Manual
    overheads
neuron
    dead (not allocated)
    definition
    home
Newton's method
NN predictive control
normalization
    inputs and targets
    mean and standard deviation
notation
    abbreviated
    layer
    transfer function symbols
numerical optimization

one-step secant algorithm
ordering phase learning rate
outlier input vector
output
    connection
    number
    size
    subobject
output layer
    definition
    linear
overdetermined systems
overfitting

pass
    definition
pattern recognition
perceptron learning rule
    normalized
perceptron network
    code
    creation
    limitations
performance function
    custom
    modified
    parameters
plant
plant identification
Polak-Ribière update
postprocessing
post-training analysis
Powell-Beale restarts
predictive control
preprocessing
principal component analysis
probabilistic neural network
    design

quasi-Newton algorithm
    BFGS

radial basis
    design
    efficient network
    function
    network
    network design
radial basis transfer function
recurrent connection
recurrent networks
regularization
    automated
resilient backpropagation
robot arm

self-organizing feature map (SOFM) network
    neighborhood
    one-dimensional example
    two-dimensional example
self-organizing networks
sequential inputs
S-function
sigma parameter
simulation
    definition
Simulink
    generating networks
    NNT blockset
spread constant
squashing functions
static networks
    batch training
    training
subobject
    bias
    input
    layer
    output
    target
    weight
supervised learning
    target output
    training set
system identification

tan-sigmoid transfer function
tapped delay line
target
    connection
    number
    size
    subobject
target output
time horizon
topologies
    custom function
    gridtop
topologies for SOFM neuron locations
training
    batch
    competitive networks
    custom function
    definition
    efficient
    faster
    function
    incremental
    ordering phase
    parameters
    post-training analysis
    self-organizing feature map
    styles
    tuning phase
training data
training set
training styles
training with noise
transfer functions
    competitive
    custom
    definition
    derivatives
    hard limit
    linear
    log-sigmoid
    radial basis
    saturating linear
    soft maximum
    tan-sigmoid
    triangular basis
transformation matrix
tuning phase learning rate
tuning phase neighborhood distance

underdetermined systems
unsupervised learning

variable learning rate algorithm
vectors
    linearly dependent

weight
    definition
    delays
    initialization function
    learning
    learning function
    learning parameters
    size
    subobject
    value
    weight function
weight function
    custom
weight matrix
    definition
Widrow-Hoff learning rule
workspace (command line)