
  Multilayer Perceptron Networks

    Outline

 A. MLP Network Limitations and Characteristics
 B. MLP Sizing Program
 C. Fast Training of MLP Networks
 D. Network Pruning Program
 E. Automated MLP Design
 F. Fast Testing Program
 G. Processing Data with a Trained MLP
 H. Error Functions for Training MLP Networks


 A. MLP Network Limitations and Characteristics
  1. There is no limitation on data file size.
  2. MLP networks are limited to 40 or fewer units in the hidden and
     output layers and 100 or fewer units in the input layer.
  3. Activation functions: sigmoidal ( Out = 1/(1 + exp(-Net)) ) for
     hidden units and output units.
  4. Only one hidden layer is allowed.
  5. Connectivity: full connectivity, meaning the output layer connects
     fully to both the hidden layer and the input layer, and the hidden
     layer connects fully to the input layer. Each unit in the hidden
     and output layers has a threshold.
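The sigmoidal activation named in item 3 can be sketched directly; this is a minimal illustration (the function name is my own, not from the program):

```python
import math

def sigmoid(net):
    """Sigmoidal activation: Out = 1/(1 + exp(-Net))."""
    return 1.0 / (1.0 + math.exp(-net))

# The output is bounded in (0, 1); Net = 0 gives exactly 0.5.
print(sigmoid(0.0))   # 0.5
```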

 B. MLP Network Sizing
    Given a training data file, estimates the attainable training error
    for an MLP network and the number of hidden units versus MSE.

 C. Fast training of MLP networks. The HWO_OWO algorithm trains MLP
    networks one to two orders of magnitude faster than backpropagation (BP).
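Part of what makes output weight optimization (OWO) fast is that, once the hidden-layer activations are fixed, the output weights can be found by solving linear equations in one step instead of by many gradient iterations. A rough sketch of this idea, assuming a linear output mapping for illustration (all array names are hypothetical, and numpy's least-squares solver is a stand-in for the program's own solver):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: Npat patterns, Nh hidden-unit outputs,
# Nout desired outputs.
Npat, Nh, Nout = 200, 5, 2
H = rng.standard_normal((Npat, Nh))     # hidden-layer activations
W_true = rng.standard_normal((Nh, Nout))
T = H @ W_true                          # desired outputs (noiseless)

# OWO idea: the output weights solve a linear least-squares problem,
# obtained in one step rather than via repeated BP updates.
W, *_ = np.linalg.lstsq(H, T, rcond=None)
print(np.allclose(W, W_true))           # recovers the weights exactly here
```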

 D. Network pruning. Given a training data file, analyzes and prunes
    MLPs trained with HWO_OWO fast training. Produces weight and network
    structure files for the pruned network, which can be saved to disk.

 E. Automated MLP Design. Given a training data file and its number of
    inputs and desired outputs, an MLP is sized, designed, and pruned.

 F. Fast testing of MLP networks. The HWO_OWO algorithm tests MLP
    networks one to two orders of magnitude faster than BP.

 G. Process data using a trained MLP. Data may or may not include
    desired outputs.
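Processing one pattern with a trained network of the kind described in section A (one hidden layer, sigmoidal units with thresholds, and full connectivity including the output-to-input bypass connections) amounts to a single forward pass. A sketch under those assumptions; all names and shapes here are illustrative, not the program's file format:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def process(x, Whi, th_h, Woh, Woi, th_o):
    """Forward pass: hidden and output units are sigmoidal, and the
    output layer sees both the hidden activations and the raw inputs."""
    h = sigmoid(Whi @ x + th_h)                # hidden layer with thresholds
    return sigmoid(Woh @ h + Woi @ x + th_o)   # output layer with bypass weights

rng = np.random.default_rng(1)
Nin, Nh, Nout = 4, 3, 2
x = rng.standard_normal(Nin)
out = process(x,
              rng.standard_normal((Nh, Nin)), rng.standard_normal(Nh),
              rng.standard_normal((Nout, Nh)), rng.standard_normal((Nout, Nin)),
              rng.standard_normal(Nout))
print(out.shape)   # one sigmoidal value per output unit
```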

 H. Error Functions for Training MLP Networks
    The error function that is being minimized during fast training is

                   Nout
    MSE = (1/Npat) SUM  MSE(k)     where
                   k=1

              Npat
    MSE(k) =  SUM  [ Tpk - Opk ]^2
              p=1

     where Npat is the number of training patterns, Nout is the number 
     of network output nodes, Tpk is the desired output for the pth
     training pattern and the kth output, and Opk is the actual output 
     for the pth training pattern and the kth output. MSE is printed
     for each iteration.
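The error function above is straightforward to compute from arrays of desired and actual outputs; a small sketch, with array names following the text's symbols:

```python
import numpy as np

def mse(T, O):
    """MSE = (1/Npat) * sum over k and p of (Tpk - Opk)^2,
    where T and O have shape (Npat, Nout)."""
    Npat = T.shape[0]
    return float(np.sum((T - O) ** 2) / Npat)

# Two patterns, two outputs: squared errors sum to 0.1, so MSE = 0.05.
T = np.array([[1.0, 0.0], [0.0, 1.0]])
O = np.array([[0.9, 0.1], [0.2, 0.8]])
print(mse(T, O))
```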
