C++ Neural Networks and Fuzzy Logic: Backpropagation II
C++ Neural Networks and Fuzzy Logic


(Publisher: IDG Books Worldwide, Inc.)

Author(s): Valluru B. Rao

ISBN: 1558515526

Publication Date: 06/01/95










Adding Noise During Training
Another approach to breaking out of local minima, and to enhancing generalization ability, is to introduce some noise in the inputs during training. A random number is added to each component of the input vector as it is applied to the network, scaled by an overall noise factor, NF, which has a 0 to 1 range. You can add as much noise to the simulation as you want by raising NF, or none at all by choosing NF = 0. When you are close to a solution and have reached a satisfactory minimum, you don't want noise to interfere with convergence to the minimum, so we implement a noise factor that decreases with the number of cycles, as shown in the following excerpt from the backprop.cpp file.


// update NF
// gradually reduce noise to zero
if (total_cycles > 0.7*max_cycles)
    new_NF = 0;
else if (total_cycles > 0.5*max_cycles)
    new_NF = 0.25*NF;
else if (total_cycles > 0.3*max_cycles)
    new_NF = 0.50*NF;
else if (total_cycles > 0.1*max_cycles)
    new_NF = 0.75*NF;
else
    new_NF = NF; // full noise for the first 10% of cycles,
                 // so new_NF is defined on every path

backp.set_NF(new_NF);


The noise factor is reduced at regular intervals. The new value is passed to the network with the member function set_NF(float), and the network class holds the current value in a member variable called NF. The noise itself is added to the inputs in the input_layer member function calc_out().
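To make the mechanism concrete, here is a minimal sketch of how calc_out() can perturb each stored input. It is an illustration only; the scaling shown is one reasonable choice, and the actual implementation appears in the layer.cpp listing (Listing 13.2).

// Sketch of noise injection in input_layer::calc_out().
// Illustrative only -- see Listing 13.2 for the actual code.
// orig_outputs[] holds the unperturbed input pattern;
// noise_factor is the current NF set via set_NF().
// Assumes <stdlib.h> is included for rand().
void input_layer::calc_out()
{
int i;
float randomnum;
for (i = 0; i < num_outputs; i++)
    {
    // pseudorandom value in [0, 1)
    randomnum = ((float)(rand() % 100)) / 100;
    // perturb each component in proportion to the noise factor
    outputs[i] = orig_outputs[i] * (1 + noise_factor*randomnum);
    }
}

With NF = 0, the multiplier collapses to 1 and the inputs pass through unchanged, which is why noise can be shut off entirely near convergence.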
Another reason for using noise is to prevent memorization by the network: with noise you are effectively presenting a different input pattern on each cycle, so it becomes hard for the network to memorize patterns.
One Other Change—Starting Training from a Saved Weight File
Shortly, we will look at the complete listings for the backpropagation simulator, but there is one other enhancement to discuss. In long simulations it is often useful to be able to start from a known point, that is, from an already saved set of weights. This is a simple change to the backprop.cpp program, and it is well worth the effort. As a side benefit, this feature allows you to run a simulation with a large beta value for, say, 500 cycles, save the weights, and then start a new simulation with a smaller beta value for another 500 or more cycles. You can also take preset breaks in long simulations, which you will encounter in Chapter 14. A minimal sketch of how this startup choice might look follows; the prompt wording and the file name in it are assumptions, not the exact backprop.cpp code.
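// Sketch of the startup choice: resume from saved weights or
// start fresh. Illustrative fragment only (assumes <stdio.h> is
// included); the prompt and the file name weights.dat are
// assumptions, not the actual backprop.cpp code.
FILE * weights_file_ptr;
unsigned use_saved_weights; // 1 = resume, 0 = randomize

printf("Start from a saved weight file? (0 = no, 1 = yes): ");
scanf("%u", &use_saved_weights);

network backp;
backp.get_layer_info();
backp.set_up_network();

if (use_saved_weights == 1)
    {
    // resume a prior simulation from its saved weight matrix
    weights_file_ptr = fopen("weights.dat", "r");
    backp.read_weights(weights_file_ptr);
    fclose(weights_file_ptr);
    }
else
    // fresh run: initialize the weights to small random values
    backp.randomize_weights();

Note that read_weights() and randomize_weights() are already members of the network class (see layer.h below), so the change amounts to a prompt and a branch.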
At this point, let's look at the complete listings for the updated layer.h and layer.cpp files in Listings 13.1 and 13.2:
Listing 13.1 layer.h file updated to include noise and momentum


// layer.h         V.Rao, H. Rao
// header file for the layer class hierarchy and
// the network class
// added noise and momentum

#include <stdio.h> // for FILE and fpos_t

#define MAX_LAYERS    5
#define MAX_VECTORS   100

class network;
class Kohonen_network;

class layer
{

protected:

    int num_inputs;
    int num_outputs;
    float *outputs; // pointer to array of outputs
    float *inputs;  // pointer to array of inputs, which
                    // are outputs of some other layer

    friend class network;
    friend class Kohonen_network; // update for Kohonen model

public:

    virtual void calc_out()=0;
};

class input_layer: public layer
{

private:

    float noise_factor;
    float * orig_outputs;

public:

    input_layer(int, int);
    ~input_layer();
    virtual void calc_out();
    void set_NF(float);

    friend class network;
};

class middle_layer;

class output_layer: public layer
{
protected:

    float * weights;
    float * output_errors;   // array of errors at output
    float * back_errors;     // array of errors back-propagated to inputs
    float * expected_values; // expected output values for training
    float * cum_deltas;      // for momentum
    float * past_deltas;     // for momentum

    friend class network;

public:

    output_layer(int, int);
    ~output_layer();
    virtual void calc_out();
    void calc_error(float &);
    void randomize_weights();
    void update_weights(const float, const float);
    void update_momentum();
    void list_weights();
    void write_weights(int, FILE *);
    void read_weights(int, FILE *);
    void list_errors();
    void list_outputs();
};

class middle_layer: public output_layer
{

private:

public:
    middle_layer(int, int);
    ~middle_layer();
    void calc_error();
};

class network
{

private:

    layer *layer_ptr[MAX_LAYERS];
    int number_of_layers;
    int layer_size[MAX_LAYERS];
    float *buffer;
    fpos_t position;
    unsigned training;

public:
    network();
    ~network();
    void set_training(const unsigned &);
    unsigned get_training_value();
    void get_layer_info();
    void set_up_network();
    void randomize_weights();
    void update_weights(const float, const float);
    void update_momentum();
    void write_weights(FILE *);
    void read_weights(FILE *);
    void list_weights();
    void write_outputs(FILE *);
    void list_outputs();
    void list_errors();
    void forward_prop();
    void backward_prop(float &);
    int fill_IObuffer(FILE *);
    void set_up_pattern(int);
    void set_NF(float);

};
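One note on the two momentum arrays, cum_deltas and past_deltas: under the generalized delta rule with momentum, each weight change adds a fraction alpha of the previous cycle's change to the usual beta-scaled gradient term. The following is a sketch of how update_weights() and update_momentum() might cooperate; it illustrates the learning law and is not the Listing 13.2 code.

// Sketch of the momentum learning law:
//     delta_w(t) = beta * output_error * input + alpha * delta_w(t-1)
// Illustrative only; the actual implementations are in Listing 13.2.
void output_layer::update_weights(const float beta, const float alpha)
{
int i, j, k;
float delta;
for (i = 0; i < num_inputs; i++)
    for (j = 0; j < num_outputs; j++)
        {
        k = i*num_outputs + j;
        // gradient term plus momentum term from the previous cycle
        delta = beta*output_errors[j]*inputs[i]
                + alpha*past_deltas[k];
        weights[k] += delta;
        cum_deltas[k] += delta; // accumulate this cycle's changes
        }
}

void output_layer::update_momentum()
{
// called once per cycle: this cycle's accumulated deltas become
// the momentum terms for the next cycle
int k;
for (k = 0; k < num_inputs*num_outputs; k++)
    {
    past_deltas[k] = cum_deltas[k];
    cum_deltas[k] = 0; // start accumulating afresh
    }
}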





