C++ Neural Networks and Fuzzy Logic: Adaptive Resonance Theory (ART)
Chapter 10 Adaptive Resonance Theory (ART)
Introduction
Grossberg's Adaptive Resonance Theory, developed further by Grossberg and Carpenter, is for the categorization of patterns using the competitive learning paradigm. It introduces a gain control and a reset to make certain that learned categories are retained even while new categories are learned, and thereby addresses the plasticity-stability dilemma.
Adaptive Resonance Theory makes much use of a competitive learning paradigm. A criterion is developed to facilitate the occurrence of the winner-take-all phenomenon. A single node with the largest value for the set criterion is declared the winner within its layer, and it is said to classify a pattern class. If there is a tie for the winning neuron in a layer, then an arbitrary rule, such as taking the first of them in serial order, can be used to pick the winner.
The neural network developed for this theory establishes a system that is made up of two subsystems, one being the attentional subsystem, and this contains the unit for gain control. The other is an orienting subsystem, and this contains the unit for reset. During the operation of the network modeled for this theory, patterns emerge in the attentional subsystem and are called traces of STM (short-term memory). Traces of LTM (long-term memory) are in the connection weights between the input layer and output layer.
The network uses processing with feedback between its two layers, until resonance occurs. Resonance occurs when the output in the first layer after feedback from the second layer matches the original pattern used as input for the first layer in that processing cycle. A match of this type does not have to be perfect. What is required is that the degree of match, measured suitably, exceeds a predetermined level, termed the vigilance parameter. Just as a photograph matches the likeness of the subject to a greater degree when the granularity is higher, the pattern match gets finer as the vigilance parameter gets closer to 1.
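As a rough illustration only, the degree of match for binary patterns can be taken as the fraction of 1s in the input that survive the top-down feedback, and resonance can be declared when this fraction reaches the vigilance parameter. The sketch below assumes this particular measure, with names chosen only for illustration; it is not drawn from the book's own program.

// Sketch of one possible match measure for binary patterns: the fraction of
// 1s in the original input that are still 1 after feedback from F2.
// Resonance would then require matchDegree(...) >= rho (the vigilance parameter).
#include <cstddef>
#include <vector>

double matchDegree(const std::vector<int>& input,     // original pattern at F1
                   const std::vector<int>& feedback)  // F1 output after F2 feedback
{
    int inputOnes = 0, keptOnes = 0;
    for (std::size_t i = 0; i < input.size(); ++i) {
        inputOnes += input[i];
        keptOnes  += input[i] & feedback[i];
    }
    return inputOnes == 0 ? 1.0 : static_cast<double>(keptOnes) / inputOnes;
}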
The Network for ART1
The neural network for the adaptive resonance theory or ART1 model consists of the following:
A layer of neurons, called the F1 layer (input layer or comparison layer)
A node for each layer as a gain control unit
A layer of neurons, called the F2 layer (output layer or recognition layer)
A node as a reset unit
Bottom-up connections from F1 layer to F2 layer
Top-down connections from F2 layer to F1 layer
Inhibitory connection (negative weight) from F2 layer to gain control
Excitatory connection (positive weight) from gain control to a layer
Inhibitory connection from F1 layer to reset node
Excitatory connection from reset node to F2 layer
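One minimal way to hold these pieces together in C++ is sketched below; the structure and field names are illustrative assumptions, not the data structures used in the program for this chapter.

// Illustrative container for the ART1 components listed above.
#include <vector>

struct ART1Network {
    int m = 0;                               // number of F1 (comparison) neurons
    int n = 0;                               // number of F2 (recognition) neurons
    std::vector<std::vector<double>> w;      // bottom-up weights w[i][j], F1 -> F2
    std::vector<std::vector<double>> v;      // top-down weights v[j][i], F2 -> F1
    std::vector<int> f1Output;               // x_i, outputs of the F1 layer
    std::vector<int> f2Output;               // y_j, outputs of the F2 layer
    bool   gain1 = true;                     // gain control unit for the F1 layer
    bool   gain2 = true;                     // gain control unit for the F2 layer
    bool   reset = false;                    // reset unit, driven by the vigilance test
    double rho   = 0.9;                      // vigilance parameter, 0 < rho <= 1
};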
A Simplified Diagram of Network Layout
Figure 10.1  A simplified diagram of the neural network for an ART1 model.
Processing in ART1
The ART1 paradigm, just like the Kohonen Self-Organizing Map to be introduced in Chapter 11, performs data clustering on input data; similar inputs are clustered together into a category. As an example, you can use a data clustering algorithm such as ART1 for Optical Character Recognition (OCR), where you try to match different samples of a letter to its ASCII equivalent. Particular attention is paid in the ART1 paradigm to ensuring that old information is not thrown away while new information is assimilated.
An input vector, when applied to an ART1 system, is first compared to existing patterns in the system. If there is a close enough match within a specified tolerance (as indicated by a vigilance parameter), then that stored pattern is made to resemble the input pattern further and the classification operation is complete. If the input pattern does not resemble any of the stored patterns in the system, then a new category is created with a new stored pattern that resembles the input pattern.
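A hedged sketch of this classification cycle, reusing the ART1Network layout and the matchDegree measure sketched above, follows; the simple fast-learning update of the stored pattern is an assumption made only to keep the example short.

// Sketch of the ART1 classification cycle: compete, test vigilance,
// and either resonate (learn) or reset and search again.
#include <vector>

int classify(ART1Network& net, const std::vector<int>& input)
{
    std::vector<bool> disabled(net.n, false);     // categories rejected by reset
    for (;;) {
        // Competition: pick the enabled F2 neuron with the largest bottom-up net input.
        int winner = -1;
        double best = -1.0;
        for (int j = 0; j < net.n; ++j) {
            if (disabled[j]) continue;
            double sum = 0.0;
            for (int i = 0; i < net.m; ++i) sum += net.w[i][j] * input[i];
            if (sum > best) { best = sum; winner = j; }
        }
        if (winner < 0) return -1;                // every category rejected: caller adds a new one

        // Top-down expectation of the winner, thresholded to a binary pattern.
        std::vector<int> feedback(net.m);
        for (int i = 0; i < net.m; ++i)
            feedback[i] = input[i] && net.v[winner][i] > 0.5;

        if (matchDegree(input, feedback) >= net.rho) {
            // Resonance: make the stored pattern resemble the input more closely
            // (simplified fast-learning update, for illustration only).
            for (int i = 0; i < net.m; ++i)
                net.v[winner][i] = feedback[i];
            return winner;
        }
        disabled[winner] = true;                  // reset: suppress the winner, search again
    }
}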
Special Features of the ART1 Model
One special feature of an ART1 model is that a two-thirds rule is necessary to determine the activity of neurons in the F1 layer. There are three input sources to each neuron in layer F1. They are the external input, the output of gain control, and the outputs of F2 layer neurons. The F1 neurons will not fire unless at least two of the three inputs are active. The gain control unit and the two-thirds rule together ensure proper response from the input layer neurons. A second feature is that a vigilance parameter is used to determine the activity of the reset unit, which is activated whenever there is no match found among existing patterns during classification.
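As a small illustration, the two-thirds rule for a single F1 neuron can be written as below; the signal names are chosen for clarity and are not taken from the program for this chapter.

// Sketch of the two-thirds rule: an F1 neuron fires only when at least
// two of its three input sources (external input, gain control, F2 feedback)
// are active.
bool f1Fires(bool externalInput, bool gainControl, bool f2Feedback)
{
    int activeSources = (externalInput ? 1 : 0)
                      + (gainControl   ? 1 : 0)
                      + (f2Feedback    ? 1 : 0);
    return activeSources >= 2;
}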
Notation for ART1 Calculations
Let us list the various symbols we will use to describe the operation of a neural network for an ART1 model:
wij : Weight on the connection from the ith neuron in the F1 layer to the jth neuron in the F2 layer
vji : Weight on the connection from the jth neuron in the F2 layer to the ith neuron in the F1 layer
ai : Activation of the ith neuron in the F1 layer
bj : Activation of the jth neuron in the F2 layer
xi : Output of the ith neuron in the F1 layer
yj : Output of the jth neuron in the F2 layer
zi : Input to the ith neuron in the F1 layer from the F2 layer
ρ : Vigilance parameter, positive and no greater than 1 (0 < ρ ≤ 1)
m : Number of neurons in the F1 layer
n : Number of neurons in the F2 layer
I : Input vector
Si : Sum of the components of the input vector
Sx : Sum of the outputs of neurons in the F1 layer
A, C, D : Parameters with positive values or zero
L : Parameter with value greater than 1
B : Parameter with value less than D + 1 but at least as large as either D or 1
r : Index of the winner of the competition in the F2 layer
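To tie this notation to code, the sketch below validates the parameter constraints just listed and sets up the weight matrices of the ART1Network structure sketched earlier; the particular initial weight values are common choices in ART1 implementations and are given only for illustration.

// Sketch of parameter checking and weight initialization consistent with the
// constraints listed above. The parameters A, B, C, D would feed the F1/F2
// activation equations, which are not shown here.
#include <cassert>
#include <vector>

void initialize(ART1Network& net, int m, int n,
                double A, double B, double C, double D, double L, double rho)
{
    assert(A >= 0.0 && C >= 0.0 && D >= 0.0);     // A, C, D: zero or positive
    assert(L > 1.0);                              // L greater than 1
    assert(B >= D && B >= 1.0 && B < D + 1.0);    // max(D, 1) <= B < D + 1
    assert(rho > 0.0 && rho <= 1.0);              // 0 < rho <= 1

    net.m = m;
    net.n = n;
    net.rho = rho;
    // Bottom-up weights start small and equal; top-down weights start at 1,
    // so any new input can initially resonate with an uncommitted category.
    net.w.assign(m, std::vector<double>(n, L / (L - 1.0 + m)));
    net.v.assign(n, std::vector<double>(m, 1.0));
    net.f1Output.assign(m, 0);
    net.f2Output.assign(n, 0);
}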