C++ Neural Networks and Fuzzy Logic: Application to Nonlinear Optimization
C++ Neural Networks and Fuzzy Logic
(Publisher: IDG Books Worldwide, Inc.)
Author(s): Valluru B. Rao
ISBN: 1558515526
Publication Date: 06/01/95
Chapter 15: Application to Nonlinear Optimization
Introduction
Nonlinear optimization is an area of operations research, and efficient algorithms for some of the problems in this area are hard to find. In this chapter, we describe the traveling salesperson problem and discuss how this problem is formulated as a nonlinear optimization problem in order to use neural networks (Hopfield and Kohonen) to find an optimum solution. We start with an explanation of the concepts of linear, integer linear and nonlinear optimization.
An optimization problem has an objective function and a set of constraints on the variables. The problem is to find the values of the variables that lead to an optimum value for the objective function, while satisfying all the constraints. The objective function may be a linear function in the variables, or it may be a nonlinear function. For example, it could be a function expressing the total cost of a particular production plan, or a function giving the net profit from a group of products that share a given set of resources. The objective may be to find the minimum value for the objective function, if, for example, it represents cost, or to find the maximum value of a profit function. The resources shared by the products in their manufacturing are usually in limited supply or have some other restrictions on their availability. This consideration leads to the specification of the constraints for the problem.
Each constraint is usually in the form of an equation or an inequality. The left side of such an equation or inequality is an expression in the variables for the problem, and the right-hand side is a constant. The constraints are said to be linear or nonlinear depending on whether the expression on the left-hand side is a linear function or nonlinear function of the variables. A linear programming problem is an optimization problem with a linear objective function as well as a set of linear constraints. An integer linear programming problem is a linear programming problem where the variables are required to have integer values. A nonlinear optimization problem has one or more of the constraints nonlinear and/or the objective function is nonlinear.
Here are some examples of statements that specify objective functions and constraints:
Linear objective function: Maximize Z = 3X1 + 4X2 + 5.7X3
Linear equality constraint: 13X1 - 4.5X2 + 7X3 = 22
Linear inequality constraint: 3.6X1 + 8.4X2 - 1.7X3 ≤ 10.9
Nonlinear objective function: Minimize Z = 5X² + 7XY + Y²
Nonlinear equality constraint: 4X + 3XY + 7Y + 2Y² = 37.6
Nonlinear inequality constraint: 4.8X + 5.3XY + 6.2Y² ≥ 34.56
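As a quick check of the notation above, the small helper below (our own illustration; the function name is not from the book) evaluates the two nonlinear sample constraints at a point (X, Y):

```cpp
#include <cassert>
#include <cmath>

// Checks the two nonlinear sample constraints from the text at a point (X, Y):
//   equality:   4X + 3XY + 7Y + 2Y^2 = 37.6   (within a tolerance)
//   inequality: 4.8X + 5.3XY + 6.2Y^2 >= 34.56
bool satisfiesNonlinearConstraints(double X, double Y, double tol = 1e-6) {
    double eq   = 4.0*X + 3.0*X*Y + 7.0*Y + 2.0*Y*Y;
    double ineq = 4.8*X + 5.3*X*Y + 6.2*Y*Y;
    return std::fabs(eq - 37.6) < tol && ineq >= 34.56;
}
```

For example, the point (1.56, 2.0) satisfies both constraints, while the origin satisfies neither.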
An example of a linear programming problem is the blending problem. One instance is that of making different flavors of ice cream by blending different ingredients, such as sugar and a variety of nuts, to produce different amounts of ice cream of many flavors. The objective is to find the amounts of the individual flavors of ice cream to produce, with given supplies of all the ingredients, so that the total profit is maximized.
An example of a nonlinear optimization problem is the quadratic programming problem. The constraints are all linear, but the objective function is a quadratic form: an expression in two variables in which the exponents in each term sum to 2.
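In matrix terms, a quadratic form can be written as xᵀQx for a symmetric matrix Q. A minimal sketch (our own; the book does not use this function) evaluates the earlier sample objective 5X² + 7XY + Y² this way, splitting the cross term 7XY symmetrically:

```cpp
#include <cassert>
#include <cmath>

// Evaluates the quadratic form 5X^2 + 7XY + Y^2 as v^T Q v, where
// Q = [[5, 3.5], [3.5, 1]] carries half of the cross term 7XY on each side.
double quadraticForm(double x, double y) {
    const double Q[2][2] = {{5.0, 3.5}, {3.5, 1.0}};
    const double v[2] = {x, y};
    double result = 0.0;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            result += v[i] * Q[i][j] * v[j];
    return result;
}
```

At (1, 1) the form evaluates to 5 + 7 + 1 = 13, matching direct substitution.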
An example of a quadratic programming problem is a simple investment strategy problem that can be stated as follows. You want to invest a certain amount in a growth stock and in a speculative stock, achieving at least a 25% return. You want to limit your investment in the speculative stock to no more than 40% of the total investment. You figure that the expected return on the growth stock is 18%, while that on the speculative stock is 38%. Suppose G and S represent the proportion of your investment in the growth stock and the speculative stock, respectively. So far, you have specified the following linear constraints:
G + S = 1
This says the proportions add up to 1.
S ≤ 0.4
This says the proportion invested in speculative stock is no more than 40%.
1.18G + 1.38S ≥ 1.25
This says the expected return from these investments should be at least 25%.
Now the objective function needs to be specified. You have already specified the expected return you want to achieve. Suppose that you are a conservative investor and want to minimize the variance of the return. The variance works out to be a quadratic form. Suppose it is determined to be:
2G² + 3S² - GS
This quadratic form, which is a function of G and S, is your objective function that you want to minimize subject to the (linear) constraints previously stated.
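Because G = 1 - S, the only free variable here is S, so the problem can be solved by a brute-force scan over the feasible range. The sketch below (our own illustration, not a method from the book) does exactly that:

```cpp
#include <cassert>
#include <cmath>

// Brute-force scan over the speculative-stock proportion S in steps of 0.001.
// Feasible region: G = 1 - S, S <= 0.4, and 1.18G + 1.38S >= 1.25.
// Objective: minimize the variance 2G^2 + 3S^2 - GS.
// Returns the best S found (or -1 if nothing is feasible).
double bestSpeculativeShare() {
    double bestS = -1.0, bestVar = 1e9;
    for (int i = 0; i <= 400; ++i) {            // S from 0.000 to 0.400
        double S = i / 1000.0;
        double G = 1.0 - S;
        if (1.18*G + 1.38*S < 1.25) continue;   // return constraint violated
        double var = 2.0*G*G + 3.0*S*S - G*S;
        if (var < bestVar) { bestVar = var; bestS = S; }
    }
    return bestS;
}
```

The return constraint reduces to S ≥ 0.35, and the variance decreases across the feasible range [0.35, 0.4], so the scan lands at S = 0.4: put 40% in the speculative stock and 60% in the growth stock.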
Neural Networks for Optimization Problems
It is possible to construct a neural network to find the values of the variables that correspond to an optimum value of the objective function of a problem. For example, the neural networks that use the Widrow-Hoff learning rule find the minimum value of the error function using the least mean squared error. Neural networks such as the feedforward backpropagation network use the steepest descent method for this purpose and find a local minimum of the error, if not the global minimum. On the other hand, the Boltzmann machine or the Cauchy machine uses statistical methods and probabilities and achieves success in finding the global minimum of an error function. So we have an idea of how to go about using a neural network to find an optimum value of a function. The question remains as to how the constraints of an optimization problem should be treated in a neural network operation. A good example in answer to this question is the traveling salesperson problem. Let's discuss this example next.
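As a toy illustration of the steepest descent method mentioned above (our own sketch, not tied to any particular network), the routine below walks downhill on a simple quadratic error surface whose minimum is known in advance:

```cpp
#include <cassert>
#include <cmath>

// Steepest-descent sketch: minimize f(x, y) = (x - 3)^2 + (y + 1)^2,
// whose unique minimum lies at (3, -1). The gradient is (2(x-3), 2(y+1)),
// and each step moves against it, scaled by the learning rate lr.
void steepestDescent(double& x, double& y, double lr = 0.1, int steps = 200) {
    for (int i = 0; i < steps; ++i) {
        double gx = 2.0 * (x - 3.0);
        double gy = 2.0 * (y + 1.0);
        x -= lr * gx;   // move downhill in x
        y -= lr * gy;   // move downhill in y
    }
}
```

On this convex surface the local minimum steepest descent finds is also the global one; on the rugged error surfaces of real networks, that guarantee disappears, which is exactly why statistical methods like the Boltzmann machine are of interest.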