Artificial Neural Network Theses (13)

A self-organizing neural network for representing sequences

Tolat, Viral Vipin. Stanford University, ProQuest Dissertations Publishing, 1989. 9011589.

ProQuest document link


ABSTRACT

A neural network model called the "representation network" is developed. The network is different from other networks because information is represented by the structure of the network as well as by the weights. Through an unsupervised learning method, the network forms a homeomorphic (topology-preserving) mapping of the input pattern space. Absolute pattern information is represented by the weights of the network. Relational pattern information, e.g., pattern topology, is represented by the structure of the network. Arbitrary spatial topologies can be learned given an adequate similarity measure. The ability of the network to correctly form a map is proven analytically through an analysis in which the behavior of the network is described by a system of energy equations.
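
The representation network's exact update rules are not reproduced in this record; as a rough stand-in, the well-known Kohonen self-organizing map illustrates the same split between absolute information (weights) and relational information (structure). A minimal sketch in Python, with all parameters chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, T = 20, 2, 10_000
W = rng.random((n_nodes, dim))      # absolute pattern info: the weights
chain = np.arange(n_nodes)          # relational info: a fixed 1-D topology

for t in range(T):
    x = rng.random(dim)                              # a 2-D input pattern
    lr = 0.5 * (1 - t / T)                           # decaying learning rate
    sigma = max(5.0 * (1 - t / T), 0.5)              # shrinking neighborhood
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    h = np.exp(-((chain - winner) ** 2) / (2 * sigma**2))
    W += lr * h[:, None] * (x - W)   # pull winner and its chain neighbors

# After training, adjacent nodes hold nearby patterns: the learned map
# approximately preserves the topology of the input space.
```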

For sequences of patterns, the topology of the patterns is determined by their temporal order; the similarity measure is temporal. Extensions to the network are presented that allow it to form maps of an arbitrary sequence of n-dimensional patterns. Once such a map is formed, the output of the network can be used for sequence recognition and classification. An identification network model is developed for this purpose. The identification network examines the output of the representation network and computes a score that reflects the level of match between the input sequence and the learned sequence. A high score results if the input sequence is the learned sequence. When this model was tested on speaker-dependent isolated digit recognition, close to 100% recognition was obtained.
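
The identification network's scoring rule is not specified in the abstract. As a hypothetical stand-in, one could score how faithfully the winner-node trajectory produced by an input sequence reproduces the learned node order, e.g. with a longest-common-subsequence ratio:

```python
def match_score(observed, learned):
    """Fraction of the learned winner-node order reproduced, in order,
    by the observed trajectory (longest-common-subsequence ratio)."""
    m, n = len(observed), len(learned)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            lcs[i + 1][j + 1] = (lcs[i][j] + 1 if observed[i] == learned[j]
                                 else max(lcs[i][j + 1], lcs[i + 1][j]))
    return lcs[m][n] / n    # 1.0 means the learned order was seen exactly

# Hypothetical winner-node trajectories from a representation network:
print(match_score([3, 3, 4, 5, 7], [3, 4, 5, 6, 7]))  # 0.8
```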

 Because the topology of the input patterns is preserved by the output of the network, the output of the network can be used for nonlinear mapping with a minimal amount of supervised training. A mapping network is developed for this purpose. Supervised training is used to learn a small number of exemplar mappings. Unsupervised learning is then used to train the weights so that unknown mappings are interpolated from the exemplar mappings. Unsupervised learning algorithms are presented for linear and cubic interpolation. Example mapping problems are used to demonstrate the ability of this approach.
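
A minimal sketch of the linear-interpolation case, assuming a 1-D map whose node positions are paired with scalar outputs; the exemplar values below are invented for illustration:

```python
import numpy as np

# A few exemplar mappings learned with supervision: (map position, output).
exemplar_pos = np.array([0.0, 6.0, 13.0, 19.0])
exemplar_out = np.array([0.0, 1.5, 0.5, 2.0])

def mapped_output(pos):
    """Interpolate an unknown mapping linearly from the exemplars."""
    return np.interp(pos, exemplar_pos, exemplar_out)

print(mapped_output(9.5))  # 1.0, interpolated between the exemplars at 6 and 13
```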

DETAILS

 

Subject:

Electrical engineering; Computer science; Artificial intelligence

 

Classification:

0544: Electrical engineering; 0984: Computer science; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences

 

Number of pages:

167

 

Publication year:

1989

 

Degree date:

1989

 

School code:

0212

 

Source:

DAI-B 50/12, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Peterson, Allen M.

 

University/institution:

Stanford University

 

University location:

United States -- California

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9011589

 

ProQuest document ID:

303723351

 

Document URL:

https://search.proquest.com/docview/303723351?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global


A very high-performance neural network system architecture using grouped weight quantization

Karaali, Orhan. Florida Atlantic University, ProQuest Dissertations Publishing, 1989. 9013767.

ProQuest document link


ABSTRACT

Recently, Artificial Neural Network (ANN) computing systems have become one of the most active and challenging areas of information processing. The successes of experimental neural computing systems in the fields of pattern recognition, process control, robotics, signal processing, expert systems, and functional analysis are most promising. However, due to a number of serious problems, only small, fully connected neural networks have been implemented to run in real time.

The primary problem is that the execution time of neural networks increases exponentially as the network's size increases. This is because of the exponential increase in the number of multiplications and interconnections, which makes it extremely difficult to implement medium- or large-scale ANNs in hardware. The Modular Grouped Weight Quantization (MGWQ) architecture presented in this dissertation is an ANN design that ensures the number of multiplications and interconnections increases linearly as the network's size increases.
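
The record does not spell out the MGWQ arithmetic, but the general benefit of grouped weight quantization can be sketched: if a group's weights share K quantized levels, the group's dot product needs only K multiplications regardless of group size, since inputs can first be accumulated per level using additions alone. A sketch under that assumption (not the dissertation's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
group_size, K = 1024, 16                  # 1024 weights share 16 levels
levels = np.linspace(-1.0, 1.0, K)        # the group's quantized weight values
codes = rng.integers(0, K, group_size)    # each weight -> index of its level
x = rng.random(group_size)                # the group's inputs

sums = np.zeros(K)
np.add.at(sums, codes, x)                 # bucket inputs by code: additions only
y = np.dot(levels, sums)                  # then just K multiplications

assert np.isclose(y, np.dot(levels[codes], x))  # matches the full dot product
```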

 The secondary problems are related to scale-up capability, modularity, memory requirements, flexibility, performance, fault tolerance, technological feasibility, and cost. The MGWQ architecture also resolves these problems.

 In this dissertation, neural network characteristics and existing implementations using different technologies are described. Their shortcomings and problems are addressed, and solutions to these problems using the MGWQ approach are illustrated. The theoretical and experimental justifications for MGWQ are presented. Performance calculations for the MGWQ architecture are given.

 The mappings of the most popular neural network models to the proposed architecture are demonstrated. System level architecture considerations are discussed.

The proposed ANN computing system is a flexible and realistic way to implement large fully connected networks. It offers very high performance using currently available technology. The performance of ANNs is measured in interconnections per second (IC/S); the performance of the proposed system ranges from $10^{11}$ to $10^{14}$ IC/S. In comparison, SAIC's DELTA II ANN system achieves $10^7$ IC/S, and a Cray X-MP achieves $5 \times 10^7$ IC/S.
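
Taking the quoted figures at face value, the implied speedups are straightforward to check:

```python
mgwq_low, mgwq_high = 1e11, 1e14   # proposed system, IC/S (from the abstract)
delta_ii, cray_xmp = 1e7, 5e7      # reference systems, IC/S

print(mgwq_low / delta_ii)   # 10,000x over DELTA II at the low end
print(mgwq_low / cray_xmp)   # 2,000x over a Cray X-MP at the low end
```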

DETAILS

 

Subject:

Electrical engineering; Computer science; Artificial intelligence

 

Classification:

0544: Electrical engineering; 0984: Computer science; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences

 

Number of pages:

298

 

Publication year:

1989

 

Degree date:

1989

 

School code:

0119

 

Source:

DAI-B 50/12, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Shankar, Ravi; Gluch, David P.

 

University/institution:

Florida Atlantic University

 

University location:

United States -- Florida

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9013767

 

ProQuest document ID:

303777622

 

Document URL:

https://search.proquest.com/docview/303777622?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global


Thermodynamic neural networks for the approximation of combinatorially hard packing problems

Hellstrom, Benjamin John. University of Maryland, College Park, ProQuest Dissertations Publishing, 1990. 9110306.

ProQuest document link


ABSTRACT

Thermodynamic neural networks for approximating solutions to combinatorial optimization problems were introduced in 1985 by J. J. Hopfield and D. W. Tank. Hopfield and Tank's method requires the specification of a mapping of the problem variable space to a set of 0-1 variables or neural activation states. The problem objective, as well as constraint-satisfaction objectives, must then be reduced to a Hamiltonian energy function in the activation states. The difficulty of satisfying these requirements restricts the applicability of Hopfield and Tank's method to simple problems. Furthermore, the resulting networks necessarily have symmetric, low-order synapses, and functionally homogeneous visible neurons.

We introduce a program-constructive method for deriving efficient asymmetric neural networks for a class of difficult combinatorial optimization problems. Our method promotes the definition of necessary auxiliary hidden neurons as well as higher-order synapses. The method is based on the decomposition of arbitrary discrete gradients into piecewise linear functions. Each function is expressed as the output activation of a distinct hidden unit. The summed outputs of the set of hidden units reconstitute the complete gradient, which the network uses to perform stochastically-smoothed gradient descent.
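
As a rough illustration of the decomposition idea (not the dissertation's exact construction), a discrete staircase gradient can be rebuilt from ramp-shaped hidden-unit activations whose summed outputs then drive stochastically-smoothed descent:

```python
import numpy as np

def ramp(u, a, b):
    """Piecewise-linear hidden-unit activation: 0 below a, 1 above b."""
    return np.clip((u - a) / (b - a), 0.0, 1.0)

def reconstructed_gradient(u):
    """Sum of three hidden units reconstituting the staircase floor(u)
    on [0, 4); narrow ramps make each unit nearly a step function."""
    return sum(ramp(u, k, k + 0.01) for k in (1.0, 2.0, 3.0))

rng = np.random.default_rng(2)
u, temperature = 2.7, 0.5
for step in range(200):                 # stochastically-smoothed descent
    u -= 0.05 * (reconstructed_gradient(u) + temperature * rng.normal())
    temperature *= 0.98                 # cooling schedule
```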

We address a subset of resource allocation problems known as packing problems. Efficient analog neural networks are derived for knapsack, multiprocessor scheduling, bin-packing, and bin-minimization problems. The derived networks are simulated under a variety of well-known update modes, including sequential asynchronous, parallel asynchronous, and parallel synchronous updating. To improve annealing efficiency, network phase orderings are examined. A variable chain-length simulated annealing technique based on the detection of network stabilization is developed. The use of a stabilization criterion for the termination of annealing chains is found to promote long chains during phase orderings. We show that improved results can be attained by balancing frustrated constraints over all temperatures.
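
A sketch of a stabilization-terminated annealing chain for a generic 0-1 network, assuming sequential asynchronous updates; the `energy` callable and all parameters are placeholders, not the dissertation's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

def anneal_chain(state, energy, T, max_sweeps=200, patience=3):
    """Run one chain at temperature T; end early once the network has
    stabilized (no accepted flips for `patience` consecutive sweeps)."""
    quiet = 0
    for _ in range(max_sweeps):
        flipped = False
        for i in rng.permutation(len(state)):   # sequential asynchronous order
            trial = state.copy()
            trial[i] ^= 1                       # flip one 0-1 neuron
            dE = energy(trial) - energy(state)
            if dE < 0 or rng.random() < np.exp(-dE / T):
                state, flipped = trial, True
        quiet = 0 if flipped else quiet + 1
        if quiet >= patience:                   # stabilization criterion
            break
    return state
```

A full schedule would call anneal_chain once per temperature, so effective chain lengths vary with how quickly the network settles at each stage.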

 Knapsack and multiprocessor scheduling networks of 400-2400 neurons are shown to find very good, and often exact, solutions. The inherent sequential nature of bin-packing is shown to limit the efficiency of highly parallel neural network algorithms. Sequential bin-packing networks and bin-minimization networks are proposed as viable alternatives.

DETAILS

 

Subject:

Computer science; Operations research; Artificial intelligence

 

Classification:

0984: Computer science; 0796: Operations research; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences; neural networks

 

Number of pages:

329

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0117

 

Source:

DAI-B 51/11, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Kanal, Laveen N.

 

University/institution:

University of Maryland, College Park

 

University location:

United States -- Maryland

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9110306

 

ProQuest document ID:

303845753

 

Document URL:

https://search.proquest.com/docview/303845753?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global


Adaptive control of nonlinear systems using neural networks

Chen, Fu-Chuang. Michigan State University, ProQuest Dissertations Publishing, 1990. 9117798.

ProQuest document link


ABSTRACT (ENGLISH)

Layered neural networks are used in the adaptive control of nonlinear discrete-time systems. The control algorithm is described and two convergence results are provided. The first result shows that the plant output converges to zero in the adaptive regulation system. The second shows that the error between the plant output and the reference command converges to a bounded ball in the adaptive tracking system. Computer simulations at the end of the thesis verify these theoretical results.
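
The thesis's network architecture and proofs are not reproduced in this record. As an illustrative stand-in for neural adaptive tracking, a small radial-basis network can be trained online to cancel an unknown scalar plant nonlinearity so that the output follows a reference command; every function and constant below is an assumption made for the sketch:

```python
import numpy as np

f = lambda y: 0.5 * np.sin(y) + 0.3 * y       # unknown plant nonlinearity

centers = np.linspace(-3, 3, 25)              # radial-basis network for f-hat
phi = lambda y: np.exp(-(y - centers) ** 2)   # RBF features
w = np.zeros_like(centers)                    # adjustable weights

y, lr = 0.0, 0.1
for k in range(2000):
    r_next = np.sin(0.05 * k)        # reference command to track
    u = r_next - phi(y) @ w          # control: cancel f-hat, aim at reference
    y_next = f(y) + u                # plant: y[k+1] = f(y[k]) + u[k]
    e = y_next - r_next              # tracking error = f(y) - f-hat(y)
    w += lr * e * phi(y)             # online gradient (LMS) update
    y = y_next

print(abs(e))   # small once f-hat fits f on the visited region
```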

DETAILS

 

Subject:

Electrical engineering

 

Classification:

0544: Electrical engineering

 

Identifier / keyword:

Applied sciences

 

Number of pages:

113

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0128

 

Source:

DAI-B 52/01, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

University/institution:

Michigan State University

 
