Artificial Neural Network Theses (8)

Analysis and applications of general classes of dynamic neural networks

Farotimi, Oluseyi Oladele . Stanford University, ProQuest Dissertations Publishing, 1990. 9102260.


ABSTRACT

Research interest in neural networks has grown over the past few years in the hope that they may offer more efficient alternatives to conventional algorithms. Generally speaking, along the path from research to development two main issues arise, namely (i) qualitative behavior of the systems, and (ii) training rules. Qualitative analysis of first order networks has been carried out by Cohen and Grossberg, among others. Widrow, Rumelhart, Hopfield, and others have proposed various training rules for different network structures.

 In this thesis results pertaining to training as well as to qualitative analysis of neural networks are presented. In some cases they represent generalizations of existing results, and in other cases they introduce entirely novel concepts.

 First, a new technique for training neural networks based on optimal control theory is presented. This method is different from many existing rules in that it places very few constraints on the order or architecture of the network. The method yields an optimal weight matrix that is a function of time.

 The optimal control technique is applied to train the weights in an associative memory. For this problem, a common weight rule is the outer product rule, introduced by Hopfield. By considering special cases of the performance index, optimal rules for the problem are derived, and encouraging simulation results are presented.
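The outer-product rule mentioned above is standard and easy to sketch. The following is a minimal illustration of Hopfield-style associative recall with bipolar patterns; the pattern set, network size, and update loop are illustrative choices, not taken from the thesis:

```python
import numpy as np

def outer_product_weights(patterns):
    """Hebbian/outer-product weight matrix for bipolar (+/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no self-connections
    return W / n

def recall(W, x, steps=10):
    """Synchronous sign-update recall until a fixed point (or step limit)."""
    for _ in range(steps):
        nxt = np.sign(W @ x)
        nxt[nxt == 0] = 1         # break ties toward +1
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = outer_product_weights(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # pattern 0 with its last bit flipped
print(recall(W, noisy))                  # recovers the stored pattern 0
```

With only two nearly orthogonal patterns stored, one flipped bit is corrected in a single synchronous update.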

 Still addressing the issue of neural network training, the optimal control technique above is applied to determine the weights in a Probabilistic Cellular Automaton (PCA) for pattern recognition. Two ways of determining the weights in this structure are examined, and simulation results are presented for some simple examples.

 Finally, a qualitative analysis of a class of arbitrary order dynamic neural networks is presented. Such networks at steady state can give rise to polynomial threshold functions (Bruck 1989). Other applications for such networks include higher order associative memories and nonlinear programming. All these applications place certain constraints on the nature of the equilibrium points of the neural network. The analysis characterizes these equilibrium points.
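As a concrete illustration of a polynomial threshold function (an illustrative example, not one from the thesis), a single degree-2 threshold unit can compute XOR, which no first-order (linear) threshold unit can:

```python
def poly_threshold_xor(x1, x2):
    """Degree-2 polynomial threshold unit.

    For Boolean inputs, the polynomial x1 + x2 - 2*x1*x2 equals XOR,
    so thresholding it at 0.5 realizes a function that is not linearly
    separable and hence beyond any first-order threshold unit.
    """
    return 1 if x1 + x2 - 2 * x1 * x2 - 0.5 > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, poly_threshold_xor(a, b))
```

The truth table printed is exactly XOR: (0,0) and (1,1) map to 0, the mixed inputs to 1.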

 


DETAILS

 

Subject:

Electrical engineering; Computer science; Neurology

 

Classification:

0544: Electrical engineering; 0984: Computer science; 0317: Neurology

 

Identifier / keyword:

Applied sciences; Biological sciences; pattern recognition

 

Number of pages:

161

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0212

 

Source:

DAI-B 51/08, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Kailath, Thomas

 

University/institution:

Stanford University

 

University location:

United States -- California

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9102260

 

ProQuest document ID:

303871572

 

Document URL:

https://search.proquest.com/docview/303871572?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Multiprocessor realization of neural networks

Bennington, Robert William . University of Maryland, College Park, ProQuest Dissertations Publishing, 1990. 9030851.


ABSTRACT

This research provides a foundation for implementing neural networks on multiprocessor systems in order to increase simulation speeds and to accommodate more complex neural networks. The emphasis is on the use of affordable coarse-grain multiprocessors to implement commercially available neural network simulators currently being run on single-processor systems. A conceptual framework is presented based on the concepts of program decomposition, load balancing, communication overhead, and process synchronization. Four methodologies are then presented for optimizing execution times. A set of metrics is also introduced that makes it possible to measure the performance enhancements over single-processor systems and to analyze the effects of communication overhead, load balancing, and synchronization for various network decompositions.

 The application of these four methodologies to two neural network simulators on a multiprocessor computer system is discussed in detail. They are illustrated with practical implementations of networks ranging in size from six to twenty thousand connections. Two of the methodologies, the Pipeline and Hybrid approaches, exhibit speedups approaching the possible upper limits.
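The abstract does not spell out the metrics themselves, but the standard speedup and efficiency measures such analyses typically build on can be sketched as follows; the timing numbers are hypothetical:

```python
def speedup(t_serial, t_parallel):
    """Speedup of a p-processor run relative to a single processor."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Fraction of the ideal (linear) speedup actually achieved."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical timings: a 100 s single-processor simulation that runs
# in 30 s on 4 processors (communication overhead and load imbalance
# account for the gap below the ideal 4x).
t1, tp, p = 100.0, 30.0, 4
print(speedup(t1, tp))        # ~3.33 out of an ideal 4
print(efficiency(t1, tp, p))  # ~0.83
```

An efficiency near 1.0 indicates the decomposition keeps all processors busy; communication overhead and synchronization stalls push it lower.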

 The theoretical significance of this dissertation research is that it provides a basis for achieving efficient multiprocessor implementations of highly complex and massive neural networks. Traditionally, neural network research and development requires that a considerable amount of time be spent repeatedly evaluating and modifying network architectures and algorithms. As such, the engineering value of this dissertation is that the time required to repeatedly execute networks in research and development can be significantly reduced.

 


DETAILS

 

Subject:

Electrical engineering; Artificial intelligence

 

Classification:

0544: Electrical engineering; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences

 

Number of pages:

346

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0117

 

Source:

DAI-B 51/05, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

DeClaris, Nicholas

 

University/institution:

University of Maryland, College Park

 

University location:

United States -- Maryland

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9030851

 

ProQuest document ID:

303877649

 

Document URL:

https://search.proquest.com/docview/303877649?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Neural network based model reference adaptive control

Hoskins, Douglas Alan . University of Washington, ProQuest Dissertations Publishing, 1990. 9117954.


ABSTRACT

Artificial neural networks show promise as elements in control systems. One feature of traditional control system design that has been largely lacking in the work with neural networks to date is the analysis of the closed-loop stability of the controlled system. The principal aim of this work was to develop a control architecture for which such analyses can be made. Theoretical results were developed for the stability of systems using approximate controllers, such as artificial neural networks. These results are based on fairly strong assumptions concerning the ability of the artificial neural network to learn a model of the forward dynamics of the plant. The stability results are based on an application of the concepts of Liapunov stability to systems which are stable in the sense of Lagrange: that is, the stability is with respect to a region in the state space, rather than a point.

 These stability results motivate the selection of a model reference adaptive controller architecture. The controller incorporates a performance model, which provides a desired output trajectory in response to system inputs (commands), and a convergence model, which determines the desired perturbation dynamics of the true output about the model output trajectory. The control input is selected by on-line minimization of a cost function, based on a Liapunov-like function derived from the convergence model. This minimization procedure uses the neural network model of the system dynamics to predict the response of the system to a candidate control input.
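The control-selection step described above can be sketched in a simplified form. Here the forward model, the quadratic Liapunov-like cost, and the finite candidate set are all illustrative stand-ins for the thesis's actual components; in particular, the linear plant model below plays the role that a trained neural network model plays in the thesis:

```python
import numpy as np

def select_control(forward_model, x, x_ref, candidates):
    """Pick the candidate input whose predicted next state lies closest
    to the reference trajectory, scoring each with a quadratic
    Liapunov-like cost V(e) = e^T e on the tracking error."""
    def cost(u):
        e = forward_model(x, u) - x_ref
        return float(e @ e)
    return min(candidates, key=cost)

# Toy forward model x_next = 0.9*x + 0.5*u, assumed known here.
model = lambda x, u: 0.9 * x + 0.5 * u
x = np.array([1.0])
x_ref = np.array([0.0])
us = [np.array([u]) for u in (-2.0, -1.0, 0.0, 1.0)]
best = select_control(model, x, x_ref, us)
print(best)   # u = -2.0 drives the predicted state closest to the reference
```

In the thesis the minimization is performed on-line at each step; the exhaustive search over a small candidate grid here simply makes the idea concrete.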

 The controller described was applied to the adaptive control of a single-degree-of-freedom system, and to the control of the cart/inverted-pendulum problem. The addition of a dither signal to the calculated control input was shown to enhance the ability of error back-propagation to improve the artificial neural network model on-line.

 


DETAILS

 

Subject:

Aerospace materials; Artificial intelligence

 

Classification:

0538: Aerospace materials; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences; adaptive control

 

Number of pages:

258

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0250

 

Source:

DAI-B 52/02, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Vagners, Juris

 

University/institution:

University of Washington

 

University location:

United States -- Washington

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9117954

 

ProQuest document ID:

303905356

 

Document URL:

https://search.proquest.com/docview/303905356?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

A "neural-RISC" processor and parallel architecture for neural networks

Pacheco, Marco Aurelio Cavalcanti . University of London, University College London (United Kingdom), ProQuest Dissertations Publishing, 1991. 10608847.


ABSTRACT (ENGLISH)

This thesis investigates a RISC microprocessor and a parallel architecture designed to optimise the computation of neural network models. The "Neural-RISC" is a primitive transputer-like microprocessor for building a parallel MIMD (multiple instruction, multiple data) general-purpose neurocomputer. The thesis covers four major parts: the design of the Neural-RISC system architecture, the design of the Neural-RISC node architecture, the architecture simulation studies, and the VLSI implementation of a microchip prototype. The Neural-RISC system architecture consists of linear arrays of microprocessors connected in rings. Ri
