Artificial Neural Network Dissertations (10)

A very high-performance neural network system architecture using grouped weight quantization

Karaali, Orhan. Florida Atlantic University, ProQuest Dissertations Publishing, 1989. 9013767.


ABSTRACT

Recently, Artificial Neural Network (ANN) computing systems have become one of the most active and challenging areas of information processing. The successes of experimental neural computing systems in pattern recognition, process control, robotics, signal processing, expert systems, and functional analysis are most promising. However, due to a number of serious problems, only small, fully connected neural networks have been implemented to run in real time.

The primary problem is that a neural network's execution time grows exponentially with its size, because the number of multiplications and interconnections grows exponentially; this makes it extremely difficult to implement medium- or large-scale ANNs in hardware. The Modular Grouped Weight Quantization (MGWQ) architecture presented in this dissertation is an ANN design that ensures the number of multiplications and interconnections grows only linearly with network size.
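The abstract does not spell out the MGWQ mechanics, but the arithmetic benefit of grouping quantized weights can be sketched: if the weights feeding a neuron are restricted to a small shared set of quantization levels, the inputs sharing a level can be summed first, so the number of multiplications tracks the number of levels rather than the number of weights. A minimal illustrative sketch in Python (hypothetical names, not the dissertation's algorithm):

```python
import numpy as np

def grouped_quantized_dot(x, w_indices, levels):
    """Dot product where every weight is one of a few shared quantization
    levels. Inputs sharing a level are summed first, so the number of
    multiplications is len(levels), not len(x)."""
    sums = np.zeros(len(levels))
    for xi, li in zip(x, w_indices):
        sums[li] += xi                       # additions only
    return float(np.dot(sums, levels))       # one multiply per level

# Toy check: 8 inputs, weights quantized to 4 shared levels.
rng = np.random.default_rng(0)
levels = np.array([-0.5, -0.1, 0.1, 0.5])
w_indices = rng.integers(0, len(levels), size=8)   # level index of each weight
x = rng.normal(size=8)
assert np.isclose(grouped_quantized_dot(x, w_indices, levels),
                  np.dot(x, levels[w_indices]))
```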

The secondary problems concern scale-up capability, modularity, memory requirements, flexibility, performance, fault tolerance, technological feasibility, and cost. The MGWQ architecture resolves these problems as well.

In this dissertation, neural network characteristics and existing implementations in different technologies are described, their shortcomings and problems are addressed, and solutions to these problems using the MGWQ approach are illustrated. Theoretical and experimental justifications for MGWQ are presented, along with performance calculations for the architecture.

Mappings of the most popular neural network models onto the proposed architecture are demonstrated, and system-level architectural considerations are discussed.

The proposed ANN computing system is a flexible and realistic way to implement large, fully connected networks, and it offers very high performance using currently available technology. ANN performance is measured in interconnections per second (IC/S); the performance of the proposed system ranges from 10^11 to 10^14 IC/S. In comparison, SAIC's DELTA II ANN system achieves 10^7 IC/S, and a Cray X-MP achieves 5 × 10^7 IC/S.
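For scale, a quick back-of-the-envelope comparison of the throughput figures quoted above (the numbers are taken directly from the abstract):

```python
# Throughputs in interconnections per second (IC/S), as quoted in the abstract.
mgwq_low, mgwq_high = 1e11, 1e14   # proposed MGWQ system (reported range)
delta_ii = 1e7                     # SAIC DELTA II
cray_xmp = 5e7                     # Cray X-MP

print(f"vs. DELTA II:  {mgwq_low / delta_ii:.0e}x to {mgwq_high / delta_ii:.0e}x")
print(f"vs. Cray X-MP: {mgwq_low / cray_xmp:.0e}x to {mgwq_high / cray_xmp:.0e}x")
```

Even the low end of the reported range is four orders of magnitude above the DELTA II figure.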

 

DETAILS

 

Subject:

Electrical engineering; Computer science; Artificial intelligence

 

Classification:

0544: Electrical engineering; 0984: Computer science; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences

 

Number of pages:

298

 

Publication year:

1989

 

Degree date:

1989

 

School code:

0119

 

Source:

DAI-B 50/12, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Shankar, Ravi; Gluch, David P.

 

University/institution:

Florida Atlantic University

 

University location:

United States -- Florida

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9013767

 

ProQuest document ID:

303777622

 

Document URL:

https://search.proquest.com/docview/303777622?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Thermodynamic neural networks for the approximation of combinatorially hard packing problems

Hellstrom, Benjamin John. University of Maryland, College Park, ProQuest Dissertations Publishing, 1990. 9110306.


ABSTRACT

Thermodynamic neural networks for approximating solutions to combinatorial optimization problems were introduced in 1985 by J. J. Hopfield and D. W. Tank. Their method requires specifying a mapping of the problem's variable space onto a set of 0-1 variables, or neural activation states. The problem objective, as well as constraint-satisfaction objectives, must then be reduced to a Hamiltonian energy function over those activation states. The difficulty of satisfying these requirements restricts the applicability of Hopfield and Tank's method to simple problems. Furthermore, the resulting networks necessarily have symmetric, low-order synapses and functionally homogeneous visible neurons.
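To make the recipe concrete, here is a minimal sketch of Hopfield and Tank's mapping for a toy constraint (an illustration of the general method, not an example from this dissertation): requiring exactly k of n binary neurons to be active. Expanding the penalty (x_1 + ... + x_n - k)^2 over 0-1 states, using x_i^2 = x_i, gives a quadratic energy E = -1/2 x^T W x - b^T x (plus a constant) with symmetric, zero-diagonal W, which asynchronous threshold updates monotonically decrease:

```python
import numpy as np

def hopfield_k_of_n(n, k, sweeps=50, seed=0):
    """Asynchronous Hopfield descent on E(x) = (sum(x) - k)^2, rewritten as
    E = -1/2 x^T W x - b^T x + const with symmetric, zero-diagonal W."""
    W = -2.0 * (np.ones((n, n)) - np.eye(n))  # from the x_i x_j cross terms
    b = (2.0 * k - 1.0) * np.ones(n)          # from the linear terms
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n).astype(float)
    for _ in range(sweeps):
        for i in rng.permutation(n):          # sequential asynchronous updates
            x[i] = 1.0 if (W[i] @ x + b[i]) > 0 else 0.0
    return x

x = hopfield_k_of_n(n=10, k=3)
print(int(x.sum()), "neurons active (target 3)")
```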

We introduce a program-constructive method for deriving efficient asymmetric neural networks for a class of difficult combinatorial optimization problems. Our method promotes the definition of necessary auxiliary hidden neurons as well as higher-order synapses. It is based on the decomposition of arbitrary discrete gradients into piecewise-linear functions, each expressed as the output activation of a distinct hidden unit. The summed outputs of the set of hidden units reconstitute the complete gradient, which the network uses to perform stochastically smoothed gradient descent.
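The decomposition idea can be glossed in one dimension (a loose illustration only; the dissertation's construction handles arbitrary discrete gradients): a piecewise-linear function is a sum of ReLU-shaped pieces, one per breakpoint, so each piece can be produced by one hidden unit and the summed outputs reconstitute the whole function.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# A piecewise-linear "gradient" with breakpoints t and per-segment slopes s.
t = np.array([-1.0, 0.0, 1.0])
s = np.array([0.5, -1.5, 2.0, -0.5])   # slopes of the four segments

def summed_hidden_units(v):
    """Each slope change at a breakpoint is one hidden unit; the summed
    outputs (plus the leading linear term) reconstitute the function."""
    out = s[0] * v
    for j, tj in enumerate(t):
        out += (s[j + 1] - s[j]) * relu(v - tj)
    return out

print(summed_hidden_units(np.array([-2.0, -0.5, 0.5, 2.0])))
```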

We address a subset of resource-allocation problems known as packing problems. Efficient analog neural networks are derived for knapsack, multiprocessor-scheduling, bin-packing, and bin-minimization problems. The derived networks are simulated under a variety of well-known update modes, including sequential asynchronous, parallel asynchronous, and parallel synchronous updating. To improve annealing efficiency, network phase orderings are examined. A variable chain-length simulated-annealing technique based on the detection of network stabilization is developed; using a stabilization criterion to terminate annealing chains is found to promote long chains during phase orderings. We show that improved results can be attained by balancing frustrated constraints over all temperatures.
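A hedged sketch of the variable chain-length idea, assuming a generic binary-state network with a user-supplied energy function (all names and parameters hypothetical, not the dissertation's code): each fixed-temperature chain runs until no flip has been accepted for a window of consecutive sweeps, with a cap as a safeguard at high temperatures.

```python
import numpy as np

def anneal(energy, n, t0=2.0, cooling=0.9, t_min=0.05,
           window=5, max_sweeps=200, seed=0):
    """Simulated annealing over x in {0,1}^n with variable chain lengths:
    a chain ends once the state has been stable for `window` sweeps,
    rather than after a fixed number of steps."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n).astype(float)
    temp = t0
    while temp > t_min:
        stable, sweeps = 0, 0
        while stable < window and sweeps < max_sweeps:
            changed = False
            for i in rng.permutation(n):
                trial = x.copy()
                trial[i] = 1.0 - trial[i]          # propose a single flip
                d_e = energy(trial) - energy(x)
                if d_e < 0 or rng.random() < np.exp(-d_e / temp):
                    x, changed = trial, True       # Metropolis acceptance
            stable = 0 if changed else stable + 1  # stabilization detector
            sweeps += 1
        temp *= cooling                            # cool, start the next chain
    return x

# Toy usage: drive exactly 3 of 10 units on.
x = anneal(lambda s: (s.sum() - 3.0) ** 2, n=10)
print(x, "final energy:", (x.sum() - 3.0) ** 2)
```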

Knapsack and multiprocessor-scheduling networks of 400 to 2,400 neurons are shown to find very good, and often exact, solutions. The inherently sequential nature of bin-packing is shown to limit the efficiency of highly parallel neural network algorithms; sequential bin-packing networks and bin-minimization networks are proposed as viable alternatives.

 

DETAILS

 

Subject:

Computer science; Operations research; Artificial intelligence

 

Classification:

0984: Computer science; 0796: Operations research; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences; neural networks

 

Number of pages:

329

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0117

 

Source:

DAI-B 51/11, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Kanal, Laveen N.

 

University/institution:

University of Maryland, College Park

 

University location:

United States -- Maryland

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9110306

 

ProQuest document ID:

303845753

 

Document URL:

https://search.proquest.com/docview/303845753?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Computational neural learning formalisms for perceptual manipulation: Singularity interaction dynamics model

Gulati, Sandeep. Louisiana State University and Agricultural & Mechanical College, ProQuest Dissertations Publishing, 1990. 9123194.


ABSTRACT

This dissertation addresses a fundamental problem in computational AI: developing a class of massively parallel neural algorithms for learning complex nonlinear transformations from representative exemplars, robustly and in real time. Such a capability is at the core of many real-life problems in robotics, signal processing, and control. The concepts of terminal attractors from dynamical systems theory and adjoint operators from nonlinear sensitivity theory are exploited to provide a firm mathematical foundation for learning such mappings with dynamical neural networks, while achieving a dramatic reduction in overall computational cost. Further, we derive an efficient methodology for handling a multiplicity of application-specific constraints at run time that avoids additional retraining or disturbance of the "learned" network's synaptic structure.
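The terminal-attractor idea itself is easy to demonstrate (a generic textbook illustration, not the dissertation's learning formalism): dynamics with a fractional-power restoring force, such as dx/dt = -x^(1/3), reach equilibrium in finite time (here t = 1.5, since x^(2/3) decreases at the constant rate 2/3), whereas dx/dt = -x only approaches it exponentially.

```python
import numpy as np

def integrate(f, x0, dt=1e-3, t_max=5.0):
    """Forward-Euler integration of dx/dt = f(x), returning the trajectory."""
    x, traj = x0, []
    for _ in range(int(t_max / dt)):
        traj.append(x)
        x = x + dt * f(x)
    return np.array(traj)

ordinary = integrate(lambda x: -x, x0=1.0)           # x(t) = exp(-t), never 0
terminal = integrate(lambda x: -np.cbrt(x), x0=1.0)  # reaches 0 at t = 1.5

for name, traj in [("ordinary", ordinary), ("terminal", terminal)]:
    hits = np.nonzero(np.abs(traj) < 1e-3)[0]
    when = f"t = {hits[0] * 1e-3:.3f}" if hits.size else "never (within 5 s)"
    print(f"{name}: |x| < 1e-3 first at {when}")
```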

The scalability of the proposed theoretical models to large-scale embodiments in neural hardware is analyzed. Neurodynamical parameters, e.g., decay constants and response gains, are systematically analyzed to understand their implications for network scalability, convergence, throughput, and fault tolerance, during both concurrent simulations and implementation in concurrently asynchronous VLSI, optical, and opto-electronic hardware. Dynamical diagnostics, e.g., Lyapunov exponents, are used to formally characterize the widely observed dynamical instability in neural networks as "emergent computational chaos". Using contracting operators and nonconstructive theorems from fixed-point theory, we rigorously derive necessary and sufficient conditions for eliminating all oscillatory and chaotic behavior in additive-type networks. Extensive benchmarking experiments with arbitrarily large neural networks (over 100 million interconnects) verify the methodological robustness of our network "conditioning" formalisms.
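For readers unfamiliar with the diagnostic, the largest Lyapunov exponent measures the average exponential rate at which nearby trajectories diverge; a positive value signals exactly the kind of chaos described above. A standard textbook estimate for a one-dimensional map (illustrative only, unrelated to the dissertation's networks):

```python
import numpy as np

def lyapunov(f, df, x0=0.4, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the 1-D map x -> f(x),
    estimated as the trajectory average of log|f'(x)|."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(df(x)))
        x = f(x)
    return acc / n

r = 3.9  # logistic map in its chaotic regime; expect a positive exponent ~0.5
print(lyapunov(lambda x: r * x * (1 - x), lambda x: r * (1 - 2 * x)))
```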

Finally, we provide insight into exploiting the proposed repertoire of neural learning formalisms for a fundamental problem in robotics: manipulation controller design for robots operating in unpredictable environments. Using some recent results in task analysis and dynamic modeling, we develop the "Perceptual Manipulation Architecture". The architecture, conceptualized within a perceptual framework, is shown to go well beyond state-of-the-art model-directed robotics. For a stronger physical interpretation of its implications, our discussion is embedded in the context of a novel systems concept for automated space operations.

 

DETAILS

 

Subject:

Computer science; Mathematics; Artificial intelligence

 

Classification:

0984: Computer science; 0405: Mathematics; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences; Pure sciences; constrained optimization; dynamical neural networks

 

Number of pages:

257

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0107

 

Source:

DAI-B 52/03, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Iyengar, Sitharama

 

University/institution:

Louisiana State University and Agricultural & Mechanical College

 

University location:

United States -- Louisiana

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9123194

 

ProQuest document ID:

303885101

 

Document URL:

https://search.proquest.com/docview/303885101?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global
