Artificial Neural Network Theses (6)

Computers closer to brain simulation

Coleman, Donald. The Tribune; San Diego, Calif. [San Diego, Calif.] 06 Apr 1987: A-15.

ABSTRACT

David E. Rumelhart, a professor of psychology who heads UCSD's Institute for Cognitive Science, said he is concerned that developers of neural networks not fall into the hyperbole of some AI promotions.

[Kevin J. Kinsella] noted that Hecht-Nielsen's neurocomputer is a systems product that simulates a neural network, as distinct from the actual neural network chip that Synaptics will produce.

[Robert Hecht-Nielsen] recently conducted through the Extension a week-long lecture on neural networks that was attended by nearly 75 scientists and engineers from across the country, representing such entities as General Dynamics, Logicon, Honeywell, Grumman, Teledyne, GA Technologies and several universities and branches of the Department of Defense.

 


DETAILS

 

Company / organization:

Name: Synaptics Inc; NAICS: 334119, 541512

 

Publication title:

The Tribune; San Diego, Calif.

 

Pages:

A-15

 

Number of pages:

0

 

Publication year:

1987

 

Publication date:

Apr 6, 1987

 

Column:

PORTFOLIO: NEURAL NETWORKS. One in a series.

 

Section:

BUSINESS

 

Publisher:

The San Diego Union-Tribune, LLC.

 

Place of publication:

San Diego, Calif.

 

Country of publication:

United States

 

Publication subject:

General Interest Periodicals--United States

 

Source type:

Newspapers

 

Language of publication:

English

 

Document type:

SERIES

 

ProQuest document ID:

422391422

 

Document URL:

https://search.proquest.com/docview/422391422?accountid=8243

 

Copyright:

Copyright Union-Tribune Publishing Co. Apr 6, 1987

 

Last updated:

2010-07-21

 

Database:

Global Newsstream

 

 

 

Sampling issues for classification using neural networks

Subramanian, Venkat. Kent State University, ProQuest Dissertations Publishing, 1990. 9114514.

ABSTRACT

Neural networks are information processing systems patterned after the highly interconnected neural systems of the human brain. They are trained through examples rather than being programmed. The currently popular training algorithm, called back propagation, suffers from serious drawbacks such as lack of robustness, slow convergence (especially for large networks), and lack of reliability. Since neural network training is an optimization problem, this dissertation establishes the suitability of proven nonlinear optimization methods and their superiority over back propagation.
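Since the abstract frames training as a general nonlinear optimization problem, here is a minimal sketch of that idea: a tiny one-hidden-layer network whose loss function is handed to SciPy's quasi-Newton BFGS routine instead of being minimized by plain back-propagation gradient steps. The network shape, data, and names are illustrative assumptions, not the dissertation's own setup.

# Minimal sketch: neural network training posed as a general
# nonlinear optimization problem and solved with BFGS rather
# than plain gradient-descent back propagation. Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))                  # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # toy binary targets

H = 4                                         # hidden nodes
sizes = [(H, 2), (H,), (1, H), (1,)]          # shapes of W1, b1, W2, b2

def unpack(theta):
    """Slice the flat parameter vector into weight/bias arrays."""
    parts, i = [], 0
    for s in sizes:
        n = int(np.prod(s))
        parts.append(theta[i:i + n].reshape(s))
        i += n
    return parts

def loss(theta):
    """Mean squared error of the network over the training set."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1.T + b1)                        # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2.T + b2)))      # sigmoid output
    return np.mean((out.ravel() - y) ** 2)

theta0 = rng.normal(scale=0.5, size=sum(int(np.prod(s)) for s in sizes))
result = minimize(loss, theta0, method="BFGS")        # quasi-Newton training
print("final training MSE:", result.fun)

For a small parameter vector like this, a quasi-Newton method typically converges in far fewer iterations than fixed-step gradient descent, which is the kind of advantage the dissertation investigates.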

Neural networks, with their ability to generalize and to tolerate noisy data, are particularly appropriate for pattern recognition problems. One prominent pattern recognition problem is the classification problem, which assigns an observation, based on a set of attributes, to one of a finite set of groups. For example, the classification problem arises in accepting or rejecting credit applications based on an applicant's personal and financial data. This dissertation compares the performance, in terms of correct classifications, of neural networks against that of traditional multidimensional discriminant analysis methods. The results show that even under the ideal assumptions for the traditional methods, neural networks compared favorably.
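The comparison described above can be sketched as follows: generate two-group data that satisfies the traditional method's ideal assumptions (multivariate normal groups with a common covariance matrix) and score a linear discriminant classifier against a small neural network. Modern scikit-learn stands in here for the dissertation's own tooling; all names and parameters are illustrative.

# Sketch of the comparison: a neural network versus linear
# discriminant analysis on data meeting the traditional method's
# ideal assumptions (Gaussian groups, shared covariance).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
cov = np.array([[1.0, 0.3], [0.3, 1.0]])                  # common covariance
X0 = rng.multivariate_normal([0.0, 0.0], cov, size=200)   # group 0
X1 = rng.multivariate_normal([1.5, 1.5], cov, size=200)   # group 1
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                    random_state=0).fit(X, y)

print("LDA accuracy:", lda.score(X, y))
print("Neural net accuracy:", net.score(X, y))

A fair version of the experiment would of course score both models on a held-out test sample; the sketch only shows the setup.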

 The question of how to design neural networks for classification is the next main topic of this dissertation. The issues investigated include sampling strategy and network architecture. Sampling strategy refers to the decisions on sample size, sample composition, and variance-covariance matrices of attributes. Network architecture refers to the number of nodes and the interconnections among the nodes in a network. A rigorous and extensive experiment was conducted to answer questions on these issues. Design principles based on these empirical results were established.
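The kind of factorial experiment described above might be sketched like this: vary the training sample size and the hidden-node count, and record classification accuracy for each cell of the grid. This is an illustrative assumption about the design, not a reproduction of the dissertation's experiment.

# Sketch of a design experiment: sample size and network
# architecture varied factorially, accuracy recorded per cell.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

def make_data(n):
    """Two Gaussian groups with equal sample composition (n per group)."""
    X0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
    X1 = rng.normal([1.5, 1.5], 1.0, size=(n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_test, y_test = make_data(500)               # fixed evaluation sample
for n in (25, 100, 400):                      # sample-size factor
    for hidden in (2, 8, 32):                 # architecture factor
        X_tr, y_tr = make_data(n)
        net = MLPClassifier(hidden_layer_sizes=(hidden,),
                            max_iter=2000, random_state=0).fit(X_tr, y_tr)
        print(f"n={n:4d} hidden={hidden:3d} "
              f"accuracy={net.score(X_test, y_test):.3f}")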

 


DETAILS

 

Subject:

Management; Information Systems; Computer science; Artificial intelligence

 

Classification:

0454: Management; 0723: Information Systems; 0984: Computer science; 0800: Artificial intelligence

 

Identifier / keyword:

Communication and the arts; Social sciences; Applied sciences

 

Number of pages:

131

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0101

 

Source:

DAI-A 51/12, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Hung, Ming S.

 

University/institution:

Kent State University

 

University location:

United States -- Ohio

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9114514

 

ProQuest document ID:

303876868

 

Document URL:

https://search.proquest.com/docview/303876868?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Multiprocessor realization of neural networks

Bennington, Robert William. University of Maryland, College Park, ProQuest Dissertations Publishing, 1990. 9030851.

ABSTRACT

This research provides a foundation for implementing neural networks on multiprocessor systems in order to increase simulation speeds and to accommodate more complex neural networks. The emphasis is on the use of affordable coarse-grain multiprocessors to implement commercially available neural network simulators currently being run on single-processor systems. A conceptual framework is presented based on the concepts of program decomposition, load balancing, communication overhead, and process synchronization. Four methodologies are then presented for optimizing execution times. A set of metrics is also introduced that makes it possible to measure the performance enhancements over single-processor systems and to analyze the effects of communication overhead, load balancing, and synchronization for various network decompositions.
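A minimal sketch of the program-decomposition idea follows, assuming Python's standard multiprocessing module as the coarse-grain substrate: the rows of one layer's weight matrix are split across worker processes, each repeatedly computing its slice of the matrix-vector product, and speedup and efficiency are computed as the kind of metrics the abstract describes. Everything here is illustrative; it is not the dissertation's simulator.

# Sketch: coarse-grain decomposition of a layer's weight matrix
# across worker processes, with speedup/efficiency metrics.
import time
import numpy as np
from multiprocessing import Pool

np.random.seed(0)      # seeded so spawn-based workers rebuild identical data
N = 4000
STEPS = 200            # repeated evaluations, standing in for simulation steps
W = np.random.rand(N, N)
x = np.random.rand(N)

def slice_product(rows):
    """One worker's share: its block of rows times the input, STEPS times."""
    lo, hi = rows
    for _ in range(STEPS):
        out = W[lo:hi] @ x
    return out

if __name__ == "__main__":
    t0 = time.perf_counter()
    for _ in range(STEPS):
        single = W @ x                       # single-processor baseline
    t_single = time.perf_counter() - t0

    p = 4                                    # worker processes
    blocks = [(i * N // p, (i + 1) * N // p) for i in range(p)]
    t0 = time.perf_counter()
    with Pool(p) as pool:
        parts = pool.map(slice_product, blocks)   # decomposed execution
    t_parallel = time.perf_counter() - t0

    assert np.allclose(np.concatenate(parts), single)
    speedup = t_single / t_parallel
    print(f"speedup = {speedup:.2f}, efficiency = {speedup / p:.2f}")

The measured parallel time includes process startup and the cost of shipping result slices back to the parent, so the gap between the observed speedup and the ideal value p reflects exactly the communication overhead and load-balancing effects the metrics are meant to expose.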

 The application of these four methodologies to two neural network simulators on a multiprocessor computer system is discussed in detail. They are illustrated with practical implementations of networks ranging in size from six to twenty thousand connections. Two of the methodologies, the Pipeline and Hybrid approaches, exhibit speedups approaching the possible upper limits.
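For orientation (this is the standard bound for a streaming pipeline, not a result quoted from the dissertation): a p-stage pipeline with equal stage times pushes n items through in n + p - 1 steps instead of the sequential n p steps, so its speedup is

$$S(p) \;=\; \frac{n\,p}{n + p - 1} \;\longrightarrow\; p \qquad (n \to \infty),$$

which is why measured speedups can only approach, never reach, the number of stages: the p - 1 steps spent filling the pipeline never fully amortize.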

The theoretical significance of this dissertation research is that it provides a basis for achieving efficient multiprocessor implementation of highly complex and massive neural networks. Traditionally, neural network research and development requires that a considerable amount of time be spent in repeatedly evaluating and modifying network architectures and algorithms. As such, the engineering value of this dissertation is that the time required to repeatedly execute networks in research and development can be significantly reduced.

 


DETAILS

 

Subject:

Electrical engineering; Artificial intelligence

 

Classification:

0544: Electrical engineering; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences

 

Number of pages:

346

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0117

 

Source:

DAI-B 51/05, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

DeClaris, Nicholas

 

University/institution:

University of Maryland, College Park

 

University location:

United States -- Maryland

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9030851

 

ProQuest document ID:

303877649

 

Document URL:

https://search.proquest.com/docview/303877649?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

First and higher-order optical neural networks based on parallel rank-one interconnections

Jeon, Ho-In. The University of Alabama in Huntsville, ProQuest Dissertations Publishing, 1990. 9113964.

ABSTRACT

In this dissertation, first- and higher-order optical neural networks based on parallel rank-one interconnections and time integration are proposed, and experimental results are demonstrated. Optical parallel rank-one interconnections based on singular value decomposition, combined with time integration, are useful in implementing real-time, programmable, and adaptive neural networks of arbitrary order. Input patterns can be as large as the outer-product memory matrix at the cost of a certain amount of time. This use of time integration, in fact, breaks down the barrier of the maximum space-bandwidth product of optical systems and consequently allows an efficient utilization of the spatial parallelism that optics offers. Implementing higher-order neural networks with rank-one interconnections requires even more time, along with increased storage capacity. However, approximations of the memory matrix based on principal component analysis allow a great amount of time to be saved without substantial degradation of the performance of the system.
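The mathematical core of this scheme can be sketched in a few lines, assuming a Hopfield-style outer-product memory matrix: singular value decomposition expresses the matrix as a sum of rank-one outer products (the terms an optical system would realize as parallel rank-one interconnections), and truncating to the leading components gives the principal-component approximation that trades fidelity for time. The sizes and names below are illustrative.

# Sketch: a weight matrix expressed as a sum of rank-one outer
# products via SVD, then truncated to its leading components.
import numpy as np

rng = np.random.default_rng(3)
patterns = rng.choice([-1.0, 1.0], size=(8, 64))   # 8 stored bipolar patterns
W = patterns.T @ patterns            # outer-product (Hopfield-style) memory

U, s, Vt = np.linalg.svd(W)          # W = sum_k s[k] * outer(U[:, k], Vt[k])

for r in (1, 2, 4, 8):               # keep only the top-r rank-one terms
    W_r = (U[:, :r] * s[:r]) @ Vt[:r]    # principal-component approximation
    err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
    print(f"rank {r}: relative reconstruction error {err:.3f}")

Because the stored-pattern matrix has rank at most eight here, the rank-8 reconstruction is exact; fewer terms mean fewer time-integrated optical passes at the cost of reconstruction error.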

As background, a brief history of artificial neural network models and their optical and electronic implementations is given. Specifically, extensive work on parallel $N^4$ weighted interconnections using page-oriented holographic memory is presented, and its application to direct-storage neural networks is described. After introducing the theoretical background on partially parallel interconnections, we first show, using computer simulation results, that the rank-one interconnection system has great potential for implementing neural networks. The proposed system is then verified through the error analysis and experimental results demonstrated in this dissertation.

Finally, the concept of rank-one interconnections has been applied to the implementation of higher-order neural networks. As a performance measure of the proposed system, the idea of the space-time product has been introduced for comparison with other methods of neural network implementation. The time saving made available by the approximation of the memory matrix shows great potential for practical use of the proposed system. It is also shown that any higher-order (including first-order) neural network can be implemented with a small, low-cost, and rugged optical system.

 


DETAILS

 

Subject:

Electrical engineering; Computer science; Neurology; Artificial intelligence
