Artificial Neural Network Theses (12)

Computers that work like brains: flawed account of neural networks; COGNIZERS: NEURAL NETWORKS AND MACHINES THAT THINK by R. Colin Johnson and Chappell Brown, New York: John Wiley & Sons, 260 pp., $22.95

Garfinkel, Simson L. The Christian Science Monitor; Boston, Mass. [Boston, Mass.] 21 Dec 1988.

ABSTRACT

 

[...]to support it, they overstate the problems of conventional computers and artificial intelligence, while they speak of conjectures in the field of neural networks as if they were established scientific truths. ''Cognizers'' shares this failing with many other popular reports of the field: ''A lot of stuff on neural networks is either wrong or unfounded hype,'' says Tomaso Poggio, a neural network researcher at MIT's Department of Brain and Cognitive Sciences.

 

 


DETAILS

 

Subject:

Neural networks; Artificial intelligence; Computers; Studies

 

Publication title:

The Christian Science Monitor; Boston, Mass.

 

Publication year:

1988

 

Publication date:

Dec 21, 1988

 

Section:

NEWS

 

Publisher:

The Christian Science Publishing Society (d/b/a "The Christian Science Monitor"), trusteeship under the laws of the Commonwealth of Massachusetts

 

Place of publication:

Boston, Mass.

 

Country of publication:

United States

 

Publication subject:

General Interest Periodicals--United States

 

ISSN:

08827729

 

Source type:

Newspapers

 

Language of publication:

English

 

Document type:

News

 

ProQuest document ID:

1034605373

 

Document URL:

https://search.proquest.com/docview/1034605373?accountid=8243

 

Copyright:

Copyright 1988 The Christian Science Publishing Society

 

Last updated:

2017-11-19

 

Database:

Global Newsstream

 

 

 

Neural nets for scene analysis

Barnard, Etienne. Carnegie Mellon University, ProQuest Dissertations Publishing, 1989. 9011845.

ABSTRACT

Various issues related to neural-net classifiers, their application to scene-analysis problems, and their optical implementation are discussed. The first issue investigated is the role of criterion functions in the design of linear classifiers; it is found that two criterion functions (the perceptron and sigmoid criterion functions) are particularly suitable for the design of non-parametric classifiers. The results of this investigation can be applied to the design of neural-net classifiers. We thereafter study optimization methods, and motivate the choice of conjugate-gradient optimization for training neural nets. A new stochastic optimization technique is also introduced, and found to be most useful when very large training sets are available. The next topic studied is the choice of feature spaces to be used in conjunction with neural-net classifiers. It is found that invariant feature spaces should be used whenever possible, and a new method of creating such feature spaces is introduced. A new classifier, the adaptive-clustering classifier, is then described; this classifier combines pattern-recognition and neural-net concepts and is more suitable for optical implementation than standard neural-net classifiers. Simulation results show the adaptive-clustering classifier to be a powerful piecewise-linear classifier. It is proposed that only the classification stage of neural classifiers be implemented in optics, and simulation results demonstrate that such an implementation can perform satisfactorily with analog-accuracy components.
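The perceptron criterion the abstract singles out can be made concrete with a small sketch. The synthetic data, learning rate, and iteration count below are purely illustrative assumptions, not taken from the dissertation:

```python
# Linear classifier trained by gradient descent on the perceptron
# criterion J(w) = -sum of y_i * (w . x_i) over misclassified samples.
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable 2-D classes with labels in {-1, +1}.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(+2.0, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
Xb = np.hstack([X, np.ones((100, 1))])   # append a bias column

w = np.zeros(3)
for _ in range(100):
    margins = y * (Xb @ w)               # margin of each sample
    wrong = margins <= 0                 # misclassified set
    if not wrong.any():                  # criterion is zero: done
        break
    # Gradient step: sum of y_i * x_i over misclassified samples.
    w += 0.1 * (y[wrong, None] * Xb[wrong]).sum(axis=0)

accuracy = np.mean(np.sign(Xb @ w) == y)
```

Because the criterion is zero exactly when no sample is misclassified, gradient descent on separable data drives the training error to zero.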

 


DETAILS

 

Subject:

Civil engineering

 

Classification:

0543: Civil engineering

 

Identifier / keyword:

Applied sciences

 

Number of pages:

141

 

Publication year:

1989

 

Degree date:

1989

 

School code:

0041

 

Source:

DAI-B 50/12, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

University/institution:

Carnegie Mellon University

 

University location:

United States -- Pennsylvania

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9011845

 

ProQuest document ID:

303683036

 

Document URL:

https://search.proquest.com/docview/303683036?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Optical image classification using optical/digital hybrid image processing systems

Li, Xiaoyang. The Pennsylvania State University, ProQuest Dissertations Publishing, 1990. 9117710.

ABSTRACT

Offering parallel and real-time operation, optical image classification is becoming a general technique for solving real-life image classification problems. This thesis investigates several algorithms for optical realization.

Compared to other statistical pattern recognition algorithms, the Kittler-Young transform can provide more discriminative feature spaces for image classification. We apply the Kittler-Young transform to image classification and implement it on optical systems. A feature selection criterion is designed for this application. The transform is then realized on both a joint transform correlator and a matrix multiplier, and experiments applying the technique to two-category and three-category problems are demonstrated.
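The Kittler-Young idea of building a discriminative feature space by simultaneous diagonalization of scatter matrices can be sketched roughly as follows. This is a simplified two-class illustration on assumed synthetic data; the exact feature-selection criterion and the optical realization in the thesis are not reproduced:

```python
# Sketch: whiten the averaged within-class scatter, then rotate so the
# between-class scatter is diagonal, keeping the most discriminative axes.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal([0, 0, 0], 1.0, (200, 3))        # class A samples
B = rng.normal([4, 0, 0], 1.0, (200, 3))        # class B samples

def scatter(X):
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / len(X)

Sw = 0.5 * (scatter(A) + scatter(B))            # within-class scatter
m = 0.5 * (A.mean(0) + B.mean(0))               # overall mean
Sb = (np.outer(A.mean(0) - m, A.mean(0) - m)
      + np.outer(B.mean(0) - m, B.mean(0) - m)) # between-class scatter

# Stage 1: whitening transform for Sw.  Stage 2: diagonalize whitened Sb.
d, U = np.linalg.eigh(Sw)
W1 = U / np.sqrt(d)                             # columns scaled by 1/sqrt(eig)
d2, V = np.linalg.eigh(W1.T @ Sb @ W1)
T = W1 @ V[:, ::-1]                             # most discriminative axis first

z_A = A @ T[:, :1]                              # 1-D discriminative feature
z_B = B @ T[:, :1]
separation = abs(z_A.mean() - z_B.mean()) / np.sqrt(0.5 * (z_A.var() + z_B.var()))
```

On this data the retained axis aligns with the class-mean difference, so a single feature already separates the classes well.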

To combine the advantages of statistical pattern recognition algorithms and neural network models, hybrid processes using the two methods are studied. The Karhunen-Loeve Hopfield model is developed for image classification. This model significantly improves system capacity and the capability of using image structure for more discriminative classification.

As another such hybrid process, we propose the feature extraction perceptron. Applying feature extraction techniques to the perceptron shortens its learning time. An improved neuron activation function (the dynamic activation function) is also proposed, along with its design and updating rule for fast learning and high space-bandwidth-product image classification. The feature extraction perceptron cuts learning time by two-thirds compared with the original perceptron, and with this architecture classification performs better than both the Kittler-Young transform and the original perceptron.
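The "extract features first, then train a perceptron" pipeline can be sketched as below. PCA stands in for the feature extraction stage, and the thesis' dynamic activation function is not modeled, so this is an assumed illustration of the general idea only:

```python
# Feature extraction (PCA) followed by classic perceptron training
# in the reduced space.
import numpy as np

rng = np.random.default_rng(2)
# Two 20-dimensional classes, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.5, 1.0, (100, 20)),
               rng.normal(+1.5, 1.0, (100, 20))])
y = np.hstack([-np.ones(100), np.ones(100)])

# Feature extraction: project onto the top 2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
F = Xc @ Vt[:2].T                        # 200 x 2 feature matrix

# Perceptron updates on the 2-D features instead of 20-D inputs.
w, b = np.zeros(2), 0.0
for _ in range(50):
    for xi, yi in zip(F, y):
        if yi * (w @ xi + b) <= 0:       # mistake-driven update
            w += yi * xi
            b += yi

train_acc = np.mean(np.sign(F @ w + b) == y)
```

Each update now touches 2 weights rather than 20, which is the mechanism by which feature extraction shortens perceptron learning.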

 


DETAILS

 

Subject:

Optics

 

Classification:

0752: Optics

 

Identifier / keyword:

Pure sciences

 

Number of pages:

134

 

Publication year:

1990

 

Degree date:

1990

 

School code:

0176

 

Source:

DAI-B 52/01, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Yu, Francis T. S.

 

University/institution:

The Pennsylvania State University

 

University location:

United States -- Pennsylvania

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9117710

 

ProQuest document ID:

303867373

 

Document URL:

https://search.proquest.com/docview/303867373?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Learning, forgetting, and unlearning in associative memories: The eigenstructure method and the pseudo inverse method with stability constraints

Yen, Gune. University of Notre Dame, ProQuest Dissertations Publishing, 1991. 9137073.

ABSTRACT

The analysis, synthesis, and applications of associative memories via artificial feedback neural networks are the central topics of this dissertation.

In order to design effective associative memories, we begin by studying two classes of discrete-time neural networks. In the first model, the neurons are endowed with infinite gain; the second model is described on hypercubes in the state space. We utilize two design procedures for these networks: the eigenstructure method and the pseudo-inverse method with stability constraints. Networks synthesized by the eigenstructure method possess the following attractive features: (i) each desired pattern is guaranteed to be stored as an asymptotically stable equilibrium point, (ii) the algorithm provides a mechanism for controlling the extent of the basin of attraction of each stored pattern, (iii) the network is proved to be globally stable, (iv) the technique is capable of minimizing the number of extraneous stored patterns, and (v) systems designed by this method possess large storage capacities.

Learning and forgetting algorithms for the eigenstructure method are investigated next. In many realistic situations, the desired patterns to be stored are unknown a priori, and dynamic updating of the stored patterns is frequently required. Neural networks generated by the present method are capable of learning new patterns as well as forgetting learned ones without recomputing all of the interconnection weights and external inputs. In many other respects, the algorithm developed herein provides significant improvements over existing techniques.

The inability to implement symmetric interconnections precisely in neural networks may result in spurious states and unstable systems. A class of nonsymmetric neural networks is therefore studied and synthesized by utilizing the qualitative theory of large-scale dynamical systems and by employing pseudo-inverse techniques. Learning and forgetting algorithms, in the same spirit as those developed for the eigenstructure method and highly efficient, are derived for this method.
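A minimal sketch of pseudo-inverse (projection-rule) pattern storage and recall is given below; the stability constraints and the learning/forgetting machinery of the dissertation are not modeled, and the network size, pattern count, and noise level are assumptions:

```python
# Projection-rule associative memory: store bipolar patterns with
# W = P P^+, then recall a noisy probe by iterating x <- sign(W x).
import numpy as np

rng = np.random.default_rng(3)
n, m = 64, 4
P = rng.choice([-1.0, 1.0], size=(n, m))   # desired patterns as columns

W = P @ np.linalg.pinv(P)                  # projection onto span(P):
                                           # W p = p for every stored p

probe = P[:, 0].copy()
flip = rng.choice(n, size=6, replace=False)
probe[flip] *= -1                          # corrupt 6 of 64 components

x = probe
for _ in range(10):                        # synchronous recall iterations
    x_new = np.sign(W @ x)
    x_new[x_new == 0] = 1.0                # break exact ties deterministically
    if np.array_equal(x_new, x):           # reached a fixed point
        break
    x = x_new

recovered = np.array_equal(x, P[:, 0])
```

Every stored pattern is a fixed point by construction, and the noise component of the probe is largely orthogonal to span(P), so the projection followed by the sign nonlinearity typically restores the stored pattern.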

Paralleling Hopfield's simulation studies, an unlearning algorithm is proposed for the eigenstructure method. The suggested technique appears to increase storage capacity while maximizing the domain of attraction of each desired stored pattern.

 


DETAILS

 

Subject:

Electrical engineering

 

Classification:

0544: Electrical engineering

 

Identifier / keyword:

Applied sciences

 

Number of pages:

168

 

Publication year:

1991
