Artificial Neural Network Theses (9)

Dawn glimmers for day of the man-made brain
Froelich, Warren . The San Diego Union ; San Diego, Calif. [San Diego, Calif]06 July 1986: A-1.
ProQuest document link




ABSTRACT (ABSTRACT)
What [Robert Hecht-Nielsen] and others are talking about are networks that are capable of learning, whose architecture -- or operation design -- consists of vast interconnections similar to those found in the brain. In many ways, these networks are similar to what people think of as artificial intelligence -- or a system of machines that think.
At UCSD, researchers led by Dr. David Rumelhart, a psychologist, are using neural networks to study how humans process language, teaching them grammar such as changing verbs from present to past tense. Neural networks also could have several other practical applications, ranging from industrial inspection to the diagnosis of medical disorders.
Both [David Zipser] and Hecht-Nielsen emphasized that the present generation of neural networks may or may not do what the brain does. Critics have challenged the work in the past, saying the brain is too complex to emulate.

LINKS
DETAILS

Company / organization: Name: TRW Inc; Ticker: TRW; NAICS: 334419, 336399, 333911, 561450, 336414; SIC: 3675; DUNS: 00-417-9453

Publication title: The San Diego Union; San Diego, Calif.

Pages: A-1

Number of pages: 0

Publication year: 1986

Publication date: Jul 6, 1986

Dateline: RANCHO CARMEL

Section: NEWS

Publisher: The San Diego Union-Tribune, LLC.

Place of publication: San Diego, Calif.

Country of publication: United States

Publication subject: General Interest Periodicals--United States

Source type: Newspapers

Language of publication: English

Document type: NEWSPAPER

ProQuest document ID: 422526896

Document URL: https://search.proquest.com/docview/422526896?accountid=8243

Copyright: Copyright Union-Tribune Publishing Co. Jul 6, 1986

Last updated: 2010-08-20

Database: Global Newsstream



'Wise' Computer To Have Impact In Workplace
Anderson, Julie . Omaha World - Herald ; Omaha, Neb. [Omaha, Neb]12 Mar 1989: 1g.
ProQuest document link




ABSTRACT (ABSTRACT)
Alvin Surkan, a University of Nebraska-Lincoln computer science professor, is working to transform that dream from science fiction to science fact.
Surkan said computers one day could change the way employees are hired, by better matching "the right people to the right positions" and watching for those people who could prove to be security risks if placed in sensitive jobs.
Surkan said that using networks in hiring could benefit the hiring firm and ensure greater job satisfaction among workers. Networks also could be used to identify possible security risks in banking and analyze workers in hospitals and institutions where lives are placed in the hands of medical personnel.

LINKS
DETAILS

Publication title: Omaha World - Herald; Omaha, Neb.

Pages: 1g

Number of pages: 0

Publication year: 1989

Publication date: Mar 12, 1989

Section: Working

Publisher: Omaha World-Herald Company

Place of publication: Omaha, Neb.

Country of publication: United States

Publication subject: General Interest Periodicals--United States

Source type: Newspapers

Language of publication: English

Document type: NEWSPAPER

ProQuest document ID: 400631952

Document URL: https://search.proquest.com/docview/400631952?accountid=8243

Copyright: (Copyright 1989 Omaha World-Herald Company)

Last updated: 2011-10-19

Database: Global Newsstream



Classifying organic compounds using expert system and neural networks
Ambro, Judit . University of Montana, ProQuest Dissertations Publishing, 1991. EP40568.
ProQuest document link




ABSTRACT
Abstract not available.

LINKS
DETAILS

Subject: Computer science

Classification: 0984: Computer science

Identifier / keyword: Applied sciences

Number of pages: 80

Publication year: 1991

Degree date: 1991

School code: 0136

Source: MAI 52/06M(E), Masters Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

University/institution: University of Montana

University location: United States -- Montana

Degree: M.S.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: EP40568

ProQuest document ID: 1549219388

Document URL: https://search.proquest.com/docview/1549219388?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Generalization in neural networks: Experiments in speech recognition
Richards, Elizabeth Lake . University of Colorado at Boulder, ProQuest Dissertations Publishing, 1991. 9206643.
ProQuest document link




ABSTRACT
This research investigates the problem of generalization in neural networks: how the task the network must learn, the architecture of the network, the training of the network, and the data representations used in that training, both individually and collectively, affect the ability of a network to learn the training data and to generalize well to novel data.
A psychological model of speech perception, Liberman and Mattingly's Motor Theory, provides the theoretical foundation for the tasks and architectures specified for the networks used in the research. Linguistic theories of vowel perception guided the preparation of speech data representations used in training the networks. Vowel data was collected across varying contexts and speakers to provide a broad test of the networks' ability to generalize to highly variable data.
Results of the research show that networks having different task requirements but trained with the same number and type of data representations form a family of networks which exhibit similar generalization across a broad range of hidden units. Contradicting commonly accepted guidelines, networks trained with larger data representations exhibit better generalization than networks trained with smaller representations, even though the larger networks have a significantly greater capacity. In addition, networks having the same training performance can exhibit different levels of generalization; researchers interested in generalization must track generalization directly. Finally, given an appropriate architecture, training algorithm, and sufficient training data, the data representation itself is the primary determiner of a network's ability to generalize well to new data.
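
A minimal sketch of the abstract's methodological point that generalization must be tracked directly rather than inferred from training performance: the model, data, and hyperparameters below are illustrative stand-ins (a single logistic unit on synthetic data), not the networks or vowel data used in the dissertation.

```python
# Minimal sketch (not the dissertation's setup): log held-out accuracy
# alongside training accuracy at every epoch, i.e. track generalization directly.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for the dissertation's vowel data.
X = rng.normal(size=(400, 10))
w_true = rng.normal(size=10)
y = (X @ w_true + 0.5 * rng.normal(size=400) > 0).astype(float)

# Training data vs. novel (held-out) data.
X_tr, y_tr = X[:300], y[:300]
X_te, y_te = X[300:], y[300:]

w = np.zeros(10)
lr = 0.1
for epoch in range(50):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w)))          # sigmoid outputs
    w -= lr * X_tr.T @ (p - y_tr) / len(y_tr)      # cross-entropy gradient step
    train_acc = np.mean((X_tr @ w > 0) == y_tr)    # performance on training data
    novel_acc = np.mean((X_te @ w > 0) == y_te)    # generalization, tracked directly
    if epoch % 10 == 0:
        print(f"epoch {epoch:2d}  train {train_acc:.2f}  novel {novel_acc:.2f}")
```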

LINKS
DETAILS

Subject: Computer science; Artificial intelligence

Classification: 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 183

Publication year: 1991

Degree date: 1991

School code: 0051

Source: DAI-B 52/09, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Bradshaw, Gary L.

University/institution: University of Colorado at Boulder

University location: United States -- Colorado

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9206643

ProQuest document ID: 303915881

Document URL: https://search.proquest.com/docview/303915881?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Integrating neural networks with influence diagrams for multiple sensor diagnostic systems
Tseng, Ming-Lei . University of California, Berkeley, ProQuest Dissertations Publishing, 1991. 9228887.
ProQuest document link




ABSTRACT
The objective of this research is to develop an adaptive architecture for the fusion of sensory information for diagnostic reasoning. The system architecture envisioned uses influence diagrams to provide a symbolic representation of the manufacturing or process system model. The independence encoded in influence diagrams is utilized in defining the structure of the neural networks to reduce the system complexity. Neural networks are then used as a learning mechanism for estimating and updating conditional probabilities from a set of training data.
To this end, we have designed two types of probabilistic neural networks: probabilistic Hamming nets (PHNs) for discrete variables and self-organizing probabilistic neural networks (SOPNNs) for continuous variables. A PHN is basically a modified Hamming net which implements pattern matching in the first layer and estimates conditional probabilities in the second layer. This model uses a Bayesian learning rule to update the network weights. A SOPNN is a neural network with a fixed-size self-organizing net for clustering and a modified version of Specht's model for estimating the probability density functions (PDFs). The idea is to represent the similar training patterns within a cluster by the cluster's centroid and then use these centroids for PDF estimation.
Several sets of simulated and real sensor data were employed to test the effectiveness of the proposed models. The results were analyzed and compared with those obtained by the conjunctoid, Specht's model and the polynomial Adaline (Padaline) neural models as well as Bayesian learning rules. The effectiveness of the integrated network for reducing the system complexity both in representation and in inference is also demonstrated. Based on the results of this study, our proposed architecture has the following advantages: (1) The structure of the neural networks is well defined and less complex. (2) Our neural models (PHNs and SOPNNs) are of fixed-size. (3) Both models are very general, not limited to pre-specified parametric or unimodal distributions. (4) The SOPNN is self-organized. (5) The learning is one-pass and incremental. (6) Training and performance are very fast. (7) Partial data are better utilized. (8) Subjective information can be easily incorporated. (9) The integrated network can be flexibly reconfigured to respond to on-line requests. Since our models do not employ the time-consuming backpropagation and simulated annealing algorithms, this integrated network appears more promising for real-time applications.
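
As a rough illustration of the general idea behind the SOPNN described above (clustering followed by Specht/Parzen-style density estimation on the centroids), the sketch below clusters each class, places Gaussian kernels on the resulting centroids, and classifies by the larger class-conditional density. The cluster counts, kernel width, and data are illustrative assumptions; the dissertation's actual PHN/SOPNN models and learning rules are not reproduced.

```python
# Rough sketch of the general idea only: cluster each class's training patterns,
# then estimate class-conditional PDFs with Gaussian kernels placed on the
# centroids (Specht/Parzen style) and classify by the larger density.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=20):
    """Very small Lloyd-style clustering returning k centroids."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def kernel_density(x, centroids, sigma=0.5):
    """Average Gaussian kernel response over the class centroids."""
    d2 = ((centroids - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean()

# Two synthetic sensor-like classes.
X0 = rng.normal(loc=0.0, size=(200, 2))
X1 = rng.normal(loc=2.0, size=(200, 2))
C0, C1 = kmeans(X0, 5), kmeans(X1, 5)

x = np.array([1.6, 1.9])
p0, p1 = kernel_density(x, C0), kernel_density(x, C1)
print("predicted class:", int(p1 > p0))
```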

LINKS
DETAILS

Subject: Mechanical engineering; Computer science; Artificial intelligence

Classification: 0548: Mechanical engineering; 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 209

Publication year: 1991

Degree date: 1991

School code: 0028

Source: DAI-B 53/05, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Agogino, Alice M.

University/institution: University of California, Berkeley

University location: United States -- California

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9228887

ProQuest document ID: 303917765

Document URL: https://search.proquest.com/docview/303917765?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Neural network algorithms and architectures for pattern classification
Mao, Weidong . Princeton University, ProQuest Dissertations Publishing, 1991. 9112942.
ProQuest document link




ABSTRACT
The study of artificial neural networks is an integrated research field that involves the disciplines of applied mathematics, physics, neurobiology, computer science, information, control, parallel processing and VLSI. This dissertation deals with a number of topics from a broad spectrum of neural network research in models, algorithms, applications and VLSI architectures. Specifically, this dissertation is aimed at studying neural network algorithms and architectures for pattern classification tasks. The work presented in this dissertation has a wide range of applications including speech recognition, image recognition, and high level knowledge processing.
Supervised neural networks, such as the back-propagation network, can be used for classification tasks by approximating an input/output mapping; these are approximation-based classifiers. The original gradient-descent back-propagation learning algorithm exhibits slow convergence speed. Fast algorithms such as the conjugate gradient and quasi-Newton algorithms can be adopted. The main emphasis of this dissertation, however, is on competition-based classifiers. The well-known linear perceptron and its learning algorithm can deal with linearly separable classification problems. We propose two extensions, the generalized perceptron classifier and the multi-cluster classifier, which can perform more complex pattern classification tasks. We also give the corresponding learning algorithms and prove certain convergence properties. Another powerful classification model is the Hidden Markov Model (HMM), a doubly stochastic automaton that has been applied in speech recognition. We propose the Ring Hidden Markov Model (RHMM) and demonstrate its good performance in a shape recognition application.
Due to the rapid advance in VLSI technology, parallel processing, and computer aided design (CAD), application-specific VLSI systems are becoming more and more powerful and feasible. In particular, VLSI array processors offer high speed and efficiency through their massive parallelism and pipelining, regularity, modularity, and local communication. A unified VLSI array architecture can be used for implementing neural networks and Hidden Markov Models. We also propose a pipeline interleaving approach to design VLSI array architectures for real-time image and video signal processing.
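
For reference, the classical perceptron learning rule mentioned in the abstract (the baseline that the proposed generalized perceptron and multi-cluster classifiers extend) can be sketched as follows. The data and stopping rule are illustrative, and the dissertation's extended classifiers are not shown.

```python
# Textbook perceptron learning rule on a linearly separable problem.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)        # a linearly separable labeling

w = np.zeros(2)
b = 0.0
for epoch in range(100):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:                # misclassified (or on the boundary)
            w += yi * xi                          # perceptron weight update
            b += yi
            errors += 1
    if errors == 0:                               # converged: every point correct
        break

print("epochs used:", epoch + 1, "weights:", w, "bias:", b)
```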

LINKS
DETAILS

Subject: Electrical engineering; Computer science; Artificial intelligence

Classification: 0544: Electrical engineering; 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 160

Publication year: 1991

Degree date: 1991

School code: 0181

Source: DAI-B 51/12, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Kung, S. Y.

University/institution: Princeton University

University location: United States -- New Jersey

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9112942

ProQuest document ID: 303921199

Document URL: https://search.proquest.com/docview/303921199?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Analysis of self-organizing neural networks with application to pattern classification
Lo, Zhen-Ping . University of California, Irvine, ProQuest Dissertations Publishing, 1991. 9212431.
ProQuest document link




ABSTRACT
Recent studies have indicated that neural networks can be applied to solve the pattern recognition problem. This work presents a formulation of the pattern classification problem using self-organizing neural networks, specifically an investigation of the Kohonen neural networks. The Kohonen Topology Preserving Mapping (TPM) network and the Learning Vector Quantization (LVQ) algorithms are reviewed. A formal analysis of the convergence property and of the selection of the neighborhood interaction function in the topology preserving unsupervised neural network is presented. Furthermore, the derivation and convergence of the LVQ algorithms are investigated. A neural network piecewise linear classifier based on the Kohonen LVQ2 algorithm and the Kohonen TPM network is developed. The neural network classifier is tested on both synthesized and real data sets. The performance of the proposed classifier is compared with that of other neural network classifiers and classical classifiers.
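
A minimal sketch of the basic Kohonen LVQ prototype update referred to above (the simpler LVQ1 form, not the LVQ2 variant or the TPM-based piecewise linear classifier developed in the dissertation); the data, learning rate, and prototype counts are illustrative assumptions.

```python
# Basic Kohonen LVQ1 prototype update: attract the nearest prototype toward a
# correctly labeled sample, repel it from an incorrectly labeled one.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# One prototype (codebook vector) per class, initialized from the data.
protos = np.array([X[0], X[100]], dtype=float)
proto_labels = np.array([0, 1])

alpha = 0.05
for t in range(2000):
    i = rng.integers(len(X))
    x, label = X[i], y[i]
    j = np.argmin(((protos - x) ** 2).sum(axis=1))   # nearest prototype
    if proto_labels[j] == label:
        protos[j] += alpha * (x - protos[j])         # move toward the sample
    else:
        protos[j] -= alpha * (x - protos[j])         # move away from the sample

pred = proto_labels[np.argmin(((X[:, None] - protos) ** 2).sum(-1), axis=1)]
print("training accuracy:", np.mean(pred == y))
```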

LINKS
DETAILS

Subject: Electrical engineering; Automotive materials; Systems design; Artificial intelligence

Classification: 0544: Electrical engineering; 0540: Automotive materials; 0790: Systems design; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 173

Publication year: 1991

Degree date: 1991

School code: 0030

Source: DAI-B 52/12, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Bavarian, Behnam

University/institution: University of California, Irvine

University location: United States -- California

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9212431

ProQuest document ID: 303921519

Document URL: https://search.proquest.com/docview/303921519?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Modified Newton's method for supervised training of dynamical neural networks for applications in associative memory and nonlinear identification problems
Bhalala, Smita Ashesh . The University of Arizona, ProQuest Dissertations Publishing, 1991. 1345608.
ProQuest document link




ABSTRACT
There have been several innovative approaches towards realizing an intelligent architecture that utilizes artificial neural networks for applications in information processing. The development of supervised training rules for updating the adjustable parameters of neural networks has received extensive attention in the recent past. In this study, specific learning algorithms utilizing a modified Newton's method for the optimization of the adjustable parameters of a dynamical neural network are developed. Computer simulation results show that the convergence performance of the proposed learning schemes matches very closely that of the LMS learning algorithm for applications in the design of associative memories and in nonlinear mapping problems. However, the implementation of the modified Newton's method is complex due to the computation of the slope of the nonlinear sigmoidal function, whereas the LMS algorithm approximates the slope as zero.
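
The trade-off noted in the final sentence can be illustrated with a small sketch contrasting an LMS-style update, which ignores the slope of the sigmoid, with a gradient update that includes it. This is only an illustration of that distinction, not the modified Newton's scheme for dynamical networks developed in the thesis; the data and learning rate are assumptions.

```python
# Contrast two updates for a single sigmoidal unit:
#  - a gradient step that includes the sigmoid derivative (slope), and
#  - an LMS-style step that treats the slope as constant (ignores it).
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = sigmoid(X @ w_true)

def train(use_slope, lr=0.5, epochs=200):
    w = np.zeros(3)
    for _ in range(epochs):
        out = sigmoid(X @ w)
        err = out - y
        if use_slope:
            grad = X.T @ (err * out * (1 - out)) / len(y)  # includes sigmoid slope
        else:
            grad = X.T @ err / len(y)                      # LMS-style: slope ignored
        w -= lr * grad
    return np.mean((sigmoid(X @ w) - y) ** 2)

print("MSE, gradient with slope:", train(True))
print("MSE, LMS-style update   :", train(False))
```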

LINKS
DETAILS

Subject: Electrical engineering; Computer science; Artificial intelligence

Classification: 0544: Electrical engineering; 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 99

Publication year: 1991

Degree date: 1991

School code: 0009

Source: MAI 30/01M, Masters Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Sundareshan, Malur K.

University/institution: The University of Arizona

University location: United States -- Arizona

Degree: M.S.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 1345608

ProQuest document ID: 303924389

Document URL: https://search.proquest.com/docview/303924389?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Symbolic knowledge and neural networks: Insertion, refinement and extraction
Towell, Geoffrey Gilmer . The University of Wisconsin - Madison, ProQuest Dissertations Publishing, 1991. 9209252.
ProQuest document link




ABSTRACT
Explanation-based and empirical learning are two largely complementary methods of machine learning. Both approaches have serious problems which preclude either from being a general-purpose learning method. However, a "hybrid" learning method that combines explanation-based with empirical learning may be able to use the strengths of one learning method to address the weaknesses of the other. Hence, a system that effectively combines the two approaches to learning can be expected to be superior to either approach in isolation. This thesis describes a hybrid system called KBANN, which is shown to be an effective combination of these two learning methods.
KBANN (Knowledge-Based Artificial Neural Networks) is a three-part hybrid learning system built on top of "neural" learning techniques. The first part uses a set of approximately-correct rules to determine the structure and initial link weights of an artificial neural network, thereby making the rules accessible for modification by neural learning. The second part of KBANN modifies the resulting network using essentially standard neural learning techniques. The third part of KBANN extracts refined rules from trained networks.
KBANN is evaluated by empirical tests in the domain of molecular biology. Networks created by KBANN are shown to be superior, in terms of their ability to correctly classify unseen examples, to a wide variety of learning systems as well as to techniques proposed by experts in the problems investigated. In addition, empirical tests show that KBANN is robust to errors in the initial rules and insensitive to problems resulting from the presence of extraneous input features.
The third part of KBANN, which extracts rules from trained networks, addresses a significant problem in the use of neural networks -- understanding what a neural network learns. Empirical tests of the proposed rule-extraction method show that it simplifies understanding of trained networks by reducing the number of consequents (hidden units), antecedents (weighted links), and possible antecedent weights. Surprisingly, the extracted rules are often more accurate at classifying examples not seen during training than the trained network from which they came.
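
A sketch of the commonly described rules-to-weights mapping associated with KBANN may help make the first part concrete: each positive antecedent of a conjunctive rule contributes a weight of +W, each negated antecedent a weight of -W, and the bias is set so the unit activates only when every antecedent holds. The value of W and the helper names below are illustrative assumptions; the thesis's full construction, training, and rule-extraction procedures are not reproduced.

```python
# Sketch of a rules-to-weights mapping in the spirit of KBANN: a conjunctive
# rule becomes a sigmoidal unit with weight +W per positive antecedent, -W per
# negated antecedent, and a bias that lets the unit fire only when the rule holds.
# W = 4 is an arbitrary illustrative value.
import numpy as np

W = 4.0

def rule_to_unit(positive, negated, features):
    """Return (weights, bias) for the rule: head :- positive antecedents, not negated ones."""
    w = np.zeros(len(features))
    for name in positive:
        w[features.index(name)] = W
    for name in negated:
        w[features.index(name)] = -W
    bias = -(len(positive) - 0.5) * W        # threshold between P-1 and P satisfied antecedents
    return w, bias

def unit_output(x, w, bias):
    return 1.0 / (1.0 + np.exp(-(w @ x + bias)))

features = ["a", "b", "c"]
w, b = rule_to_unit(positive=["a", "b"], negated=["c"], features=features)

# The unit is near 1 only when a and b are true and c is false.
for x in [(1, 1, 0), (1, 1, 1), (1, 0, 0)]:
    print(x, round(unit_output(np.array(x, dtype=float), w, b), 3))
```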

LINKS
DETAILS

Subject: Computer science; Artificial intelligence

Classification: 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 263

Publication year: 1991

Degree date: 1991

School code: 0262

Source: DAI-B 53/02, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Shavlik, Jude William

University/institution: The University of Wisconsin - Madison

University location: United States -- Wisconsin

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9209252

ProQuest document ID: 303927450

Document URL: https://search.proquest.com/docview/303927450?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



A novel algorithm for neural network implementation of Boolean functions and its application to character recognition
Hussain, Basit . University of Miami, ProQuest Dissertations Publishing, 1991. 9205357.
ProQuest document link




ABSTRACT
Boolean logic is considered to be a good source for classification problems, an area dominated by neural networks. Although quite a few algorithms exist for training and implementing neural networks, no technique exists that can guarantee the transformation of any arbitrary Boolean function to a neural network (28). We have developed an algorithm that accomplishes exactly that. The algorithm is backed up by analytical proof and examples. It is verified using the classic character recognition problem to test its efficacy on Boolean vectors. Thereafter, the base algorithm is extended to a more robust Feature Recognition Algorithm that demonstrates its usefulness for pattern recognition. This algorithm uses piecewise pattern recognition to provide results in a progressive hierarchy. Results are demonstrated on translated, noisy, scaled, and deformed patterns. Comparisons to existing neural networks are also part of the research. Complexity, capacity, and entropy analyses of the network are also performed.
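
For context, the feasibility of mapping an arbitrary Boolean function to a network can be illustrated with the textbook minterm construction: one hidden threshold unit per true row of the truth table (an AND of literals), followed by an OR output unit. This is not the dissertation's algorithm, only a generic construction sketched under that assumption.

```python
# Textbook construction (not the dissertation's algorithm): any Boolean function
# given by its truth table can be realized with one hidden threshold unit per
# true minterm and a single OR output unit.
import itertools
import numpy as np

def boolean_to_network(truth_table):
    """truth_table maps input bit-tuples to 0/1; returns AND-layer weights and biases."""
    hidden_w, hidden_b = [], []
    for inputs, value in truth_table.items():
        if value == 1:                                  # one AND unit per true minterm
            w = np.array([1.0 if v else -1.0 for v in inputs])
            b = -(sum(inputs) - 0.5)                    # fires only on this exact pattern
            hidden_w.append(w)
            hidden_b.append(b)
    return np.array(hidden_w), np.array(hidden_b)

def forward(x, hidden_w, hidden_b):
    h = (hidden_w @ x + hidden_b > 0).astype(int)       # AND layer of threshold units
    return int(h.sum() > 0)                             # OR output unit

# Example: 3-input parity (XOR), a classic function that is not linearly separable.
table = {bits: sum(bits) % 2 for bits in itertools.product([0, 1], repeat=3)}
hw, hb = boolean_to_network(table)
print(all(forward(np.array(bits, dtype=float), hw, hb) == table[bits] for bits in table))
```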

LINKS
DETAILS

Subject: Electrical engineering; Computer science; Artificial intelligence

Classification: 0544: Electrical engineering; 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 79

Publication year: 1991

Degree date: 1991

School code: 0125

Source: DAI-B 52/09, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Kabuka, Mansur R.

University/institution: University of Miami

University location: United States -- Florida

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9205357

ProQuest document ID: 303955562

Document URL: https://search.proquest.com/docview/303955562?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



A neural network approach for marketing strategies research and decision support
Poh, Hean Lee . Stanford University, ProQuest Dissertations Publishing, 1991. 9122576.
ProQuest document link




ABSTRACT
The primary objective of this thesis is to explore the efficacy of neural network methodology in capturing the relationship between a strategic goal such as market share, strategic variables such as product quality, and competitive environmental variables such as the number of competitors. The PIMS (Profit Impact of Market Strategy) database contains this kind of data. Traditionally, statistical methods such as regression have been used to analyze the underlying relationships in the PIMS data.
This alternative method is called the neural network approach, which seeks to mimic brain functions and has been applied in areas such as pattern recognition, where the problems are ill-structured. Rules are implicit in the network. The neural network model adopted in this thesis is a network configured with one output unit (market share), 24 input units, and 10 hidden units in between, interconnected in a feed-forward mode with no direct connection between output and input. The learning algorithm used is backpropagation, which seeks to minimize the total sum of squared error between the target output and the actual output of the network by propagating the error back and updating the weights over many passes of the training data through the network. The 994 cases of data used come from the PIMS database.
The network is able to predict market share as well as a linear model estimated with three-stage least squares (3SLS). It is also able to test hypotheses about marketing strategies. The neural network model also captures the nonlinear relationship between the input and the output variables automatically, without having to specify the nonlinear terms to fit the data, as in the case of regression. In comparison to the linear regression model, the responsiveness of the neural network model to changes in the input differs from that of the linear model depending on the input variables and their values. The network can be reduced by using 3SLS to identify the most important input variables.
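
The architecture and objective described above (24 inputs, 10 hidden units, one output, backpropagation minimizing squared error) can be sketched directly. Since the PIMS data are proprietary, the sketch below substitutes synthetic inputs and targets; bias terms are omitted for brevity and the learning rate is an assumption.

```python
# Sketch of the architecture in the abstract: 24 inputs, 10 hidden units,
# 1 output, trained by backpropagation on summed squared error.
# Synthetic data stand in for the proprietary PIMS cases.
import numpy as np

rng = np.random.default_rng(5)

n_in, n_hidden, n_out = 24, 10, 1
X = rng.normal(size=(994, n_in))                        # 994 cases, as in the abstract
y = 1.0 / (1.0 + np.exp(-X[:, :3].sum(axis=1, keepdims=True)))  # synthetic "market share"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
lr = 0.05

for epoch in range(500):
    H = sigmoid(X @ W1)                                 # hidden activations
    out = sigmoid(H @ W2)                               # network output
    err = out - y
    # Backpropagate the squared-error gradient through both layers.
    d_out = err * out * (1 - out)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(y)
    W1 -= lr * X.T @ d_hid / len(y)

out = sigmoid(sigmoid(X @ W1) @ W2)
print("final mean squared error:", float(np.mean((out - y) ** 2)))
```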

LINKS
DETAILS

Subject: Marketing; Computer science; Artificial intelligence

Classification: 0338: Marketing; 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Social sciences Applied sciences

Number of pages: 131

Publication year: 1991

Degree date: 1991

School code: 0212

Source: DAI-A 52/03, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Weyant, John P.

University/institution: Stanford University

University location: United States -- California

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9122576

ProQuest document ID: 303959271

Document URL: https://search.proquest.com/docview/303959271?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Neural network approaches to power system short-term load forecasting
Peng, Tiemao . Arizona State University, ProQuest Dissertations Publ
