Artificial Neural Network Theses (2)

Time Scale of Asymmetric Neural Networks
Zhou, Yuan Jun (周浚源). Lanzhou University (People's Republic of China), ProQuest Dissertations Publishing, 1905. H220022.
ABSTRACT
The convergence time of patterns stored in artificial neural networks with associative memory strongly influences both the theoretical quality of the system and its applicability, and it restricts the working efficiency of such networks in practice. Research on convergence time has so far concentrated on symmetric models, where its essential significance is weakened by the vast numbers of spurious memories that cannot be completely eliminated in symmetric neural networks. The asymmetric memory model designed by the adaptation optimization rule provides a systematic theory and method for designing asymmetric artificial neural networks and, for the first time, eliminates all spurious memories; studying the convergence time of asymmetric neural networks is therefore now of essential importance. Against this background, this thesis studies the convergence time of the asymmetric memory model by random sampling. First, the relation between convergence time and system size is studied as the symmetry degree varies while the storage ratio is held constant; second, we examine how this relation changes with the storage ratio. It is found that the memory phase of the asymmetric memory model has a comparatively stable convergence time, that for every fixed system size there is a minimal convergence time, and that this minimal convergence time grows exponentially with system size. This convergence behavior is attributed to the transitions among the three phases of the asymmetric memory model. Chapter 1 introduces the theory of artificial neural networks. Chapter 2 describes in detail the adaptation optimization rule and the asymmetric memory neural networks. Chapter 3 presents our results on the convergence time of the asymmetric memory model and discusses them.

ALTERNATE ABSTRACT
In associative-memory artificial neural network models, the convergence time of the stored patterns has an important influence on the theoretical design quality of the whole system and on its prospects for large-scale practical application, constraining the operating speed of artificial neural networks in use. Work in this area has so far concentrated on symmetric models, and because the spurious-attractor problem in symmetric models cannot be completely eliminated, the resulting studies of timing properties lack essential significance. The asymmetric memory model designed with the adaptation optimization rule provides, for the first time, a systematic theory and method for designing asymmetric artificial neural networks, and completely eliminates the influence of spurious attractors, giving research on the timing properties of asymmetric neural networks genuine significance. Against this background, this thesis studies the timing properties of the asymmetric memory model in detail by random sampling. We first study how, at a fixed storage ratio, the relation between convergence time and system size changes as the symmetry degree increases, and then how this regularity depends on the storage ratio. We find that the memory phase of the asymmetric memory model has a comparatively stable convergence time; at a fixed system size this convergence time has a minimum, and the minimal convergence time grows exponentially with system size. These timing properties are closely connected with the transitions among the model's three phases. Chapter 1 briefly surveys research on artificial neural networks. Chapter 2 systematically describes the adaptation optimization rule used to design asymmetric artificial neural networks with associative memory, and gives a detailed account of the associated asymmetric memory model and the results obtained with it. Chapter 3 studies the timing properties of the asymmetric memory model in detail; finally, we present our results and discuss them.
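The random-sampling experiment described above is easy to mock up. The Python sketch below measures synchronous-update convergence times in a Hopfield-style network whose Hebbian couplings receive a tunable antisymmetric perturbation; the thesis's actual adaptation optimization rule is not reproduced here, so the weight construction, the `asymmetry` knob, and all sizes and storage ratios are illustrative assumptions only.

```python
import numpy as np

def hebbian_weights(patterns, asymmetry, rng):
    """Hopfield-style Hebbian couplings plus a tunable antisymmetric
    perturbation (a generic stand-in for the thesis's construction)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    a = rng.standard_normal((n, n))
    w += asymmetry * (a - a.T) / np.sqrt(n)   # purely antisymmetric noise
    return w

def convergence_time(w, state, max_steps=500):
    """Steps until a fixed point under synchronous sign updates, or None
    (asymmetric couplings can cycle instead of converging)."""
    for t in range(max_steps):
        new = np.where(w @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            return t
        state = new
    return None

# Random-sampling experiment: mean convergence time vs. system size
# at a fixed storage ratio alpha = p / n.
rng = np.random.default_rng(1)
for n in (64, 128, 256):
    p = max(1, int(0.1 * n))
    patterns = rng.choice([-1, 1], size=(p, n))
    w = hebbian_weights(patterns, asymmetry=0.2, rng=rng)
    times = [convergence_time(w, rng.choice([-1, 1], size=n)) for _ in range(50)]
    times = [t for t in times if t is not None]
    print(n, sum(times) / len(times) if times else "no fixed point reached")
```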

DETAILS

Subject: Physics

Classification: 0605: Physics

Identifier / keyword: (UMI)AAIH220022; Pure sciences; Asymmetric Neural Networks; Time Scale; 时间尺度 (time scale); 非对称神经网络 (asymmetric neural networks)

Alternate title: 非对称神经网络的时间尺度 (Time Scale of Asymmetric Neural Networks)

Number of pages: 0

Publication year: 1905

Degree date: 1905

School code: 1120

Source: DAI-C 71/56, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Zhao, Hong (赵鸿)

University/institution: Lanzhou University (People's Republic of China)

University location: People's Republic of China

Degree: M.S.

Source type: Dissertations & Theses

Language: Chinese

Document type: Dissertation/Thesis

Dissertation/thesis number: H220022

ProQuest document ID: 1026536685

Document URL: https://search.proquest.com/docview/1026536685?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Artificial neural network learning mechanism enhancements.
McCormack, C. University College Cork (Ireland), ProQuest Dissertations Publishing, 1977. U100888.
ABSTRACT
Neural network learning rules have undergone a continuous process of refinement and improvement over the last few years. Unfortunately, despite these efficiency improvements, it remains necessary to manually select appropriate learning rule parameter values in order to achieve an acceptable solution. The parameter values that yield the highest-quality performance (where quality can be defined as the speed of convergence and the accuracy of the resultant network) are usually unique to each problem, and there is no effective method of judging which parameter value suits which problem. This is a significant shortcoming in learning rule implementation, as an inappropriate parameter can have a marked effect on the performance of most learning rules. This work outlines an application-independent method of automating learning rule parameter selection using a form of supervisory neural network, known as a Meta Neural Network, to alter the value of a learning rule parameter during training. The Meta Neural Network is trained on data generated by observing the training of a neural network and recording the effects of selecting various parameter values.
The Meta Neural Network is then combined with a learning rule and used to augment the learning rule's performance. Experiments were undertaken to assess the method by using it to adapt a global parameter of the RPROP and Quickprop learning rules, and it was found to yield consistently superior performance over conventional methods. This way of creating a Meta Neural Network is proposed as a first step toward a self-modifying neural network: it needs no manual intervention, performs consistently, and requires minimal computational overhead. Two improvements of the Meta Neural Network scheme are discussed: one investigates the result of combining training sets from several Meta Neural Networks; the other polls a number of individual Meta Neural Networks to produce a consensus on how learning rule parameters should be adapted.
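The pipeline the abstract describes — observe training runs, log the effect of parameter choices, fit a supervisor, then let it steer a global parameter — can be sketched end to end. In the toy Python below, the base task (XOR), the feature set (error and its change), and the least-squares fit standing in for a trained Meta Neural Network are all assumptions made for illustration; the thesis's actual data generation and supervisor architecture are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_xor(lr_schedule, epochs=300):
    """Train a tiny 2-2-1 sigmoid net on XOR by plain gradient descent.
    lr_schedule maps features [error, error delta] to a learning rate."""
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([[0], [1], [1], [0]], float)
    w1, w2 = rng.normal(0, 1, (2, 2)), rng.normal(0, 1, (2, 1))
    sig = lambda z: 1 / (1 + np.exp(-z))
    errs, prev = [], 1.0
    for _ in range(epochs):
        h = sig(X @ w1)
        out = sig(h @ w2)
        e = out - y
        err = float((e ** 2).mean())
        errs.append(err)
        lr = lr_schedule(np.array([err, err - prev]))
        prev = err
        d = e * out * (1 - out)
        g2 = h.T @ d                              # grads before any update
        g1 = X.T @ (d @ w2.T * h * (1 - h))
        w2 -= lr * g2
        w1 -= lr * g1
    return errs

# 1) Observe runs at fixed learning rates, logging (features, rate used).
records = []
for fixed in (0.1, 0.5, 1.0, 2.0):
    errs = train_xor(lambda f, fixed=fixed: fixed)
    records += [([errs[t], errs[t] - errs[t - 1]], fixed)
                for t in range(1, len(errs))]

# 2) Fit the supervisor. A real Meta Neural Network would be an MLP trained
#    on such logs; least squares keeps this sketch self-contained.
F = np.array([r[0] for r in records])
L = np.array([r[1] for r in records])
A = np.hstack([F, np.ones((len(F), 1))])
coef, *_ = np.linalg.lstsq(A, L, rcond=None)
meta = lambda f: float(np.clip(np.array([f[0], f[1], 1.0]) @ coef, 0.05, 2.0))

# 3) Deploy the supervisor to adapt the learning rate during training.
print("final MSE with meta-adapted lr:", round(train_xor(meta)[-1], 4))
```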

DETAILS

Subject: Artificial intelligence

Classification: 0800: Artificial intelligence

Identifier / keyword: (UMI)AAIU100888 Applied sciences

Number of pages: 1

Publication year: 1977

Degree date: 1977

School code: 1269

Source: DAI-C 70/21, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Publication subject: Psychology--Abstracting, Bibliographies, Statistics

University/institution: University College Cork (Ireland)

University location: Ireland

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: U100888

ProQuest document ID: 301400872

Document URL: https://search.proquest.com/docview/301400872?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Artificial Brain Again Seen As a Guide To the Mind
Johnson, George. New York Times, Late Edition (East Coast); New York, N.Y. [New York, N.Y.] 16 Aug 1988: C.1.
ABSTRACT
''The possibility is really before us to start making models that bridge the gap between neurons and complex behavior,'' said Patricia Smith Churchland, a philosopher at the University of California at San Diego. ''For years there has been this great no-man's-land between the two levels. Now we're really beginning to investigate that no-man's-land, and in the long haul we might get a theory of how the brain works.''

The new developments in neural networks ''are tremendously important for philosophers,'' she said. ''Plato's question was, 'What is knowledge and how is it possible?' Now we have a real shot at answering that.''

''Neural networks are serving as a medium or a mathematical language for describing all these phenomena,'' said Terrence J. Sejnowski of Johns Hopkins University's Department of Biophysics. ''There is still a big gulf between psychology and neuroscience, but at least they're now in the same mathematical ballpark.''


DETAILS

Subject: DATA PROCESSING (COMPUTERS); BRAIN; PSYCHOLOGY AND PSYCHOLOGISTS

People: Minsky, Marvin; Johnson, George

Publication title: New York Times, Late Edition (East Coast); New York, N.Y.

Pages: C.1

Publication year: 1988

Publication date: Aug 16, 1988

Section: C

Publisher: New York Times Company

Place of publication: New York, N.Y.

Country of publication: United States

Publication subject: General Interest Periodicals--United States

ISSN: 03624331

CODEN: NYTIAO

Source type: Newspapers

Language of publication: English

Document type: NEWSPAPER

ProQuest document ID: 426918985

Document URL: https://search.proquest.com/docview/426918985?accountid=8243

Copyright: Copyright New York Times Company Aug 16, 1988

Last updated: 2017-11-15

Database: Global Newsstream



Computers bridging gap in artificial intelligence
Reuters. Financial Post; Toronto, Ont. [Toronto, Ont.] 23 Sep 1988: 14.
ABSTRACT
Neural networks are a type of artificial intelligence. Expert systems, the most common artificial-intelligence approach, are programmed with a set of rules and make determinations by applying those rules.
Neural network consultant Tom Schwartz of Schwartz Associates estimates that sales of neural network modeling tools, the building blocks for applications, will grow 50% a year for the next five years, to US$150 million by 1992.
Japan's Ministry of International Trade and Industry received funding requests last month for a major effort to develop neural network technologies. [Edward Rosenfeld] said that although Japan lags behind the United States and Europe in this area now, indications are that neural networks will be a major part of its Sixth Generation Human Frontier Project, a consortium of government and industry to develop advanced technologies.

DETAILS

Company / organization: Name: Nestor Inc; Ticker: NEST; NAICS: 511210; SIC: 7372; DUNS: 11-600-9499

Publication title: Financial Post; Toronto, Ont.

Pages: 14

Number of pages: 0

Publication year: 1988

Publication date: Sep 23, 1988

Dateline: Boston, MA

Section: 1, News

Publisher: The Financial Times Limited

Place of publication: Toronto, Ont.

Country of publication: United Kingdom

Publication subject: Business And Economics--Banking And Finance

ISSN: 08388431

Source type: Newspapers

Language of publication: English

Document type: NEWS

ProQuest document ID: 436739422

Document URL: https://search.proquest.com/docview/436739422?accountid=8243

Copyright: (Copyright The Financial Post 1988)

Last updated: 2010-06-29

Database: Global Newsstream



Higher Tech: Computer Researchers Find 'Neural Networks' Help Mimic the Brain --- The Systems, a Building Block For Artificial Intelligence, May Analyze Loans, Radar --- Dwarfed by a Sharp Cockroach
By David Stipp. Wall Street Journal, Eastern edition; New York, N.Y. [New York, N.Y.] 29 Sep 1988: 1.
ABSTRACT
He is developing a robot driven by a so-called neural network, a computer system modeled roughly after the brain's web of neurons. Mr. Kuperstein feels neural networks will revolutionize computing. To make his point, he wants to match his machine next year against a ping-pong-playing robot recently built by AT&T's Bell Laboratories.

"The same kind of excitement that surrounded artificial intelligence some years ago seems to be around neural networks today," says Arno Penzias, Bell Laboratories' vice president of research. "Some of it is hype. But neural networks are moving faster from concepts to serious applications than artificial intelligence did."

Clearly, many proposed uses of neural networks will require computing speed available only from specialized neural-network hardware. Such devices are cousins of new parallel processors, which employ numerous small computers working in concert. Indeed, neural networks may foster a multibillion-dollar market for new kinds of chips and computers. At least, that is the hope of giants pursuing the technology, such as AT&T, International Business Machines Corp. and Japan's Ministry of International Trade and Industry.


DETAILS

Publication title: Wall Street Journal, Eastern edition; New York, N.Y.

Pages: 1

Number of pages: 0

Publication year: 1988

Publication date: Sep 29, 1988

Publisher: Dow Jones & Company Inc

Place of publication: New York, N.Y.

Country of publication: United States

Publication subject: Business And Economics--Banking And Finance

ISSN: 00999660

Source type: Newspapers

Language of publication: English

Document type: NEWSPAPER

ProQuest document ID: 398164368

Document URL: https://search.proquest.com/docview/398164368?accountid=8243

Copyright: Copyright Dow Jones & Company Inc Sep 29, 1988

Last updated: 2017-11-01

Database: Global Newsstream



New learning and control algorithms for neural networks
Youn, Chung Hwa. Louisiana State University and Agricultural & Mechanical College, ProQuest Dissertations Publishing, 1989. 9017311.
ABSTRACT
Neural networks offer distributed processing power, error-correcting capability, and structural simplicity of the basic computing element. They have been found attractive for applications such as associative memory, robotics, image processing, speech understanding, and optimization. Neural networks are self-adaptive systems that try to configure themselves to store new information. This dissertation investigates two approaches to improving performance: better learning and supervisory control. A new learning algorithm, the Correlation Continuous Unlearning (CCU) algorithm, is presented; it is based on the idea of removing undesirable information that is encountered during the learning period. The control methods proposed in the dissertation improve convergence by using a controller to affect the order of updates.
Most previous studies have focused on monolithic structures, but the human brain is known to have a "bicameral" nature at the gross level as well as several specialized structures. In this dissertation, we investigate the computing characteristics of non-monolithic neural networks enhanced by a controller that can run algorithms exploiting the known global characteristics of the stored information. Such networks have been called bicameral neural networks. Stinson and Kak considered elementary bicameral models that used asynchronous control. Two new control methods, the method of iteration and the bicameral classifier, are now proposed. The method of iteration uses the Hamming distance between the probe and the answer to control convergence to a correct answer, whereas the bicameral classifier takes advantage of global characteristics using a clustering algorithm. The bicameral classifier is applied to two different models of equiprobable patterns as well as to the more realistic situation where patterns can have different probabilities.
The CCU algorithm has also been applied to a bidirectional associative memory with greatly improved performance. For multilayered networks, indexing of patterns to enhance system performance has been studied.
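Two of the ingredients above — unlearning undesirable information and a controller that watches the Hamming distance between probe and answer — can be illustrated on a plain Hopfield network. The Python sketch uses classical random-probe unlearning as a stand-in for the CCU rule (whose exact form is not given here) plus a simple retry controller; `eps`, the jitter size, and the Hamming radius are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 64, 6
patterns = rng.choice([-1, 1], size=(p, n))

w = patterns.T @ patterns / n              # Hebbian storage
np.fill_diagonal(w, 0.0)

def recall(w, s, sweeps=20):
    """Asynchronous recall toward a (near) fixed point."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            s[i] = 1 if w[i] @ s >= 0 else -1
    return s

# Unlearning pass (stand-in for the CCU rule): converge from random probes
# and weakly subtract whatever attractor is reached, which preferentially
# erodes spurious minima while barely touching the stored patterns.
eps = 0.01
for _ in range(100):
    a = recall(w, rng.choice([-1, 1], size=n))
    w -= eps * np.outer(a, a) / n
    np.fill_diagonal(w, 0.0)

# "Method of iteration" controller: accept an answer only if it lies within
# a Hamming radius of the probe; otherwise jitter the probe and retry.
def controlled_recall(w, probe, radius, tries=10):
    ans = probe
    for _ in range(tries):
        ans = recall(w, probe)
        if np.sum(ans != probe) <= radius:
            break
        probe = probe.copy()
        probe[rng.choice(n, size=2, replace=False)] *= -1
    return ans

noisy = patterns[0].copy()
noisy[:5] *= -1                            # corrupt 5 of 64 bits
out = controlled_recall(w, noisy, radius=10)
print("recovered pattern 0:", np.array_equal(out, patterns[0]))
```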

DETAILS

Subject: Computer science; Artificial intelligence

Classification: 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences Learning algorithms

Number of pages: 155

Publication year: 1989

Degree date: 1989

School code: 0107

Source: DAI-B 51/02, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Kak, Subhash C.

University/institution: Louisiana State University and Agricultural & Mechanical College

University location: United States -- Louisiana

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9017311

ProQuest document ID: 303786218

Document URL: https://search.proquest.com/docview/303786218?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



The design and analysis of effective and efficient neural networks and their applications
Makovoz, Walter Vladimir. The Union Institute, ProQuest Dissertations Publishing, 1989. 9010492.
ABSTRACT
The dissertation addresses the complicated design issues of efficient multilayer neural networks and thoroughly examines the perceptron and similar neural networks. It shows that a three-layer perceptron neural network with specially designed learning algorithms provides an efficient framework for solving the "exclusive OR" problem using only n - 1 processing elements in the second layer.
Two efficient, rapidly converging algorithms were developed for any symmetric Boolean function, using only n - 1 processing elements in the perceptron neural network and int(n/2) processing elements in the Adaline and in the perceptron neural network with the step-function transfer function. Similar results were obtained for quasi-symmetric Boolean functions using a linear number of processing elements in perceptron neural networks, Adalines, and perceptron neural networks with step-function transfer functions. Generalized Boolean functions are discussed, and two rapidly converging algorithms are shown for perceptron neural networks, Adalines, and perceptron neural networks with the step-function transfer function. Many other interesting perceptron neural networks are discussed in the dissertation. Perceptron neural networks are applied to finding the largest of n inputs, and a new perceptron neural network is designed to find the largest of n inputs with the minimum number of inputs and the minimum number of layers. New perceptron neural networks are developed to sort n inputs, and new, effective and efficient back-propagation neural networks are designed for the same task. The sigmoid transfer function is discussed, and a generalized sigmoid function is developed to improve neural network performance. A modified back-propagation learning algorithm was developed that builds any n-input symmetric Boolean function using only int(n/2) processing elements in the second layer; this is currently the most efficient neural network for building symmetric Boolean functions. The application of neural networks as associative memories to store and retrieve information for expert systems and intelligent tutoring systems was examined, and the BAM (bidirectional associative memory) was used in these applications.
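The linear-size results rest on the fact that a symmetric Boolean function depends only on the number of active inputs. The sketch below implements the textbook construction — n hidden threshold units that detect sum(x) >= j, with telescoping output weights f(j) - f(j-1) — and verifies it on n-input XOR; the dissertation's refined n - 1 and int(n/2) designs are not reproduced, so the unit count here is the standard n, not the thesis's.

```python
import numpy as np
from itertools import product

def symmetric_net(n, f_of_count):
    """Two-layer threshold network computing a symmetric Boolean function.

    Hidden unit j fires iff sum(x) >= j (j = 1..n); the output unit's
    pre-activation then telescopes to exactly f(sum(x))."""
    hidden_w = np.ones((n, n))                 # every hidden unit sums all inputs
    hidden_b = -np.arange(1, n + 1)            # unit j: fire iff sum(x) >= j
    out_w = np.array([f_of_count(j) - f_of_count(j - 1) for j in range(1, n + 1)])
    out_b = f_of_count(0)

    def net(x):
        h = (hidden_w @ x + hidden_b >= 0).astype(int)
        return int(out_w @ h + out_b >= 0.5)   # recovers the exact 0/1 value
    return net

# Parity (n-input XOR) as the symmetric function f(k) = k mod 2.
n = 4
xor_n = symmetric_net(n, lambda k: k % 2)
assert all(xor_n(np.array(x)) == (sum(x) % 2) for x in product([0, 1], repeat=n))
print("n-input XOR verified for n =", n)
```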

DETAILS

Subject: Computer science; Artificial intelligence

Classification: 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences

Number of pages: 387

Publication year: 1989

Degree date: 1989

School code: 1033

Source: DAI-B 50/11, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

University/institution: The Union Institute

University location: United States -- Ohio

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9010492

ProQuest document ID: 303800150

Document URL: https://search.proquest.com/docview/303800150?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



Casting Their Neural Nets
George Johnson. New York Times, Late Edition (East Coast); New York, N.Y. [New York, N.Y.] 24 Dec 1989: A.12. (George Johnson, an editor of The Week in Review of The New York Times and the author of ''Machinery of the Mind,'' is completing a book about memory.)
ABSTRACT
The neural net enthusiasts are as far from their goal of making intelligent machines as are their competitors in artificial intelligence. But for some reason people who have reviled the artificial intelligentsia, as they like to call them, are suddenly embracing neural networks. The philosopher Hubert Dreyfus has made his career using obscure arguments from Heidegger and other philosophers to denounce artificial intelligence as theoretically impossible. But he was so impressed by neural nets that he rewrote parts of his 1985 book ''Mind Over Machine'' (written with his brother, Stuart Dreyfus), allowing that thinking machinery might not be so unthinkable after all.

It is surprising that as good a science writer as Mr. [Jeremy Campbell] would fall into this same confusion, believing that neural nets constitute ''an approach that is radically different from much of the Western philosophical tradition.'' His first book, ''Grammatical Man,'' was an exhilarating meditation on information and entropy, order and chaos - the poles of the dynamo that generates life. ''The Improbable Machine'' elegantly describes the importance of neural networks in studying the brain-mind connection. But in trying to make neural nets seem like an upheaval rather than a variation on a theme, he turns artificial intelligence into a caricature that few of its adherents would recognize.

IN fact, the neural nets themselves, with all their wonderful properties, are usually simulations run on digital computers. It's clear from his book that Mr. Campbell knows this, so it is baffling that he can approvingly quote the Dreyfus brothers, who seem to believe that neural networks are contributing to the downfall of the idea that the brain is a formal system. Stranger still is Mr. Campbell's contention that the representations inside both brains and artificial neural nets have a quality he calls ''aboutness,'' which the empty symbols of a digital computer supposedly cannot have. Where does this elusive quality go when a neural net is being simulated on a regular old computing machine? Artificial neural networks are formal systems. Deny that brains fall into the same category and you're in danger of becoming a holist, worshipping a ghost in the machine.


DETAILS

Subject: BOOK REVIEWS

People: Johnson, George; Campbell, Jeremy

Company / organization: Name: Simon & Schuster Inc; NAICS: 511130; SIC: 2731, 7372, 8732, 2741; DUNS: 00-149-5969

Publication title: New York Times, Late Edition (East Coast); New York, N.Y.

Pages: A.12

Publication year: 1989

Publication date: Dec 24, 1989

Section: A

Publisher: New York Times Company

Place of publication: New York, N.Y.

Country of publication: United States

Publication subject: General Interest Periodicals--United States

ISSN: 03624331

CODEN: NYTIAO

Source type: Newspapers

Language of publication: English

Document type: Review

ProQuest document ID: 427459834

Document URL: https://search.proquest.com/docview/427459834?accountid=8243

Copyright: Copyright New York Times Company Dec 24, 1989

Last updated: 2017-11-15

Database: Global Newsstream



Digital implementation issues of artificial neural networks
Pesulima, Edward Elisha. Florida Atlantic University, ProQuest Dissertations Publishing, 1990. 1341366.
ABSTRACT
Recent years have seen a renaissance of the neural network field. Significant advances in our understanding of neural networks and their possible applications necessitate investigation of possible implementation strategies. Among the presently available implementation media, digital VLSI hardware is one of the more promising because of its maturity and availability. We discuss various issues connected with implementing neural networks in digital VLSI hardware. A new sigmoidal transfer function is proposed with that implementation in mind, and possible realizations of the function for stochastic and deterministic neural networks are discussed. Simulation studies of applying neural networks to constraint optimization and learning problems were carried out, performed strictly in integer arithmetic. The simulation results provide an encouraging outlook for implementing these neural network applications in digital VLSI hardware, and important results concerning the required ranges of various network values were found for the learning algorithms.
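Hardware-oriented transfer functions typically trade the exponential for shifts and adds. The sketch below shows a generic piecewise-linear sigmoid approximation with power-of-two slopes, evaluated entirely in Q8 integer arithmetic; it is not the dissertation's proposed function, and the breakpoints are assumptions taken from a commonly published scheme of this kind.

```python
import numpy as np

SCALE = 256                        # Q8 fixed point: 1.0 == 256

def sigmoid_q8(x):
    """Approximate SCALE*sigmoid(x/SCALE) with shifts and adds only.

    Piecewise-linear segments with power-of-two slopes (worst-case
    error roughly 2% for this segment layout)."""
    neg = x < 0
    x = abs(int(x))
    if x < 256:                    # |x| < 1.0  : 0.25*x + 0.5
        y = (x >> 2) + 128
    elif x < 608:                  # |x| < 2.375: 0.125*x + 0.625
        y = (x >> 3) + 160
    elif x < 1280:                 # |x| < 5.0  : 0.03125*x + 0.84375
        y = (x >> 5) + 216
    else:                          # saturated
        y = SCALE
    return SCALE - y if neg else y # sigmoid(-x) = 1 - sigmoid(x)

# Compare against the floating-point sigmoid.
for v in np.linspace(-6, 6, 13):
    approx = sigmoid_q8(int(v * SCALE)) / SCALE
    exact = 1.0 / (1.0 + np.exp(-v))
    print(f"x={v:+.1f}  integer approx={approx:.3f}  exact={exact:.3f}")
```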

DETAILS

Subject: Computer science; Electrical engineering; Neurology

Classification: 0984: Computer science; 0544: Electrical engineering; 0317: Neurology

Identifier / keyword: Applied sciences Biological sciences

Number of pages: 224

Publication year: 1990

Degree date: 1990

School code: 0119

Source: MAI 29/01M, Masters Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Shankar, Ravi

University/institution: Florida Atlantic University

University location: United States -- Florida

Degree: M.S.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 1341366

ProQuest document ID: 194087916

Document URL: https://search.proquest.com/docview/194087916?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



A neural network model, its properties and applications
Xu, Xin. University of Minnesota, ProQuest Dissertations Publishing, 1990. 9109380.
ABSTRACT
In this study, we propose a neural network model that generalizes the Hopfield neural network model. Like the original model, the proposed model has the convergence property, and it can model application problems that cannot be modeled using the original Hopfield neural network.
We then study two major applications of the proposed model. (1) Associative memories, which are of great use in areas such as pattern recognition, signal processing, and machine learning. In using the proposed neural network as an associative memory, two important issues must be addressed: (a) efficient and effective construction of neural associative memories, and (b) the memory capacity of such an associative memory. Our results show that the proposed neural network increases the memory capacity, and we obtain a good upper bound on the capacity of such a neural associative memory using worst-case analysis. Other issues of the neural associative memory, such as parasitic memories and nearest convergence, are also studied. (2) Combinatorial optimization problems. We study the Traveling Salesman Problem using the proposed neural network model, which allows many effective adaptive techniques for escaping from local optima (one important reason why much previous research using the Hopfield neural network did not succeed). The results are very encouraging: the new neural algorithm performs comparably not only to other neural algorithms but also to some of the best conventional heuristic algorithms, such as the Lin-Kernighan algorithm, and it also scales better than the Lin-Kernighan algorithm.
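The capacity question can be probed numerically against the baseline being generalized. The Python sketch below stores random patterns in a standard Hopfield network and measures recall from corrupted probes as the storage ratio grows toward the classical ~0.14 limit; the dissertation's generalized model is not reproduced, and all sizes, flip counts, and sweep limits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def recall_rate(n, p, flips=3, trials=20):
    """Fraction of stored patterns recovered from corrupted probes in a
    standard Hopfield net (baseline for the capacity discussion)."""
    patterns = rng.choice([-1, 1], size=(p, n))
    w = patterns.T @ patterns / n              # Hebbian storage
    np.fill_diagonal(w, 0.0)
    ok = 0
    for _ in range(trials):
        mu = rng.integers(p)
        s = patterns[mu].copy()
        s[rng.choice(n, size=flips, replace=False)] *= -1
        for _ in range(10):                    # asynchronous sweeps
            prev = s.copy()
            for i in rng.permutation(n):
                s[i] = 1 if w[i] @ s >= 0 else -1
            if np.array_equal(s, prev):        # fixed point reached
                break
        ok += np.array_equal(s, patterns[mu])
    return ok / trials

n = 100
for p in (5, 10, 14, 20, 30):                  # capacity ~ 0.138 n classically
    print(f"p={p:3d} (alpha={p/n:.2f})  recall rate = {recall_rate(n, p):.2f}")
```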

DETAILS

Subject: Computer science; Artificial intelligence

Classification: 0984: Computer science; 0800: Artificial intelligence

Identifier / keyword: Applied sciences associative memory memory capacity

Number of pages: 170

Publication year: 1990

Degree date: 1990

School code: 0130

Source: DAI-B 51/11, Dissertation Abstracts International

Place of publication: Ann Arbor

Country of publication: United States

Advisor: Tsai, Wei Tek

University/institution: University of Minnesota

University location: United States -- Minnesota

Degree: Ph.D.

Source type: Dissertations & Theses

Language: English

Document type: Dissertation/Thesis

Dissertation/thesis number: 9109380

ProQuest document ID: 303861912

Document URL: https://search.proquest.com/docview/303861912?accountid=8243

Copyright: Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

Database: ProQuest Dissertations & Theses Global



An application of artificial neural networks to activated sludge bulking: Analysis and forecasting
Jones, Harold V. B. Marquette University, ProQuest Dissertations Publishing, 1990. 9107784.
ABSTRACT
Activated sludge bulking is a phenomenon whereby filamentous organisms in activated sludges over-proliferate. This situation is thought to be caused by environmental conditions
