Artificial Neural Network Theses (1)

Time Scale of Asymmetric Neural Networks

Zhou, Yuan Jun (周浚源). Lanzhou University (People's Republic of China), ProQuest Dissertations Publishing, 1905. H220022.

ProQuest document link

ABSTRACT

The convergence time of patterns stored in artificial neural networks with associative memory strongly affects both the theoretical quality of such systems and their practical applicability, since it limits the working speed of the network in applications. Research on convergence time has so far concentrated on symmetric models, but its significance is weakened by the large number of spurious memories that cannot be fully eliminated in symmetric neural networks. The asymmetric memory model designed with the adaptation optimization rule provides, for the first time, a systematic theory and method for designing asymmetric artificial neural networks and eliminates all spurious memories, which makes the study of convergence time in asymmetric networks essentially meaningful. Against this background, this thesis studies the convergence time of the asymmetric memory model by random sampling. It first examines how the relation between convergence time and system size changes with the degree of symmetry at a fixed storage ratio, and then how this relation changes as the storage ratio varies. The results show that the memory phase of the asymmetric memory model has a comparatively stable convergence time, that for every fixed system size there is a minimal convergence time, and that this minimal convergence time grows exponentially with system size. This convergence behaviour is closely tied to the transitions among the three phases of the asymmetric memory model. Chapter 1 introduces the theory of artificial neural networks. Chapter 2 describes in detail the adaptation optimization rule and the asymmetric memory neural network. Chapter 3 presents the results on the convergence time of the asymmetric memory model and discusses them.

 

ALTERNATE ABSTRACT

In associative-memory artificial neural network models, the convergence-time behaviour of the stored patterns has an important influence on the theoretical design quality of the whole system and on its prospects for practical application at large scale, and it constrains the working speed of artificial neural networks in applications. Work in this area has so far concentrated on symmetric models, but because the spurious-attractor problem cannot be completely eliminated in symmetric models, studies of the time behaviour have lacked essential significance. The asymmetric memory model designed with the adaptation optimization rule puts forward, for the first time, a systematic theory and method for designing asymmetric artificial neural networks, and for the first time completely eliminates the influence of spurious attractors, so that the study of the time behaviour of asymmetric neural networks becomes essentially meaningful. Against this background, this thesis uses random sampling to study the time behaviour of the asymmetric memory model in detail. It first studies how, at a fixed storage ratio, the relation between convergence time and system size changes as the degree of symmetry increases, and then studies how this behaviour depends on the storage ratio. It is found that the memory phase of the asymmetric memory model has a comparatively stable convergence time, that for a fixed system size this convergence time has a minimum, and that the minimal convergence time grows exponentially with system size; this behaviour is closely connected with the transitions among the three phases of the model. Chapter 1 briefly reviews research on artificial neural networks. Chapter 2 systematically describes the adaptation optimization rule used to design asymmetric artificial neural networks with associative memory and gives a detailed account of the related asymmetric memory model and its results. Chapter 3 studies the time behaviour of the asymmetric memory model in detail, and finally presents and discusses our results.
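As an illustration of the random-sampling procedure described in the abstract, the sketch below builds a small associative-memory network with a tunable degree of coupling asymmetry, corrupts a stored pattern, and counts the update sweeps needed for the state to settle. The coupling construction (a Hebbian term mixed with antisymmetric noise), the synchronous sign dynamics, and all function names and parameters are illustrative assumptions; they are not the adaptation optimization rule or the exact model studied in the thesis.

import numpy as np

def make_couplings(patterns, symmetry=1.0, rng=None):
    """Build a coupling matrix that interpolates between a symmetric
    Hebbian matrix and an antisymmetric random perturbation.
    symmetry = 1.0 gives the purely symmetric (Hopfield-like) case;
    smaller values add an asymmetric component.  This construction is
    an illustrative assumption, not the thesis's adaptation
    optimization rule."""
    rng = np.random.default_rng(rng)
    n = patterns.shape[1]
    J_sym = patterns.T @ patterns / n          # Hebbian outer-product term
    A = rng.normal(size=(n, n))
    J_asym = (A - A.T) / np.sqrt(n)            # purely antisymmetric noise
    J = symmetry * J_sym + (1.0 - symmetry) * J_asym
    np.fill_diagonal(J, 0.0)
    return J

def convergence_time(J, pattern, flip_fraction=0.1, max_sweeps=1000, rng=None):
    """Start from a corrupted copy of `pattern`, iterate the synchronous
    sign dynamics s <- sign(J s), and return the number of sweeps until
    the state stops changing (or max_sweeps if it never settles)."""
    rng = np.random.default_rng(rng)
    s = pattern.copy()
    flips = rng.random(s.size) < flip_fraction
    s[flips] *= -1
    for sweep in range(1, max_sweeps + 1):
        s_new = np.sign(J @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            return sweep
        s = s_new
    return max_sweeps

# Random-sampling experiment: average convergence time vs. system size
# at a fixed storage ratio and fixed degree of symmetry.
storage_ratio = 0.1
for n in (100, 200, 400):
    p = max(1, int(storage_ratio * n))
    patterns = np.where(np.random.default_rng(0).random((p, n)) < 0.5, 1, -1)
    J = make_couplings(patterns, symmetry=0.8, rng=1)
    times = [convergence_time(J, patterns[0], rng=k) for k in range(20)]
    print(n, np.mean(times))

Scanning the symmetry parameter and the storage ratio over a grid, and averaging over many sampled initial states, would mimic the kind of measurement the thesis describes, although the actual model and scaling results are those of the thesis, not of this sketch.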

 


DETAILS

 

Subject:

Physics

 

Classification:

0605: Physics

 

Identifier / keyword:

(UMI)AAIH220022 Pure sciences Asymmetric Neural Networks Time Scale 时间尺度 非对称神经网络

 

Alternate title:

非对称神经网络的时间尺度

 

Number of pages:

0

 

Publication year:

1905

 

Degree date:

1905

 

School code:

1120

 

Source:

DAI-C 71/56, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

Advisor:

Zhao, Hong (赵鸿)

 

University/institution:

Lanzhou University (People's Republic of China)

 

University location:

People's Republic of China

 

Degree:

M.S.

 

Source type:

Dissertations & Theses

 

Language:

Chinese

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

H220022

 

ProQuest document ID:

1026536685

 

Document URL:

https://search.proquest.com/docview/1026536685?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

Higher Tech: Computer Researchers Find 'Neural Networks' Help Mimic the Brain --- The Systems, a Building Block For Artificial Intelligence, May Analyze Loans, Radar --- Dwarfed by a Sharp Cockroach

By David Stipp. Wall Street Journal, Eastern edition; New York, N.Y. [New York, N.Y.] 29 Sep 1988: 1.

ProQuest document link

ABSTRACT

Mr. Kuperstein is developing a robot driven by a so-called neural network, a computer system modeled roughly after the brain's web of neurons. He feels neural networks will revolutionize computing. To make his point, he wants to match his machine next year against a ping-pong-playing robot recently built by AT&T's Bell Laboratories.

 

"The same kind of excitement that surrounded artificial intelligence some years ago seems to be around neural networks today," says Arno Penzias, Bell Laboratories' vice president of research. "Some of it is hype. But neural networks are moving faster from concepts to serious applications than artificial intelligence did."

 

Clearly, many proposed uses of neural networks will require computing speed available only from specialized neural-network hardware. Such devices are cousins of new parallel processors, which employ numerous small computers working in concert. Indeed, neural networks may foster a multibillion-dollar market for new kinds of chips and computers. At least, that is the hope of giants pursuing the technology, such as AT&T, International Business Machines Corp. and Japan's Ministry of International Trade and Industry.

 

 


DETAILS

 

Publication title:

Wall Street Journal, Eastern edition; New York, N.Y.

 

Pages:

1

 

Number of pages:

0

 

Publication year:

1988

 

Publication date:

Sep 29, 1988

 

Publisher:

Dow Jones & Company Inc

 

Place of publication:

New York, N.Y.

 

Country of publication:

United States

 

Publication subject:

Business And Economics--Banking And Finance

 

ISSN:

00999660

 

Source type:

Newspapers

 

Language of publication:

English

 

Document type:

NEWSPAPER

 

ProQuest document ID:

398164368

 

Document URL:

https://search.proquest.com/docview/398164368?accountid=8243

 

Copyright:

Copyright Dow Jones & Company Inc Sep 29, 1988

 

Last updated:

2017-11-01

 

Database:

Global Newsstream

 

 

 

The design and analysis of effective and efficient neural networks and their applications

Makovoz, Walter Vladimir. The Union Institute, ProQuest Dissertations Publishing, 1989. 9010492.

ProQuest document link

ABSTRACT

The dissertation addresses the complicated design issues of efficient multi-layer neural networks and thoroughly examines perceptron and similar neural networks. It shows that a three-layer perceptron neural network with specially designed learning algorithms provides an efficient framework for solving an "exclusive OR" problem using only n - 1 processing elements in the second layer.

Two efficient, rapidly converging algorithms were developed for arbitrary symmetric Boolean functions, using only n - 1 processing elements in the perceptron neural network and int(n/2) processing elements in the Adaline and in the perceptron neural network with a step-function transfer function. Similar results were obtained for quasi-symmetric Boolean functions using a linear number of processing elements in perceptron networks, Adalines, and perceptron networks with step-function transfer functions. Generalized Boolean functions are discussed, and two rapidly converging algorithms are given for perceptron networks, Adalines, and perceptron networks with step-function transfer functions. Many other perceptron neural networks of interest are discussed in the dissertation. Perceptron networks are applied to find the largest of n inputs, and a new perceptron network is designed to do so with the minimum number of inputs and the minimum number of layers. New perceptron networks are developed to sort n inputs, and new, effective and efficient back-propagation neural networks are designed for the same task. The sigmoid transfer function is discussed, and a generalized sigmoid function that improves neural network performance is developed. A modified back-propagation learning algorithm is developed that builds any n-input symmetric Boolean function using only int(n/2) processing elements in the second layer; this is currently the most efficient neural network construction for symmetric Boolean functions. Finally, the application of neural networks as associative memories to store and retrieve information for expert systems and intelligent tutoring systems is examined, using the BAM (bidirectional associative memory) in these applications.
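To give a concrete flavour of these constructions, the sketch below builds a two-layer step-function network that computes an arbitrary symmetric Boolean function by counting active inputs: hidden unit k fires when at least k inputs are 1, and the output unit combines the hidden units with telescoping weights. This simple counting construction uses n hidden units and only illustrates why symmetric functions suit threshold networks; it is not the dissertation's more economical n - 1 or int(n/2) designs, and all names in the code are assumptions.

import numpy as np
from itertools import product

def step(x):
    """Heaviside step transfer function."""
    return (x >= 0).astype(int)

def symmetric_boolean_net(truth_by_count):
    """Two-layer threshold network for a symmetric Boolean function.

    truth_by_count[k] is the desired output when exactly k inputs are 1.
    Hidden unit k fires when at least k inputs are active; the output unit
    combines them with telescoping weights so the preactivation equals
    truth_by_count[count] - 0.5.  Illustrative construction only, not the
    dissertation's algorithm."""
    n = len(truth_by_count) - 1
    # Hidden layer: unit k (k = 1..n) computes step(sum(x) - k)
    W_hidden = np.ones((n, n))
    b_hidden = -np.arange(1, n + 1)
    # Output weights: w_k = f(k) - f(k-1); bias = f(0) - 0.5
    f = np.asarray(truth_by_count, dtype=float)
    w_out = f[1:] - f[:-1]
    b_out = f[0] - 0.5
    def net(x):
        h = step(W_hidden @ np.asarray(x) + b_hidden)
        return int(step(w_out @ h + b_out))
    return net

# Example: n-input parity (the "exclusive OR" problem generalised to n inputs).
n = 4
parity = symmetric_boolean_net([k % 2 for k in range(n + 1)])
for x in product([0, 1], repeat=n):
    assert parity(x) == sum(x) % 2
print("parity network verified for all", 2 ** n, "inputs")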

 


DETAILS

 

Subject:

Computer science; Artificial intelligence

 

Classification:

0984: Computer science; 0800: Artificial intelligence

 

Identifier / keyword:

Applied sciences

 

Number of pages:

387

 

Publication year:

1989

 

Degree date:

1989

 

School code:

1033

 

Source:

DAI-B 50/11, Dissertation Abstracts International

 

Place of publication:

Ann Arbor

 

Country of publication:

United States

 

University/institution:

The Union Institute

 

University location:

United States -- Ohio

 

Degree:

Ph.D.

 

Source type:

Dissertations & Theses

 

Language:

English

 

Document type:

Dissertation/Thesis

 

Dissertation/thesis number:

9010492

 

ProQuest document ID:

303800150

 

Document URL:

https://search.proquest.com/docview/303800150?accountid=8243

 

Copyright:

Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.

 

Database:

ProQuest Dissertations & Theses Global

 

 

 

An application of artificial neural networks to activated sludge bulking: Analysis and forecasting

Jones, Harold V. B. Marquette University, ProQuest Dissertations Publishing, 1990. 9107784.

ProQuest document link

ABSTRACT

Activated sludge bulking is a phenomenon whereby filamentous organisms in activated sludges over-proliferate. This situation is thought to be caused by environmental conditions within the activated sludge medium, which favor the growth of the filamentous species over that of the floc-formers. The result is sludge that has poor settling characteristics, and a loss of solids in the effluent.

Studies have shown that although there are many filamentous species, only a few are commonly cited as being dominant in bulking sludges. The growth requirements of these organisms have been related to the aeration basin dissolved oxygen, food-to-microorganism ratios, nutrient imbalances, sludge age, and hydraulic conditions within the activated sludge environment. Some researchers claim that if a specific environmental factor can be shown to influence the growth of a particular organism, then that factor can be manipulated and effectively used to control the growth of the organism.

 A contrary viewpoint is proposed by other researchers. These researchers claim that the cause and effect relationship defined for various filamentous species is often contradictory and that the interaction between coexistent factors is of more significance. Attempts to examine and investigate coexistent factors have been mostly empirical and far from conclusive.

This dissertation provides a new, unique and powerful approach to studying activated sludge bulking: the use of artificial neural networks, and in particular the back-propagation algorithm. Artificial neural networks are computer-based models of natural neural systems. These networks are capable of learning in a manner loosely analogous to humans, in that they can generalize information into basic underlying rules and relationships.
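As a sketch of how a back-propagation network might be applied to this kind of forecasting problem, the code below trains a tiny one-hidden-layer sigmoid network on synthetic stand-ins for the operational variables mentioned above (dissolved oxygen, food-to-microorganism ratio, sludge age) to predict a binary bulking indicator. The data, the labelling rule, the architecture, and all variable names are assumptions made for illustration; they do not reproduce the dissertation's data set or network design.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for operational measurements (illustrative only):
# dissolved oxygen (mg/L), food-to-microorganism ratio, sludge age (days).
X = rng.uniform([0.5, 0.1, 3.0], [4.0, 0.6, 15.0], size=(200, 3))
# Assumed labelling rule for the demo: low DO and low F/M favour bulking.
y = ((X[:, 0] < 1.5) & (X[:, 1] < 0.3)).astype(float).reshape(-1, 1)

# Normalise inputs to help gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 5 sigmoid units, trained by plain back-propagation
# (batch gradient descent on a squared-error loss).
W1 = rng.normal(scale=0.5, size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (error signal times sigmoid derivatives)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates, averaged over the batch
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)

accuracy = ((out > 0.5) == (y > 0.5)).mean()
print(f"training accuracy on synthetic bulking data: {accuracy:.2f}")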
