Please use this identifier to cite or link to this item: http://bura.brunel.ac.uk/handle/2438/1096
Full metadata record
dc.contributor.author: Guan, SU
dc.contributor.author: Li, SC
dc.coverage.spatial: 9
dc.date.accessioned: 2007-08-02T15:08:01Z
dc.date.available: 2007-08-02T15:08:01Z
dc.date.issued: 2002
dc.identifier.citation: IEEE Transactions on Neural Networks, 13(3): 542-550, May 2002
dc.identifier.issn: 1045-9227
dc.identifier.uri: http://bura.brunel.ac.uk/handle/2438/1096
dc.description.abstract: In order to find an appropriate architecture for a large-scale real-world application automatically and efficiently, a natural approach is to divide the original problem into a set of sub-problems. In this paper, we propose a simple neural-network task decomposition method based on output parallelism. With this method, a problem can be divided flexibly into as many sub-problems as desired, each composed of the whole input vector and a fraction of the output vector. Each module (one per sub-problem) is responsible for producing its fraction of the original problem's output vector, so the hidden structures serving the original problem's output units are decoupled. These modules can be grown and trained in parallel on parallel processing elements. Combined with a constructive learning algorithm, the method requires neither excessive computation nor any prior knowledge of how to decompose the problem. The feasibility of output parallelism is analyzed and proved, and several benchmarks are implemented to test the validity of the method. The results show that it reduces computational time, increases learning speed, and improves generalization accuracy on both classification and regression problems. (A minimal code sketch of this output-parallel decomposition follows the metadata record below.)
dc.format.extent: 130174 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: IEEE
dc.relation.ispartof: 13
dc.subject: Multilayered feedforward networks
dc.subject: Constructive learning algorithm
dc.subject: Parallel growing
dc.subject: Output parallelism
dc.subject: Task decomposition
dc.title: Parallel growing and training of neural networks using output parallelism
dc.type: Research Paper
dc.identifier.doi: http://dx.doi.org/10.1109/TNN.2002.1000123
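
To make the abstract's idea concrete, here is a minimal sketch of output parallelism. This is not the authors' code: one small single-hidden-layer sigmoid network per output group is trained independently on the whole input vector, threads stand in for the paper's parallel processing elements, and a fixed hidden-layer size stands in for the constructive growing algorithm. All names (Module, train_output_parallel) and hyperparameters are illustrative assumptions.

```python
# Sketch of output parallelism: split the output vector into groups and
# grow/train one independent module per group, each seeing the full input.
# Assumptions (not from the paper): sigmoid MLP modules, batch gradient
# descent on squared error, fixed hidden size instead of constructive growth.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Module:
    """One module: whole input vector -> one fraction of the output vector."""
    def __init__(self, n_in, n_out, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        H = sigmoid(X @ self.W1 + self.b1)
        return H, sigmoid(H @ self.W2 + self.b2)

    def train(self, X, Y, lr=1.0, epochs=5000):
        for _ in range(epochs):
            H, P = self.forward(X)
            dP = (P - Y) * P * (1 - P)           # squared-error gradient at outputs
            dH = (dP @ self.W2.T) * H * (1 - H)  # backprop into the hidden layer
            self.W2 -= lr * H.T @ dP / len(X)
            self.b2 -= lr * dP.mean(axis=0)
            self.W1 -= lr * X.T @ dH / len(X)
            self.b1 -= lr * dH.mean(axis=0)
        return self

def train_output_parallel(X, Y, n_modules):
    """Split the output units into n_modules groups; modules share no
    parameters, so they can be trained concurrently (threads here merely
    stand in for parallel processing elements)."""
    groups = np.array_split(np.arange(Y.shape[1]), n_modules)
    with ThreadPoolExecutor() as pool:
        modules = list(pool.map(
            lambda i: Module(X.shape[1], len(groups[i]), seed=i)
                      .train(X, Y[:, groups[i]]),
            range(n_modules)))
    return modules, groups

def predict(modules, groups, X, n_out):
    Y = np.empty((len(X), n_out))
    for m, g in zip(modules, groups):
        Y[:, g] = m.forward(X)[1]  # reassemble the full output vector
    return Y

# Toy usage: 2-input, 2-output problem (XOR and AND), one module per output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
Y = np.array([[0, 0], [1, 0], [1, 0], [0, 1]], float)
modules, groups = train_output_parallel(X, Y, n_modules=2)
print(np.round(predict(modules, groups, X, Y.shape[1]), 2))
```

Because each module is responsible only for its own output fraction, the modules share no weights and need no synchronization during training, which is the source of the parallel speedup the abstract reports.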
Appears in Collections:
Electronic and Computer Engineering
Dept of Electronic and Electrical Engineering Research Papers

Files in This Item:
File: parallel growing neural networks 2002.pdf · Size: 285 kB · Format: Adobe PDF


Items in BURA are protected by copyright, with all rights reserved, unless otherwise indicated.