Disclaimer: I do not know much about neural networks.
At VIP-ALC 2008, Paul Cerkez talked about his experiences with neural networks. He has also posted about them in the Vip7 forum.
His presentation prompted me to find a solution to some performance problems he had: using objects, constructing the net took a very long time, but the calculation was rather fast; using a more traditional fact-based approach, building was much faster, but the calculation was slow. Both problems got worse as the nets grew larger.
The program is described in the wiki article Neural Network Program.
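To illustrate the contrast, here is a rough sketch of the two representations. This is not code from the attached program; all names (factNet, objNeuron, netInput and so on) are made up for illustration, and the snippets only show where the time goes: with facts, each connection is a cheap assert, but every calculation must search the fact database; with objects, calculation is a plain traversal, but creating and wiring all the objects is the expensive part.
Code:
% factNet: a fact-based sketch.  Building the net is one assert per
% connection, which is cheap, but netInput has to search the fact
% database for every neuron on every calculation.
class factNet
predicates
    addConnection : (integer FromId, integer ToId, real Weight).
    setActivation : (integer Id, real Value).
    netInput : (integer Id) -> real Sum.
end class factNet

implement factNet
    open core

class facts
    connection : (integer FromId, integer ToId, real Weight).
    activation : (integer Id, real Value).

clauses
    addConnection(FromId, ToId, Weight) :-
        assert(connection(FromId, ToId, Weight)).

    setActivation(Id, Value) :-
        retractAll(activation(Id, _)),
        assert(activation(Id, Value)).

    % weighted sum of the activations of all neurons connected to Id
    netInput(Id) = sumAll([ W * V || connection(FromId, Id, W), activation(FromId, V) ]).

class predicates
    sumAll : (real* Values) -> real Sum.
clauses
    sumAll([]) = 0.0.
    sumAll([H | T]) = H + sumAll(T).

end implement factNet

% objNeuron: an object-based sketch.  Each neuron object keeps direct
% references to its weighted inputs, so calculate() is a plain traversal;
% the cost is creating and wiring all the objects when the net is built.
interface objNeuron
predicates
    addInput : (objNeuron Input, real Weight).
    calculate : () -> real Value.
end interface objNeuron

class objNeuron : objNeuron
constructors
    new : ().
end class objNeuron

implement objNeuron
    open core

facts
    inputList : tuple{objNeuron, real}* := [].

clauses
    new().

    addInput(Input, Weight) :-
        inputList := [tuple(Input, Weight) | inputList].

    calculate() = sumInputs(inputList).

class predicates
    sumInputs : (tuple{objNeuron, real}* Inputs) -> real Sum.
clauses
    sumInputs([]) = 0.0.
    sumInputs([tuple(N, W) | Rest]) = W * N:calculate() + sumInputs(Rest).

end implement objNeuron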
Thomas Linder Puls
Neural Network Program
Attachment: neuralNet.zip (neuralNet project, 19.6 KiB, downloaded 3415 times)
Regards Thomas Linder Puls
PDC
Paul Cerkez
The NN sample program Thomas provided was a good start and gave me a number of good ideas to get around some of the limitations I was experiencing.
While the design of the NN architecture was an influence in some networks, the actual implementation had more of an impact as the networks became larger.
SOMs were the most processor intensive, and computation increased exponentially with increases in size.
Back-props were fast, and design had a huge impact on processing.
Neural Abstraction Pyramid (NAP): while design had some minor impact, size and implementation were the driving factors.
Hybrid Custom Network (HCN): same as the NAP.
The SOM, NAP and HCN were all capable of unsupervised training. The final implementation of the HCN used semi-supervised training.
P.
AI Rules!
Thomas Linder Puls
Updated for Visual Prolog 7.4
Attachment: neuralNet.zip (updated for Visual Prolog 7.4, 19.49 KiB, downloaded 2012 times)
Regards Thomas Linder Puls
PDC
Thomas Linder Puls
The profile package is used to measure the time spent in various parts of the program, but the profile package is only present in the Commercial Edition.
However, measuring execution times is of course not essential to the neural network problem itself, and the profiling can simply be removed from the program.
Besides removing the profiling calls, you should also remove all packages that the IDE says it cannot find, and delete the include directory for the profile package that gives a file-not-found error.
If you remove profiling from the program, the run predicate should end up looking like this:
Code:
clauses
    run() :-
        console::init(),
        % build the pyramid shaped net and run a calculation on it
        N1 = pyramidBuilder::new(layers):net,
        N1:calculate(),
        % save the net, load it back in, recalculate and save it again
        netSaver::new(N1):save(netFile1),
        NL2 = netLoader::new(netFile1),
        N2 = NL2:net,
        N2:calculate(),
        netSaver::new(N2):save(netFile2).
Given these changes the program will run on the Personal Edition.
(I cannot recall what this program does beyond building a pyramid-shaped neural network.)
Regards Thomas Linder Puls
PDC
Paul Cerkez
Thomas,
The NN example provided a construct a developer can use to build pyramid-based NNs.
It definitely solved the performance issues I was experiencing. I took your base example and modified/extended it to support the various pyramid architectures I was experimenting with.
For example, I built a NAP with an input layer of 256x256 (i x j). It had 7 'processing' layers, each with K feature arrays (each array comprising i x j neural nodes). The i x j dimensions decrease by N/2 from layer to layer, but in a NAP, while the i x j dimension is decreasing, the number of feature arrays (K) in a layer doubles as you travel up the pyramid:
L(0) 256x256, K = 0
L(1) 128x128, K = 2
L(2) 64x64, K = 4
etc.
For a 256x256 input layer, you can expect approximately 469,000 neurons in the net (with all their weighted connections); reducing the input to 128x128, the total drops to approximately 117,000, a factor-of-4 change in total net size.
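To make the scaling concrete, here is a back-of-the-envelope sketch of how the node count of such a pyramid grows, written to drop into the run clause of a standard console project. The doubling schedule used (side length halves and the number of feature arrays doubles per processing layer) is only the pattern listed above, not the actual feature-array counts of the NAP described here, so the absolute totals will not match the 469,000/117,000 figures; it does reproduce the factor-of-4 change when the input side is halved.
Code:
% napSize: sketch of how the node count of a halving/doubling pyramid scales.
% With this simple schedule: totalNodes(256, 7) = 130560, totalNodes(128, 7) = 32640,
% i.e. a factor of 4 when the input side is halved.
implement main
    open core

class predicates
    totalNodes : (integer Side, integer Layers) -> integer Total.
    processing : (integer Side, integer Level, integer Layers) -> integer Nodes.
    twoTo : (integer N) -> integer P.

clauses
    % input layer (one Side x Side map) plus the processing layers
    totalNodes(Side, Layers) = Side * Side + processing(Side, 1, Layers).

    % processing layer Level has (Side/2^Level)^2 nodes per feature array
    % and 2^Level feature arrays
    processing(_Side, Level, Layers) = 0 :-
        Level > Layers,
        !.
    processing(Side, Level, Layers) = W * W * K + processing(Side, Level + 1, Layers) :-
        W = Side div twoTo(Level),
        K = twoTo(Level).

    twoTo(0) = 1 :-
        !.
    twoTo(N) = 2 * twoTo(N - 1).

clauses
    run() :-
        console::init(),
        stdio::writef("256x256 input: % nodes\n", totalNodes(256, 7)),
        stdio::writef("128x128 input: % nodes\n", totalNodes(128, 7)).

end implement main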
It had adequate performance for my research purposes. I am planning to experiment with a CUDA-capable implementation in the near future to improve processing speed for a 'production-like' environment.
P.
AI Rules!
Re:
Thomas, I tried importing this into VPC 9, but I guess I'm not familiar enough with earlier versions to upgrade the project. If it were trivial, could you post an upgraded project for VPC 9? It is fairly dated, but right now NN and deep learning technology is very relevant. For example, using a very simple NN, I just solved a problem for a company that professional OCR engines were unable to solve.
Paul Cerkez wrote (21 Oct 2013): The NN example provided a construct a developer can use to build pyramid-based NNs.
Paul, did you take this any farther? I'm working with TensorFlow and Keras in my professional work at the moment, but it would be nice, for my personal Prolog work, to have a simple NN without having to pull in all those packages.
Cheers,
choibakk
Thomas Linder Puls
Re: Neural Network Program
Updated for Visual Prolog 9
Attachment: neuralNet.zip (updated for Visual Prolog 9, 16.28 KiB, downloaded 1453 times)
Regards Thomas Linder Puls
PDC
Paul Cerkez
Re: Neural Network Program
choibakk,
It's been a while since I was active on this site (getting back to it, though).
Anyway, in response to your question about whether I did anything more with my earlier work: just a little bit.
Without building an API to C++, I could not directly take my VIP code to a CUDA implementation. I could not access the parallel processing capabilities. I simply did not have the time to do all the necessary coding.
I later took my VIP NN code, re-coded it in C++ (in Visual Studio with the CUDA plug-in) and then did some timing studies on the NN, comparing sequential processing (as in VIP) with CUDA parallel processing. There was well over a 250% increase in processing speed. I knew parallel was going to be faster, even with some extra overhead for data movement, but the whole thing was much, much faster than I expected.
While I would LOVE to do everything in VIP, it appears that I may need to create a C++ "translation" interface between VIP and the CUDA environment. The data is not the issue; it is the structural differences in some of the code/commands. I did start on it, and it is doable, but I just don't have the time to do it right now.
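For the curious, here is a minimal sketch of what the VIP side of such a translation interface could look like. Everything in it is hypothetical: the exported entry point nn_forward_layer, the flat-buffer marshalling and the class name are assumptions, not an existing API. The idea is that the C++/CUDA project exports plain C functions, the import library is added to the VIP project, and the predicates are declared with a C calling convention so they can be called like any other VIP predicate.
Code:
% cudaBridge: hypothetical VIP-side declarations for a CUDA-backed DLL.
% The entry point name and the flat-buffer argument scheme are assumptions;
% the DLL's import library would have to be added to the VIP project.
class cudaBridge
    open core

predicates
    % forward pass of one layer: flat input/weight/output buffers plus sizes
    forwardLayer : (pointer Inputs, pointer Weights, pointer Outputs,
            unsigned InCount, unsigned OutCount) language c as "nn_forward_layer".

end class cudaBridge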
AI Rules!