A few years back, based on advice from Thomas, Carston, and others from PDC at the 2008 VIP-ALC in St Petersburg, I implemented my neural network in a red-black collection.
It gave me faster performance and a clean structure. The NN was a multi-level one with many thousands of neurons and many more connections. (The final structure had 119,552 NAP neurons, 63 BPNN nodes, and over 1.2 million connections.)
Anyway, since the red-black collection could not be saved directly, I created a fact base that I could consult and save. The fact base held three different types of facts (including lists with compound arguments, i.e., tuples). A relatively simple set of predicates read the fact base and rebuilt the red-black collection; when it was time to save the collection, a couple more predicates moved the data from the collection back into the fact base.
Sample from saved file:
Code:
w(tuple(4,9999,8,7),0).
w(tuple(4,9999,8,8),0).
b(tuple(131,2,1,1),0).
b(tuple(200,1,1,1),0.143345542166336).
bw(tuple(101,1,1,1),tuple(4,9999,8,8),-60.2136796911554).
bw(tuple(101,1,1,1),tuple(4,9999,8,7),-60.2646875165705).
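The round trip (flatten the collection into facts, then parse the facts to rebuild it) can be sketched outside Visual Prolog too. Here is a minimal Python illustration: the dict stands in for the red-black collection, the `w(tuple(...),Value).` lines mirror the sample above, and the helper names (`save_facts`, `load_facts`) are my own, not from the original program:

```python
import re

# Stand-in for the red-black collection: weights keyed by an int tuple,
# mirroring w(tuple(4,9999,8,7),0) in the saved file above.
weights = {
    (4, 9999, 8, 7): 0.0,
    (4, 9999, 8, 8): 0.0,
}

def save_facts(weights):
    """Flatten the collection into fact-style lines, e.g. w(tuple(4,9999,8,7),0.0)."""
    lines = []
    for key, value in sorted(weights.items()):
        args = ",".join(str(k) for k in key)
        lines.append("w(tuple(%s),%r)." % (args, value))
    return lines

FACT_RE = re.compile(r"w\(tuple\(([^)]*)\),([^)]*)\)\.")

def load_facts(lines):
    """Rebuild the collection by parsing each saved fact back into a key/value pair."""
    rebuilt = {}
    for line in lines:
        m = FACT_RE.match(line)
        if m:
            key = tuple(int(n) for n in m.group(1).split(","))
            rebuilt[key] = float(m.group(2))
    return rebuilt

saved = save_facts(weights)
assert load_facts(saved) == weights  # the round trip preserves the data
```

The same pattern extends to the `b` and `bw` facts; each fact type just needs its own writer and parser pair.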
At first, it was a bit difficult to get the fact base set up correctly to match what was in the collection, but once it was done, it worked great. For example, in the red-black collection I had a couple of data elements (like object IDs) that are dynamically generated during the consult/rebuild and so can be safely ignored during the save.
This concept is what I now use whenever I have large, interconnected sets of data stored in collections. As Thomas stated, some conversions to facts take a bit more work; others are a snap.
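That "regenerate instead of save" detail can be sketched the same way. A hypothetical example (the field names are mine; the original structure isn't shown): only the stable data is written to the fact base, and the dynamic ID is handed out fresh each time the collection is rebuilt:

```python
import itertools

# Fresh object IDs are assigned at build time and never persisted.
_next_id = itertools.count(1)

def build_neuron(bias):
    """Rebuild a neuron from its saved bias; the object ID is regenerated."""
    return {"id": next(_next_id), "bias": bias}

def save_neuron(neuron):
    """Persist only the stable data; the dynamically generated ID is ignored."""
    return neuron["bias"]

n = build_neuron(0.143345542166336)
restored = build_neuron(save_neuron(n))
assert restored["bias"] == n["bias"]  # the meaningful data survives the round trip
```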