When going from one language to another, you may of course consider how things you faced in the old language are treated in the new one. But often you will find that the "things" in the new language are not the same as those in the old language.
You should notice that Visual Prolog is a fully compiled language, so your clauses will be compiled into a binary executable program.
Since ancient times, SICStus Prolog (and other Edinburgh/ISO Prolog variants) has had clauses that can be asserted/retracted/saved/consulted/etc. at runtime, so the program can be updated dynamically while it runs. The clauses are treated as data that can be modified at runtime. This is, for example, used to deal with knowledge bases: the knowledge base is represented as clauses which are dynamically asserted/retracted/saved/consulted/etc.
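For readers who have not used this style, the workflow can be sketched very loosely in Python (this is only an analogy I am adding here, not how any Prolog system is implemented): a store of facts that can be asserted, retracted, saved and consulted at runtime.

```python
# A loose Python analogy (not Prolog) of a dynamic knowledge base:
# facts can be asserted, retracted, saved to a file and consulted back.
import json

class FactStore:
    def __init__(self):
        self.facts = []          # facts kept as plain data, like dynamic clauses

    def assertz(self, fact):     # add a fact at the end (like Prolog's assertz)
        self.facts.append(fact)

    def retract(self, fact):     # remove the first matching fact
        self.facts.remove(fact)

    def save(self, path):        # like saving the fact database to a file
        with open(path, "w") as f:
            json.dump(self.facts, f)

    def consult(self, path):     # like consulting: read facts back in
        with open(path) as f:
            self.facts.extend(json.load(f))

kb = FactStore()
kb.assertz(["parent", "tom", "bob"])
kb.assertz(["parent", "bob", "ann"])
kb.retract(["parent", "tom", "bob"])
```

The point is simply that the clauses live as mutable data inside the running program.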
The dynamic behavior of ISO Prolog comes at a certain cost: it may be too expensive (and/or difficult) to compile and/or optimize clauses that change often. Furthermore, even if you use the dynamic features for knowledge bases and the like, 98% (figuratively speaking) of your predicates will never change dynamically; these predicates are the static part of your program, which you have spent months/years developing, debugging, etc. You don't need (or even want) them to be updated dynamically. On the other hand, you want these predicates to perform at the best possible speed.
So at some point in the history of ISO Prolog, it was made possible to declare predicates as non-dynamic. This made it possible/easier for ISO Prolog systems like SICStus to compile and optimize them.
I guess it is this feature you are referring to.
Visual Prolog is quite different in this respect: a Visual Prolog program is the 98% static part of your program, and this part is optimized in many ways and fully compiled into an executable file (exe or dll).
So your question is somewhat incompatible with Visual Prolog, so to speak. The "really good" question in this respect is not about the 98% static code in your program; it is about the 2% dynamic code.
Dealing with those 2% can range from trivial to very complicated.
It is trivial if your 2% consists of what we call facts (i.e. grounded/variable-free clauses without a body). Such facts can be asserted/retracted/saved/consulted/etc. at runtime, and as the other users mention above, save and consult do not pose any significant performance problem. At runtime the facts are handled linearly, and that can be a source of algorithmic inefficiency (I do not know whether SICStus is more efficient in this respect).
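To make the linear-handling point concrete, here is a small sketch (in Python, added purely for illustration; the numbers come from the code itself, not from any Prolog system): looking up one fact among n linearly stored facts costs about n/2 comparisons on average.

```python
# Linear fact lookup: on average, finding one fact among n
# linearly stored facts costs about n/2 comparisons.
def linear_lookup(facts, key):
    comparisons = 0
    for k, value in facts:
        comparisons += 1
        if k == key:
            return value, comparisons
    return None, comparisons

facts = [(i, f"fact-{i}") for i in range(10_000)]

# Average number of comparisons over all possible keys:
total = sum(linear_lookup(facts, k)[1] for k, _ in facts)
print(total / len(facts))   # 5000.5, i.e. about half the data
```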
It can be very complicated if your 2% contains "real clauses" with variables and bodies, because then you will have to implement something that eliminates this need yourself. And that may be very complicated.
Finally, I have a little twofold general advice on performance:
- Do not solve imaginary performance problems that you don't actually have.
- Instead learn to consider algorithmic complexity of your code and use efficient data structures by default.
The first part of the advice has to do with not wasting your time solving non-existing problems and, what is even worse, making your code unnecessarily complex and hard to maintain.
The second part of the advice has to do with the real source of (software) performance problems: algorithmic complexity. While you can perhaps make a program/algorithm 20% more efficient by "fiddling" with the code, nobody will really be happy, because 20% is almost always "nothing". When you have a real performance problem, you will normally need to make the code at least twice as efficient, but more likely 10 times as efficient. Such an efficiency improvement cannot be achieved by "fiddling"; it requires a reduction of the algorithm's complexity.
Facts, as mentioned above, are arranged linearly, and that can easily become a source of inefficiency. The parts of your code that need to traverse all your data will not suffer from this, but the parts that need to access a single piece of data may suffer dramatically, because on average you will have to search through half your data to find a certain piece.
Therefore it is a good idea to (by default) consider access to your data:
- Does this code search linearly for a piece of data among a (potentially) large amount of data?
- Should I use a more efficient data representation (e.g. a map from the collection library) for this kind of information?
- ...
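To illustrate the difference such a representation makes (again a Python sketch of the general idea, not Visual Prolog's actual collection library), a keyed map turns the linear scan into a constant-time (on average) lookup:

```python
# Same data, two representations: a linear list of facts versus a
# keyed map. The map makes single-item access cheap (O(1) on average
# for a hash map) instead of O(n) for the linear scan.
facts_list = [(i, f"fact-{i}") for i in range(10_000)]
facts_map = dict(facts_list)   # the "map" representation

def lookup_list(key):
    for k, v in facts_list:    # linear scan: O(n)
        if k == key:
            return v
    return None

def lookup_map(key):
    return facts_map.get(key)  # keyed access: O(1) on average

assert lookup_list(9_999) == lookup_map(9_999) == "fact-9999"
```

Traversal-heavy code gains nothing from the map, which is exactly why it pays to look at how each piece of code accesses the data.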
If you do a good job in this respect, your code will by default perform much better without becoming less readable. The downside is that when you then do get performance problems, they will be much harder to solve, because you have already done what normally helps.