-
- VIP Member
- Posts: 331
- Joined: 14 Nov 2002 0:01
Callback Actions for Exceptions and Failure
Hello Thomas,
for some purposes it could be very beneficial if callback predicates could be set which are called
a) whenever an exception occurs,
b) whenever something has failed, i.e. each time backtracking has been performed.
The purpose for which I would like to use the callback predicates is this (I have already talked about it in earlier posts): such callback predicates would make it possible to code a determ unify predicate (you described the unify predicate in the tutorial How To Remove Reference Domains from a Project). Currently the unify predicate must be nondeterm to accomplish releasing the bindings when backtracking takes place. A nondeterm unify predicate, however, has the huge drawback that cuts can no longer be used in places where they would also cut off a backtrack point left by the unify predicate. The problem also applies to the implicit cuts in list comprehensions, if-then-else, and foreach.
To avoid falling into an endless loop, the callback predicate of a) should not be called on exceptions that occur within itself, and the predicate of b) should not be called when something fails within itself.
Can you pleeease consider implementing a way programmers can set such callback predicates in a future VIP version?
Best regards
Martin
- Thomas Linder Puls
- VIP Member
- Posts: 1424
- Joined: 28 Feb 2000 0:01
I fail to see how such callbacks would help you. The problem you face is to detect whenever a fail goes across your unify predicate, because that is the time when you need to release the bindings that were made in that unification.
If your call stack looks like this:
D
C
B
A
and you fail, then control may be transferred to A, B, or C, or stay within D, depending on where the first backtrack point is located.
If it is B that contains the first backtrack point, then you must undo the unifications made in D, C and those made in B after the backtrack point, but not those made before the backtrack point.
I fail to see how you can find exactly those unifications in a "fail" callback.
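For reference, the bookkeeping Thomas alludes to is what the trail in a typical Prolog implementation does internally: each choice point records the current trail mark, and backtracking unwinds the trail exactly to that mark, regardless of which frame made the bindings. A minimal Python model of the stack scenario above (all names invented for illustration):

```python
trail = []            # bindings in creation order: (frame, variable)
bindings = {}         # variable -> value

def bind(frame, var, value):
    bindings[var] = value
    trail.append((frame, var))

def make_choice_point():
    return len(trail)             # a choice point just remembers the trail mark

def backtrack_to(mark):
    while len(trail) > mark:      # undo everything trailed after the mark
        _, var = trail.pop()
        del bindings[var]

# Call chain A -> B -> C -> D; the only choice point is created inside B.
bind("A", "X", 1)
bind("B", "Y", 2)                 # made in B before the choice point: survives
cp = make_choice_point()
bind("B", "Z", 3)                 # made in B after the choice point: undone
bind("C", "U", 4)                 # undone
bind("D", "V", 5)                 # undone

backtrack_to(cp)
print(sorted(bindings))           # ['X', 'Y']
```

Exactly the bindings made in D, C, and in B after the choice point are undone; the ones made before it survive.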
If you want to have more control over the unbinding you should instead examine the PIE code, because it has (since the tutorial) been updated to handle unbinding more explicitly.
I have also attached a little project that illustrates how to get more precise control over the unbinding; this project does not, however, solve any of the problems you mention.
So it may be a better starting point than PIE, but PIE is more complete.
- Attachments
-
- refElim.zip
- Reference Elimination example
- (8.43 KiB) Downloaded 832 times
Regards Thomas Linder Puls
PDC
-
- VIP Member
- Posts: 331
- Joined: 14 Nov 2002 0:01
The idea for finding exactly those unifications which have to be undone is:
When I perform a binding, I push the current backtrack-stack position, along with the action to undo the binding, onto a stack. When a fail occurs, the fail-callback predicate is executed. In the callback predicate I look at the backtrack-stack position stored in the stack's top entry. As long as it is smaller than the current backtrack-stack position, I keep popping entries from the stack and performing their undo-actions.
Regards
Martin
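Martin's scheme can be sketched as a small Python simulation (positions, actions, and names are invented for illustration; the backtrack stack is assumed to grow downwards, so smaller positions are deeper):

```python
undo_stack = []   # entries: (backtrack_stack_position, undo_action)

def record_binding(bt_pos, undo_action):
    # Called whenever unify makes a binding.
    undo_stack.append((bt_pos, undo_action))

def on_fail(current_bt_pos):
    # The fail callback: run undo actions recorded deeper than the
    # point we are failing back to (stack grows downwards).
    while undo_stack and undo_stack[-1][0] < current_bt_pos:
        _, undo = undo_stack.pop()
        undo()

bindings = {"X": 1, "Y": 2, "Z": 3}
record_binding(1000, lambda: bindings.pop("X"))   # above the failure point: kept
record_binding(900, lambda: bindings.pop("Y"))    # deeper: must be undone
record_binding(800, lambda: bindings.pop("Z"))    # deeper: must be undone

on_fail(950)          # control returns to a choice point at position 950
print(bindings)       # {'X': 1}
```

Only the bindings recorded below the surviving choice point are released, which is the behaviour Martin wants from the fail callback.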
- Thomas Linder Puls
- VIP Member
- Posts: 1424
- Joined: 28 Feb 2000 0:01
Reference domains were removed from Visual Prolog in version 6.0. A reference domain could contain free/unbound variables:
It is considered by many people a fundamental part of Prolog (i.e. ISO/Edinburgh Prolog), but we decided to remove it from Visual Prolog because in practice we never used it (with PIE as the only exception). The reason we didn't use it is that it is a double-edged sword: using free variables can be very powerful, but that power can also be very difficult to control.
Code:
clauses
    p() :-
        X = _,              % X is free
        L1 = [X, X, X, Y],  % X is still free, Y is also free
        L2 = [Z, 2, Z, Y],  % Z is also free
        L1 = L2,            % L1 and L2 now have the value [2, 2, 2, Y], where Y is still free
        ...
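For readers without reference-domain experience, the effect of the fragment can be imitated in Python with explicit logic variables (a minimal, structure-less unifier; only the variable names come from the fragment):

```python
class Var:
    """A logic variable; ref is None while the variable is free."""
    def __init__(self, name):
        self.name, self.ref = name, None

def walk(t):
    # Follow the binding chain to the representative term.
    while isinstance(t, Var) and t.ref is not None:
        t = t.ref
    return t

def unify(a, b):
    a, b = walk(a), walk(b)
    if a is b:
        return True
    if isinstance(a, Var):
        a.ref = b          # bind the free variable
        return True
    if isinstance(b, Var):
        b.ref = a
        return True
    if isinstance(a, list) and isinstance(b, list) and len(a) == len(b):
        return all(unify(x, y) for x, y in zip(a, b))
    return a == b

X, Y, Z = Var("X"), Var("Y"), Var("Z")
L1 = [X, X, X, Y]
L2 = [Z, 2, Z, Y]
assert unify(L1, L2)
show = lambda t: walk(t).name if isinstance(walk(t), Var) else walk(t)
print([show(t) for t in L1])   # [2, 2, 2, 'Y']  -- Y is still free
```

Unifying the two lists binds X and Z to 2 while Y remains free, just as in the Visual Prolog fragment.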
Regards Thomas Linder Puls
PDC
-
- VIP Member
- Posts: 331
- Joined: 14 Nov 2002 0:01
The semantics of try-finally would be: just call the Handler once at the very end; that is, after Body has been executed (or has attempted to be executed) and no backtrack points in Body are left.
That means the call to the Handler would be triggered by one of these:
- All alternatives in Body have been tried and finally Body finishes without leaving a backtrack point.
- A cut after the try-finally statement cuts off the backtrack points from the Body.
- The Body raises an exception.
This semantics seems perfectly intuitive to me. There is no misleading or awkward syntax here (unlike in the thread with the "for all .. holds .." syntax). Or am I overlooking some problem which the above semantics would introduce? Of course I don't know how difficult it would be to implement that semantics in VIP.
Regards
Martin
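The three triggers Martin lists have a close analogue in Python generators, where a generator's finally clause runs exactly once: on exhaustion (all alternatives tried), on early close (comparable to a cut discarding the remaining backtrack points), or on an escaping exception. A sketch:

```python
events = []

def body():
    try:
        yield 1        # each yield is like leaving a backtrack point
        yield 2
    finally:
        events.append("handler")   # runs exactly once, at the very end

# 1) All alternatives tried -> handler runs on exhaustion.
for _ in body():
    pass

# 2) "Cut": discard the remaining alternatives -> handler runs on close().
g = body()
next(g)
g.close()

print(events)   # ['handler', 'handler']
```

(The third trigger, an exception escaping the body, likewise runs the finally clause before propagating.)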
- Thomas Linder Puls
- VIP Member
- Posts: 1424
- Joined: 28 Feb 2000 0:01
I see.
The try-finally could probably be implemented, but we have not been very "happy" about having to run programmer code on cut. This has never been the case before, and it seems a bit too much to change cut in such a dramatic way to achieve this feature.
Making try-catch nondeterm is even harder. The problem is caused by the combination of the following two properties:
- a try-catch is a thing in the call stack; everything that is deeper in the call stack is affected by the try-catch.
- an apparent return from a nondeterm predicate is actually a call deeper into the code, adding additional stuff to the call stack rather than removing anything.
I am curious how you would use this feature to implement your "determ unify"?
Regards Thomas Linder Puls
PDC
-
- VIP Member
- Posts: 331
- Joined: 14 Nov 2002 0:01
I would use the feature to enclose a nondeterministic version of the unify predicate in a try-finally statement. That way the unify predicate does not actually become determ, but with regard to cuts it could behave like "ordinary" deterministic unification. The construction could be similar to the way it is done in PIE:
I use resetPoint objects. These objects can store actions to be executed later. When bindings are made in the unify predicate, unify stores the actions to undo the bindings in a resetPoint object.
Code:
interface resetPoint
domains
    resetAction = ().  % An undo-action.
properties
    stackPosition : programControl::stackMark (o).  % Position of the backtrack stack at creation of this object.
predicates
    addResetAction : (resetAction).  % Append an undo-action.
predicates
    getResetAction_nd : () -> resetAction nondeterm.  % Retrieve the undo-actions.
predicates
    reset : resetAction.  % Executes the undo-actions.
end interface
The resetPoint objects are created inside the backtrackpoint predicate. The predicate has mode multi; a call to it leaves a backtrack point. When the program backtracks to it, it executes the undo-actions which have been stored in the resetPoint object. Aside from that, the backtrackpoint predicate inserts the resetPoint objects into a fact database when they are created, and removes them from there when their undo-actions are executed.
Code:
predicates
    backtrackpoint : () multi.
clauses
    backtrackpoint() :-
        RP = resetPoint::new(),
        ( % Stack on forward
            asserta(resetPoint_fact(RP))
        or % reset on backtrack
            retractAll(resetPoint_fact(RP)),
            RP:reset(),
            fail
        ).
The unification is done in the usual way; however, the clause body of the unify predicate is now enclosed in a try-finally statement.
Code:
predicates
    unify : (term A, term B) nondeterm.
clauses
    unify(A, B) :-
        try
            Do-The-Unification
        finally
            Cut-Handler
        end try.
In the Cut-Handler I inspect the resetPoint objects stored in resetPoint_fact: those resetPoint objects whose stackPosition is lower than the current backtrack stack position have been accidentally cut off (since the backtrack stack grows downwards). The resetPoint object with the least stackPosition that is greater than or equal to the current backtrack stack position is the last "alive" resetPoint object. Using getResetAction_nd and addResetAction I move the undo-actions (preserving their order) from the accidentally cut-off resetPoint objects to the last alive resetPoint object. Finally I retract the cut-off resetPoint objects from resetPoint_fact.
To guarantee that there is always a suitable last alive resetPoint object, I demand that the programmer call the predicate backtrackpoint at the beginning of each cut scope in which unify is used. That means each clause which contains a call to unify must start with a call to backtrackpoint. To make this call more intuitive to the programmer, backtrackpoint could be renamed to initCutScope.
---
Having written all of that, I am getting doubts whether this solution is really elegant. Drawbacks are that unify is still nondeterm and that the calls to initCutScope are necessary. Furthermore, the construction makes unify compatible with cuts by !, but not with arbitrary dynamic cuts.
If the Handler in try-finally were also allowed to leave backtrack points, the initCutScope calls could be omitted, because then I could call backtrackpoint in the Handler to create a new resetPoint object instead of having to use an existing one. However, that would complicate the change to VIP regarding the try-finally statement even more.
So, reviewing the matter, I am coming to the conclusion that the initially proposed solution with callback predicates for failure and exceptions is the better one, even though the callback predicates look a bit more technical/artificial, while allowing the try-finally statement to have a nondeterministic Body seems more natural. Furthermore, I suppose the callback predicates would be easier to implement in VIP than the change to try-finally and try-catch-finally.
Regards
Martin
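The Cut-Handler bookkeeping described above can be sketched as a small Python simulation (names and stack positions are invented; the backtrack stack is assumed to grow downwards, so a position below the current one means the resetPoint was cut off):

```python
class ResetPoint:
    def __init__(self, stack_position):
        self.stack_position = stack_position
        self.actions = []           # undo-actions, in insertion order

reset_points = []                   # plays the role of resetPoint_fact

def cut_handler(current_pos):
    # "Cut off" means stack_position < current_pos (stack grows downwards).
    cut_off = [rp for rp in reset_points if rp.stack_position < current_pos]
    alive = [rp for rp in reset_points if rp.stack_position >= current_pos]
    # The last alive resetPoint has the least stackPosition >= current_pos.
    target = min(alive, key=lambda rp: rp.stack_position)
    for rp in cut_off:              # preserve the order of the undo-actions
        target.actions.extend(rp.actions)
        reset_points.remove(rp)     # retract the cut-off resetPoint
    return target

a, b, c = ResetPoint(1000), ResetPoint(900), ResetPoint(800)
a.actions, b.actions, c.actions = ["undo-a"], ["undo-b"], ["undo-c"]
reset_points.extend([a, b, c])

survivor = cut_handler(950)         # a cut left the stack at position 950
print(survivor.stack_position, survivor.actions)
# 1000 ['undo-a', 'undo-b', 'undo-c']
```

The undo-actions of the cut-off resetPoints migrate to the last alive one, so they will still be executed if control eventually backtracks past it.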
-
- VIP Member
- Posts: 331
- Joined: 14 Nov 2002 0:01
Looking again at my construction above, I see that it does not work at all: the problem is that a cut would also cut off the backtrack point which the programmer has set via initCutScope. The backtrack point to which the undo-actions from the cut-off resetPoint objects are moved would have to be set BEFORE entering the cut scope, so that it cannot be cut off from inside the scope.
Anyway, it would be a significant improvement to VIP if there were some feature, whatever it is, which enables coding a cut-safe deterministic unify.
Many regards
Martin
- Thomas Linder Puls
- VIP Member
- Posts: 1424
- Joined: 28 Feb 2000 0:01
Thank you for the description.
Allowing backtrack points in the mentioned places would change these constructions in a drastic way. Today you are affected by the construction only while you are in the textual scope from try to end try, but with the change it would become a dynamic construction whose effect can still be active after you have left that scope (actually until there are no backtrack points left in the construction).
So it would change the construction from a static scope (compile-time determined) to a dynamic scope (runtime determined).
The syntax with a clear textual scope was chosen to fit the textual/static scope semantics.
Regards Thomas Linder Puls
PDC