I need to run a data definition update on databases that have only a runtime license installed. I've been following the process in this KB article http://knowledgebase.progress.com/articles/Article/P4760, which works fine when I do it manually, since I can click away any errors and post-update notifications.
The issue is that I'm trying to automate this through our maintenance procedure, which runs OS-COMMAND with the prowin32 call to the update procedure, but it freezes the maintenance with lock messages and other post-update prompts.
Is there any way I can suppress those, or force termination of the process so that the maintenance procedure can continue? I have tried adding -lkwtmo 60 and -rr as startup parameters, but it still doesn't help with terminating the process.
Currently the command being passed through to os-command is:
"prowin <dbname> -H <DBSERVER> -S <PORT> -U <username> -P <password> -lkwtmo 60 -rx -p p-updatedf.p -rr -param delta.df"
And in p-updatedf.p, I simply have:
run prodict/load_df.r(SESSION:PARAMETER).
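A hedged sketch of what p-updatedf.p could look like with an ERROR/STOP wrapper and an explicit QUIT, so a batch session cannot hang on a prompt or a stopped load (the wrapper and QUIT are my additions, not part of the KB article):
----
/* p-updatedf.p - sketch: trap ERROR and STOP conditions
   (e.g. lock wait timeout) so the batch client terminates
   instead of hanging on a prompt */
DO ON ERROR UNDO, LEAVE
   ON STOP  UNDO, LEAVE:
    RUN prodict/load_df.r (SESSION:PARAMETER).
END.
QUIT. /* make sure the session ends either way */
----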
Thanks Mike for that. Do you know if that would work on sites with client networking licenses only?
As for my current method, I managed to get it working with batch mode (-b) startup parameter but I'm finding that there are always schema locks caused by my own process calling p-updatedf.p. The schema locks appear as soon as it executes the os-command and terminates once it reaches the lock wait time out. I'm sure I'm missing something here...
In the DataDigger I have an option to clone the complete database (schema) and I use this to suppress messages:
RUN prodict/dump_df.p PERSISTENT SET hProc
    (INPUT 'ALL', INPUT cDf, INPUT '1252').
RUN setSilent IN hProc (YES) NO-ERROR. /* setSilent not avail in all versions */
RUN doDump IN hProc.
DELETE PROCEDURE hProc.
The trick here is to run setSilent in dump_df. You could check whether an option like this is available in load_df as well. I did not use that for DataDigger since the message is kind of useful there.
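If load_df offers the same hook, the call would look roughly like this. I have not verified that prodict/load_df.p can be run persistently, or that it has a setSilent internal procedure, so treat this purely as a pattern to test on your version (the NO-ERROR keeps it harmless if the hook does not exist):
----
DEFINE VARIABLE hProc AS HANDLE NO-UNDO.

/* unverified sketch - mirrors the dump_df pattern above */
RUN prodict/load_df.p PERSISTENT SET hProc (INPUT "delta.df").
RUN setSilent IN hProc (YES) NO-ERROR. /* may not exist in load_df */
IF VALID-HANDLE(hProc) THEN DELETE PROCEDURE hProc.
----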
(get DD here if you like to test it, send me pm if you need more info)
The load schema task can be executed with just a runtime license. And even if you have data migration procedures, you can still execute them with -rx (<Option name="-rx" /> inside your <PCTRun> node).
Thanks for all your suggestions.
However, I need to do this data definition update on more than 100 client sites, so it is not really feasible to install third-party software on all of them, which I believe is required to use PCT and DataDigger?
I have managed to suppress the messages using the -b parameter. However, it seems that when I run the df load via OS-COMMAND, it always gives me a locking error, whereas if I run it through the command prompt it works fine. So I'm stuck on how to get around this, as it has to be done automatically via our maintenance procedure.
If you are running the upload via OS-COMMAND, I am assuming that means you are executing another AVM with -b through the OS-COMMAND to do the .df load, and that is getting blocked on a schema lock. Do any programs in your first AVM contain references to DB tables? If so, they get a shared schema lock, and the subsequent client will block trying to get the exclusive lock. If this is indeed what you are trying to do, you have to remove any references to DB tables from programs running inside the first client. Maybe I am misunderstanding your issue.
Thanks Garry. Yes, your assumption is correct: we do have a loader procedure that first elevates the connected user to an admin user with read/write access to the appropriate VSTs for the schema change. This procedure then calls the maintenance procedure, which contains the bit of code that calls OS-COMMAND for the DF load.
However the thing is that when I check promon just prior to the OS-COMMAND call, I do not see any schema locks. But as soon as it executes the procedure containing the RUN prodict/load_df.r(SESSION:PARAMETER), it encounters locking errors on _DB, _Field and _File tables until it reaches the lock wait time out and terminates the call at which point the locks are then released. This does not happen if I run the call via command prompt instead of the procedure call.
I have also tried running a separate procedure manually with just the OS-COMMAND call and no references to any of the database tables and the same thing happens, so basically the procedure itself is locking the schema but I'm not sure how I can get around it other than initiating the update outside of the AVM.
Hopefully what I've explained makes sense, I can send across the separate update procedure I run manually if it helps.
If you load the schema using the DataAdminService LoadSchemaChanges the post load messages should be suppressed:
----
block-level on error undo, throw.
using OpenEdge.DataAdmin.DataAdminService from propath.
using OpenEdge.DataAdmin.Error.DataAdminErrorHandler from propath.
define variable service as DataAdminService no-undo.
define variable errorhandler as DataAdminErrorHandler no-undo.
service = new DataAdminService(?). /* current connection */
service:LoadSchemaChanges("mydatadef.df").
catch e as Progress.Lang.Error :
errorHandler = new DataAdminErrorHandler().
errorHandler:Error(e).
end catch.
----
You can use a two method approach if you want to set some of the data definition options, like AddObjectsOnline, ForceCommit, ForceIndexDeactive, ForceSharedObjects (for multi-tenant):
----
block-level on error undo, throw.
using OpenEdge.DataAdmin.DataAdminService from propath.
using OpenEdge.DataAdmin.Error.DataAdminErrorHandler from propath.
using OpenEdge.DataAdmin.ISchema from propath.
define variable service as DataAdminService no-undo.
define variable errorhandler as DataAdminErrorHandler no-undo.
define variable mychanges as ISchema no-undo.
service = new DataAdminService(?). /* current connection */
mychanges = service:GetSchemaChanges("mydatadef.df").
mychanges:LoadOptions:ForceCommit = true.
service:UpdateSchemaChanges(mychanges).
catch e as Progress.Lang.Error :
errorHandler = new DataAdminErrorHandler().
errorHandler:Error(e).
end catch.
----
You might need to show the errors you are getting, and the promon screens showing the locks. Locks on _Db and _File are not the traditional "schema lock". Without knowing exactly what you are seeing, I could be providing incorrect information.
Note that the schema locking mechanism does not use the lock table, so schema locks don't show up in the lock table.
Thanks Håvard, your method works when run via the Procedure Editor, but again, when I run it in the maintenance procedure, it hits the same schema lock as the previous method.
It seems the lock is introduced by the maintenance procedure, but that procedure is required to run the update, so I'm kind of stuck in a catch-22. Is there any way I can circumvent the lock?
Promon is showing a share lock. I am going to assume you have a record that was locked exclusively and was not tightly scoped, resulting in a downgrade to a share lock.
Our code for checking if an incremental df needs to be loaded is all dynamic.
Promon is showing a shared schema lock. You have a program running which has a DB reference. Even if it is not locking a record, the AVM still gets a lock on the schema. Try the following in the sports2000 db:
FIND FIRST Customer NO-LOCK.
PAUSE.
Go look in promon at the schema locks. You will see a schema lock.
You need to change your maintenance program so there are no programs with DB references running when you execute the load procedure. Use dynamic code, or else have all your db references in subprograms, and return to a "reference-less" main procedure to execute the load procedure.
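As an illustration of the "no static references" approach, a check done entirely with dynamic buffers compiles to r-code with no DB references, so running it does not take the shared schema lock. The AppVersion table and SchemaVersion field here are hypothetical names for illustration:
----
/* sketch: dynamic buffer access only - this program's r-code
   carries no static DB references, so no shared schema lock */
DEFINE VARIABLE hBuf AS HANDLE NO-UNDO.

CREATE BUFFER hBuf FOR TABLE "AppVersion". /* hypothetical table */
hBuf:FIND-FIRST("WHERE TRUE", NO-LOCK) NO-ERROR.
IF hBuf:AVAILABLE THEN
    MESSAGE "Schema version:"
        hBuf:BUFFER-FIELD("SchemaVersion"):BUFFER-VALUE
        VIEW-AS ALERT-BOX.
DELETE OBJECT hBuf.
----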
Thanks Garry. The issue is that I'm getting a schema lock even before I reference any database tables because I'm calling the online update procedure within the maintenance procedure.
What happens is that say we initiate the maintenance by running RunMaintenance.p. In that procedure, we only have 2 calls:
run ConnectAsAdmin.p.
run RunOnlineUpdate.p.
ConnectAsAdmin.p simply elevates the current user to an admin user with permissions to make changes to the schema. If I put a break point between run ConnectAsAdmin.p and run RunOnlineUpdate.p and check promon then, there are no schema locks.
However, if I put a break point right at the start of RunOnlineUpdate.p, before any definitions or database references take place, I get the SHR lock. I'm assuming it's due to transaction scoping, but I'm not sure how to get around it.
Sorry, my example was not very clear. Put the PAUSE statement before the FIND FIRST, you should see the schema lock. The schema lock is not in response to running lines of ABL. When the AVM loads the program, it checks for database references, and will get the schema lock as necessary, before running any code. Hence the debugger doesn't provide the granularity for you to see when the lock is obtained. You could use RCODE-INFO:DB-REFERENCES on RunOnlineUpdate.r (you have to compile it to .r). It will confirm whether there are database references. Or maybe compile it without any database connected.
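The check itself is only a couple of lines. DB-REFERENCES is the attribute mentioned above; TABLE-LIST (which names the referenced tables) is my addition, so verify it is available on your version:
----
/* inspect the compiled r-code for schema references */
RCODE-INFO:FILE-NAME = "RunOnlineUpdate.r".
MESSAGE "Has DB references:" RCODE-INFO:DB-REFERENCES SKIP
        "Tables:" RCODE-INFO:TABLE-LIST
    VIEW-AS ALERT-BOX INFORMATION.
----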
Ah right, I see what you mean. I didn't know that it would check the code for any references and lock at that point. I think I know what I need to do then to get rid of the lock. Thanks for your help!
The reason it works this way is that once a compiled program containing database references starts running, (some) schema changes must be prevented in order for the compiled bytecode to remain valid during its execution.
(A .p is compiled just before it starts running.)