OpenEdge 11.6.3 Sun Solaris 64 Bit.
As far as I understand, I can assign individual storage objects to the alternate buffer pool via the Data Administration tool, or whole storage areas with the proutil utility.
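For whole areas, if I read the documentation correctly, it would be something along these lines (the database and area names are just examples):

proutil proddb -C enableB2 "Customer Area"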
For security reasons I am not allowed to start a Data Administration session against our production database. Is there any other way I could deploy the assignment of individual tables and indexes to the alternate buffer pool? For example, an ABL procedure that could be invoked from a shell script by the system operator?
Thanks in advance, Richard.
You can apply a .df that assigns objects to the ABP. Here is a simple example of the type of thing you could do:
/* build a delta .df that assigns the listed tables and their indexes
   to the alternate buffer pool */
def var v-tables as char no-undo.
def var v-table  as char no-undo.
def var i        as int  no-undo.

v-tables = "customer,order,order-line".

output to value( ldbname( "dictdb" ) + "_abp.df" ).

do i = 1 to num-entries( v-tables ):
  v-table = entry( i, v-tables ).

  find dictdb._file no-lock where _file-name = v-table no-error.
  if not available _file then next.

  put unformatted "UPDATE TABLE " + quoter( v-table ) skip.
  put unformatted ' BUFFER-POOL "Alternate"' skip(1).

  /* assign each index of the table as well */
  for each dictdb._index no-lock of _file:
    put unformatted "UPDATE INDEX " + quoter( _index-name )
      + " OF " + quoter( v-table ) skip.
    put unformatted ' BUFFER-POOL "Alternate"' skip(1).
  end.
end.

/* standard .df trailer */
put "." skip.
put "PSC" skip.
put "bufpool=yes" skip.
put "cpstream=ISO8859-1" skip.
put "." skip.
put "0000099999" skip.

output close.
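Since you cannot start a Data Administration session, the generator itself can be run as a batch character client from a shell script. A rough sketch, assuming the procedure above is saved as gen_abp_df.p (the procedure name, database path and log file are placeholders):

#!/bin/sh
# run the .df generator in batch mode (no UI) against the production database
$DLC/bin/mpro /db/proddb -b -p gen_abp_df.p >> /tmp/gen_abp_df.log 2>&1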
You can change the logic to meet your needs, add error-handling, etc. You would wind up with a .df that looks like this:
UPDATE TABLE "customer"
 BUFFER-POOL "Alternate"

UPDATE INDEX "Comments" OF "customer"
 BUFFER-POOL "Alternate"

UPDATE INDEX "Cust-Num" OF "customer"
 BUFFER-POOL "Alternate"

UPDATE TABLE "order"
 BUFFER-POOL "Alternate"

UPDATE INDEX "Cust-Order" OF "order"
 BUFFER-POOL "Alternate"

.
PSC
bufpool=yes
cpstream=ISO8859-1
.
0000099999
Then you could apply that from the command line with the dictionary code (prodict/load_df.p).
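For example, something along these lines, assuming the generated file is /tmp/proddb_abp.df (paths and names are placeholders; check the -param handling of prodict/load_df.p for your release):

#!/bin/sh
# load the delta .df in batch; prodict/load_df.p picks the file name up from -param
$DLC/bin/mpro /db/proddb -b -p prodict/load_df.p -param /tmp/proddb_abp.df >> /tmp/load_abp.log 2>&1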
Deploying delta .df files is already part of our deployment process.
Thanks!