AI Best Practice (v10.2b07)

Posted by Betty Hardin on 04-Apr-2016 05:02

Windows 2008, Progress 10.2b07, Syteline v6.x

We are (finally!) upgrading from Syteline v4 to v6 (Progress 8.3e to 10.2b07). In the past I have managed AI with batch files; on 10.2b I am planning to use OEM, which is new to me. We are on a REALLY tight schedule, with go-live planned for April 15th. Although I've played with it a bit, I haven't had a whole lot of playtime.

On 8.3e, I have been using ALL variable extents and switching extents every 15 minutes via scheduled tasks, using multiple extents (7).
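Each scheduled task essentially just runs something like this (a simplified sketch; the paths are placeholders rather than my real ones):

    @echo off
    rem Switch to the next AI extent; run every 15 minutes as a scheduled task.
    rem Assumes the Progress bin directory is on the PATH; the DB path is made up.
    rfutil D:\db\syteline -C aimage new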

Wondering if that's still a good idea, or should I use fixed extents and an on-demand switch? One big extent or several extents? (I do have large files enabled.)

Would appreciate anything that anyone might want to share about 'best practices' in the implementation of AI on 10.2bx. And ... if you have scripts to share, that sure would be handy ;-)

Thank you!

All Replies

Posted by James Palmer on 04-Apr-2016 05:12

Check out the docs on the AI Archiver. It does all the hard work of switching the extents by itself, making your life a whole lot easier. All you have to do is set an interval and a location (or locations), then create a script to back up the archived files and clear out aged ones. It is a whole lot simpler than the old-style management.

Posted by Paul Koufalis on 04-Apr-2016 05:14

Good Morning Betty,

I also use all variable extents and timed switching. There is a small performance impact, but it is usually (though not always) too small to measure in human time.

You should get rid of most of your scripts and use the automatic AI File Manager daemon. It can be enabled online or offline very easily. You'll also need to add -aiarcinterval 900 -aiarcdir <some local dir> to your startup parameters. If you are using OpenEdge Explorer, these parameters are available in the configuration screen for each DB.

Regarding the AI archive directory: you can specify TWO directories separated by a comma. If one fills up, the AI Archiver starts writing to the second one. Both should be on local drives.
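For example, the .pf additions might look like this (the paths here are placeholders; 900 seconds is your 15-minute interval):

    -aiarcdir D:\aiarch1,E:\aiarch2
    -aiarcinterval 900

and AI File Management itself is switched on with something like the following (after-imaging must already be enabled on the DB):

    rfutil D:\db\mydb -C aiarchiver enable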

You need to script only two things:

1. Transferring those AI files from the production box to some other box, preferably offsite

2. Purging the AI Archive directories (local and remote). I often keep a few days on hand and delete anything older than 3 days.

Scripting is easy. Look at the FORFILES command; a rough sketch is below.
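Something along these lines, for example (the paths and server name are made up):

    @echo off
    rem 1. Copy new AI archive files to another box, ideally offsite.
    robocopy D:\aiarch1 \\backupbox\aiarchive /XO /R:2 /W:5

    rem 2. Purge local archives older than 3 days.
    forfiles /P D:\aiarch1 /D -3 /C "cmd /c del @path"

    rem 3. Run a similar FORFILES purge on the remote box for its copy.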

You should also go to 10.2B08 and not 10.2B07. SP8 is the last, greatest, and most stable service pack for 10.2B. There is really no reason to stay on SP7.

Posted by Betty Hardin on 04-Apr-2016 05:38

Sounds like I should be ready to go. Double-checking myself. Will stick with variable extents; they've worked fine for all these years! Thanks so much for the tip on using TWO directories; in the past, I had my batch file abort the switch and send me an alert that my archive directory was not available. Very familiar with FORFILES; I use it quite frequently, in fact. Will look into 10.2b08. Thank you!

Posted by James Palmer on 04-Apr-2016 06:01

One point of note with the Archiver and two directories: if your first location fills up and the Archiver fails over to the secondary, you will need to restart the Archiver before it will go back to the first location. You can do this without restarting the database, IIRC, but it's worth being aware of, as it can otherwise cause some head-scratching!

Posted by Paul Koufalis on 04-Apr-2016 06:04

James,

You can run rfutil db -C aiarchiver setdir <dir1,dir2> to get the AI Archiver to start writing to dir1 again.

Posted by gus on 04-Apr-2016 09:30

Regarding the subject of how many days' worth of AI archive data to keep: 3 days might not be enough, depending on your requirements.

Not too long ago, I had to restore a backup and roll forward. The most recent daily backup was invalid. So was the one before that.

I ended up restoring from the Sunday weekly backup and then rolling forward to Friday.
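For anyone who has not been through it, the mechanics are roughly this (database and file names are illustrative only):

    rem Restore the last good backup, then apply the archived AI files in sequence:
    prorest D:\db\mydb D:\backups\mydb_sunday.bak
    rfutil D:\db\mydb -C roll forward -a D:\aiarch1\mydb.0001.ai
    rfutil D:\db\mydb -C roll forward -a D:\aiarch1\mydb.0002.ai
    rem ...continue through the last AI file, then take a fresh backup
    rem and re-enable after-imaging before going back into production.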

Posted by Tim Kuehn on 04-Apr-2016 10:04

Speaking of "how many days" - Tom needs to recount his story about the site that never rebased their DB and he ended up rolling forward from when AI was first activated. :)

Posted by ChUIMonster on 04-Apr-2016 10:30

Since you brought it up...

I don't like purging archived after-image logs. I especially don't like purge periods measured in days. If you must do it at all, try 3 months as a minimum. They probably don't take up all that much space relative to your db size. (And they zip very nicely.)

On several occasions I have been able to rescue customers whose backups had failed one way or another (usually bad scripts, but sometimes hardware problems) only because there was an old forgotten copy of a backup somewhere on the hard drive and a lot of archived ai files.

The most spectacular is the one that Tim refers to.  In that case roughly a year's worth of ai files were rolled forward against the *original* backup.

That happened because the off-site destination for the ai logs disappeared at some point shortly after the sysadmin left the company. When he left, nobody replaced him, so all of the alert messages were going to an unread e-mail. (There are several very important lessons here...)

Then the hard drive with the db on it crashed.

The customer dug out their recovery procedures and discovered that the backup server was nowhere to be found.  (So far as I know they never did find it.)  At that point I got a call...

Lucky for the customer, they had two drives in that server, and when I had originally set up the server I put a copy of that first backup on the second drive. And the first step in getting the ai files offsite was to copy them to that second drive. Since nobody ever purged them, there was a year or so of ai logs ready and willing to roll forward.

It took all weekend to get through the roll-forward but that was much preferable to having no data at all.

Some Important Lessons:

1) Keep as many ai files and old backups as you can.  One old backup that works and lots of ai files can work wonders.

2) Don't put all of your alerting eggs in one mailbox.

3) When key personnel leave, get someone to make sure loose ends get taken care of.

Posted by Dmitri Levin on 04-Apr-2016 12:05

AI files are very useful, and not just in case of disaster; roll-forward is just one way to use them. And those of us who have replication (I would like to say, those of us who care about our databases) do not need to roll forward old AI files.

AI files are very useful for investigation. When we need to reconstruct some event in the past, one of the most useful tools is aimage scan verbose.

I was investigating some user activity the other day. It was Monday, and the activity happened on the previous work day, a Friday. The policy of keeping only 3 days of AI archives prevented me from investigating it, since Friday morning's files had already been deleted by Monday. So definitely keep AI files as long as you can.
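For those who have not used it, the scan itself is just something like this (the database and AI file names are examples):

    rem Dump the recovery notes from an archived AI file to a text file:
    rfutil D:\db\mydb -C aimage scan verbose -a D:\aiarch1\mydb.0001.ai > scan.txt

The output can then be searched for the transactions and time range of interest.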

Posted by Betty Hardin on 04-Apr-2016 23:41

Thank you all for your input!

Posted by James Palmer on 05-Apr-2016 05:12

Dmitri, do you have a guide to tracing user activity using the AI notes at all? This sounds like really interesting stuff.

Posted by gus on 05-Apr-2016 08:12

See here for the slide deck of a talk I have done a few times about how to read AI files:

community.progress.com/.../2266.recovery-log-notes

Posted by James Palmer on 05-Apr-2016 08:18

Thanks Gus. 

Posted by George Potemkin on 05-Apr-2016 15:00

I have written two programs that deal with AI scans. I often use them to investigate customers' incidents. AI scans are indeed very useful: you will know what an end user did as if you were standing behind him/her during the transaction.

AiScanStat.p reports on different aspects of transaction activity.

grepAiScan.p extracts the details of selected transactions; the result file can be opened in Excel. The program also adds timestamps to all notes, so we can see the pauses during transactions, which has turned out to be useful as well. BTW, I have seen the scans of AI files generated by different applications; most transactions have a large percentage of idle time, including ones created by batch processes. That was a bit unexpected for me.

I wish that the scan verbose option would also report the sizes of the recovery notes.
