Windows 2008, Progress 10.2b07, Syteline v6.x
We are (finally!) upgrading from Syteline v4 to v6 (Progress 8.3e to 10.2b07). In the past I have managed AI with batch files; I'm planning to use OEM on 10.2b, which is new to me. We are on a REALLY tight schedule with go-live planned for April 15th. I've played with it a bit, but I haven't had much hands-on time.
On 8.3e, I have been using ALL variable extents (seven of them) and switching extents every 15 minutes via scheduled tasks.
Wondering if that's still a good idea, or should I use fixed extents and an on-demand switch? One big extent or several extents? (I do have large files enabled.)
Would appreciate anything that anyone might want to share about 'best practices' in the implementation of AI on 10.2bx. And .. if you have scripts to share that sure would be handy ;-)
Check out the docs on the AI Archiver. It does all the hard work of switching the extents, etc., by itself, making your life a whole lot easier. All you have to do is set an interval and a location (or locations), and then create a script to back up the archive files and clear out aged ones. It is a whole lot simpler than the old-style management.
Good Morning Betty,
I also use all variable extents and timed switching. There is a small performance impact but it is often (but not always) not measurable in human time.
You should get rid of most of your scripts and use the automatic AI File Management daemon. It can be enabled online or offline very easily. You'll also need to add -aiarcinterval 900 -aiarcdir <some local dir> to your startup parameters. If you are using OpenEdge Explorer, these parameters are available in the configuration screen for each DB.
Regarding the AI archive directory, you can specify TWO directories separated by a comma. If one fills then the AI Archiver starts writing to the second one. Both should be local drives.
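To make that concrete, here is a minimal sketch of such a startup. The database name and both paths are placeholders, not from this thread; substitute your own:

```shell
# Hypothetical example: start the database with AI File Management enabled,
# archiving full extents every 900 seconds (15 minutes) to D:\ai1 and
# failing over to E:\ai2 if D:\ai1 fills up.
# "mydb" and both directories are placeholders.
proserve mydb -aiarcinterval 900 -aiarcdir D:\ai1,E:\ai2
```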
You need to script only two things:
1. Transferring those AI files from the production box to some other box, preferably offsite
2. Purging the AI Archive directories (local and remote). I often keep a few days on hand and delete anything older than 3 days
Scripting is easy. Look at the FORFILES command.
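As a hedged sketch, a FORFILES purge might look like the following. The archive path and the 3-day window are assumptions; dry-run with echo before switching to del:

```shell
:: Hypothetical purge: delete archived AI files older than 3 days.
:: D:\aiarchive is a placeholder -- point it at your -aiarcdir location.
:: To dry-run, replace "del" with "echo" and review the list first.
FORFILES /P "D:\aiarchive" /M *.* /D -3 /C "cmd /c del @path"
```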
You should also go to 10.2B08, not 10.2B07. SP8 is the last, greatest, and most stable service pack for 10.2B. There is really no reason to stay on SP7.
Sounds like I should be ready to go. Double-checking myself. Will stick with variable extents; they've worked fine for all these years! Thanks so much for the tip on using TWO directories; in the past I had my batch file abort the switch and send me an alert that my archive directory was not available. Very familiar with FORFILES .. use it quite frequently, in fact. Will look into 10.2b08. Thank you!
One point of note with the Archiver and two directories: if your first location fills up and the Archiver fails over to the secondary, you will need to restart the Archiver before it will switch back to the first location. You can do this without restarting the database IIRC, but it's worth being aware of, as it can otherwise cause some head-scratching!
You can run rfutil db -C aiarchiver setdir <dir1,dir2> to get the AI Archiver to start writing to dir1 again.
Regarding the subject of how many days worth of ai archive data to keep, 3 days might not be enough, depending on your requirements.
Not too long ago, I had to restore a backup and roll forward. The most recent daily backup was invalid. So was the one before that.
I ended up restoring from the Sunday weekly backup and then rolling forward to Friday.
Speaking of "how many days" - Tom needs to recount his story about the site that never rebased their DB and he ended up rolling forward from when AI was first activated. :)
Since you brought it up...
I don't like purging archived after-image logs. I especially don't like purge periods measured in days. If you must do it at all, try 3 months as a minimum. They probably don't take up all that much space relative to your db size. (And they zip very nicely.)
On several occasions I have been able to rescue customers whose backups had failed one way or another (usually bad scripts, but sometimes hardware problems) only because there was an old forgotten copy of a backup somewhere on the hard drive and a lot of archived ai files.
The most spectacular is the one that Tim refers to. In that case roughly a year's worth of ai files were rolled forward against the *original* backup.
That happened because the off-site destination for the ai logs disappeared at some point shortly after the sysadmin left the company. When he left nobody replaced him so all of the alert messages were going to an unread e-mail. (There are several very important lessons here...)
Then the hard drive with the db on it crashed.
The customer dug out their recovery procedures and discovered that the backup server was nowhere to be found. (So far as I know they never did find it.) At that point I got a call...
Lucky for the customer, they had two drives in that server, and when I had originally set up the server I put a copy of that 1st backup on the 2nd drive. And the first part of getting the ai files offsite was to copy them to that 2nd drive. Since nobody ever purged them, there was a year or so of ai logs ready and willing to roll forward.
It took all weekend to get through the roll-forward but that was much preferable to having no data at all.
Some Important Lessons:
1) Keep as many ai files and old backups as you can. One old backup that works and lots of ai files can work wonders.
2) Don't put all of your alerting eggs in one mailbox.
3) When key personnel leave get someone to make sure loose ends get taken care of.
AI files are very useful, and not just in case of disaster; roll-forward is only one way to use them. And those of us who have replication (and who, I would like to say, care about the database) do not need to roll forward old AI files.
AI files are very useful for investigation. When we need to reconstruct some event in the past one of the most useful tools is aimage scan verbose.
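For reference, the scan runs against an archived extent with rfutil; the database and file names below are placeholders:

```shell
# Hypothetical names: dump the recovery notes from an archived AI extent
# to a text file for inspection.
rfutil mydb -C aimage scan verbose -a D:\aiarchive\mydb.a1 > scan.txt
```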
I was investigating some user activity the other day. It was Monday, and the activity had happened on the previous working day, a Friday. The policy of keeping only 3 days of AI archives prevented me from investigating it, since Friday morning's files had already been deleted by Monday. So definitely keep AI files as long as you can.
Thank you all for your input!
Dmitri do you have a guide to tracing user activity using the AI notes at all? This sounds really interesting stuff.
See here for the slide deck of a talk I have done a few times about how to read ai files.
I have written two programs that deal with ai scans. I often use them to investigate customers' incidents. Ai scans are indeed very useful. You will know what an end user did as if you were standing behind him/her during the transaction.
AiScanStat.p reports on various aspects of transaction activity.
grepAiScan.p allows you to extract the details of selected transactions. The result file can be opened in Excel. The program also adds timestamps to all notes, so we can see the pauses during transactions, which has turned out to be useful as well. BTW, I have seen scans of ai files generated by different applications, and most transactions have a large percentage of idle time, including the ones created by batch processes. It was a bit unexpected for me.
I wish that the scan verbose option would also report the sizes of the recovery notes.