What is the difference?
Flash cache (SSD) VS real memory - faster?
Trying to convince my VP of operations that going to 64-bit is the way to go. The environment is using flash cache (SSD) to supplement memory for the database, since the 32-bit executables can only see about 1 GB of memory. Then I could resize the database to Type II storage areas and perform a dump and load. A sales guy is saying a new disk array will solve our performance issues for $180K. This environment will be gone in 18 months.
Thanks in advance!
The storage guy is either simply ignorant or he is lying.
Access to data in -B is 75x faster than access to the same data in the operating system's cache, and 1000x faster than that data being accessed out at the far end of a cable connecting a SAN.
DB access patterns are *random*. Latency is critical. When there is a cable between you and your data it is very slow.
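As a rough sketch of what those multipliers mean in practice, here is some back-of-the-envelope arithmetic. The relative costs come from the post above (-B = 1, OS cache = 75x, SAN = 1000x); the hit ratios are hypothetical, just to show how hard the SAN term dominates the average when the buffer pool is too small:

```python
# Illustrative relative access costs from the post above.
COST_B, COST_OS, COST_SAN = 1, 75, 1000

def avg_cost(hit_b, hit_os):
    """Average relative cost per read, given -B and OS-cache hit ratios.
    Whatever misses both caches goes all the way out to the SAN."""
    miss = 1.0 - hit_b - hit_os
    return hit_b * COST_B + hit_os * COST_OS + miss * COST_SAN

# A small -B pushes reads out to the OS cache and the SAN:
print(avg_cost(0.80, 0.15))   # small buffer pool, mostly cached elsewhere
print(avg_cost(0.95, 0.04))   # bigger -B, same workload
```

Even moving the -B hit ratio from 80% to 95% cuts the average cost of a read by several times, because every percentage point that stops going to the SAN saves 1000 cost units.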
You are correct. The most effective (and least expensive) thing for you to do would be to convert to 64 bits, rebuild your storage areas and use memory on the server.
I will also bet you that 18 months from now the system in question is still running.
A sales guy is LYING? UN-possible.
You can probably get 32-bit database shared memory up to 1.6 - 1.7 GB depending on the platform. Officially 32-bit processes can see 2 GB of memory. In practice it's less.
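To put that ceiling in terms of database buffers, some rough arithmetic (not an official Progress formula; the 8 KB block size is an assumption, yours depends on how the database was built):

```python
# Rough arithmetic: how many database buffers (-B) fit in a given
# amount of shared memory, assuming an 8 KB database block size.
BLOCK_SIZE = 8 * 1024          # bytes; an assumption, match your DB

def max_buffers(shared_mem_gb):
    return int(shared_mem_gb * 1024**3 // BLOCK_SIZE)

print(max_buffers(1.7))   # around the practical 32-bit ceiling
print(max_buffers(48))    # what a 64-bit server with 48 GB could dedicate
```

The 64-bit number is limited by how much RAM you buy, not by the size of a pointer.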
were the salesman’s lips moving?
the absolute best way to increase a system’s performance is to add more memory. if you are running a 32-bit system, the amount of memory applications running on it can use is severerly limited. for the past umpteen years, there have not been /any/ general purpose machines of the 32-bit variety. most likely whatever you are running is not a 32 bit machine so switching to 64-bit executables might not be very difficult or expensive.
another good way to increase performance is to use solid state disks, but that will have limited effect on a 32-bit system. also, not all SSD devices are equal - some are not much faster than spinning rust. make the salesman prove his case, if he can. if you believe him, buy, and your performance still sucks, who pays??? who walks away with the cash??? what will the salesman guarantee (in writing)???
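On "make the salesman prove his case": a minimal random-read latency probe is enough to compare the proposed array against what you have. This is a sketch, assuming a Unix-like OS; `PATH` is a hypothetical placeholder, and because O_DIRECT / raw-device handling is omitted, the OS cache will flatter the numbers unless the test file is much larger than RAM:

```python
# Minimal random-read latency probe (sketch; Unix-like OS assumed).
import os
import random
import time

PATH = "/path/to/large/testfile"   # hypothetical: point at the storage under test
READ_SIZE = 8192                   # match the database block size
N_READS = 10000

def random_read_latency(path):
    """Average seconds per random READ_SIZE read across the file."""
    fd = os.open(path, os.O_RDONLY)
    size = os.fstat(fd).st_size
    offsets = [random.randrange(0, size - READ_SIZE) for _ in range(N_READS)]
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, READ_SIZE, off)
    os.close(fd)
    return (time.perf_counter() - start) / N_READS

# print(f"avg random 8 KB read: {random_read_latency(PATH) * 1e6:.1f} us")
```

Run it against the current disks, the internal SSD, and the vendor's array; random 8 KB reads are what a database actually does, so sequential-throughput brochure numbers don't count.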
You want the memory to be as close as possible to where it will be used. That "shortens the path" when data is accessed. Aside from on-chip caches the closest place to put memory for use by a Progress application is the -B buffer pool. Then comes the OS filesystem cache. Internal SSD is fast compared to rotating rust (and dirt cheap compared to a SAN) but it is also 3 or 4 layers of indirection away from the action.
External SSD buried in a SAN device is about as far away from the action as you can get without sending a tape offsite via carrier pigeon.
Speckled Jim is not as young as he used to be