OpenEdge 11.1 RDBMS and Windows 2012 R2 Hyper-V

Posted by monae on 29-Jan-2014 01:36

I have installed OpenEdge 11.1 on a 64-bit Windows Server 2012 R2 Hyper-V machine and created a database. When I started to load the table contents into the database, I realized it is too slow: for example, a 170,000-record file took about 10 minutes to load on a normal machine, while the same file took about two hours on the Hyper-V machine. Why? Is there any configuration to be done on the OpenEdge side or on the Hyper-V side to improve performance?

All Replies

Posted by Paul Koufalis on 29-Jan-2014 06:52

There could be a hundred different reasons.  Since you don't detail HOW you loaded the data (binary? dict? bulk? single-user? multi-user? DB params?), I will assume you did it the same way on both. This means that most likely it's simply a question of I/O throughput: your virtual machine simply has less than the "normal" machine.

Other possibilities are that the other VMs are consuming the resources of the physical machine, leaving nothing for you, or that you simply did not allocate enough resources to your VM.
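
For reference, a rough sketch of what each load method looks like (the database and table names below are placeholders, not from your setup):

   Dictionary load:  Data Dictionary / Data Administration tool, Admin -> Load Data and Definitions -> Table Contents (.d file)
   Bulk load:        proutil mydb -C bulkload mydb.fd      (needs a .fd description file, plus an index rebuild afterwards)
   Binary load:      proutil mydb -C load customer.bd      (loads a binary dump file; rebuild indexes afterwards with proutil mydb -C idxbuild)

Binary dump and load, run locally on the server, is usually the fastest of the three.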

Posted by monae on 30-Jan-2014 02:34

The VM has 10 GB of memory and a 500 hardisk, and the load is done from a client. The table is only about 100 MB in size, so do you have any suggestions?



Posted by Thomas Mercer-Hursh on 30-Jan-2014 09:04

So, you are loading across the network and expecting it to be fast?

Posted by Paul Koufalis on 30-Jan-2014 18:38

Monae: There is no way we can help you unless you post information.  What is the difference between the old and new machine? How are you loading?  The VM has 10 GB of RAM, OK, but who is using it?  By a "500 hardisk" I presume you mean 500 GB?  But what is it underneath?  Is it a LUN carved from a SAN?  Is it a physical disk?  What?

And what did you do to load?  Your statement "the load is done from client" tells me very little.  TMH asks if it's across the network but maybe the client is local on the VM.  We don't know because you didn't tell us.

Posted by gus on 30-Jan-2014 18:47

need much more data.

how much memory is in the real machine ?

in addition to the 10 GB of RAM, what are the other VM configuration settings ?

please describe your load procedures.

Posted by monae on 31-Jan-2014 01:16

Thanks for your help. To be more precise, I did another scenario: I ran the same report on the old Sun machine (2 GB RAM, no VM, 75 GB disk) and on the 2012 R2 Hyper-V machine (20 GB RAM, 500 GB disk space, one 4-core 2.0 GHz CPU). I am using the whole machine alone because it is newly installed and not yet in use.

Both use the same network. The report took 15 minutes on the old Sun machine and 30 minutes on the VM.

Posted by monae on 31-Jan-2014 02:22

The host has 2 CPUs with 4 cores each at 2.0 GHz and 96 GB of real memory, and we are using a SAN. We have four VMs and my VM has the highest resources. I also did something other than a load: I ran the same report on the old Sun server and on the new VM, which has much, much more resources than the Sun. On the old Sun it took 15 minutes; on the new VM it took 30 minutes, using the same network.

Posted by Rob Fitzpatrick on 31-Jan-2014 07:25

Your load speed will also be affected by database configuration: which database license is in use (Enterprise or Workgroup), BI buffers, BI block size, BI cluster size, which helper processes are running, your DB structure, etc.  That information would be helpful.
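
If it helps, a quick way to check some of this on each machine (a sketch, assuming the database lives at C:\db\mydb):

   proenv> proutil C:\db\mydb -C describe

shows, among other things, the database block size, BI block size and BI cluster size, while the database .lg file (or promon) shows which startup parameters (-B, -bibufs, -spin, APW/BIW processes, ...) the broker actually came up with.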

Posted by Paul Koufalis on 31-Jan-2014 07:28

Monae,

Let's stick to one issue at a time.  The report execution time is affected by another hundred factors, and it will get confusing if we try to address both the original write issue and the new read issue in the same thread.  I understand that the causes of the two issues likely overlap, but it will be clearer to take them one at a time.

This is what we know and what we do not know:

1. Old machine = some Solaris box with 2 GB RAM (seems unlikely) and a 75 GB HD

2. New VM: Win2K12, 20 GB RAM

2.a) The HDD has a capacity of 500 GB but, as I mentioned previously, I DO NOT CARE about capacity. I care about I/O throughput.

3. You are loading data and it is slower on the Windows VM

3.a) We DO NOT KNOW how you loaded data.  C/S?  Dictionary?  Bulk? Binary?

For the report:

1. Takes twice as long on Windows

2. We DO NOT KNOW DB startup parameters and client connection parameters on Sun vs Windows

Paul

Posted by pedrorodriguez on 31-Jan-2014 07:50

Hi,

Long shot, but worth mentioning: if you have moved from Unix to Windows and you were using a shared memory connection to the server (from a local client), bear in mind that the configuration is different, and what would be using shared memory on Unix might not be using it on Windows. That would mean performance then depends on network traffic.
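
For example (a sketch; the path, host name and service are made up):

   Self-service (shared memory), client running on the database server itself:
      prowin32 -db C:\db\mydb -p start.p

   Client/server over TCP, even if the client runs on the same box:
      prowin32 -db mydb -H dbhost -S 20001 -p start.p

If -H and -S appear in the connection, you are going through the network layer rather than shared memory.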

If this is the case, could you post how you connect your client on Unix and on Windows?

Regards,

Pedro

Posted by monae on 04-Feb-2014 01:06

I use the same startup parameters on the Sun Solaris and the Hyper-V machine. Concerning the data load, I did not use the bulk loader; I used the traditional Data Dictionary load utility. Do you have any startup parameters I could use on my Hyper-V machine to get good performance? If yes, what are they?

Posted by monae on 04-Feb-2014 01:08

On both Windows and Unix my users are Windows-based, and I use Client Networking to connect to the database using a host name and port number.

Posted by Paul Koufalis on 04-Feb-2014 12:28

Workgroup or Enterprise OpenEdge DB license on the server?  Assuming Enterprise, try the following on the Windows server (a consolidated command sketch follows the steps):

1. Open a proenv command line (Start - Programs - OpenEdge Proenv)

2. cd to the DB directory

3. Stop the db (proshut db) or use OpenEdge Explorer if that's how it's configured

4. _proutil db -C truncate bi -bi 16384 -biblocksize 16

5. If using OE Explorer, make sure to enable one APW and the BIW

6. Start the database and repeat load test

7. Report back the results.
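
Put together, the sequence from proenv looks roughly like this (a sketch only; the DB path, -S service and -B value are placeholders, use your own):

   proenv> cd C:\db
   proenv> proshut mydb -by
   proenv> _proutil mydb -C truncate bi -bi 16384 -biblocksize 16
   proenv> proserve mydb -S 20001 -B 50000
   proenv> probiw mydb
   proenv> proapw mydb

(With OE Explorer, you would enable the BIW and one APW in the database configuration instead of starting them by hand.)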

This is still not the fastest way but it's tough to teach a full dump&load class in a discussion forum.  You should consider signing up for some DBA training.  What part of the world are you in?
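
For what it's worth, the faster route alluded to above is usually a binary dump and load run directly on the servers, something like this (a sketch with made-up database, table and directory names):

   On the old machine:   proutil olddb -C dump customer /dumpdir          (one dump per table)
   On the new machine:   proutil newdb -C load d:\dumpdir\customer.bd     (one load per table)
                         proutil newdb -C idxbuild all                    (index rebuild once, after all loads)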

Posted by Mike Fechner on 04-Feb-2014 12:32

[quote user="pkoufalis"]This is still not the fastest way but it's tough to teach a full dump&load class in a discussion forum.  You should consider signing up for some DBA training.  What part of the world are you in?[/quote]

I'm sure that some of the DBA pros around here (I'm not one of them!) are available for onsite or remote consulting services.

Posted by Paul Koufalis on 04-Feb-2014 12:34

Well I was trying to be more subtle than Mike...but yes I am available for remote or onsite consulting.

:-)

Posted by monae on 05-Feb-2014 02:44

Thanks, all, for your replies.

We have been DBAs and 4GL developers for 20 years at a non-profit organization with schools, a hospital, a cemetery, a head office and more. We are in Beirut, Lebanon (the only Progress developers in Lebanon, and maybe in the Middle East).

For more clarification: we already migrated a database and a system to OpenEdge 11.1 Enterprise under Windows Server 2012 on the server side, with Windows 7 and OpenEdge Studio 11.1 on the client side. That application is for the cemetery.

The speed there was OK and everything went very smoothly. But WE FACED the speed PROBLEM when we were preparing to migrate the hospital database to a new Dell server on the Hyper-V machine, so I am just wondering if someone in this community has the same setup and has ever faced my problem.

Posted by Paul Koufalis on 05-Feb-2014 08:27

I have deployed in similar environments many times.  Your first "error" is to load via client/server.  You should load in shared memory directly on the server.  While this does not explain the load time difference it will go a loooong way to making your load faster.

In summary:

1. Dump from the old system

2. Transfer *.d files to the Windows server

3. Open a "proenv" prompt on the Windows box

4. cd to the DB directory

5. prowin32 <dbname> -p _admin.p

6. Load your data in the normal way: Admin - Load - Table Contents

Posted by monae on 10-Feb-2014 08:26

Actually, I did it on the Windows server using OpenEdge Explorer, then Data Administration, then load .d contents, and it was amazing: the table that took half an hour through the network took less than a minute on the Windows server.

But we should figure out why it took so long through the network. Also, I could not load the .df files from OpenEdge Explorer; it gave an error. Anyway, I did that from the client, and loading the definitions through the network is fast.

Thanks again for your help.

Posted by Rob Fitzpatrick on 17-Feb-2014 22:09

I don't think it's worth exploring why a remote client data load is slower than you expect; it will be slow.  You should always load table data on the server, unless your tables are trivially small.

Posted by monae on 18-Feb-2014 01:05

Yes, thank you all so much.

We ran the bulk load on the server and it was so fast. Even so, yes, I still don't understand why the load through the client could not be fast.

This thread is closed