Improving the performance of a query

Posted by vignesh kumar on 26-Oct-2015 07:10

Hi all,

I need to improve the performance of a query because the drop-down list it feeds takes too long to appear.

The lock must remain SHARE-LOCK. Changing it to NO-LOCK improves performance, but it has to stay SHARE-LOCK.

Below is the query:

FOR EACH xmm012 BREAK BY curcod BY effdat:
    IF LAST-OF(curcod) AND LAST-OF(effdat) THEN DO:
        CREATE wxmm012.
        ASSIGN wxmm012.curcod = xmm012.curcod
               wxmm012.curdec = xmm012.curdec
               wxmm012.effdat = xmm012.effdat
               wxmm012.curd_t = xmm012.curd_t
               wxmm012.rate   = xmm012.rate.
    END.
END.

Any suggestions friends?

All Replies

Posted by James Palmer on 26-Oct-2015 08:27

Posting in multiple forums isn't going to help you get this answered quicker.

I don't understand your question though. Why do you think you need a share lock? Share locks should be avoided at all costs. This query should be done with NO-LOCK.

How many records are in xmm012? You're reading all of them.

Posted by Piotr Ryszkiewicz on 26-Oct-2015 09:43

Hi,

First, your code can be simplified: there is no point in "IF LAST-OF(curcod) AND LAST-OF(effdat)", since "IF LAST-OF(curcod)" alone gives the same result. But that won't help performance.
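To illustrate, the simplified loop would look like this (a sketch, using the same field list as the original). Because the records are sorted BY curcod BY effdat, the last record of each curcod group is necessarily also the last of its effdat group, so the extra LAST-OF(effdat) test adds nothing:

```
FOR EACH xmm012 BREAK BY curcod BY effdat:
    IF LAST-OF(curcod) THEN DO:
        CREATE wxmm012.
        ASSIGN wxmm012.curcod = xmm012.curcod
               wxmm012.curdec = xmm012.curdec
               wxmm012.effdat = xmm012.effdat
               wxmm012.curd_t = xmm012.curd_t
               wxmm012.rate   = xmm012.rate.
    END.
END.
```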

Assuming you have an index starting with curcod and effdat, you can rewrite this code like this:

DEF VAR vcurcod LIKE xmm012.curcod NO-UNDO.

FIND FIRST xmm012 NO-LOCK.
vcurcod = xmm012.curcod.

DO WHILE TRUE:
    FIND LAST xmm012 WHERE xmm012.curcod = vcurcod NO-LOCK NO-ERROR.
    CREATE wxmm012.
    ASSIGN wxmm012.curcod = xmm012.curcod
           wxmm012.curdec = xmm012.curdec
           wxmm012.effdat = xmm012.effdat
           wxmm012.curd_t = xmm012.curd_t
           wxmm012.rate   = xmm012.rate.

    FIND FIRST xmm012 WHERE xmm012.curcod > vcurcod NO-LOCK NO-ERROR.
    IF NOT AVAILABLE xmm012 THEN LEAVE.
    vcurcod = xmm012.curcod.
END.

If you don't have such an index, then create one ;)

And of course I agree with James about NO-LOCK vs SHARE-LOCK.


Posted by vignesh kumar on 27-Oct-2015 01:26

There are more than 10,000 records in the xmm012 table. I need a share lock so that no other user can update the table while this user is accessing it.

If I use NO-LOCK, the user may not see the updated value if someone else updates a record at the same moment she is reading it.

Posted by Marian Edu on 27-Oct-2015 01:51

You should stop trying to freeze time there; unfortunately, that doesn't work :(

Besides, your query doesn't do what you think it does anyway: there is no table lock in Progress, so the records get locked one by one and released as the FOR EACH moves forward. Your best bet is still to go with NO-LOCK; it's like using a short shutter speed for fast-moving targets ;)

Posted by Patrick Tingen on 27-Oct-2015 02:37

If you use SHARE-LOCK only to see updates from other users, you might want to consider the startup parameter -rereadnolock instead. It makes sure your records show the updated values.

Using SHARE-LOCK is a thing of the past, something you should normally never use. There are only a handful of use cases for SHARE-LOCK nowadays.
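As a sketch of where the parameter goes (the executable, procedure, database, host, and service names below are placeholders, not from this thread), -rereadnolock is simply added to the client startup command:

```
prowin32.exe -p start.p -db sports -H dbhost -S 2500 -rereadnolock
```

With -rereadnolock, a NO-LOCK read fetches the latest version of a record instead of reusing a stale copy from the client's buffer.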

Posted by Mark Davies on 27-Oct-2015 08:33

Reading 10,000 records should still take less than a second, especially with NO-LOCK. If you are not seeing sub-second response on your read, then there may be other things you want to look at. And if your read is sub-second, the chance of someone else updating the same data mid-read is incredibly small; you are effectively getting a snapshot of the records at the moment you query them.


Posted by Piotr Ryszkiewicz on 27-Oct-2015 08:46

> A read on 10,000 records should still take less than one second

If it's a client-server connection I would not be so sure...

Posted by jamesmc on 27-Oct-2015 10:27

On the surface there is nothing wrong with it (ignoring the SHARE-LOCK issue), so it should run nice and fast; something else must be slowing it down. Poor database design (table in a Type I area with really bad scatter)? Poor index availability (all the data needs to be read and sorted first)? Or perhaps a large record size (lots of fields being sent to the client that are not needed)?

How long does the query actually take to run? You haven't mentioned that in your post.

Posted by vignesh kumar on 29-Oct-2015 02:40

It's taking around 1500 ms.

Posted by vignesh kumar on 29-Oct-2015 02:42

Yes, after using the modified code the performance has improved.

Is there any other way to improve it further?

Posted by Piotr Ryszkiewicz on 29-Oct-2015 03:25

You did not answer whether you have the proper index. How many records meet the condition? And is it a client-server connection or a local (shared-memory) one?

Posted by vignesh kumar on 01-Nov-2015 04:19

Yes, I have an index on curcod and effdat. It's a client-server connection.

Posted by Piotr Ryszkiewicz on 02-Nov-2015 02:58

OK. And the last question: how many records meet the condition?

Posted by vignesh kumar on 02-Nov-2015 06:13

Around 70 records meet the condition and get created in the work file.

Posted by Piotr Ryszkiewicz on 02-Nov-2015 06:27

70 out of circa 10,000, right? And how much time does it take now? The 1500 ms you mentioned: was that with your version of the code, or mine?

Posted by vignesh kumar on 02-Nov-2015 06:48

Yes, 70. The 1500 ms was with my version of the code; with yours it's taking around 250 ms, a big improvement.

Posted by Piotr Ryszkiewicz on 02-Nov-2015 07:07

OK. You may try running this code on an AppServer instead of over the client-server connection. If wxmm012 is a temp-table, you can pass it back as a parameter.

Another possibility: if xmm012 has a lot of fields, you can rewrite my code to use FOR with the FIELDS option instead of FIND, so that only the fields you need are sent to the client.
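For example, the FIND LAST inside the loop could be replaced with something like this (a sketch; it assumes only the five copied fields are needed):

```
FOR LAST xmm012 FIELDS (curcod curdec effdat curd_t rate)
    WHERE xmm012.curcod = vcurcod NO-LOCK:
    CREATE wxmm012.
    ASSIGN wxmm012.curcod = xmm012.curcod
           wxmm012.curdec = xmm012.curdec
           wxmm012.effdat = xmm012.effdat
           wxmm012.curd_t = xmm012.curd_t
           wxmm012.rate   = xmm012.rate.
END.
```

With FIELDS, the server ships only the listed fields across the network instead of the whole record, which can matter a lot on a client-server connection.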

If you don't have an AppServer, have a look at the startup parameters responsible for client-server performance described here:

knowledgebase.progress.com/.../000031154

That's all that comes to mind for the moment.

This thread is closed