Simple loop is slow compared to VB.NET

Posted by swilson-musco on 31-May-2017 09:14

Just setting the stage here... I am not a 4GL expert, but I was trying to show our developers how to use background threads in VB.NET to support animated images on forms.

So I created a Progress .NET form with a button and put this code in the click event so that the foreground thread would be busy... to show that animated images need their own thread, or that the processing needs to run on a worker thread separate from the form.

def variable i as integer init 0.

pictPleaseWait:Visible = true.
lblStart:Text = STRING(NOW).

process events.

do while i < 999999999:
    i = i + 1.
end.

pictPleaseWait:Visible = false.
lblEnd:Text = STRING(NOW).

RETURN.

Now here is the code on my vb.net form with code behind on click event.

Dim i As Integer = 0
lblStart.Text = Now.ToLongTimeString

pictPleaseWait.Visible = True

Application.DoEvents()

Do While i < 999999999
    i = i + 1
Loop

pictPleaseWait.Visible = False
lblEnd.Text = Now.ToLongTimeString

 

The Progress code takes minutes to run... the VB code takes seconds.

Any idea why?

 

Thanks,

Scott

All Replies

Posted by George Potemkin on 31-May-2017 09:23

> def variable i as integer init 0.

Try to add NO-UNDO.
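That is, the one-line change being suggested, applied to the declaration above:

def variable i as integer no-undo init 0.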

Posted by swilson-musco on 31-May-2017 09:33

No difference..

Posted by ChUIMonster on 31-May-2017 09:51

The Progress code is compiled to a platform-neutral format (r-code) which is then interpreted by the AVM.  The VB code is compiled to IL and JIT-compiled to native x86 code at runtime.

Posted by thunderfoot79 on 31-May-2017 09:52

The NO-UNDO does improve my performance by about 40%.  Now, since it takes 12 minutes to run the loop alone in my ABL, it may not be noticeable unless you're letting it run all the way through.

I would imagine that the .NET compiler is doing more optimization and realizes that your loop isn't actually doing anything, so it cuts down on the computation.  I'd be curious what your timings look like if you actually did something (such as write to a log for each iteration) during your .NET loop.

Posted by Piotr Ryszkiewicz on 31-May-2017 09:53

Try

do i = 1 to 999999999:

end.

Maybe not as fast as VB, but much faster.

Posted by swilson-musco on 31-May-2017 09:55

I will try that... but it raises the question: what other 4GL functions are orders of magnitude slower?  I mean, I wouldn't complain if the 4GL was 10 seconds vs 3 seconds, but this example is supercalifragilisticexpialidociously slower.

Posted by Laura Stern on 31-May-2017 10:32

Frankly, I tend to agree!  It seems unreasonably slow.  Yes - the DO WHILE took about 15 minutes on my machine!  The DO i = 1 TO 999999999 took 5.6 minutes.  That was without NO-UNDO.  That difference in itself seems odd.  And with NO-UNDO, it took 3.15 minutes.  That also seems odd, since without a database transaction in effect nothing would be undone anyway.  You could log a bug.

And on a completely different note: What is the purpose of the PROCESS EVENTS/Application:DoEvents() in your example?

Posted by Laura Stern on 31-May-2017 10:42

Actually, I tend to agree!   Too much of a difference.

On my Windows machine:

  the DO WHILE took about 15 minutes!  

  DO i = 1 TO 999999999 took about 5.6 minutes

That difference in itself seems odd.  That was without NO-UNDO.

With NO-UNDO, the DO i = 1 TO 999999999 took 3.15 minutes.  That also seems odd since without a database transaction, nothing would have gotten undone anyway.

You could log a bug.

Posted by Jean-Christophe Cardot on 31-May-2017 10:52

I assume there is no JIT compiler in the AVM?

Posted by davez on 31-May-2017 10:53

When you use a DO WHILE, I believe the condition expression is re-evaluated on every iteration, which would explain why it's slower than a counted DO loop.  With NO-UNDO, Progress skips the undo (transaction back-out) tracking for the variable, so it's faster still.
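A quick way to see the difference for yourself - a sketch modelled on the code in this thread, with a smaller iteration count so it finishes in seconds; only the relative times matter:

/* Compare DO WHILE vs. a counted DO, both with NO-UNDO */
DEFINE VARIABLE i  AS INTEGER NO-UNDO.
DEFINE VARIABLE t1 AS INTEGER NO-UNDO.
DEFINE VARIABLE t2 AS INTEGER NO-UNDO.

ETIME(YES).
i = 0.
DO WHILE i < 10000000:
    i = i + 1.
END.
t1 = ETIME.

ETIME(YES).
DO i = 1 TO 10000000:
END.
t2 = ETIME.

MESSAGE "DO WHILE:    " t1 "ms" SKIP
        "DO i = 1 TO: " t2 "ms"
    VIEW-AS ALERT-BOX INFO BUTTONS OK.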

Posted by swilson-musco on 31-May-2017 11:00

So I added some code to the VB.NET loop to see if the compiler is "short-cutting" the loop or causing a premature exit.

I defined a filewriter and streamwriter and output the following...

Do While i < 999999999
    stream.Write("scott" + vbCrLf)
    i = i + 1
Loop

It went from 3 seconds to 70 seconds... still faster than the 4GL, even when doing disk I/O.

Odd..

Posted by swilson-musco on 31-May-2017 11:01

Process-events is in the trigger to allow the main form to enable the .NET picture box with an animated gif.

Posted by Matt Baker on 31-May-2017 11:36

Using C# in Visual Studio, and using ILSpy to see what this does:

This is what the byte code looks like.  The compiler converted my while loop to a for loop and removed the increment from the loop body, so it becomes a do-nothing loop.  I suspect if I ran this a few hundred times the JIT would throw out the loop completely.

Original version:

// ConsoleApplication2.Program
private static void Main(string[] args)
{
    int i = 0;
    while (i < 999999999) {
        i = i + 1;
    }
    Console.WriteLine("done");
}

Bytecode version:

// ConsoleApplication2.Program
private static void Main(string[] args)
{
    for (int i = 0; i < 999999999; i++)
    {
    }
    Console.WriteLine("done");
}

Posted by Patrick Tingen on 31-May-2017 14:00

Double entry removed

Posted by Patrick Tingen on 31-May-2017 14:10

I did a comparison between ABL and .NET a few months ago.  I wrote a double loop with two variables, X and Y, let them loop from 1 to 1000, and calculated the Pythagorean theorem for each pair, like this:

DEFINE VARIABLE X AS INTEGER NO-UNDO.
DEFINE VARIABLE Y AS INTEGER NO-UNDO.
ETIME(YES).

DO X = 1 TO 1000:
  DO Y = 1 TO 1000:
    SQRT(X * X + Y * Y).
  END.
END.

MESSAGE ETIME VIEW-AS ALERT-BOX INFO BUTTONS OK.

This took around 1060 msec on my computer.  The same solution in C# was about 100 times faster, at ~10 msec.  The power of .NET runs circles around an ABL solution in terms of performance.  I guess this is the penalty for working in a language that is highly optimized to work with transactions and databases.

While I was working on it, I also tested these variations (the combined variation is sketched after the list):

  • Remove NO-UNDO -> 1462 msec
  • Use SQRT(EXP(X,2) + EXP(Y,2)) -> 2060 msec
  • Use DECIMAL instead of INTEGER -> 2473 msec
  • All of the above -> 2985 msec
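For reference, the combined "all of the above" variation would look roughly like this (a sketch only; the timings above are Patrick's, not re-measured, and the result is assigned to a throwaway variable Z just to give the expression somewhere to go):

DEFINE VARIABLE X AS DECIMAL.   /* DECIMAL instead of INTEGER, and no NO-UNDO */
DEFINE VARIABLE Y AS DECIMAL.
DEFINE VARIABLE Z AS DECIMAL.
ETIME(YES).

DO X = 1 TO 1000:
  DO Y = 1 TO 1000:
    Z = SQRT(EXP(X,2) + EXP(Y,2)).   /* EXP(X,2) instead of X * X */
  END.
END.

MESSAGE ETIME VIEW-AS ALERT-BOX INFO BUTTONS OK.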

Posted by Laura Stern on 31-May-2017 16:15

Sorry that this has turned into two completely unrelated conversations.  But re post above by swilson-musco:

 Process-events is in the trigger to allow the main form to enable the .NET picture box with an animated gif.

I thought you said the animation was happening in another thread.  So why do you need to be processing events on this main UI thread to make it work?  Besides, you were JUST in a WAIT-FOR, processing events,  before this trigger code ran.    

Posted by doa on 01-Jun-2017 02:27

I still don't get why Progress completely ignores ABL performance.

Having a runtime language without JIT or any kind of optimization in 2017 is just madness.

This case just proves that they don't even target the "easy" optimizations.

Posted by bronco on 01-Jun-2017 03:39

This is because every time this kind of question pops up, someone says that once you hit the database that will be much slower anyway, therefore concluding that 4GL performance doesn't matter.  For what it's worth, I strongly disagree with this...

Posted by Mike Fechner on 01-Jun-2017 04:04

Me too.  ABL performance matters!!!!!!!!!!!!!!!!!  Especially since in layered architectures the DB is only touched in the lowest layer.

One can say that ProDatasets and TT's used in the higher layers are just like a DB.  But when leveraging more and more OO, we need much better ABL performance, as TT's may just be used to hold data while ABL constructs provide high-level accessors to them.

Posted by ske on 01-Jun-2017 05:54

doa:

> Having a runtime language without JIT or any kind of optimization in 2017 is just madness.

Mike Fechner:

> Me too. ABL performance matters!!!!!!!!!!!

Which kinds of optimizations would give the best value for the investment in time and money?

Is there any quick way of adding JIT, which would fit within the company's available resources, and would remain portable to all platforms?

Posted by bronco on 01-Jun-2017 13:40

Oh, and btw, ABL performance is important as well because when it performs well it gives more bang for your buck on the server.  In OpenEdge it is highly beneficial to have your AppServer next to the database (shared memory), and the more efficient the ABL is, the more I can run on the same server before I have to scale out/up.

Posted by doa on 02-Jun-2017 01:48

Every optimization would matter; this has to be a continuous process.

I guess the best (but completely unrealistic) case would be if they ported the ABL to something like LLVM.

But yeah... I have no hope that anything significant will change here.

Posted by gus bjorklund on 03-Jun-2017 16:38

moving the ABL to llvm would be a huge task.

much of the code is in the runtime anyway.

that said, there are quite a few worthwhile performance improvements that could be made to the 4GL interpreter.

as with any other improvement, what gets done or not done is all a matter of priorities.

go to the PUG Challenge in Manchester NH tomorrow. Pester Evan.

Posted by onnodehaan on 03-Jun-2017 16:41

I sometimes wonder why PSC does so few compiler optimizations.  Is it because it's too risky?

Posted by George Potemkin on 04-Jun-2017 00:30

> This case just proves that they don't even target the "easy" optimizations.

I don't think so.
Time of a DO loop iteration on the same box, per Progress version (one way to approximate such a figure is sketched after the table):

Time (ns/iteration)  Progress
                149  11.6
                159  10.2B
                477  10.1C
                444  10.0A
                660  9.1B
                878  8.3A
              8,610  7.3C (VM)
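A per-iteration figure like these can be approximated with ETIME (a sketch, not necessarily how the numbers above were produced; the iteration count is arbitrary):

/* Rough nanoseconds-per-iteration for an empty counted DO loop */
DEFINE VARIABLE i     AS INTEGER NO-UNDO.
DEFINE VARIABLE iters AS INTEGER NO-UNDO INITIAL 10000000.

ETIME(YES).
DO i = 1 TO iters:
END.
MESSAGE "Approx." (ETIME * 1000000.0 / iters) "ns per iteration"
    VIEW-AS ALERT-BOX INFO BUTTONS OK.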

Posted by gus bjorklund on 04-Jun-2017 12:00

> On Jun 3, 2017, at 5:42 PM, onnodehaan wrote:

>

> I sometimes wonder why PSC does so few optimizations. Is it because it's too risky?

>

it is a question of what the most important things are to spend the finite developer time on.

go to the PUG Challenge and pester Evan Bleicher.

Posted by swilson-musco on 05-Jun-2017 08:43

I appreciate the answers given... my intent was not to ruffle any feathers, just to learn and understand.  As stated earlier, I am not a 4GL expert.  My job is more of a DBA/architect role, and when developers come to me and say "doing X or Y is slow", generally the first reaction is that we need more hardware or faster storage.  I'm just trying to understand that perhaps the solution isn't to spend more money or build a better mouse trap, but to consider other language alternatives to get the job done without having to invest in a new system design.  The Progress DB supports many ways to get to the database; making sure developers are aware of the strengths and weaknesses of the 4GL may influence them on which language will help them get the task done in the least amount of time.

Thank you for your time.

Posted by Evan Bleicher on 06-Jun-2017 10:28

The Core Client team is committed to supporting and enhancing the ABL, which includes improving the performance of the language.

The team has discussed the use of LLVM several times in the past and although we have not moved forward with a project which leverages this technology we are not opposed to integrating newer technologies into the product.  However, each project needs to be evaluated against the other projects which PM identifies as a priority for a release.  

From time to time an example is posted to Community which highlights the performance of other languages vs. the ABL.  We will continue to review these situations and if we determine there is a real benefit to the ABL we will investigate optimizing the language.

We have investigated and are continuing to analyze our OOABL infrastructure, looking for optimizations we can make.  When optimizations can be safely made, we implement these changes.

The point of this thread is that runtime performance is important, and on that there is agreement.  Working with PM, this development effort must be prioritized against the team's other development tasks.

Posted by Stefan Marquardt on 15-Jun-2017 11:39

I think that OpenEdge has many good features, but the language itself is too slow.
I have raised the same problem often: objects are slow, a simple I = I + 1 is slow, and string concatenation is a catastrophe.

I am working in a mixed team; my colleagues are using C# and they can use the CPU power, I can't.
ABL GUI is useful for displaying data and some calculation, but the rest has to be done on the AppServer.
If you need hardware support you are forced to write assemblies in C#, because they can use multi-threading.

Years ago, when I used OEA for the first time (10.2A, in Paris), I couldn't get Mike's presentation out of my head, where he was so happy that OE now had the possibility to resize windows and useful flow control; that was my first contact with the bridge.
10.2A was very painful, and with 10.2B it made sense to use it.

During this time I joined a German PUG meeting, and a member company presented a completely new solution developed in ABL.NET.

I had the chance to join the next meeting one year later, and we got the same presentation, only with the latest output after one more year of development time.

But what happened?!

I noticed that they had buried ABL in the frontend and switched to C#; only the backend was still using the "big ABL experience", which had been the main selling point for the frontend the year before, too.
I asked them what happened and I was told: they realized that a modern concept wasn't possible because the UI was too slow and had limitations.

If nothing changes, it looks like the ABL UI could be a dead end in this kind of implementation.

It's nice to have the ability to use the same code from light years ago; that was still the killer argument.
But meanwhile the Windows UI has moved to WPF, and none of my colleagues uses WinForms anymore.
I don't think that WPF will be introduced in ABL, but perhaps it will be buried too, like Silverlight and HTML5, while WinForms is still there?

I think that Progress could make a hard break to something new without 100% compatibility with old code.
Make something new with a new compiler.

Another option could be something like integrating OE into MS Visual Studio and providing good, easy data and AppServer access.  Deliver an Entity Framework OE database provider and an ABL compatibility class, like MS did with the first VB.NET version to help users convert old VB6 code into the new world of .NET.
Then we could switch to C# with all language components.

It could be that I am writing nonsense, but if I understood Mike's comment correctly, the client is only fast with data when using datasets and temp-tables.

My colleagues do not use this technology anymore since Entity Framework became available.
It's not perfect, but objects are much better in the UI and compatible with every control, while with ABL they are a performance nightmare.

My 2 cents.

Posted by Chan Ming Kin on 18-Jun-2017 03:06

I tried a development tool that lets me write a program in "VB6-like" syntax and run it from its IDE using the Java 8 JDK.  If this development approach could be applied to ABL, I could use ABL to write programs and make use of the Java runtime.

Posted by Thomas Mercer-Hursh on 18-Jun-2017 10:20

You do realize that much of the work is done by the AVM, which you would have to duplicate in your tool?

Posted by Chan Ming Kin on 19-Jun-2017 00:41

Yes, I know the AVM does much of the work.  The idea is just that if I write ABL programs in Progress Developer Studio (which plugs into Eclipse), the ABL compiler could transform the ABL sources, and the ABL libraries related to the AVM, into Java bytecode that can run in the Java runtime.

Posted by Thomas Mercer-Hursh on 19-Jun-2017 09:26

There is nothing about the use of Eclipse which is going to provide the missing pieces.  There is a company which tackled a related problem of translating ABL into Java, as a way for a company to get away from their ABL licenses (on which they had not been paying maintenance).  The project was a massive overrun.  It did eventually go live, but unsurprisingly it produced really ugly Java, so much so that, I believe, for a while they continued to make code changes in the ABL and retranslate it.  I have no info on the performance of the translated code, but I doubt it was very good.  In any case, the main point of the work was to make a hardware change without having to rebuy Progress licenses for the new boxes.

Posted by Chan Ming Kin on 19-Jun-2017 21:09

I guess the project you mentioned is named "FWD", which is an open source project.  The tools in this project are for conversion purposes.

I just want to keep using ABL, because many ABL programs have to be maintained in my company.

My idea was triggered by the tool "B4J".  If the ABL code could be compiled into Java bytecode and run on the JVM, the performance issue might be tackled.

Posted by Stefan Marquardt on 21-Jun-2017 04:39

Hi Gus,

I just read that Chris Lattner, one of the main LLVM developers, has no job at the moment.

Stefan

Posted by Tim Kuehn on 21-Jun-2017 18:28

In most of my work the ability to scan and process records was more important than how fast the AVM could do a DO loop like this.

IMO the main priority for the language performance team is speeding up object instantiation. Mike posited some tweaks at PCA 2017 - more still needs to be done.

Posted by Jean-Christophe Cardot on 22-Jun-2017 10:18

The ABL performance matters a lot.

First, people tend to compare languages based on the performance of simple constructs or algorithms, where ABL is currently at a (big) disadvantage.  Then they choose to go with something else :(

It also matters in order to do things which do not involve the database, e.g. implementing complex algorithms in pure ABL.

It would allow us to have more ABL libraries, without relying on external DLLs or .NET (not Unix friendly), and to build a more comprehensive ecosystem.

Finally, as I'm one of those writing a lot of code without any DB access (e.g. pdfInclude ;) I would personally be thrilled ;)

Posted by doa on 26-Jun-2017 06:23

....just get this guy and let him handle the switch ;-)

Imho a switch to LLVM would be a huge leap in performance.

Posted by marian.edu on 28-Jun-2017 13:24

Stefan,

that most probably looks like nonsense to most people who follow PSC on every move and happily went down the OE.NET path, switching from one control set to another every two years… Codejock, Infragistics, now Telerik.

I might be totally wrong, but I still don't get why people prefer to use the AVM runtime, which in its turn loads the .NET VM, wrapping everything as a P.L.O., 'benefit' from the single-threaded nature of the runtime (having fun with controls that kinda like to go the multi-threaded way), and then build everything in a Java-based IDE that arguably is far better than Visual Studio (sic)… and all that for the benefit of using temp-tables and datasets :)

For me things would be much easier if the UI part were just done completely in .NET, using the darn Visual Studio for development, keeping the business logic on the AppServer and calling it through OpenClient - you could probably easily build some EF layer on top of OERA business entities.  Thing is, everyone looks at me in a strange way when I dare to propose something like that… it's not the PSC way, everyone needs to do OE.NET.

Marian

p.s. resending this; it looks like lately the forums have a strange 'censorship' feature or something when replying by email, or it might be just my messages that are filtered out for some reason :)

Posted by Peter Judge on 28-Jun-2017 13:28

p.s. resending that, looks like latelly the forums have a strange 'censorship' feature or something when replying by email, or it might be just my messages that are filtered out for some reason :)

Occam applies here.  No censorship, just crappy forum code - it has a long and distinguished history of randomly swallowing email responses.
 
 

Posted by Stefan Marquardt on 29-Jun-2017 00:10

Tim,

"In most of my work the ability to scan and process records was more important than how fast the AVM could do a DO loop like this"

Try creating hash keys from all records of a table with ABL...
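For example, something along these lines (a sketch against the sports2000 Customer table; the choice of fields and of MD5 is just illustrative):

/* Build an MD5 hash key for every Customer record */
DEFINE VARIABLE cHash AS CHARACTER NO-UNDO.

ETIME(YES).
FOR EACH customer NO-LOCK:
    cHash = HEX-ENCODE(MD5-DIGEST(STRING(customer.custnum) + "|" + customer.name)).
END.
MESSAGE ETIME "ms" VIEW-AS ALERT-BOX INFO BUTTONS OK.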

Posted by Thomas Mercer-Hursh on 29-Jun-2017 09:43

But, Stefan, is this a repeating requirement?  If it is something you do once a month, does it really matter?  If you do it for each record as it is created, is it really going to make any difference?

Isn't the point being efficient at the things one has to do all the time?  To be sure, there are special cases like the matrix algebra in transportation planning, but then we have a mechanism for calling out to an external package in such situations.

And, even in your issue, isn't the bulk of the time going to be spent reading all of the records of the table, regardless of what you do with it once read?  And is that actually going to be faster in another language?

Posted by doa on 30-Jun-2017 03:19

"And, even in your issue, isn't the the bulk of the time going to be spent reading all of the records of the table, regardless of what you do with it once read? "

But the Database performance is also very bad

ABL and the database (especially over network!) both are years behind other technologies in terms of performance

And the ABL is also behind in terms of features (especially  Oo still hasn't some of the basic features)

Some serious work is needed here to stay competitive with other technologies

Posted by George Potemkin on 30-Jun-2017 03:42

Another example:

In self-service mode,

FOR EACH customer NO-LOCK WHERE TRUE: END.

is almost twice as slow as

FOR EACH customer NO-LOCK WHERE FALSE: END.

Both queries create exactly the same db activity.  The difference: the query with WHERE FALSE does not copy the records from the so-called network buffers to the client's record pool.  In other words, the time needed to copy a record from one part of the client's private memory to another part of it is almost equal to the time needed to retrieve a record from shared memory using the locking protocols.  Why is this operation so much longer (tens of times) than, for example, a latch lock?
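A quick way to reproduce the comparison (a sketch against the sports2000 Customer table, assuming a self-service connection; only the relative difference matters):

/* Time the WHERE TRUE vs. WHERE FALSE variants of the same scan */
DEFINE VARIABLE tTrue  AS INTEGER NO-UNDO.
DEFINE VARIABLE tFalse AS INTEGER NO-UNDO.

ETIME(YES).
FOR EACH customer NO-LOCK WHERE TRUE: END.
tTrue = ETIME.

ETIME(YES).
FOR EACH customer NO-LOCK WHERE FALSE: END.
tFalse = ETIME.

MESSAGE "WHERE TRUE: " tTrue  "ms" SKIP
        "WHERE FALSE:" tFalse "ms"
    VIEW-AS ALERT-BOX INFO BUTTONS OK.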

Posted by doa on 30-Jun-2017 03:55

In other systems this would be considered a very high priority bug... but here stuff like this is always "expected to work like that" or "very hard to fix".

Posted by Thomas Mercer-Hursh on 30-Jun-2017 09:23

The key to your observation is "over the network".  There are tuning parameters to potentially help this, but any DB access over the network is going to be tons slower than self-service .... regardless of the programming language.  I doubt you can come up with other technology which is substantially faster over the network.  This is what AppServers are for.

Posted by ChUIMonster on 30-Jun-2017 10:33

TMH -- George's example is comparing the two loops with shared memory.

I just tried it -- adding "where FALSE" to a simple FOR EACH customer NO-LOCK does indeed make it about 4x faster.  With shared memory connections in both cases.

Posted by Thomas Mercer-Hursh on 30-Jun-2017 10:54

Yes, I was not responding to George's example, but to the prior statement about "across the network" and the claim that ABL was particularly slow in that context.

The WHERE FALSE seems a curiosity which doesn't have a lot to do with real world requirements.  

Posted by ChUIMonster on 30-Jun-2017 11:42

Handing over large datasets is pretty real-world.  Doing so 3x or 4x faster seems pretty worthwhile to me.  Having to add a bizarre bit of syntax to get that performance improvement is not very appealing.

Posted by George Potemkin on 30-Jun-2017 11:54

> a simple FOR EACH customer NO-LOCK does indeed make it about 4x faster.

I also run the tests with kill -9. The probability that a session running "FOR EACH customer" will die with a latch lock was only 2-5 % (I don't remember an exact number). In other words the most of the time the session does not use the structures in the shared memory.

> The WHERE FALSE seems a curiosity which doesn't have a lot to do with real world requirements.

The functions in WHERE clause that are resolved on a server side are often used in the real applications. They might decrease a network traffic, let's say, by 50 or 90% while WHERE FALSE clause decreases the traffic by 100% (up to the absolute zero). But my point was: WHERE FALSE simply allows us to measure the time needed to copy the records from network buffers to the client's record pool (the small -l parameter) and this time seems to be too large.

Posted by Thomas Mercer-Hursh on 30-Jun-2017 12:09

I guess I am not sure whether there is something interesting here.  If one is doing this on sports, then the table isn't all that big and the DB activity is not going to be that great ... especially if still in -B.  But, in any case, returning all records vs returning none seems like a big difference in the amount of work done.  And, are we sure that the WHERE FALSE is really doing all the same DB work?

And, yes, I get that server-side resolution has a major impact on performance of a network query ... run this same test networked if you want to see a dramatic difference.  My point is just that WHERE FALSE is not a real world example because it provides no index selection and yet reads every record.  Much more typical are conditions which provide index selection, reducing the number of records read.

All of which is beside the original point which was that ABL is particularly slow at network queries.

Posted by George Potemkin on 30-Jun-2017 12:17

> And, are we sure that the WHERE FALSE is really doing all the same DB work?

I'm sure. Checked with promon.

> My point is just that WHERE FALSE is not a real world example because it provides no index selection and yet reads every record.

The query uses exactly the same index as with the WHERE TRUE clause (or without an explicit WHERE clause) - the primary index.

A bit off-topic: comparing a standard readprobe test vs. its "aggressive" version (with the WHERE FALSE clause) is useful for proving whether the current bottleneck is the latches rather than a lack of CPUs on the box.  Only if both tests show their best results at the same number of concurrent sessions is the bottleneck the latches.

Posted by gus bjorklund on 02-Jul-2017 06:22

> On Jun 30, 2017, at 1:18 PM, George Potemkin wrote:

>

> > And, are we sure that the WHERE FALSE is really doing all the same DB work?

>

> I'm sure. Checked with promon.

>

George,

relying on memory, i can’t be entirely sure, but my guess is that what is happening is that in the true case, the selection logic decides to take the record and then:

an icb (what the client's record buffer and related info is called) is allocated and the record is copied into it. there is a small amount of (buffered) lbi activity related to this. since the loop is empty, the icb is deallocated and the lbi data rewound.

then the next iteration starts.

i did a 2 part talk a while back called “Updating a record”. Part 2 explains what happens on the client side. can’t find it now.

Posted by George Potemkin on 02-Jul-2017 09:22

> i did a 2 part talk a while back called “Updating a record”. Part 2 explains what happens on the client side. can’t find it now.

Is it the "Birth, Death, Infinity" presentation from PUG Challenge Americas 2013?

pugchallenge.org/.../423_Birth_Death_infinity_v14.pptx

pugchallenge.org/.../423_Birth_Death_infinity_v14_wav.mp3

BTW, a similar issue seems to exist with transaction activity: transaction undo is twice as fast as transaction creation.  The client's "hemisphere" (functions and structures that are specific to client sessions and are not used by the servers) is not involved during a transaction's undo.  Kill -9 will crash the db with only 30% probability when a client is creating a transaction, and with 90% probability when a client is undoing a transaction, though the amount of db updates is exactly the same in both cases.

Posted by gus bjorklund on 02-Jul-2017 10:30

> On Jul 2, 2017, at 10:23 AM, George Potemkin wrote:

>

> Is it the "Birth, Death, Infinity" presentation from PUG Challenge Americas 2013?

no, it’s this one (last performed in 2010 for a user group, i think):

Title: Behind The Scenes: Updating A Record

Speaker: Gus Bjorklund, Parmington Foundation

Description:

In this talk we examine what happens under the covers (or "behind the scenes") in the 4GL client and the database when we execute a simple 4GL program.  Included are what happens when the program is compiled, client-side data structures for the database connection and record buffers, database transaction management, data buffering and locking, and other arcane subjects.  After hearing this talk, you should be able to go home and make your own 4GL client and database.

today i sent it as a late submission for emea pug challenge 2017

maybe i can post the slides on communities. don’t know if i am allowed anymore.

Posted by George Potemkin on 02-Jul-2017 10:45

The presentation is available on communities:

community.progress.com/.../787.behind-the-scenes-updating-a-record

BTW, am I correct in thinking that the network buffers mentioned in error 1077 are also the /record/ buffers, and that they are not part of the ICB structure?

SYSTEM ERROR: Attempt to free buffer type 2. (1077)

The description says: "It indicates that the client/server communications buffers have been corrupted", but in fact the error can occur on self-service clients as well.

This thread is closed