Functional programming in ABL

Posted by Lieven De Foor on 05-Aug-2019 15:37

In functional programming languages, functions are first-class citizens.
This means you can assign a function to a variable and pass it as parameter to another function or return it from other functions.

In ABL there is one construct in which you can specify a method as a parameter, and that is when attaching an event handler to an event (EventName:Subscribe(HandlerMethod)):


We can (ab)use this to have some functional programming capabilities:

CLASS FunctionalProgramming.Calculator:

    /* the event signature acts as the "function type";
       the event and method names here are reconstructions - parts of
       the original snippet were lost when the post was copied */
    DEFINE PUBLIC EVENT Calculate SIGNATURE VOID
        (INPUT a AS INTEGER, INPUT b AS INTEGER, OUTPUT Result_ AS INTEGER).

    METHOD PUBLIC VOID Add (INPUT a AS INTEGER, INPUT b AS INTEGER, OUTPUT Result_ AS INTEGER):
        Result_ = a + b.
    END METHOD.

    METHOD PUBLIC VOID Multiply (INPUT a AS INTEGER, INPUT b AS INTEGER, OUTPUT Result_ AS INTEGER):
        Result_ = a * b.
    END METHOD.

    CONSTRUCTOR Calculator():
        DEFINE VARIABLE Result_ AS INTEGER NO-UNDO.

        /* "pass" Add as the function to execute */
        Calculate:Subscribe(Add).
        Calculate:Publish(3, 4, OUTPUT Result_).    /* Result_ = 7 */
        Calculate:Unsubscribe(Add).

        /* swap in Multiply */
        Calculate:Subscribe(Multiply).
        Calculate:Publish(3, 4, OUTPUT Result_).    /* Result_ = 12 */
        Calculate:Unsubscribe(Multiply).
    END CONSTRUCTOR.

END CLASS.

The boilerplate code (Subscribe/Publish/Unsubscribe) could probably be isolated in an include file.
If anyone has any other creative ideas on how this pattern could be used, please share them here...
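One way such an include could look - the file name, argument order, and the quoted argument list are all just illustrative assumptions, not an existing file:

```abl
/* callmethod.i (hypothetical) - wraps the Subscribe/Publish/Unsubscribe
   boilerplate around a single "function call".
   {1} = event name, {2} = method to "pass", {3} = argument list */
{1}:Subscribe({2}).
{1}:Publish({3}).
{1}:Unsubscribe({2}).
```

Each call site would then reduce to a single line of the form {callmethod.i EventName MethodName "arg-list"}, with the argument list quoted so it passes as one include argument.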

All Replies

Posted by Richard.Kelters on 07-Aug-2019 14:34

Thanks Lieven, now my brain is in a knot :)

Posted by Lieven De Foor on 07-Aug-2019 14:52

Sorry for that!

If Progress could somehow open up that syntax to ABL methods as well, things could get interesting...

Posted by agent_008_nl on 08-Aug-2019 10:22

I have some experience with functional programming in JavaScript and Elixir (a language implemented on top of Erlang). About two years ago I spent a whole year refactoring a large OO ABL codebase where maintenance was a problem, thanks to a number of issues, among which the over-enthusiastic implementation of OO patterns and inheritance. I have learned to appreciate "referential transparency"; see for example:
"Write no classes!

Joe Armstrong: "I think the lack of reusability comes in object-oriented languages, not in functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle. If you have referentially transparent code, if you have pure functions-all the data comes in its input arguments and everything goes out and leaves no state behind-it's incredibly reusable. You can just reuse it here, there, and everywhere. When you want to use it in a different project, you just cut and paste this code into your new project. Programmers have been conned into using all these different programming languages and they've been conned into not using easy ways to connect programs together. The Unix pipe mechanism-A pipe B pipe C-is trivially easy to connect things together. Is that how programmers connect things together? No. They use APIs and they link them into the same memory space, which is appallingly difficult and isn't cross-language. If the language is in the same family it's OK-if they're imperative languages, that's fine. But suppose one is Prolog and the other is C. They have a completely different view of the world, how you handle memory. So you can't just link them together like that. You can't reuse things. There must be big commercial interests for whom it is very desirable that stuff won't work together."

- Peter Seibel, Coders at Work: Reflections on the Craft of Programming"



 You can send functions around contained in objects in ABL. I have made extensive use of parameter objects that look like the following one (you could make use of interfaces of course):


/*------------------------------------------------------------------------
   File        : SomeParameter
   Purpose     :
   Syntax      :
   Description :
   Author(s)   : Stefan
   Created     :
   Notes       :
  ----------------------------------------------------------------------*/

using Common.Server.CommonRequest         from propath.
using Common.Server.SomeHelper            from propath.
using Basics.Server.DataAccess.daWhatever from propath.

block-level on error undo, throw.

class Somewhere.Common.SomeParameter:

  define public property CurrentRequest as CommonRequest no-undo
    get.
    private set.

  define public property DaX            as daWhatever no-undo
    get.
    private set.

  define public property Helper         as SomeHelper no-undo
    get.
    private set.

  constructor public SomeParameter(curRequest as CommonRequest):
    assign
      Helper         = new SomeHelper()
      CurrentRequest = curRequest
      DaX            = new daWhatever().
  end constructor.

  destructor public SomeParameter():  /* destructor name must match the class name */
    delete object DaX    no-error.
    delete object Helper no-error.
    error-status:error = false.
  end destructor.

end class.

Using this I have eliminated almost all inheritance and the use of OO patterns relying on it. The team was happy with the changes. See the recommendation I got on LinkedIn from the product owner: "[..] This has greatly improved the stability and maintainability of our product. The customer now experiences our product as very stable."


Kind regards,

Stefan Houtzager

Houtzager ICT consultancy & development

Posted by agent_008_nl on 09-Aug-2019 05:30

Lieven, it would help if you explained your purpose. I understand Progress, and I can program functionally myself, but I do not understand the what and why of your question.

My previous mail is of course very incomplete. Hire me and I'll explain everything in detail. ;-) Small extra:

I mentioned "referential transparency". A method that is referentially transparent as far as possible should not depend on "indirect input" (a shared var, so evil ;-), like an object reference obtained via constructor injection. This can be done with the parameter object as explained in my previous mail. In the example I have put (on purpose) one injected dependency; for parameter objects I do not mind using it much. My goal is making objects as stateless as possible / pushing state to a, let's call it, stateful layer. In the end it is state (persisting something in a db, for example) that you want to manipulate in your application. My goal is maintainability (see prev mail). Read more on dependency injection.
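As a minimal sketch of what I mean - the method name and the helper members FetchOrderTotal and VatRate are made up for illustration - everything such a method needs arrives through its arguments, and nothing is read from or left behind in instance state:

```abl
method public decimal OrderTotal (input prm      as Somewhere.Common.SomeParameter,
                                  input orderNum as integer):
    /* FetchOrderTotal and VatRate are hypothetical members of the
       injected helpers; no instance state is touched */
    return prm:DaX:FetchOrderTotal(orderNum)
           * (1 + prm:Helper:VatRate(orderNum) / 100).
end method.
```

Because the result depends only on the inputs, the method can be cut and pasted into another project without dragging hidden environment along.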

Posted by on 09-Aug-2019 06:57

[quote user="Lieven De Foor"]

In functional programming languages, functions are first-class citizens.


Luckily the 4GL still has functions, so you can pretty much go with the functional programming paradigm if that works for you, although I do not see how changing state can be avoided in a business application. And I would very much like to keep the 'assign' statement part of the language - regardless of how imperative it might sound - for us mortals who don't write mathematical algorithms in 4GL :)
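For what it's worth, a 4GL user-defined function can already be written in a pure, referentially transparent style (the names here are just an illustration):

```abl
/* a pure function: the result depends only on the inputs,
   and no outside state is read or modified */
function AddVat returns decimal (input price   as decimal,
                                 input ratePct as decimal):
    return price * (1 + ratePct / 100).
end function.

display AddVat(100, 21).   /* 121 */
```

What the 4GL cannot do is pass AddVat itself around as a value - which is the gap the event trick above tries to fill.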

[quote user="Lieven De Foor"]

The boilerplate code (Subscribe/Publish/Unsubscribe) might be able to get isolated in an include file.
If anyone has any other creative ideas on how this pattern could be used, please share them here...


No other creative ideas, so I will probably just stick to using interfaces instead.

Posted by Lieven De Foor on 09-Aug-2019 07:09

The topic of my post was more of an eye catcher.

You can't do functional programming in ABL.

I've simply tried to demonstrate a way to pass a function as parameter, which is one of the pillars of functional programming.

In ABL you would usually do this, like [mention:d768d2089b264b89b29e49c0617a193b:e9ed411860ed4f2ba0265705b8793d05] said using interfaces and callbacks.

But sometimes adding an interface is not possible (3rd party code) or not wanted, and in that case the above could be useful to add some sort of callback without passing the whole object (through the interface), but only the method you want executed...
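For comparison, the interface-based callback mentioned above might look something like this (names are illustrative); note that the callee ends up holding a reference to the whole implementing object, not just the one method:

```abl
interface ICalculatorCallback:
    method public void Calculate (input  a       as integer,
                                  input  b       as integer,
                                  output Result_ as integer).
end interface.

/* the consumer receives the entire object, not just the method it needs */
method public void RunCallback (input cb as ICalculatorCallback):
    define variable Result_ as integer no-undo.
    cb:Calculate(3, 4, output Result_).
end method.
```

With the event trick, only the single method is handed over, which is exactly the gorilla-and-banana point from the Armstrong quote earlier in this thread.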

[mention:5a647afc67ec4e118db4fd9337f8a264:e9ed411860ed4f2ba0265705b8793d05] , I did not ask any question, so it's not clear to me what you're referring to. I'm also not interested in your consulting services...

Posted by agent_008_nl on 09-Aug-2019 07:30

Your question: "If anyone has any other creative ideas on how this pattern could be used, please share them here..."

I took the question a bit broader, partly because I do not understand what it is you want and why.

You do not achieve much when you get snarky, guys. Stay in your bubble if it fits your needs. ;-)

Posted by Lieven Cardoen on 09-Aug-2019 07:41

Stefan, I totally agree with you. I'm also totally in favor of referential transparency. It has really changed my life.

Posted by agent_008_nl on 09-Aug-2019 07:48

That is nice to hear! At the moment I'm not developing with progress, but I'm always interested in exchanging ideas as far as I can make time.

Posted by Lieven Cardoen on 09-Aug-2019 07:56

You really should make time. Lots of developers here can really learn a lot from someone like you! Now back to programming. Nice day to you!

Posted by agent_008_nl on 11-Aug-2019 13:14

If you read the blog about dependency injection you will have seen the link to this article:

"In object-oriented architecture, we often struggle towards the ideal of the Ports and Adapters architecture, although we often call it something else: layered architecture, onion architecture, hexagonal architecture, and so on. The goal is to decouple the business logic from technical implementation details, so that we can vary each independently. This creates value because it enables us to manoeuvre nimbly, responding to changes in business or technology." (bold by me)

You can get a bit of this value in Progress. Architectural agility wins; otherwise you are pushing the elephant.

Posted by gus bjorklund on 11-Aug-2019 16:44

> On Aug 11, 2019, at 9:17 AM, agent_008_nl wrote:


> "In object-oriented architecture, we often struggle ..."

i wonder if following the fashion of object orientation is worth the struggle. i've never seen any substantiation for the claims of easier maintainability, higher productivity, more reliability, easier readability, etc. lots of claims, no evidence.

seems to me that it leads to a lot more code, and figuring out what's going on amongst all the abstractions and "design patterns" and such is difficult. are the design patterns really kludges to make up for language flaws?

Posted by smat-consulting on 12-Aug-2019 01:41

A bit off topic of the original post - but, I feel it necessary to point out:

My observations as consultant/contractor seeing plenty of code is pointing in the same direction as Gus' comment:

It seems to me, it is not so much a matter of what paradigm some application is following (procedural, object oriented, functional). What determines whether an app is technical debt or competitive advantage is directly related to how much effort the people put into making it a "good" code.

I've worked at a (big) Progress place where they are still programming in V7/V8 style - permanently running into exactly the same issues that everybody did back then, which is why OERA came about, and what OO was promised to fix... They are bad-mouthing Progress, dreaming Java would be so much better! Well, they'll find out if they try: if they do the same nonsense in Java as they do in Progress, the thing isn't even getting off the ground, let alone flying like an eagle!

More recently I've worked at another big place, where they follow only rudimentary coding conventions and tried to get a handle on their flawed system by encouraging OO Progress. Well, the OO code is as flawed as the (pre-)procedural one - and causing as many, or even more, problems!

To me, these two experiences are prime-examples that the paradigm does not guarantee good code!

Before that, I was able to implement a complete application all by myself. I did everything the way I wanted - no shortcuts, no quick and dirty. I followed all the "best practices" I came up with and encountered over the years. I put a tremendous emphasis on consistency, overall architecture, and detailed implementation style. I am still maintaining this application, adding enhancements, changing existing functionality.

It amazes me every time I have to deal with it, how quick it is to make changes, how stable the thing is, and how easy it is to find my way around it!  It was done in the procedural approach!

I have proven to myself, that good style, clear, concise, and well formatted code, following a good overall architecture and design are the most important thing about programming, and that the paradigm (i.e. procedural, object oriented, or whatever) is totally secondary.

As with everything, it takes time, much experience, and plenty of bloody noses to become an expert at anything. It seems to me that switching every couple of years to a totally new approach or language cannot be helpful in making you a master. It further seems that sticking with one approach (and language) as much as possible is the best chance to ever reach master level - even though it is "just" mastery of that one thing. But you are a master!

To me, being a master application developer does not necessarily mean knowing internals of the tool you're using, or knowing all the newest fads... but to be able to build something from scratch all the way, and it working well and being easy to maintain - it being a competitive advantage for the users...

Posted by agent_008_nl on 12-Aug-2019 05:34

Agreed Gus. Evidence-based software engineering has long been asked for. It is not easy to formulate the criteria to test, and the tests themselves are equally hard / time-consuming to perform. See Google on evidence-based software engineering. We are stuck with subjective opinions. "Are the design patterns really kludges to make up for language flaws?" Yes I think so, and this is not a new thought. At least Peter Norvig stated this already in 1996: "Design patterns are bug reports against your programming language." "16 out of the 23 patterns in the Design Patterns book (which is primarily focused on C++) are simplified or eliminated (via direct language support) in Lisp or Dylan."

But many companies are stuck now with an OO ABL code base and have a problem maintaining it. What I propose is not the next fashion of OO (it's more weeding out some things like inheritance and the patterns depending on it, see the explanation in my previous mails). Ports and adapters is natural for a functional language like Haskell, not for OO ABL; it is impossible to implement in OO ABL, though one might find some solutions for maintenance problems in it. Support for functional programming in ABL, like making it possible to send a method/internal procedure/function reference as a parameter, would be helpful.

Posted by James Palmer on 13-Aug-2019 10:48

[mention:769929d6588f4365a18fd9becf2d125e:e9ed411860ed4f2ba0265705b8793d05] It's beautiful, isn't it, building an application from scratch that works and is easy to maintain. Unfortunately, in most situations, the problems arise because it's not just one person producing or maintaining the code. And it's not just differences in coding standards that cause problems; differences in understanding and knowledge also play a part. As soon as you get multiple people working on a project, no matter how good, you always introduce another level of complexity.

Posted by smat-consulting on 15-Aug-2019 00:58

Ha, yes,  James, that's the difference between prototyping in a controlled environment and real-life...

My observation in real life, though, is that the lack of a good architecture (either due to an outdated approach or lack of knowledge) and/or the lack of good coding conventions is rampant - and, it seems to me, the main culprit for much of the technical debt that is added on a daily basis.

I am eternally grateful for having been given the opportunity to prove that is the case - by making these things a top priority.

I believe that most developers want to write good code. I also believe, that if given convincing evidence, most rational people are happy and eager to do what's right. I do know, though, that we all are inherently lazy and change-averse.

Since I now have a concrete example to point at, I am hopeful again, that it might be possible to help people overcome their inner sloth and follow their "better knowledge", leading to less debt-buildup, if not actual debt reduction... well, hope dies last... ;)

Posted by agent_008_nl on 18-Aug-2019 11:33

Good presentation about evidence-based software engineering (Greg Wilson - What We Actually Know About Software Development, and Why We Believe It's True):

Posted by gus bjorklund on 18-Aug-2019 12:44

It has been my experience that no matter what programming language i was using, i had to write everything at least three times.

my first attempt was no good because i did not understand the problem.

my second attempt was no good because i did not understand the solution.

my third attempt was, sometimes, adequate. while it might have been correct, it could still benefit from improvement.

then again, when looking at my code years later, i am rarely impressed.

Posted by agent_008_nl on 18-Aug-2019 15:40

> It has been my experience that no matter what programming language i was using, i had to write everything at least three times. my first attempt was no good

> because i did not understand the problem.

When I write a complicated program I first try to get the requirements clear. Sometimes I discover that I underestimated the complexity and have to start thinking before coding any further. I'm not the holy virgin. As Leslie Lamport says: "We need to understand our programming task at a higher level before we start writing code." See the text copied below (a text that brings more wisdom than the agile manifesto ;-):


I began writing programs in 1957. For the past four decades I have been a computer science researcher, doing only a small amount of programming. I am the creator of the TLA+ specification language. What I have to say is based on my experience programming and helping engineers write specifications. None of it is new; but sensible old ideas need to be repeated or silly new ones will get all the attention. I do not write safety-critical programs, and I expect that those who do will learn little from this.

Architects draw detailed plans before a brick is laid or a nail is hammered. But few programmers write even a rough sketch of what their programs will do before they start coding. We can learn from architects.

A blueprint for a program is called a specification. An architect's blueprint is a useful metaphor for a software specification. For example, it reveals the fallacy in the argument that specifications are useless because you cannot generate code from them. Architects find blueprints to be useful even though buildings cannot be automatically generated from them. However, metaphors can be misleading, and I do not claim that we should write specifications just because architects draw blueprints.

The need for specifications follows from two observations. The first is that it is a good idea to think about what we are going to do before doing it, and as the cartoonist Guindon wrote: "Writing is nature's way of letting you know how sloppy your thinking is."

We think in order to understand what we are doing. If we understand something, we can explain it clearly in writing. If we have not explained it in writing, then we do not know if we really understand it.

The second observation is that to write a good program, we need to think above the code level. Programmers spend a lot of time thinking about how to code, and many coding methods have been proposed: test-driven development, agile programming, and so on. But if the only sorting algorithm a programmer knows is bubble sort, no such method will produce code that sorts in O(n log n) time. Nor will it turn an overly complex conception of how a program should work into simple, easy to maintain code. We need to understand our programming task at a higher level before we start writing code.

Specification is often taken to mean something written in a formal language with a precise syntax and (hopefully) a precise semantics. But formal specification is just one end of a spectrum. An architect would not draw the same kind of blueprint for a toolshed as for a bridge. I would estimate that 95% of the code programmers write is trivial enough to be adequately specified by a couple of prose sentences. On the other hand, a distributed system can be as complex as a bridge. It can require many specifications, some of them formal; a bridge is not built from a single blueprint. Multithreaded and distributed programs are difficult to get right, and formal specification is needed to avoid synchronization errors in them. (See the article by Newcombe et al. on page 66 in this issue.)

The main reason for writing a formal spec is to apply tools to check it. Tools cannot find design errors in informal specifications. Even if you do not need to write formal specs, you should learn how. When you do need to write one, you will not have time to learn how. In the past dozen years, I have written formal specs of my code about a half dozen times. For example, I once had to write code that computed the connected components of a graph. I found a standard algorithm, but it required some small modifications for my use. The changes seemed simple enough, but I decided to specify and check the modified algorithm with TLA+. It took me a full day to get the algorithm right. It was much easier to find and fix the errors in a higher-level language like TLA+ than it would have been by applying ordinary program-debugging tools to the Java implementation. I am not even sure I would have found all the errors with those tools.

Writing formal specs also teaches you to write better informal ones, which helps you think better. The ability to use tools to find design errors is what usually leads engineers to start writing formal specifications. It is only afterward that they realize it helps them to think better, which makes their designs better.


There are two things I specify about programs: what they do and how they do it. Often, the hard part of writing a piece of code is figuring out what it should do. Once we understand that, coding is easy. Sometimes, the task to be performed requires a nontrivial algorithm. We should design the algorithm and ensure it is correct before coding it. A specification of the algorithm describes how the code works.

Not all programs are worth specifying. There are programs written to learn something—perhaps about an interface that does not have an adequate specification—and are then thrown away. We should specify a program only if we care whether it works right.

Writing, like thinking, is difficult; and writing specifications is no exception. A specification is an abstraction. It should describe the important aspects and omit the unimportant ones. Abstraction is an art that is learned only through practice. Even with years of experience, I cannot help an engineer write a spec until I understand her problem. The only general rule I have is that a specification of what a piece of code does should describe everything one needs to know to use the code. It should never be necessary to read the code to find out what it does.

There is also no general rule for what constitutes a "piece of code" that requires a specification. For the programming I do, it may be a collection of fields and methods in a Java class, or a tricky section of code within a method. For an engineer designing a distributed system, a single spec may describe a protocol that is implemented by code in multiple programs executed on separate computers.

Specification should be taught in school. Some universities offer courses on specification, but I believe that most of them are about formal specification languages. Anything they teach about the art of writing real specs is an accidental by-product. Teachers of specification should write specifications of their own code, as should teachers of programming.

Computer scientists believe in the magical properties of language, and a discussion of specification soon turns to the topic of specification languages. There is a standard language, developed over a couple of millennia, for describing things precisely: mathematics. The best language for writing informal specifications is the language of ordinary math, which consists of precise prose combined with mathematical notation. (Sometimes additional notation from programming languages can be useful in specifying how a program works.) The math needed for most specifications is quite simple: predicate logic and elementary set theory. This math should be as natural to a programmer as numbers are to an accountant. Unfortunately, the U.S. educational system has succeeded in making even this simple math frightening to most programmers.

Math was not developed to be checked by tools, and most mathematicians have little understanding of how to express things formally. Designers of specification languages usually turn to programming languages for inspiration. But architects do not make their blueprints out of bricks and boards, and specifications should not be written in program code. Most of what we have learned about programming languages does not apply to writing specifications. For example, information hiding is important in a programming language. But a specification should not contain lower-level details that need to be hidden; if it does, there is something wrong with the language in which it is written. I believe the closer a specification language comes to ordinary mathematics, the more it aids our thinking. A language may have to give up some of the elegance and power of math to provide effective tools for checking specs, but we should have no illusion that it is improving on ordinary mathematics.

Programmers who advocate writing tests before writing code often believe those tests can serve as a specification. Writing tests does force us to think, and anything that gets us to think before coding is helpful. However, writing tests in code does not get us thinking above the code level. We can write a specification as a list of high-level descriptions of tests the program should pass—essentially a list of properties the program should satisfy. But that is usually not a good way to write a specification, because it is very difficult to deduce from it what the program should or should not do in every situation.

Testing a program can be an effective way to catch coding errors. It is not a good way to find design errors or errors in the algorithm implemented by the program. Such errors are best caught by thinking at a higher level of abstraction. Catching them by testing is a matter of luck. Tests are unlikely to catch errors that occur only occasionally—which is typical of design errors in concurrent systems. Such errors can be caught only by proof, which is usually too difficult, or by exhaustive testing. Exhaustive testing—for example, by model checking—is usually possible only for small instances of an abstract specification of a system. However, it is surprisingly effective at catching errors—even with small models.

The blueprint metaphor can lead us astray. Blueprints are pictures, but that does not mean we should specify with pictures. Anything that helps us think is useful, and pictures can help us think. However, drawing pictures can hide sloppy thinking. (An example is the classic plane-geometry "proof" that all triangles are isosceles.) Pictures usually hide complexity rather than handling it by abstraction. They can be good for simple specifications, but they are not good for dealing with complexity. That is why flowcharts were largely abandoned decades ago as a way to describe programs.


Another difference between blueprints and specifications is that blueprints get lost. There is no easy way to ensure a blueprint stays with a building, but a specification can and should be embedded as a comment within the code it is specifying. If a tool requires a formal specification to be in a separate file, a copy of that file should appear as a comment in the code.

In real life, programs often have to be modified after they have been specified—either to add new features, or because of a problem discovered during coding. There is seldom time to rewrite the spec from scratch; instead the specification is updated and the code is patched. It is often argued that this makes specifications useless. That argument is flawed for two reasons. First, modifying undocumented code is a nightmare. The specs I write provide invaluable documentation that helps me modify code I have written. Second, each patch makes the program and its spec a little more complicated and thus more difficult to understand and to maintain. Eventually, there may be no choice but to rewrite the program from scratch. If we do not start with a specification, every line of code we write is a patch. We are then building needless complexity into the program from the beginning. As Dwight D. Eisenhower observed: "No battle was ever won according to plan, but no battle was ever won without one."

Another argument against specification is that the requirements for a program may be too vague or ill-defined to be specified precisely. Ill-defined requirements mean not that we do not have to think, but that we have to think even harder about what a program should do. And thinking means specifying. When writing the pretty-printer for TLA+, I decided that instead of formatting formulas naively, it should align them the way the user intended (see the accompanying figure).

It is impossible to specify precisely what the user intended. My spec consisted of six alignment rules. One of them was:

If token t is a left-comment token, then it is left-comment aligned with its covering token.

where terms like covering token are defined precisely but informally. As I observed, this is usually not a good way to write a spec because it is hard to understand the consequences of a set of rules. So, while implementing the rules was easy, debugging them was not. But it was a lot easier to understand and debug six rules than 850 lines of code. (I added debugging statements to the code that reported what rules were being applied.) The resulting program does not always do the right thing; no program can when the right thing is subjective. However, it works much better, and took less time to write, than had I not written the spec. I recently enhanced the program to handle a particular kind of comment. The spec made this a simple task. Without the spec, I probably would have had to recode it from scratch. No matter how ill-defined a problem may be, a program to solve it has to do something. We will find a better solution by thinking about the problem and its solution, rather than just thinking about the code.

A related argument against specification is that the client often does not know what he wants, so we may as well just code as fast as we can so he can tell us what is wrong with the result. The blueprint metaphor easily refutes that argument.

The main goal of programmers seems to be to produce software faster, so I should conclude by saying that writing specs will save you time. But I cannot. When performing any task, it is possible to save time and effort by doing a worse job. And the result of forcing a programmer to write a spec, when she is convinced that specs are a waste of time, is likely to be useless—just like a lot of documentation I have encountered. (Here is the description of the method resetHighlightRange in a class TextEditor: "Resets the highlighted range of this text editor.")

To write useful specifications, you must want to produce good code—code that is easy to understand, works well, and has few errors. You must be sufficiently motivated to be willing to take the time to think and specify before you start coding. If you make the effort, specification can save time by catching design errors when they are easier to fix, before they are embedded in code. Formal specification can also allow you to make performance optimizations that you would otherwise not dare to try, because tools for checking your spec can give you confidence in their correctness.

There is nothing magical about specification. It will not eliminate all errors. It cannot catch coding errors; you will still have to test and debug to find them. (Language design and debugging tools have made great progress in catching coding errors, but they are not good for catching design errors.) And even a formal specification that has been proved to satisfy its required properties could be wrong if the requirements are incorrect. Thinking does not guarantee that you will not make mistakes. But not thinking guarantees that you will.

Posted by gus bjorklund on 18-Aug-2019 16:06

well said, agent_008.

I have only one thing to add:

"Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better."

-- Edsger W. Dijkstra

Posted by Thomas Mercer-Hursh on 18-Aug-2019 16:22

Of course, the ultimate in specification driven development is Model-Based Development! :)

Posted by agent_008_nl on 18-Aug-2019 17:34

Of course you know that I know my classics. But good to repeat that one here. And that other one that you used as a sig at the PEG for some time: "The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague." Very applicable in times where monster trucks are in demand.

Posted by agent_008_nl on 18-Aug-2019 17:39

> Of course, the ultimate in specification driven development is Model-Based Development! :)

I think TLA+ (Lamport's design) and other formal specification languages are quite different beasts. More ultimate also. ;-)

Posted by agent_008_nl on 19-Aug-2019 06:21

Here's a study from 2014 with the title "The Extent of Empirical Evidence that Could Inform Evidence-Based Design of Programming Languages":

"How much empirical research is there that could guide a programming language design process to result in a language as useful to the programmer as possible? That is the question I consider in this licentiate thesis, recognizing that such empirical research has not often been taken into account in language design. Answering that question properly required me to conduct an over three years long systematic mapping study, which I now report in this thesis. [..]

There is clearly some empirical evidence on the efficacy of language design decisions that could inform evidence-based programming language design; however, it is rather sparse. Significant bodies of research seem to exist only for a handful of design decisions."

This thread is closed