MDA for enterprise computing

Posted by Phillip Magnay on 14-Sep-2006 08:47

Model-driven approaches to database design, GUI development, and component-based application development have been around for quite some time. However, model-driven methods for enterprise systems development - n-tiered architecture, SOA, EAI, B2B, middleware, etc. - are less mature.

But I see a positive in the fact that distributed enterprise systems development is new (or at least newish) to much of the Progress/OpenEdge community, so there is less legacy to transform than in the case of existing business applications. So perhaps MDA is the key to starting enterprise systems development on the right foot. Certainly, PIMs & PSMs appear on the face of it to be a possible solution to dealing with the increasing pervasiveness of heterogeneous environments.

I'm a proponent of using MDA for enterprise system development but it is a road less traveled than that of MDA for application development and needs greater validation and evidence of business benefits. Perhaps this is an opportunity to develop and converge on some standards that would be valuable to the entire community.

All Replies

Posted by Thomas Mercer-Hursh on 20-Sep-2006 12:00

I'm not 100% clear on your question or request here, but let me throw in a penny or two worth of thoughts.

To me, any form of program generation has three main purposes:

1) It makes the generation of any predictable code automatic, saving labor and allowing the developer to concentrate on the more "interesting" aspects;

2) It means that large parts of the application are uniform, solid, robustly tested, and, if one puts in the effort, very full featured, resulting in a consistent, strong, rich overall application; and

3) Because large parts of the application come from the generator, it is possible to make significant changes in architecture by changing templates and regenerating without having to do any manual re-coding.
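The template idea behind points 1) and 3) can be sketched in a few lines. This is a toy illustration, not any real generator: the entity descriptions, template text, and `repository` name are all hypothetical, and Python's stdlib `string.Template` stands in for a proper template engine.

```python
from string import Template

# Hypothetical entity descriptions -- the "model" the generator consumes.
ENTITIES = [
    {"name": "Customer", "key": "cust_num"},
    {"name": "Order", "key": "order_num"},
]

# One template covers every entity. Editing this template and
# regenerating changes all of the emitted code at once, which is
# the architectural-change point (3) above.
FINDER_TEMPLATE = Template(
    "def find_${lname}(${key}):\n"
    '    """Fetch one ${name} by its key (generated code)."""\n'
    "    return repository.get('${name}', ${key})\n"
)

def generate():
    """Emit one finder function per entity in the model."""
    return "\n".join(
        FINDER_TEMPLATE.substitute(
            name=e["name"], lname=e["name"].lower(), key=e["key"]
        )
        for e in ENTITIES
    )

print(generate())
```

The predictable code comes out uniform and labor-free; only the "interesting" logic is left to be written by hand.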

MDA does this one better through the PIM->PSM structure because there are multiple transforms, not just a single generation stage, and thus one can introduce fairly significant architectural changes by replacing or changing a single transform without having to touch the others.

As such, I think it is ideally suited to the development of complex modern applications since the very hard work of figuring out what should happen is focused on the development of the transforms and the underlying framework. That work can be done by the best engineers who understand the problem space best, and their work can then be used by those whose understanding might be more in the application domain and yet produce extremely high quality code.
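The multiple-transform point can be made concrete with a minimal sketch of the PIM -> PSM -> code chain. Every name here is hypothetical; each stage is just a plain function, which is enough to show that retargeting means swapping one stage while the upstream transform and the PIM itself stay untouched.

```python
def pim_to_psm(pim, platform):
    """Model-to-model transform: annotate the platform-independent
    model with platform-specific type mappings."""
    type_map = {"sql": {"string": "VARCHAR(40)", "int": "INTEGER"},
                "java": {"string": "String", "int": "int"}}[platform]
    return {"entity": pim["entity"],
            "fields": [(n, type_map[t]) for n, t in pim["fields"]]}

def psm_to_sql(psm):
    """Model-to-text transform targeting SQL DDL."""
    cols = ",\n  ".join(f"{n} {t}" for n, t in psm["fields"])
    return f"CREATE TABLE {psm['entity']} (\n  {cols}\n);"

def psm_to_java(psm):
    """Alternative final transform: same PSM shape, different target."""
    fields = "\n".join(f"  private {t} {n};" for n, t in psm["fields"])
    return f"public class {psm['entity']} {{\n{fields}\n}}"

# A tiny platform-independent model.
pim = {"entity": "Customer",
       "fields": [("name", "string"), ("credit", "int")]}

# Swapping only the final transform retargets the platform.
print(psm_to_sql(pim_to_psm(pim, "sql")))
print(psm_to_java(pim_to_psm(pim, "java")))
```

A real MDA toolchain would of course work on richer metamodels than dictionaries, but the separation of concerns is the same.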

Posted by Phillip Magnay on 20-Sep-2006 12:54

I'm not 100% clear on your question or request here, but let me throw in a penny or two worth of thoughts.

No real question or request... just putting something out there and wondering what others thought.

To me, any form of program generation has three main purposes:

1) It makes the generation of any predictable code automatic, saving labor and allowing the developer to concentrate on the more "interesting" aspects;

2) It means that large parts of the application are uniform, solid, robustly tested, and, if one puts in the effort, very full featured, resulting in a consistent, strong, rich overall application; and

3) Because large parts of the application come from the generator, it is possible to make significant changes in architecture by changing templates and regenerating without having to do any manual re-coding.

Productivity. Quality. Agility. Absolutely.

MDA does this one better through the PIM->PSM structure because there are multiple transforms, not just a single generation stage, and thus one can introduce fairly significant architectural changes by replacing or changing a single transform without having to touch the others.

Yes. The challenge is to make this real, up-close, and personal with developers.

As such, I think it is ideally suited to the development of complex modern applications since the very hard work of figuring out what should happen is focused on the development of the transforms and the underlying framework. That work can be done by the best engineers who understand the problem space best, and their work can then be used by those whose understanding might be more in the application domain and yet produce extremely high quality code.

My original post was a high-level comparison of MDA for applications versus MDA for enterprise systems (whether such a distinction is useful is perhaps another question). I think the former is reasonably mature and therefore there exists more in the way of evidence of real business benefits. But much of what we do these days is not "application" development anymore. It's constructing enterprise systems out of a patchwork of technologies. Is the increasing prevalence of heterogeneous environments actually reinforcing the case for MDA? The PIM->PSM->code/artifact paradigm seems to suggest it should. If not, why? What are the barriers to its adoption as a methodology when one of its underlying premises is to provide a means to deal with multiple technologies, and when that is the type of environment we regularly confront today?

Just wondering...

Posted by Thomas Mercer-Hursh on 20-Sep-2006 13:36

No real question or request

Ah, no Sony headphones from this thread, then!

Yes. The challenge is to make this real, up-close, and personal with developers.

Well, I do think the human factors side of things is an important issue ... but I don't equate this with the idea that, if programmer A is currently happily coding away in ABL, the only good or acceptable result is that programmer A is still happily coding away after new technology is introduced. As technologies change, evolve, and are replaced, efficiencies change and new skills come to the fore. This may mean fewer people, more people, or different people depending on the skills of the individual.

My original post was a high-level comparison of MDA for applications versus MDA for enterprise systems (whether such a distinction is useful is perhaps another question).

I'm not sure I think it is a valid distinction. Both are applications ... one is just more complex than the other, and that is a multi-dimensional continuum, not a simple distinction. Even within single, isolated applications there are enormous ranges of variation in difficulty. And, in the realm of multi-part cooperating applications, there is a similar or greater range.

What I might suggest, though, is that the simpler an application, the easier it is to model, but also the fewer benefits to be gained from modeling. The more complex, the more difficult, but the greater the benefits.

So, yes, I think the modern complexity creates an even more compelling case for technologies like MDA than existed before.

Posted by Admin on 20-Sep-2006 14:05

So, yes, I think the modern complexity creates an even more compelling case for technologies like MDA than existed before.

Please give MDA a try in a real life example. Try to convert it to a running application. Try to do it with a big team of developers working on the same model.

The real problems start when you want to convert the model to executable code:

- how do you get the right generators/converters?
- how do you get the right (and affordable) application framework?
- how do you get debugging support that maps generated code back to the MDA model?

See the "success stories" on OMG, http://www.omg.org/mda/products_success.htm.

Another nice quote from this article http://www-128.ibm.com/developerworks/webservices/library/co-omg/ from June 2001:

"...

We asked Dr. Richard Soley, CEO of the OMG, about this new market and he suggested that:

"traditional development tool vendors like IBM, Microsoft, Rational, and Computer Associates will probably release MDA tools in the near future. At the same time, it's likely that smaller vendors of process-management tools and rapid-application development tools will lead the charge to the model-driven approach. All application development has been moving to modeling in order to capture the rigorous specifications required for multiplatform distributed development. The OMG has simply provided an open, neutral, standardized architecture that will assure that everyone can do this in a compatible manner using the widely-accepted UML language."

In keeping with Dr. Soley's prediction, Adaptive, io-software, Kabira Technology, and Secant Technologies have all announced that they have released, or will be releasing, MDA-compliant products (see Resources).

The first MDA tools will be released before the end of this year. With any luck they'll begin to have an impact in 2002, easing some of the burden of integrating enterprise applications in the corporate environment.

..."

So I guess it's not so easy to implement the proper tools...

Posted by Thomas Mercer-Hursh on 20-Sep-2006 14:20

Try to convert it to a running application. Try to do it with a big team of developers working on the same model.

Yes, but also consider the success rate of any large programming project. Really, it is quite embarrassing.

To be sure, there is a lot of up front work involved in getting an MDA project going, but once one gets there, everything is incremental effort, not starting over again from scratch.

Posted by Admin on 20-Sep-2006 14:59

Yes, but also consider the success rate of any large programming project. Really, it is quite embarrassing.

Exactly! And the grass is always greener at your neighbour's. So why would this new approach be any better in a large-scale project? You hope it will be, since it's something you haven't experienced yet!

I repeat myself when I say that I do see a place for modeling and code generators, specifically for comprehensible and well-defined aspects. On the other hand, I have my doubts that a UML model with complex OCL will be any better than plain old code.

Posted by Thomas Mercer-Hursh on 20-Sep-2006 15:10

Actually, I think there are some very strong reasons why generator technology can outperform legions of programmers. One of the most obvious is that only the best and the brightest will work on the transforms and frameworks, and that can be a small team. Another is that enhanced productivity means that fewer total people are required, so the overall team is not so unwieldy. A related benefit is that a very small number of people is sufficient to cover any one sub-domain, so they can understand and share the issues of that domain in a way that isn't possible in large teams. Another is that since a lot of code comes out of a generator (if not all), that part of the code is stable and predictable, not what Charlie decided to write on Tuesday.

Speaking as the architect of nearly 2 million lines of ABL ... a very large part of which came from a generator technology produced with very small teams.

Posted by Phillip Magnay on 21-Sep-2006 08:32

I repeat myself when I say that I do see a place for modeling and code generators, specifically for comprehensible and well-defined aspects. On the other hand, I have my doubts that a UML model with complex OCL will be any better than plain old code.

Yes. I think the challenge here is finding the most productive combination. For many system aspects, manual coding will always be more economical while for others it makes more economic sense to have machines generate the code. And that line will move around depending on the situation.

A key question here is: if you accept that code generation can play a significant part in your environment, then which code generation paradigm should you adopt? Modeling/MDA is just one approach amongst many. But it is reasonably well-defined, has broad industry acceptance, and more and more tools are available, especially for technologies outside OpenEdge. And many of these tools could readily be configured to support OpenEdge development. Isn't it possible that we may be putting ourselves at a competitive/productivity disadvantage if we do not leverage these methods and tools?

Posted by Thomas Mercer-Hursh on 21-Sep-2006 11:24

And that line will move around depending on the situation.

And, with on-going effort, the line will keep moving toward more and more code coming from the generator. With our SDD tool, one of the guiding principles is to keep alert for any repeating pattern not covered by the current templates and to add that pattern to the templates, so that it is covered by the tool instead of being hand-coded.
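The idea of promoting a repeating hand-coded pattern into the template set can be sketched as follows. This is a hypothetical registry, not the actual SDD tool; the template names and the validation pattern are invented for illustration.

```python
# Hypothetical template registry: each entry is a named pattern
# the generator knows how to emit.
TEMPLATES = {
    "finder": "def find_{entity}(key): ...",
}

def generate(kind, **params):
    """Emit code for one named pattern with the given parameters."""
    return TEMPLATES[kind].format(**params)

# Initially, validation logic is hand-coded per entity. Once it is
# recognized as a repeating pattern, it is added to the registry so
# the generator covers it from then on -- the line between generated
# and hand-written code moves.
TEMPLATES["validator"] = (
    "def validate_{entity}(obj):\n"
    "    return all(obj.get(f) is not None for f in {fields!r})"
)

print(generate("validator", entity="customer", fields=["name", "credit"]))
```

Each promoted pattern is written once, reviewed once, and then stamped out consistently everywhere it applies.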

If one thinks in terms of MDA with an Action Language which resembles a subset of ABL, then I see no reason not to push for 100% generation, recognizing that a part of the model is not really very different from code, but it is integrated into a whole, not tacked on.

Isn't it possible that we may be putting ourselves at a competitive/productivity disadvantage if we do not leverage these methods and tools?

In any given language, there are always going to be those who work in a very manual way and those who work with varying degrees of tool enhancement. It isn't so much a question of ABL as a whole becoming less productive than Java as a whole as much as it is a question of the relative productivity of the best tool enhancement in both realms.

This thread is closed