Ok, so the subject line isn't quite grammatically correct, but it got your attention
Some of the discussions in the other threads have started to go down the road of asking why the samples/examples we're currently providing aren't using the new OO extensions in the ABL.
So I guess I'd like to ask whether or not there is a huge demand for class based examples? Maybe this also leads to a bigger question, and one that should be in a poll: are you using OpenEdge 10.1? If not, when do you think you will be: 1 month, 2, 3, 6, 12? And if you are, or intend to, how many of you think you will move to an OO-based design and implement in classes, and in what time frame?
Is it only me, or is this forum starting to become some form of big therapy session, where we ask and you tell us about your issues (or maybe that just says more about me) !!
Mike
PS. I just had a mental image of John Sadd standing in a white doctor's coat, presenting at Exchange. Very surreal !!
So I guess I'd like to ask whether or not there is a huge demand for class based examples?
Regardless of there being a "huge demand", if PSC incorporates new technology in their product offering, it only makes sense to provide examples that show how to use it.
Leaving these examples out means the developer community has to figure it out for themselves, or call TS looking for help.
If a developer figures they have better things to do with their time (i.e. "real work") than try to figure out a technology that may or may not help them get their job done, then the adoption rate'll be low and all PSC's work'll be for naught.
If a developer calls TS because there's no docs / examples, that's a resource load on TS, which they could've spent on other things.
So I guess I'd like to ask whether or not there is
a huge demand for class based examples?
Regardless of there being a "huge demand", if PSC
incorporates new technology in their product
offering, it only makes sense to provide examples
that show how to use it.
Leaving these examples out means the developer
community has to figure it out for themselves, or
calling TS looking for help.
If a developer figures they have better things to do
with their time (ie "real work") than try to figure
out a technology that may or may not help them get
their job done, then the adoption rate'll be low and
all PSC's work'll be for naught.
If a developer calls TS because there's no docs /
examples, that's a resource load on TS, which they
could've spent on other things.
You mean people out there do "real work"
All totally valid points, and hence part of the rationale behind having something such as OpenEdge Principles.
Now, would we save more of everyone's time if we produced more procedural-based examples, rather than satisfying the needs of the few who are at the front of the adoption curve? It's been proven time and time again that the fittest will always survive and the brightest will always work it out, but what about those who are outside of the 5%?
Don't read me wrong, at the end of the day we need both, but just as the N-Tier/SOA poll is showing (even if it is currently a total of 5 votes, so come on all you lurkers, get voting!), there is a danger that the guidance we produce is "too far" ahead of what the community is looking for, which wouldn't be good either. We've all heard about the people sitting in ivory towers
Now, would we save more of everyone's time if we produced more procedural based examples, rather than satisfying the need of the few that are at the front of the adoption curve?
There needs to be a balance of both. Too much of any single topic / subject / etc. is bad.
It's been proven time & time again, the fittest will always survive and the brightest will always work it out, but what about those that are outside of the 5%?
And why should the "best and brightest" have to spend time figuring out information PSC could've easily provided in the first place?
... a danger that the guidance we produce is "too far" ahead of what the community is looking for...
Or too focused on a single topic / technology.
Well, I've been doing all new development in 10.1A since the start of beta and I will move to 10.1B as soon as the beta starts, so I am 100% OO for all new work.
While there may not be a huge percentage of the current community doing OO development yet, how do you expect them to get there if you don't provide leadership? Some of them will learn from Java and other language backgrounds, but many of them have no OO background and the best way for them to get that background is for you to show the way.
One of the biggest millstones around PSC's neck is all the people stuck on old versions. Some of those people are genuinely stuck because they have lapsed maintenance. While it would be nice to pull those people back into the family, they can't really be your focus because they provide no revenue and no future. Other people are stuck because they have customized version X of some VAR's product so that they can't upgrade to later releases of either the Progress or VAR product. I don't have any answers for them other than the kind of stuff I am working on. But, there are a lot who are on old versions for no good reason: don't break what works, licensing issues, etc. These are understandable reasons, but in the long term they are bad for PSC because they keep people on old versions.
Certainly one of the ways I can think of to motivate people to move forward is by providing examples that show how desirable it is.
Don't read me wrong, at the end of the day we need both, but just as the
N-Tier/SOA poll is showing (even if it is currently a total of 5 votes, so come
on all you lurkers, get voting!),
I, for one, haven't voted because I want both and the question is too simple to make a choice.
Don't read me wrong, at the end of the day we need
both, but just as the
N-Tier/SOA poll is showing (even if it is currently
a total of 5 votes, so come
on all you lurkers, get voting!),
I, for one, haven't voted because I want both and the
question is too simple to make a choice.
See, now life is full of difficult choices
The poll is intended to get some idea of what people think we should do first, not that we should do one over the other. It's fairly obvious that we need to do both, as well as class-based examples, new UI work when it becomes available, etc, so I'm trying to assess people's views on priority.
While there may not be a huge percentage of the
current community doing OO development yet, how do
you expect them to get there if you don't provide
leadership? Some of them will learn from Java and
other language backgrounds, but many of them have no
OO background and the best way for them to get that
background is for you to show the way.
Now this raises an interesting question. Is our (Progress's) job to teach OO programming? Sure it's our job to teach how to use the OO features of the ABL, but does that mean we should also be teaching OO basics? Aren't there thousands of books & courses out there that could do that?
Certainly one of the ways I can think of to
motivate people to move forward is by
providing examples that show how desirable it is.
Agreed, this is a point I made in an earlier posting. We have to show the benefit of moving, rather than just shouting "Hey, we've released a new version with some cool stuff, aren't we clever".
The poll is intended to get some idea of what people think we should do
first, not that we should do one over the other. It's fairly obvious that we
need to do both, as well as class-based examples, new UI work when it
becomes available, etc, so I'm trying to assess people's views on
priority.
I think, though, that it is a false question in a number of ways:
1. The two are not really separate. Both SOA and N-tier are about learning how to package an application in appropriate units. A body of code that spans multiple layers doesn't make a good service. You have to address both issues simultaneously.
2. Even if they were separable, there is no reason to pursue either a whole batch of SOA topics and then a whole batch of N-tier topics, or vice versa ... one can do one, then another according to which individual topic seems important.
3. While it is refreshing and nice to have you asking, you also need to realize that PSC is taking a leadership position here. I.e., the presumption is that you know where we should be heading and you should be the one providing the guidance. Admittedly, I haven't always felt like PSC was doing its job in this regard and that the best guidance on how to do ABL development was coming from people outside the company, but you should be trying to think like leaders (even if you have to look outside for some guidance of your own). As a leader, you have to expect that the followers don't really know what they don't know. So, while our points of pain are important and you should be addressing them, you also need to be charting a path based on your vision.
Now this raises an interesting question. Is our (Progress's) job to teach OO
programming? Sure it's our job to teach how to use the OO features of the
ABL, but does that mean we should also be teaching OO basics? Aren't
there thousands of books & courses out there that could do that?
Thousands of books about other languages, but none about ABL. Feel free to make use of those books and courses where they seem useful, but recognize that most ABL developers are going to look for their guidance from the PSC documentation, so either you point them in the right direction or they end up wandering around and probably not getting very far.
Apropos of which, there are a lot of bad examples in the existing documentation which should get purged or modified.
Now this raises an interesting question. Is our
(Progress's) job to teach OO programming? Sure it's
our job to teach how to use the OO features of the
ABL, but does that mean we should also be teaching OO
basics? Aren't there thousands of books & courses
out there that could do that?
It's your job to deliver an intuitive ABL which shields the programmer from low-level tasks making it more future proof than the old 3GL (I consider C# + VS.NET a 4GL as well). When you claim you provide a highly productive environment and users claim they have a hardtime adopting a new architecture, something is wrong. Host-based 4GL in the 90-ties was very productive, but the community has been struggling with the 4GL since the client/server age.
Event-driven programming, dynamic widgets and dynamic queries were a big step forward for the Progress community, but it didn't make the 4GL code easier or more robust due to all string manipulations and the verbose syntax. The ProDataSet is a nice feature, but in the wrong hands it's even worse than binding to a direct database query.
So all the features might be there, but I think we (Progress community) want to make clear that you (Progress) have to create a more complex example with the right technology, before you understand why developers are struggling with certain ABL-concepts. I just have to name the BROWSE-widget on the Windows platform or unlogical restrictions to FUNCTION/ASSIGN statements (KB solution ID: 18223) and people understand what I'm talking about...
Theo.
I just have to name the BROWSE-widget on the Windows platform
And I suppose we won't even mention browse on ChUI!
Event-driven programming, dynamic widgets and dynamic queries were a big step forward for the Progress community, but it didn't make the 4GL code easier or more robust due to all string manipulations and the verbose syntax.
I don't think the problem is the existence of dynamic widgets and queries, but that PSC didn't take the next logical step with them and provide 4GL/ABL ways of managing them.
Queries, persistent procedures, and super procedures are some examples that were begging for something to bring out all their latent power. Since PSC didn't provide a standardized way to deal with them easily, I wrote my own. My procedure manager provides me with an OO-ish way of developing applications, and I can do a lot more a lot faster than I could've done without this tool.
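For readers who haven't built one, here is a minimal sketch of the persistent-procedure pattern that this kind of manager wraps. The procedure and names are hypothetical; a real manager adds registration, lookup, and lifecycle handling on top:

```abl
/* counter.p -- a hypothetical "instance" built from a persistent procedure */
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.

PROCEDURE increment:
    iCount = iCount + 1.
END PROCEDURE.

PROCEDURE getCount:
    DEFINE OUTPUT PARAMETER piCount AS INTEGER NO-UNDO.
    piCount = iCount.
END PROCEDURE.

/* caller.p -- each RUN ... PERSISTENT creates an independent "object" */
DEFINE VARIABLE hCounter AS HANDLE NO-UNDO.

RUN counter.p PERSISTENT SET hCounter.
RUN increment IN hCounter.
RUN increment IN hCounter.
/* the state lives in the procedure's context, much like instance data */
DELETE PROCEDURE hCounter.
```

Each RUN ... PERSISTENT gives you a separate context with its own copy of iCount, which is exactly the OO-ish behavior being described.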
I would much rather've used PSC's native implementation than have spent all that time writing my own.
The lost opportunities: if PSC had provided a procedure manager in the 9.1 days, the language could've had OO-ish capabilities for years now instead of having them introduced in 10.1. A query manager which facilitated standard query string specification could make setting up complicated queries a series of method calls on an object handle. Once the query was done, it could be fed to a boolean optimizer which would rework it to do better table searches.
and that's just off the top of my head...
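To make the query-manager idea concrete, here is a rough sketch using the dynamic query handles the ABL already provides; a manager object would wrap these calls behind methods. Table and field names assume the sports2000 schema:

```abl
DEFINE VARIABLE hQuery  AS HANDLE NO-UNDO.
DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.

CREATE BUFFER hBuffer FOR TABLE "Customer".
CREATE QUERY hQuery.
hQuery:SET-BUFFERS(hBuffer).

/* a query manager could assemble this string from a series of
   method calls and hand it to an optimizer before preparing it */
hQuery:QUERY-PREPARE("FOR EACH Customer WHERE Customer.Balance > 1000 NO-LOCK").
hQuery:QUERY-OPEN().

REPEAT:
    hQuery:GET-NEXT().
    IF hQuery:QUERY-OFF-END THEN LEAVE.
    MESSAGE hBuffer:BUFFER-FIELD("Name"):BUFFER-VALUE.
END.

hQuery:QUERY-CLOSE().
DELETE OBJECT hQuery.
DELETE OBJECT hBuffer.
```

All the raw material for a standardized query manager is in those handle methods; what's missing is the packaging.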
Getting back to teaching OO - PSC doesn't have to teach all of OO programming, but they should provide enough to show how each element of ABL OO development works, how they all are tied together, and can be used in "real life."
Once that's accomplished, PSC could then provide references to other materials which do a better job of teaching the more advanced concepts of OO programming, offer training courses, etc.
PSC doesn't have to teach all of the OO programming, but they should
provide enough to show how each element of ABL OO development works,
how they all are tied together, and can be used in "real life."
I think the key element here is leading by example. If PSC provides good examples that translate well to real world work, then people will follow those examples and end up doing the right thing. If they can't identify with the examples or the examples are incomplete or even bad, then they either won't be followed or, worse yet, they will be.
This requires a lot of vigilance. An example focused on ProDataSets isn't going to convey a meaningful lesson if it is built on a schema as simple as sports2000 and it is going to convey the wrong message if the demo programs have UI mixed in with the data access. If it is too much to build all the layers for each example, then build the layer which the example is about cleanly and provide a test program which is clearly labelled as such, i.e., something which is very obviously not intended as part of an application, but exists merely to test a component and prove that it works as intended.
Similarly ... not to get on John Sadd's case ... but one really shouldn't be publishing MVC examples with references to buttons in the controller. It really doesn't take meaningfully more work to express the controller in terms of actions and leave the view to the view. Indeed, it would be absolutely ideal to provide two different views which work with the same model and controller to really illustrate the principle that the pattern is all about.
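As an illustration of the point about actions, a controller can dispatch on named actions so that no view widget leaks into it. This is a hypothetical sketch in 10.1 class syntax (class and method names are made up):

```abl
CLASS OrderController:

    /* views translate their own widgets/gestures into action names;
       the controller never sees a button handle */
    METHOD PUBLIC VOID DispatchAction (INPUT pcAction AS CHARACTER):
        CASE pcAction:
            WHEN "save"   THEN SaveOrder().
            WHEN "cancel" THEN CancelOrder().
            OTHERWISE MESSAGE "Unknown action: " + pcAction.
        END CASE.
    END METHOD.

    METHOD PRIVATE VOID SaveOrder():
        /* delegate to the model here */
    END METHOD.

    METHOD PRIVATE VOID CancelOrder():
        /* delegate to the model here */
    END METHOD.

END CLASS.
```

A GUI button trigger and a ChUI function key would each just call DispatchAction("save"), which is what makes swapping views cheap.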
It would be interesting to see alternative modified examples posted back from the community, illustrating where a difference of opinion exists and highlighting the benefits of the alternative approach. Bear in mind these are reference materials and there is no one correct answer.
On the subject of productivity - as discussed in other threads the ABL has had to respond to the challenge of developing distributed n-tier and SOA applications. The job is just much harder these days, regardless of the language being used - there is much more to consider upfront and many more use cases that need to be solved.
I believe the key to reviving the productivity is with tools. The tools should hide the complexity as far as possible - with the complexity being isolated mostly to the definition of the target design patterns. If we had productive tools, then the reference examples would not need to be as extensive. The use of the tools would guide you through the best practices to a large extent.
Our goal is then to produce concrete examples / patterns for the tools that solve a number of common use cases. Our challenge is understanding what these are.
Just thought I'd add a few different angles to the discussion.
Ant
I believe the key to reviving the productivity is with tools.
What is your definition of "tools"?
But, they need to be the right tools. T4BL is an example of good intentions wrongly executed, so it contributes essentially nothing unless one happens to want to do what it does.
I agree they need to be the right tools and highly extensible, customizable, configurable, etc. The T4BL in OE Architect was just a start, and we did not get the time to build in the required flexibility. Rest assured future T4BL will not have this limitation. Our focus will be to start with fully extensible foundation building blocks and to build out from there.
I believe the key to reviving the productivity is
with tools.
What is your definition of "tools"?
Not sure how to answer this one. I am mainly referring to tools we have / will build as part of OpenEdge Architect as ways to productively develop OpenEdge applications. The tools will range from powerful code editors to modeling with roundtrip engineering to deployment tools, etc, etc. Anything that makes it easier and more productive to develop an OpenEdge application as well as helping you to do it according to best practices, e.g. conforming to a reference architecture.
does this make sense?
Ant
Is a configurable T4BL something I will see in 10.1B?
As it is now, I would have waited to release it.
This forum is not the correct place to talk about specifics in releases, but I would not expect such an extensible mechanism in 10.1B...
What we have in the product today is a ProDataSet generator which really just saves you hand-coding the ProDataSet definitions. Many people have at least found this useful, but it is not what our vision for T4BL will end up being.
does this make sense?
After I checked your profile, yes. It makes sense that a tool-meister would see things through "Tools"-colored glasses.
My thought is that - tools have their place too - but what I've always run into with tools that generate code is how constricting and limiting they are. Other tools - like the current dictionary maint stuff - aren't easy to work with, but I can live with them since I don't spend much time doing that kind of work.
For work with existing code and the like - if I had one priority - it would be an ability to do queries as to what's in the code. This means cross-referencing all the programs, storing the results in a database, and then having powerful tools that would help me find references to tables, fields, strings, and the like so when I'm ripping some program's guts out and replacing it with something I can easily do the impact analysis and code conversion.
I personally think something like this should be on PSC's front burner as it would be a major productivity win for its developers. Not having that tool means developers have to resort to editor searches, grep, etc., which is "iffy" at best. Not having this tool leaves developers trying to walk a tightrope - blindfolded.
In putting my money where my mouth is - the "Code share" area has a copy of code I wrote to take an XREF / STRING-XREF set of files and turn them into temp-tables. It's not too far from there that you get a set of db tables. (In fact, I've already written code to do that, but it's not public yet).
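For anyone wanting to experiment along the same lines, the core of such a loader is small. A rough sketch (file and field names are hypothetical), assuming the standard five-column XREF output and noting that real XREF parsing has more edge cases than this:

```abl
DEFINE TEMP-TABLE ttXref NO-UNDO
    FIELD cCompileUnit AS CHARACTER
    FIELD cSourceFile  AS CHARACTER
    FIELD iLine        AS INTEGER
    FIELD cRefType     AS CHARACTER   /* e.g. ACCESS, UPDATE, STRING */
    FIELD cObject      AS CHARACTER
    INDEX idxObject cObject.

/* compile with cross-reference output */
COMPILE myprog.p XREF xref.tmp.

INPUT FROM VALUE("xref.tmp").
REPEAT:
    CREATE ttXref.
    IMPORT ttXref.cCompileUnit ttXref.cSourceFile ttXref.iLine
           ttXref.cRefType ttXref.cObject.
END.
INPUT CLOSE.

/* the REPEAT ends on end-of-file, leaving one empty record behind */
FOR EACH ttXref WHERE ttXref.cCompileUnit = "":
    DELETE ttXref.
END.
```

From here, writing the temp-table out to permanent db tables and putting query screens on top is the straightforward part.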
As for the language, there's much that either (a) needs to be in the language, or (b) should allow the development community to develop and manage their own libraries of functionality.
My procedure manager helps with the "library" stuff, but that's only if people find it, download it, and apply it. OO code can help with that as well, but it has limitations of its own.
Tim Kuehn
I recognize its limitations, but that isn't my point. I despise that .i approach to everything when it should be generating classes. There doesn't appear to be any way to get it to generate a class, so it is useless.
While I recognize some gaps and inconsistencies in the language and I certainly am looking for some OO extensions, I think there has been an historical tendency to put everything in the language and that is why it is so bloated now. Many things that we want could be addressed by tools or components. The trick is in making them flexible enough to suit different styles and architectures. If they go into the language, one has no flexibility at all.
No question that there are a lot of tools that could be added. I am hoping that the Eclipse platform will encourage some of this. Of course, that tends to imply tools written in Java, but one can't have everything.
Many things that we want could be addressed by tools or components.
Agreed - one of the (past) difficulties was scope-control of library functionality so they only applied where they were required, and only there.
OO should be able to help deliver that, as well as the procedure managed code for those who prefer to code the "old" procedural way.
The things I've accomplished with the procedure manager have been quite impressive - it makes coding easier, more fun, and less nit-picky, without having to worry about untoward, unwanted interactions between code sections.
I'm looking forward to when I can do some OOABL and merge the best of both worlds.
I agree they need to be the right tools and highly
extensible, customizable, configurable, etc. The T4BL
in OE Architect was just a start and we did not get
the time to build in the required flexibility. Rest
assured future T4BL will not have this limitation.
Progress doesn't have a good track record when it comes to "finishing released features". I think that's one of the complaints in some of the forum threads...
Theo.
Not sure how to answer this one. I am mainly referring
to tools we have / will build as part of OpenEdge
Architect as ways to productively develop OpenEdge
applications. The tools will range from powerful code
editors to modeling with roundtrip engineering to
deployment tools, etc, etc. Anything that makes it
easier and more productive to develop an OpenEdge
application as well as helping you to do it according
to best practices, e.g. conforming to a reference
architecture.
A tool can help you with a specific task in the development process. But before you can create the tool, you must be pretty sure what it should do/solve. If you want tools all over the place, you need to have a clear picture of the architecture you want to address.
Sometimes tools can slow you down as well. Editing the AppServer properties file is quicker than using the admin tool, and it's possible since the storage format is editable. You can imagine it might be more productive to generate a ProDataSet when you have that information in a homegrown repository rather than doing this via an entity designer tool. But when you generate code directly, you can't use the visual representation of the schema. Therefore it would be better to split the tools in two parts:
- a designer that stores the definition in an intermediate (XML-)file
- a compiler that converts the definition to target code (.p, .i, .cls, even r-code)
This is similar to .NET with its DataSet designer.
Or are you thinking more in the direction of template based code generators, like http://www.llblgen.com/defaultgeneric.aspx ?
Theo.
I think, though, that it is a false question in a
number of ways:
1. The two are not really separate. Both SOA and
N-tier are about learning how to package an
application in appropriate units. A body of code
that spans multiple layers doesn't make a good
service. You have to address both issues
simultaneously.
Agreed, and maybe it is a false question, but it certainly got some discussion going
2. Even if they were separable, there is no reason to
pursue either a whole batch of SOA topics and then a
whole batch of N-tier topics, or vice versa ... one
can do one, then another according to which
individual topic seems important.
Don't get me wrong, I'm not trying to say that these are mutually exclusive topics, but I also get the impression that there is still some work to be done in getting the basic principles of modern architecture design and implementation across, which makes me pause for a second longer than I might have, to make sure that whatever SOA material we do produce does a better job of achieving this.
3. While it is refreshing and nice to have you
asking, you also need to realize that PSC is taking a
leadership position here. I.e., the presumption is
that you know where we should be heading and you
should be the one providing the guidance.
Admittedly, I haven't always felt like PSC was doing
its job in this regard and that the best guidance on
how to do ABL development was coming from people
outside the company, but you should be trying to
think like leaders (even if you have to look outside
for some guidance of your own). As a leader, you
have to expect that the followers don't really know
what they don't know. So, while our points of pain
are important and you should be addressing them, you
also need to be charting a path based on your vision.
I agree that we have to take leadership, and yes we have to put forward our vision, but it helps no-one if we form that vision in a vacuum, sitting in ivory towers. So part of the 'vision' for OpenEdge Principles is that of a Community Process, where yes, we (Progress) produce guidance and material, but then fully expect the 'community' to take the material, use it, abuse it, augment it, and feed back, so we can better refine the material for the greater good of everyone!
So I'm glad you find it 'refreshing' that we ask, and believe you me, we will continue to ask. Before joining Progress as an employee, I was a partner for 13 years, and to be honest I don't remember being asked that often! So we are trying to change things, and OpenEdge Principles is one of the more visible results of that. Do we have some catching up to do? Sure. Will we get it right first time? Probably not. Will we please everyone? I very much doubt it! But I can promise you that we will continue to try to make sure that the material we produce is useful and needed.
- a designer that stores the definition in an intermediate (XML-)file
Or, in the case of UML, as XMI
- a compiler that converts the definition to target code (.p, .i, .cls, even
r-code)
One of the virtues of the MDA concept of transforms from a PIM (Platform Independent Model) to PSM (Platform Specific Model) is that one can have multiple transforms defined for the same components. Not only does this mean that one can generate DDL from one transform and code from another, but it could easily mean that one provided both .p and .cls versions of transforms and one used the one that one wanted. As long as the system was open, one could adjust it to one's own preferences and needs.
Don't get me wrong, I'm not trying to say that these are mutually exclusive
topics, but I also get the impression that there is still some work to be
done in getting the basic principles of modern architecture design and
implementation across,
To be sure ... and one of the better ways to communicate that kind of understanding is through example. But, bad examples make for bad understanding, so one has to be very careful to provide the right kind of examples!
Actually, I suppose that one of the reasons that I keep harping on the subject of getting sample code using OO is that I think there is a kind of impedance mismatch between .p code and the concepts. Yes, we all know that we can imitate a number of OO concepts with .p code and some of us have been doing it for years, but it is a limited imitation and there is a lot about OERA and SOA which is an absolute natural for OO structures. If I have a domain object, for example, I have a nicely encapsulated representation and I can see how it is derived from the data access objects, how it is consumed by the BL objects, and how it is used by the UI objects. If I have a .i for a temp-table used all over the place, I don't have encapsulation or layers.
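A tiny sketch of the contrast (class and field names are hypothetical): instead of a .i included everywhere, the temp-table lives inside the class and only the methods are public:

```abl
CLASS CustomerSet:

    /* the temp-table is an implementation detail, invisible to callers */
    DEFINE PRIVATE TEMP-TABLE ttCustomer NO-UNDO
        FIELD CustNum  AS INTEGER
        FIELD CustName AS CHARACTER
        INDEX idxCustNum IS PRIMARY UNIQUE CustNum.

    METHOD PUBLIC VOID AddCustomer (INPUT piCustNum AS INTEGER,
                                    INPUT pcName    AS CHARACTER):
        CREATE ttCustomer.
        ASSIGN ttCustomer.CustNum  = piCustNum
               ttCustomer.CustName = pcName.
    END METHOD.

    METHOD PUBLIC CHARACTER GetName (INPUT piCustNum AS INTEGER):
        FIND ttCustomer WHERE ttCustomer.CustNum = piCustNum NO-ERROR.
        IF AVAILABLE ttCustomer THEN RETURN ttCustomer.CustName.
        RETURN "".
    END METHOD.

END CLASS.
```

Callers can only go through AddCustomer and GetName, so the data access, business logic, and UI layers each see an interface instead of a shared table definition.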
I agree that we have to take leadership, and yes we have to put forward
our vision, but it helps no-one if we form that vision in a vacuum, sitting in
ivory towers
To be sure, one of the historical gripes is that decisions about product development were made by some person or committee that paid no attention to end-user input ... witness the flap over ChUI dynamic browse! But, I also don't think that you get big picture product vision merely by assembling what users say, either.
To me, it is a lot like my relationship to my own customer base. I have 26 years of domain experience in distribution applications; probably 35 in financial applications. When my users asked for something, I was always figuring out the real business problem underlying their request. It was not uncommon for me to be providing advice that had nothing to do with code, but more to do with altering a business process. And, when code was the answer, the code was architected to solve a broad class of related business problems, not just the narrow thing which was the original request. Those kinds of solutions are highly resilient to changed perceptions and needs and often useful in ways that were not originally imagined. Providing that kind of leadership requires listening, but it also requires vision and deep understanding, deeper really than that of the people making the requests in many cases.
Now this raises an interesting question. Is our
(Progress's) job to teach OO programming? Sure it's
our job to teach how to use the OO features of the
ABL, but does that mean we should also be teaching OO
basics? Aren't there thousands of books & courses
out there that could do that?
I think Progress needs to provide the option of a one-stop training shop for all the products.
Today Progress offers a 1-day introduction to HTML training and expects people to have that basic HTML experience before attending a WebSpeed course. I guess most people probably don't attend the HTML course (because everyone can write HTML today, right?). That said, I think it's important that Progress offers it at least. It doesn't have to cover advanced topics (XHTML compliance, JavaScript, AJAX, etc).
I think the same applies to OO (and incidentally SOA). I believe that Progress should offer the introductory courses as an optional prerequisite. The last thing we want is people learning OO with another language. They might learn about things we don't offer in the ABL (garbage collection, multiple inheritance, etc).
Just my 2 Swiss rappen.
Jamie
I personally think something like this should be on PSC's front burner, as it would be a major productivity win for its developers.
Not having that tool means developers have to resort to editor searches, grep, etc., which is "iffy" at best. Not having this tool leaves developers trying to walk a tightrope - blindfolded.
Of course, RoundTable takes care of a bunch of this stuff if you can live with the formal approach to development that RoundTable enforces (versioning, tasks, etc)
In putting my money where my mouth is - the "Code share" area has a copy of code I wrote to take an XREF / STRING-XREF set of files and turn them into temp-tables. It's not too far from there that you get a set of db tables. (In fact, I've already written code to do that, but it's not public yet.)
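To make the idea concrete, here is a rough sketch in Python (not the ABL code posted in Code share) of what such a parser does. The sample lines and the field layout shown are my own assumptions about typical space-delimited XREF output - check real XREF files before relying on them:

```python
from collections import namedtuple

# One structured record per XREF line, roughly matching a temp-table row.
Ref = namedtuple("Ref", "proc file line reftype obj")

def parse_xref(lines):
    """Parse COMPILE ... XREF output lines into structured records.

    Assumes the common layout: procedure name, source file, line number,
    reference type (ACCESS, UPDATE, SEARCH, RUN, STRING, ...), then the
    referenced object. Anything past the fourth field is treated as the
    object identifier.
    """
    refs = []
    for raw in lines:
        parts = raw.split()
        if len(parts) < 5:
            continue  # skip blank or malformed lines
        proc, fname, lineno, reftype = parts[:4]
        obj = " ".join(parts[4:])
        refs.append(Ref(proc, fname, int(lineno), reftype, obj))
    return refs

# Hypothetical sample input for illustration only.
sample = [
    "order.p order.p 12 ACCESS Customer Name",
    "order.p order.p 20 RUN calcdisc.p",
]
for r in parse_xref(sample):
    print(r.reftype, r.obj)
```

The point is just that each XREF line maps cleanly onto one record, which is why the temp-table (or db table) representation falls out so naturally.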
It's good to see some other alternatives too
Jamie
Of course, RoundTable takes care of a bunch of this stuff if you can live with the formal approach to development that RoundTable enforces (versioning, tasks, etc)
The downside is you have to follow RT's lifecycle process to get that functionality. I'm not saying it's bad, but there's a decent number of shops out there which use other SCMS tools, and they are left out.
Personally, I think a standalone XREF tool which isn't tied to a particular SCMS would be a great product to have.
If enough customers offered to fund development of such a tool, I could easily write one myself.
Personally, I think a standalone XREF tool which isn't tied to a particular SCMS would be a great product to have.
Of course, the tool has been written more than once, just integrated into this or that toolset. Where it should really be these days is integrated into the Eclipse IDE.
Of course, the tool has been written more than once, just integrated into this or that toolset. Where it should really be these days is integrated into the Eclipse IDE.
There should be a stand-alone tool, written in ABL, which can be integrated into Eclipse. My idea is that ABL developers in general could use and update the tool & database w/out having to know Java.
> Personally, I think a standalone XREF tool which isn't tied to a particular SCMS would be a great product to have.
Of course, the tool has been written more than once, just integrated into this or that toolset. Where it should really be these days is integrated into the Eclipse IDE.
I guess it would be really easy to replace the integration part, since this would be separate from the core capabilities of this tool.
Given that this is a string manipulation problem, I don't think it should be written in ABL any more than the rest of the Eclipse IDE is or should be written in ABL. It needs to call the compiler to get the XREF output, but obviously that part is already there.
I've already written a parser which takes the XREF files and turns them into a set of temp-tables, and posted it to the code share and PEG Utilities pages.
From there, it's conceptually straightforward to get a database of program data, and then write the query tools.
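As a sketch of that last step: once the XREF lines are in tabular form, a searchable project-wide database is a load plus a query. The schema and sample rows below are my own invention for illustration (shown with SQLite in Python, not anyone's actual tool):

```python
import sqlite3

# Build an in-memory cross-reference database (a file-based one works the same).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE xref (
    proc TEXT, file TEXT, line INTEGER, reftype TEXT, obj TEXT)""")

# In real use these rows would come from the XREF parser; hard-coded here.
rows = [
    ("order.p",   "order.p",   12, "ACCESS", "Customer Name"),
    ("invoice.p", "invoice.p",  7, "UPDATE", "Customer Balance"),
    ("report.p",  "report.p",  30, "RUN",    "order.p"),
]
db.executemany("INSERT INTO xref VALUES (?,?,?,?,?)", rows)

# "Which programs touch the Customer table?" -- the impact-analysis query.
hits = db.execute(
    "SELECT proc, line, reftype FROM xref WHERE obj LIKE 'Customer%' ORDER BY proc"
).fetchall()
for proc, line, reftype in hits:
    print(f"{proc}:{line} {reftype}")
```

The query tools the thread is asking for are essentially variations on that one SELECT: by table, by field, by RUN target, and so on.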
I've got one too ... had it for years and years, but to database tables. It is conceptually simple, so one wonders why PSC didn't do it themselves a million years ago.
FYI: OpenEdge 10.1B will include a new XREF output in XML format, in addition to the current XREF listing.
But, that is just a display of a single program's output, right? Not a searchable database of the whole project?
FWIW, I can't figure out why PSC didn't do one of these 20 years ago.
Yes, in 10.1B one .xref.xml file per program. We're working on the option to have one .xref.xml file per compilation unit, after 10.1B.
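I have not seen the 10.1B schema, so the element and attribute names in this Python sketch are guesses, not the actual format - but it illustrates how little code it takes to consume a per-program .xref.xml file with any standard XML parser:

```python
import xml.etree.ElementTree as ET

# Hypothetical .xref.xml content; the real 10.1B element names may differ.
doc = """<Cross-reference>
  <Source File-name="order.p">
    <Reference Line-num="12" Reference-type="ACCESS" Object-identifier="Customer"/>
    <Reference Line-num="20" Reference-type="RUN" Object-identifier="calcdisc.p"/>
  </Source>
</Cross-reference>"""

root = ET.fromstring(doc)
for src in root.iter("Source"):
    for ref in src.iter("Reference"):
        print(src.get("File-name"), ref.get("Line-num"),
              ref.get("Reference-type"), ref.get("Object-identifier"))
```

Whatever the real schema turns out to be, an XML format means the parsing half of the "XREF database" problem largely disappears; only the load-and-query half remains.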
Not a searchable database of the whole project?
Although not fully equivalent to what we are discussing here, remember that OpenEdge Architect's Metacatalog can be used for XREF at project level.
What is the difference between one per program and one per compilation unit?
While I haven't looked at it much yet, isn't the Metacatalog something that requires annotating the code, whereas the whole concept of an XREF database is that it is empirical?
A compilation unit may be one program, a list of programs, the programs in a directory or set of directories.
Meta Catalog: No, the meta catalog does not necessarily need annotations; it can 'index' any source element. Its database is populated by content builders, see below...
Excerpt from OpenEdge Architect's online Help:
Introducing the Meta Catalog
The OpenEdge Architect's Meta Catalog is an index that enables you to find where elements are used in your application. You can find where a temp-table is defined and where it is used in your application. You can find all the procedures and functions in your application. You can find where those procedures and functions are called. You can also add your own annotations to the code and have them included in the index. You can use this index to simplify analyzing the impact of proposed changes and carrying out those changes.
You can configure different catalogs for different tasks. You might want to have the data for each project in a different catalog. Alternately, you might want a catalog that stores data on function and procedure calls for all your projects together. You can also configure a master catalog linked to your software code management (SCM) system to provide a complete view of all your applications.
The Meta Catalog is a design-time tool. A catalog never stores any data that cannot be extracted from the source code by the content builders. Any time the catalog has stale data in it, you can eliminate the stale data by replacing the catalog. For example, if you remove files from a project and no longer want data from those files in the catalog, just rebuild the catalog using the catalog's definition from the Meta Catalog preferences.
The Meta Catalog uses content builders to extract the data from source files. The Architect provides several predefined content builders.
There are two tools for searching through a catalog's data. The Meta Catalog Explorer provides a treeview representation of the data. The Meta Catalog Search allows you to create and save queries on the data. You can open files for editing from either the treeview or the Meta Catalog Search results view.
A compilation unit may be one program, a list of programs, the programs in a directory or set of directories.
One might want to think of a different term, since I think the usual understanding is that unit = one, i.e., a single .p, .w, or .cls. In my compile tool, I provide for explicit compile lists and call them that.
Supposing one designates an entire application for compiling with XREF and selects the XML output. Does this mean that one ends up with one XML file with all of the XREF info for each program compiled? That sounds monstrous and very difficult to use. Putting it in a database would seem to be essential for making it useful.
Re the meta-catalog, it looks like some checking around in there should happen, but I guess I would ask ... if it is not complete, why not complete it?
And, it seems that one might package it in such a way as to make its potential more apparent.
If we could go back to the tools discussion for just a few minutes. We're discussing (nothing more currently - don't get excited yet) the idea of some standalone code analysis and conversion tools in Eclipse. Likely written in Java as Eclipse plug-ins, and likely not packaged as part of any formal release. It would be on a "take it or leave it" basis with minimal documentation and minimal support. Source code would likely be available to anyone that wants it.
We've got two goals in mind with this discussion:
1) Motivating and making it easier for people to move code forward (I seem to remember a reference to "millstone" somewhere earlier in this thread).
2) Attempting to "jumpstart" a community effort to provide inexpensive utility plug-ins for OpenEdge Architect.
So I'd simply ask if the participants on this thread would a) find such an effort valuable, and b) consider becoming active participants if the community aspect of that really took off.
standalone code analysis and conversion tools in Eclipse.
Something like Proparse/Prolint/Prodoc?
with minimal documentation
So we'd have to reverse engineer the code to figure out what it did?
I'd simply ask if the participants on this thread would a) find such an effort valuable
I'm still not clear on what kind of analysis / conversion these tools would do. That they'd be in Java would weigh negatively since I don't know the language yet.
I personally don't "need" that much support, but there needs to be enough documentation that I don't have to waste large amounts of time trying to reverse engineer the original developer's thought-process. If "minimal documentation" means "here's the code, figure it out yourself", then I'd probably pass.
I'm with Tim in that I find the idea of minimally documented, unsupported code tossed out there to be less than appealing. I think you should decide instead to do one of three things:
1) Decide that the functionality is important and core enough that you should simply do the job and support it. Taking the partial solution of the current meta-catalog builders and turning it into a fully searchable XREF tool is a good example of this kind of low-hanging fruit.
2) Decide that PSC would like to make some investment and/or that you have already made an investment, possibly through the consulting group, which you would like to share and to contribute this material to an open source project ... not one with the flaws of POSSE ... so that we can all contribute and all benefit.
or
3) Decide that there is a particular partner who can use what you have done and finish it into something useful and to work with them to see that this product becomes available. You might or might not require that it be free and you might or might not actually distribute it. This might be appropriate with things like the OE extensions to Enterprise Architect which were presented at Exchange.
.....
I personally don't "need" that much support, but there needs to be enough documentation that I don't have to waste large amounts of time trying to reverse engineer the original developer's thought-process. If "minimal documentation" means "here's the code, figure it out yourself", then I'd probably pass.
As Niel says, these are currently only discussions, but yes, there would obviously need to be enough documentation to be useful, though it wouldn't necessarily go to the depth that the product documentation does today. The more important point, I think, is the concept: would there be value in providing certain functionality as Eclipse plug-ins, outside of a product release? This has certain implications, as Niel suggests, such as documentation and support, but it also means being free of the product release cycle, which could also be seen as a benefit.
As Niel says these are currently only discussions, but yes there would obviously need to be enough documentation to be useful, but it wouldn't necessarily go to the depth that the product documentation does today.
I have no problem with "less" docs than what a product release has, but there has to be enough in there that a good developer can easily figure out what's what and so figure out how to adapt or extend the code to do whatever they want.
If the developer who wrote the code was available to answer questions about where to find certain functionality, or what the intent was behind certain blocks of code, that'd be even better.
The more important point, I think, is the concept. Would there be value in providing certain functionality as Eclipse plug-ins, outside of a product release?
For any ABL developers that know Java, I'm sure there could be.
This has certain implications as Niel suggests, such as documentation and support, but it also means being free of the product release cycle, which could also be seen as a benefit.
There's still the matter of managing what gets into the code and what doesn't.
What I think the minimum standard would be is enough support, in terms of docs and access to the original developer(s), that I could get into the code, figure it out, and update it without spending large amounts of time reverse-engineering it. This wouldn't require a large amount of resources on PSC's part, but they would need to be there.
Having said all that, I have to say I don't know Java and spend almost all of my time in Unix ABL, so I probably wouldn't be participating anyway.
I expect there are quite a few things that could be useful, not only Eclipse plug-ins, but also other tools. On PSC's side, some of these might arise as internal projects which are not destined to become products, but which are still useful, possibly as a result of a consulting engagement. But, PSC is not the only possible source or contributor. Why not put these out as open source projects so that the OE community can help?
I expect there are quite a few things that could be useful, not only Eclipse plug-ins, but also other tools. On PSC's side, some of these might arise as internal projects which are not destined to become products, but which are still useful, possibly as a result of a consulting engagement. But, PSC is not the only possible source or contributor. Why not put these out as open source projects so that the OE community can help?
Having them as open source projects is also part of the discussion, because yes, the OE community can help a great deal. What we need, however, is the right process in place to handle such an undertaking - we all remember POSSE, and so hopefully some lessons have been learnt. But this is one of the many obvious advantages of moving to the Eclipse platform that we should look to maximize the potential of.
I might suggest that one of the lessons learned from POSSE was that PSC hasn't really tried open source yet ... i.e., POSSE was so heavily dominated by the ongoing product issues of PSC that there wasn't really any place for the rest of the community to develop its own directions and initiatives. As I get the thrust here, we are talking about code which will never be a part of a PSC product. If so, then I think it would be highly appropriate to go to a real open source model in which PSC was a contributor, but not at all the only one, and where the community was open to new projects, revisions, etc. without that overruling dominance.
Personally, I had some ideas to participate in POSSE with, but never could figure out how to use the tools or where everything was.
Right, so it also needs to avoid being monolithic. I don't think that is a big problem with the kind of thing that I think Mike is discussing, because he is talking about small, self-contained tools, not a huge collection of interconnected applications.
Since Open Source isn't really an OERA topic, I've started a new one in the "open forum" forum.
That thread is here http://www.psdn.com/library/thread.jspa?threadID=2251&tstart=0
Right, so it also needs to avoid being monolithic. I don't think that is a big problem with the kind of thing that I think Mike is discussing, because he is talking about small, self-contained tools, not a huge collection of interconnected applications.
Yes, we're not talking huge monolithic bits of work here. (See the other thread http://www.psdn.com/library/thread.jspa?threadID=2251&tstart=0 for a reply on some of the points you have raised).