In short... I need a bit of advice on how to structure our application, so any suggestions are greatly appreciated.
Current situation:
Several identical databases, one per business division, each with its own appserver. Clients are 90% web-client, connecting through to the required appserver.
This is great, except in the instances where we want to access data from multiple databases at the same time - e.g. for a web application where the client has little or no notion of the separation of the underlying data.
Proposed solution:
The current train of thought is to create a multi-system appserver that handles requests to each of the above appservers as required. It would consolidate the data from each appserver and send it back to the client - the client could be a WebSpeed broker generating web content, a web-client app, or a web service relaying the response via SOAP, etc.
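Roughly, the consolidating appserver would loop over the division appservers and append everything into a single temp-table - something along these lines (ttAppServer, its field names, and the connection details are purely illustrative, and there's no error handling shown):

define variable hServer as handle no-undo.

for each ttAppServer no-lock:   /* one record per division appserver */
    create server hServer.
    hServer:connect("-AppService " + ttAppServer.serviceName + " -H " + ttAppServer.hostName).   /* return value should really be checked */
    run getorders.p on server hServer (output table tt-order append).
    hServer:disconnect().
    delete object hServer.
end.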
This *seems* like a good idea, but I'm not totally convinced...and any other ideas/suggestions are welcome!
We are running OE 10.2B and have recently made big changes to the appserver structure - including permanent client-broker connections and removing large persistent procedures - this has seen a big improvement in appserver response times, particularly over the WAN. This project is another piece of the jigsaw: making the information provided by all the appserver routines already in place available to multiple applications, reducing redundant code and increasing data consistency across the board. Historically several bits of code have done the same thing but for different applications....grrr!
Chris
Sounds like you might be a natural for the multi-tenant database in 11.0. Consolidate everything into one database; use the client-principal object to assign roles to users; identify users who can only see one set of data and ones who can see consolidated views. One DB, one AppServer setup (even if you have multiple instances for load balancing).
That was also the first thing that jumped into my mind when I read the post. Also, multi-tenancy will allow access to more than one tenant at a time.
So the question is whether he can wait until the end of the year. The request didn't sound too urgent...
And, of course, such a transition will take some time to program and test .... but well before there is a deployable 11.0, there is a beta and even some technology sharing programs which have started already. So, it may not be long that one would have to wait to get started.
I had also thought about the multi-tenant DB but given the release dates and time required for testing I had not read much into it..... I'll be off to read the notes on the new version tomorrow!
The requirement for the actual data is pretty urgent, though there are ways and means of getting it out short-term without going down a best-practice route (these "short-term" methods have a habit of turning into long-term methods though!). What I'm keen to avoid is developing something that is convoluted, doesn't scale, or doesn't have the flexibility to deliver to other applications.
The multi-tenant stuff sounds interesting though, I'll have a read and see exactly what's involved.
Hi Chris
I have a question around the data that you're aggregating together. Is it a limited set of data, or could the originating request be anything? Are all the AppServers running the same code or does each have a slight variant? The reason I ask is to try and see why you need the extra level of AppServer performing the dispatch & aggregation. Is there a possibility of having one aggregated AppServer that is connected to all the DBs rather than calling each AppServer individually? Of course this could be relatively straightforward if you're talking about a limited dataset, but not if it could be any random data request.
Mike
To me, it sounds like a classic multi-tenant problem. Given that good support for multi-tenancy is around the corner, it would be a shame to develop an alternate solution which would, most likely, then become a permanent solution that was not as good as it could have been had you used the multi-tenant approach. Moreover, it is going to take you some development to implement any approach, so why not invest that work in the best solution? In particular, there is no need to wait for V11 to implement authentication via client-principal ... you can do that now.
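Just to illustrate how little is needed to get started on 10.2B, the authentication side boils down to creating, sealing, and setting a client-principal ... the user, domain, and access code below are purely illustrative, and the domain has to be registered in the database:

define variable hCP as handle no-undo.

create client-principal hCP.
hCP:initialize("chris@sales").       /* user-id qualified with a domain */
hCP:seal("domain-access-code").      /* must match the domain's registered access code */
security-policy:set-client(hCP).     /* establish the identity for this ABL session */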
So, I would ask the powers that be and yourself two questions:
1. When do we really need this in production? (I mean *really* since want and must have are often quite different)
2. Is there some simple stopgap measure I can implement now that takes little work which will extend the time for "really need".
Hi Mike,
Is it a limited set of data, or could the originating request be anything?
Not sure what this means..... the data returned from two DBs may well have the same unique keys, and I would somehow need to distinguish where each record came from in most cases. At the moment the code populates an extra field on each record when it receives it from the originating appserver (not ideal, as it means reading through a TT twice).
I would expect that 30% of the current appserver code will be reused for multiple access purposes. The rest of it is specific to the current web-client application and *probably* wouldn't be required for any apps requiring access to multiple DBs.
Are all the AppServers running the same code or does each have a slight variant?
Identical code, same source.
The reason I ask is to try and see why the extra level of AppServer performing the dispatch & aggregation.
Indeed, and that's one of the main reasons I'm skeptical about my proposal.
Is there a possibility to have one aggregated AppServer that is connected to all the DB's rather than calling each AppServer individually?
This could be an option - presumably, rather than connecting to appservers as required, the main appserver would connect to one DB, then the next, etc.? e.g.
for each tt-db no-lock:                      /* tt-db: one record per division DB with its connection details */
    connect value(tt-db.connect-params).
    run getorders.p (output table tt-order).
    disconnect value(tt-db.ldbname).
    for each tt-order:
        create tt-multi.
        buffer-copy tt-order to tt-multi.
        tt-multi.division = tt-db.division.  /* tag which DB the record came from */
    end.
end.
Or, if they were all connected, I'd need to reference each DB in the code? (correct me if I'm wrong):
for each db1.orders:
    create tt-order.
    ...
end.
for each db2.orders:
    create tt-order.
    ...
end.
etc....
Thanks,
Chris
1. When do we really need this in production? (I mean *really* since want and must have are often quite different) - couldn't agree more!
2. Is there some simple stopgap measure I can implement now that takes little work which will extend the time for "really need".
Yes to both questions - the initial requirement is for a couple of web pages. I just know that these requests will grow legs (think PDAs, external apps, etc.) and I want to have something in place so that in the future getting the data out is easy - or at least in some sort of sensible structure.
OK ... based on that extensive spec ... my instinct for a stopgap might be to create a new AppServer connected to all the databases, and create code for it in which all table references are qualified with DB aliases. This lets you leave your existing code alone while you explore client-principal authentication and multi-tenancy. The client-principal stuff you can implement in your regular code when ready, since it doesn't require 11.0. Then, explore multi-tenancy during the beta. With multi-tenancy, all your existing code which references only the table name will work as it does now because the authentication will identify the domain to which it applies. You will need to rework the multi-database stuff to the new authentication, which is cross-domain, and get rid of the DB references there, but hopefully you won't have very much of it before 11.0 arrives.
BTW, come to PUG Challenge Americas and attend the session about client-principal! http://www.pugchallenge.org/agenda.html
There will also be sessions on multi-tenancy, of course.
Hi Chris
My original point about the limited data was really aimed at asking how much code you would need to change if you were to consolidate to a single AppServer for these requests. If you had a huge number of possibilities in terms of the query requests, and therefore a large amount of code to change, your original plan might be more effective, in the short term at least. But if it's only a small amount of coding, then I'd go for the single AppServer with multiple DBs connected, using aliases, alongside your existing AppServers - and rather than doing a connect/disconnect each time, which could prove expensive, go with something more like your 2nd coding example.
Mike
BTW, one of my assumptions is that the need for multi-DB access at this point is driven mostly by web apps which are new functions. Therefore, a new AppServer for those web apps could connect to all DBs and be written with explicit DB references in all table references. This would be all new, although perhaps based on existing functions.
With proper use of aliases the possible reuse of existing code might actually be a lot higher.
Although initially it will be a small amount of code, the eventual 30% would be a fair chunk. I wouldn't fancy changing it all on the off-chance that one of the DBs is no longer required or another one is added... though by that time we may well have gone down the multi-tenant option. Food for thought though - thanks for your advice.
Chris
Code reuse will play a big part in this.... more often than not different bits of code are doing the same thing for different apps, sometimes in a slightly different way. This causes its own problems with consistency etc. (what shows on the web-client app should reflect what is shown in the web app and on the PDA etc.).
My understanding of the OP's remarks is that code now says FOR EACH CUSTOMER and the appropriate set of customers is determined by the DB attached, whereas for the new requirement the need is for FOR EACH CUSTOMER to mean all customers in all DBs. With multi-tenancy, this is simply a question of authenticating to an appropriate domain. Without multi-tenancy, it is going to require looping through code while changing DB connections or, more likely for reasonable performance, FOR EACH Alias1.Customer, FOR EACH Alias2.Customer, etc. to search all databases. To reuse a lot of existing code, one would have to be connecting and disconnecting databases, which seems likely to be very slow.
Besides, something like this probably means building a temp-table of data to return to the web application so the multiple successive qualified name approach should do that pretty efficiently.
Got to look into this client-principal stuff too....time to dig out the manuals!
Besides, something like this probably means building a temp-table of data to return to the web application so the multiple successive qualified name approach should do that pretty efficiently.
That's the point! Using a DB name qualifier for static database access won't help a lot! It not only means that you have to modify existing code, you'll also have to duplicate it. Not nice.
If, on the other hand, you compile with a logical DB name of "CURRENTTENANT" and then at runtime:
CREATE ALIAS CURRENTTENANT FOR DATABASE department1.
RUN gettemp-table-report.p (OUTPUT TABLE reporttemptable).
CREATE ALIAS CURRENTTENANT FOR DATABASE department2.
RUN gettemp-table-report.p (OUTPUT TABLE reporttemptable APPEND).
The temp-table would contain data from department1 and department2 - without any code change. Just a wrapper. It's the same way the Data Dictionary r-code works against any given database, and against multiple databases at the same time, using the alias DICTDB.
We need more insight into the existing code to come to a final conclusion here.
I suppose the question you have to ask yourself is how much code you can reasonably develop between now and the time 11.0 is released. If you decide that multi-tenancy is the right solution, you can develop and even deploy your authentication scheme based on client-principal even before 11.0 is available. Then, get into the 11.0 beta program, or even ask about one of the technology sharing programs which precede beta, and you can test, experiment, develop, etc. so that you are ready to hit the ground running with FCS.
For any new code you develop for a new AppServer which uses multiple loops with aliases to get at all databases, you can isolate the data acquisition part ... a good idea anyway ... and set it up to build temp-tables. Then, when 11 is available, all you need to do is restructure the actual loops through the data to use one loop and multiple domains via authentication. If you are doing the 11.0 beta, there is no reason not to develop that alternate code as you go along, so that when 11.0 is available all you do is swap versions and you are off and running.
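To make that concrete, today the isolated acquisition wrapper might look something like this ... the procedure and temp-table names are purely illustrative, and the alias has to be the logical DB name getorders.p was compiled against:

/* today: one pass per division database via an alias */
for each tt-db no-lock:
    create alias divisiondb for database value(tt-db.dbalias).
    run getorders.p (output table tt-order append).
    delete alias divisiondb.
end.

/* with 11.0 multi-tenancy the wrapper should collapse to a single call,
   the authenticated identity determining which tenants are visible:
   run getorders.p (output table tt-order). */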
OK, but re-engineering to improve code reuse is really a separate issue from your problem with multiple AppServers and DBs. To be sure, the better packaged your code is, the more easily you can make nimble responses to architectural change. E.g., if you migrate to data access components which hide the access from the rest of the application, then you should be able to swap a multi-alias, multi-DB version for a simpler multi-tenant version when 11.0 is available and the rest of the application will never know the difference .... it will just work.
Let me know if you would like to talk about that kind of transformation strategy, but we probably shouldn't branch in that direction in this thread ... which is remarkably coherent so far!
No dispute that this is one way to fill that temp-table, but it is still going to require change when the multi-tenant stuff comes in. With multi-tenant and a user that is authenticated for multiple domains, a single for each will go through all departments.
tamhas wrote:
No dispute that this is one way to fill that temp-table, but it is still going to require change when the multi-tenant stuff comes in. With multi-tenant and a user that is authenticated for multiple domains, a single for each will go through all departments.
Why?
To my understanding of the proposed MT implementation, the only thing that will require a modification will be the new wrapper. Here the CREATE ALIAS will be replaced by something new. The heart and soul of the solution, the existing and well-tested reporting logic, would not require any modification - neither now for the ALIAS-based implementation nor for a potential multi-tenant based solution. That will be considered a huge advantage - and that's one of the design goals of the whole MT enterprise.
But details about this should be discussed in a more appropriate forum than this one.
And anyway, it's all guessing without knowing more about the interfaces and the kind of existing code. For now, I'm sure Chris got more things to think about than he wanted :-)
90% of the current appserver code returns a temp-table, 5% a dataset, and the rest just values. And 90% are individual .p files - we've recently removed the majority of persistent procedures.
I quite like the alias idea, this way we can add/remove databases by user without having to change the code at that point in time, something like this:
for each tt-db no-lock:
    create alias mydb for database value(tt-db.dbalias).
    run orders.p (output table tt-order append).
    delete alias mydb.
end.
I'm sure Chris got more things to think about than he wanted
Pretty much...though I think longer term multi-tenant is the way to go along with the client-principal stuff.
We're in pretty much the same situation. We take 3 approaches:
- I have one appserver that has connections to all databases. Code that runs against this appserver is entirely dynamic (see the sketch at the end of this post). It's not something you want to code on a Monday morning or a Friday evening, but for a couple of specific applications that deal with data in several databases it results in absolutely wonderful code that does great things with very few lines and is actually clearer than it would have been with aliases. The reason this can be interesting sometimes is that your code shows it is working on several databases and how it does that; it's easy to follow if the selection of the database (and even table, in our case) is important and the actual logic once the database is selected is very simple. This will not be desirable in the average business application. It also requires very careful design, coding and testing, as you have no compile-time errors - all errors will be at runtime.
- Most logic works the way you suggest, moving from one appserver to another. The biggest disadvantage I see is that it becomes next to impossible to control transactions across databases. While you probably can't rely on transactions being consistent across databases if a server crashes, even on one appserver, it becomes very hard if not impossible to roll back changes reliably across several appservers.
- The approach Mike suggested (with the CREATE ALIAS) is also nice, I occasionally use this on the first appserver (the one where I normally use all dynamic code) to port existing code to it faster, but I'm always a bit reluctant to do so, because I fear I may make a mistake while setting the alias and accidentally run code against the wrong database.
In the end, OE11 multitenancy is indeed what we really want for this.
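For what it's worth, the fully dynamic variant boils down to something like this (the table, variable names and the query string are purely illustrative, and real code needs proper error handling):

define variable hBuffer as handle    no-undo.
define variable hQuery  as handle    no-undo.
define variable cDbName as character no-undo initial "division1".

create buffer hBuffer for table cDbName + ".orders".
create query hQuery.
hQuery:set-buffers(hBuffer).
hQuery:query-prepare("for each " + cDbName + ".orders no-lock").
hQuery:query-open().
do while hQuery:get-next():
    /* copy hBuffer's fields into a temp-table here and tag them with the source DB */
end.
hQuery:query-close().
delete object hQuery.
delete object hBuffer.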
Pretty much...though I think longer term multi-tenant is the way to go along with the client-principal stuff.
and
In the end, OE11 multitenancy is indeed what we really want for this.
I guess, that's all good news to Progress.
Also I'm quite sure that the multi-tenant or context-aware AppServer of OE11.1 will also be a good fit for these kinds of implementations. There'll always be something new in future OE releases that one is keen on using. OE in 5 years will be a better fit for some of today's challenges than OE11 will be. But we need to be kept busy with some things today as well... :-)
Good news indeed :-)
There are certainly things you can do now in preparation for OE 11, such as starting to use the Client-Principal, which will reap benefits in your application now and even more when we ship OE 11.
But I agree with the general consensus that this sounds like a great use-case for MT. Chris, contact me offline (mormerod@progress.com) if you'd like to participate in our current Tech Previews we have running on OE 11 and I'll see if we can get you into the program.
Mike
I don't think there is a real disagreement here. I think we both agree that:
1. Encapsulation of the data retrieval aspect of the code will minimize and localize the changes required, potentially making them trivial, depending on the actual requirement; and
2. *Some* change is going to be required to move from aliases on multi-database to multi-tenant on one database, even if that change is fairly trivial.
In particular, the core routine may need changing as well since my guess is that one needs to include some identifier in each record to indicate which entity it came from. In multi-tenant, that would naturally be the tenant ID, but that won't be available in the current databases.
Nothing quite like knowing where you are going more than one day in the future to help make good decisions!
I'm glad to see you offer that, Mike. I didn't know if it might be too late for the tech preview or not, but certainly getting hands on this early is very much the right sort of thing, especially since there are things he can do to get ready that can be implemented on 10.2B.
And again, come to PUG Challenge Americas in June since we have several sessions to help you on your way. If you can't make it physically, then attend remotely!
In this case, the CREATE ALIAS approach is attractive since:
1. It lends itself to re-use of existing code (or improvement of both multi-DB and single-DB code);
2. It is dead simple to implement for short-term use without having to develop new skills; and
3. It is easily converted into code which will do the right thing for a multi-tenant DB.
Nothing quite like knowing where you are going more than one day in the future to help make good decisions!
How philosophical.
In this case, the CREATE ALIAS approach is attractive since:
1. It lends itself to re-use of existing code (or improvement of both multi-DB and single-DB code);
2. It is dead simple to implement for short-term use without having to develop new skills; and
3. It is easily converted into code which will do the right thing for a multi-tenant DB.
4. It's been known to work in scenarios like this for decades.
You know, I'm very open to early adoption of new features. But it's my understanding of good advice to also offer a plan B in case the new feature isn't that stable from day one. MT touches so many parts of the OpenEdge product that it would be almost foolish to assume there won't be any issues at all with FCS.
True, and hopefully that will be clear before they try to convert! But this CREATE ALIAS band-aid is a perfectly workable solution even if they need to keep doing it for an extra year, because the amount of rework is very small.
True, and hopefully that will be clear before they try to convert! But this CREATE ALIAS band-aid is a perfectly workable solution even if they need to keep doing it for an extra year, because the amount of rework is very small.
That's what I call a perfect temporary solution - the only risk is that it's good enough and there will then be no priority to move to MT. It's important to focus on the real solution and commit yourself to using the better solution when it's available and stable.
mikefe wrote:
True, and hopefully that will be clear before they try to convert! But this CREATE ALIAS band-aid is a perfectly workable solution even if they need to keep doing it for an extra year, because the amount of rework is very small.
That's what I call a perfect temporary solution - the only risk is that it's good enough and there will then be no priority to move to MT. It's important to focus on the real solution and commit yourself to using the better solution when it's available and stable.
NO, you mean people really do that? Keep temporary fixes around in production! I'm shocked :-)
It would be funny ... if it weren't that virtually every site is a patchwork of do-it-quick mods, some of which are still running 20 years later, themselves patched and patched again with equally short-term responses.