Complex data room authorization in SDO

Posted by 302218 on 13-Jun-2012 07:22

Dear all,

I am facing the requirement to retrofit a legacy Dynamics application (OpenEdge 10.1C) running on the AppServer to comply with the requirements for data room authorization and waiver checking, and I am looking for ideas. Maybe somebody has done something similar in the past.

The requirement is that I have to use a central service (with a Web Service interface) into which I pass a list of client identifiers in XML format and, as a response, get an OK/NOK flag for each entry in the list. The OK/NOK flag depends on several factors, for example the role of the user, the country from which the request originates, whether the client has signed a waiver for cross-border access, and whether the user holds the necessary entitlements for the profiling depth.
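The shape of such a request/response pair might look like this (element and attribute names are invented for illustration; the actual service contract will differ):

    <authRequest user="U12345" role="advisor" country="CH">
      <client id="STID-0001"/>
      <client id="STID-0002"/>
    </authRequest>

    <authResponse>
      <client id="STID-0001" verdict="OK"/>
      <client id="STID-0002" verdict="NOK"/>
    </authResponse>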

Therefore I cannot simply enhance the query string of the SDO. Instead I must "somehow" intercept the data retrieval logic of the SDO after the RowObject temp-table has been populated on the server side and before it is delivered to the client. As I may not pass the restricted rows out to the client (in some jurisdictions it is against the law for client data to leave the country's physical borders unless the client has signed a waiver), I would have to delete them from the RowObject temp-table before it gets passed out to the client - but that would probably leave the context management of the SDO (batch position, next row ident, etc.) in a limbo state.

I would be very pleased if somebody would share some ideas here.

Thanks in Advance and Best Regards,

Richard.

All Replies

Posted by Håvard Danielsen on 15-Jun-2012 10:03


Hi Richard,

If the filter criteria require that you compare the user's role or country and such with values on each record, then it seems that the best approach is to add these to the query expression. Can you clarify why you cannot use the query? How would you do this if you had to use SQL to retrieve the data?

Posted by 302218 on 18-Jun-2012 00:08

Hello Håvard,


Thanks for your reply – long time, no hear.


Unfortunately, the information on which the decision is made - whether the user is authorized to see specific client information - is maintained in a system called "client information" on the IBM z/OS platform and must be called from outside the mainframe via a web service. The same is true for the entitlements management system. Any system storing CID (client identifying data) must utilize strong authentication and location-aware access control. As Progress does not support the SSL client certificate (like, for example, Oracle PKI does), I can't implement proper strong authentication and must use third-party services to know where the request originates from. Bottom line: apart from the fact that there is no way to express the data room authorization in a single query string, there is no way I am allowed to store the information on which it is based in the Progress database that holds the application data. The only link to the client data I have in the database is the so-called STID (standard identification symbol, a 16-byte unique ID maintained on the z/OS mainframe).


The only way to implement data room authorization - and therefore avoid an issue in the operational risk inventory - is to use the central authorization service provided as a web service. To use the service properly I must know which rows are in the batch so that I can build the proper XML document - therefore I can only build the list when the data is fetched from the database. Given the way the SDO works, the only option I see is to intercept the data retrieval logic when the RowObject temp-table is populated and either delete the rows for which the user does not hold the necessary entitlements or mark them as restricted. Either way, I must ensure that these rows are never sent out to the client.


So far I have tried a version that removes the offending rows from the RowObject temp-table in the server-side logic of the SDO, in an override of transferRows. After RUN SUPER I call the data room authorization logic, which makes the web service call and removes rows from the RowObject temp-table accordingly. The solution is not bulletproof yet, but it seems to work without big issues on the AppServer. There are, however, all kinds of error messages when I run it client/server, especially when the first row in the RowObject temp-table is removed. Lucky me, it will only run with the AppServer in production.
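For illustration, this is roughly the shape of the helper I call after RUN SUPER - a minimal sketch, where RowObject.STID, ttVerdict and callDataRoomAuthService are placeholder names, and the transferRows override itself must match the parameter list of your version of src/adm2/data.p:

    /* cache for the verdicts returned by the authorization service */
    DEFINE TEMP-TABLE ttVerdict NO-UNDO
        FIELD STID    AS CHARACTER
        FIELD Verdict AS CHARACTER   /* "OK" or "NOK" */
        INDEX idxStid IS PRIMARY UNIQUE STID.

    PROCEDURE filterRestrictedRows:
        DEFINE VARIABLE cIdList AS CHARACTER NO-UNDO.

        /* collect the identifiers of the freshly fetched batch */
        FOR EACH RowObject:
            cIdList = cIdList
                    + (IF cIdList = "" THEN "" ELSE ",")
                    + RowObject.STID.
        END.
        IF cIdList = "" THEN RETURN.

        /* one web service round trip for the whole batch */
        RUN callDataRoomAuthService (INPUT cIdList, OUTPUT TABLE ttVerdict).

        /* remove the rows the user is not entitled to see */
        FOR EACH RowObject,
            FIRST ttVerdict WHERE ttVerdict.STID    = RowObject.STID
                              AND ttVerdict.Verdict = "NOK":
            DELETE RowObject.
        END.
    END PROCEDURE.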


I am looking for ideas on whether there is a better way to do this without having to rewrite the whole application.


Did I mention that I am working for a Swiss bank?


Thanks for your reply and best regards,
Richard.

Posted by Peter Judge on 18-Jun-2012 10:28

Richard,

What version are you on? If OE 11, have you looked at multi-tenancy? How successful this would be depends on how the data is structured. But if it's a fit, the changes you have to make would revolve around the security model and the use of the client-principal object.

-- peter

Posted by 302218 on 19-Jun-2012 01:24

Hello Peter,

Thanks for your response. I did consider the multi-tenancy feature (we are about to upgrade to OE 11 by the end of this year; right now we are on OE 10.1C), but there is no way we are allowed to store client-identifying or security-relevant information in the OpenEdge database unless it supports strong authentication. Full stop. Furthermore, there is no way to structure the data that way, as whether the user is allowed to see the client data depends on several factors: what kind of business process it is, the entitlement he/she holds (profiling depth on client data), whether he/she is strongly authenticated, whether it is a cross-border request, and whether the client has signed a waiver to allow cross-border access.

This is easily the most paranoid environment I have ever worked in.

Thanks and Best Regards, Richard.

Posted by Peter Judge on 19-Jun-2012 08:21

"just because you're paranoid doesn't mean that they're not out to get you"

I hoped - but did not expect - that multi-tenancy would satisfy your data structure needs.

If this constraint holds for only some data, you might want to look at switching from SDOs to DataViews: this approach gives you a very clearly-defined boundary between client and server (see adm2/serviceadapter.p). Although this doesn't help with the need to filter the data, it makes it easier to have a single point of modification.

Another advantage of this approach is that you can use a ProDataset and add an AFTER-ROW-FILL callback, in which you can perform your validation: this means you will not need to read all the data again.
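A minimal sketch of that callback wiring, using the sports2000 Customer table as a stand-in and a stubbed isAuthorized check (in your case the check would consult the authorization service, ideally for a whole batch at a time):

    FUNCTION isAuthorized RETURNS LOGICAL (INPUT piCustNum AS INTEGER):
        /* stub - the real check would consult the data room
           authorization service */
        RETURN TRUE.
    END FUNCTION.

    DEFINE TEMP-TABLE ttClient NO-UNDO LIKE Customer.
    DEFINE DATASET dsClient FOR ttClient.
    DEFINE DATA-SOURCE srcClient FOR Customer.

    BUFFER ttClient:ATTACH-DATA-SOURCE(DATA-SOURCE srcClient:HANDLE).
    BUFFER ttClient:SET-CALLBACK-PROCEDURE
        ("AFTER-ROW-FILL", "ClientRowFill", THIS-PROCEDURE).
    DATASET dsClient:FILL().

    PROCEDURE ClientRowFill:
        DEFINE INPUT PARAMETER DATASET FOR dsClient.
        /* discard restricted rows as they are filled; note that this,
           too, can make a batch come up short */
        IF NOT isAuthorized(ttClient.CustNum) THEN
            DELETE ttClient.
    END PROCEDURE.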

As to the data, would you be able to flag the data itself (either in the table itself, or a secondary table that you can join to)? Using the DB engine to perform the filtering is going to be significantly more efficient/faster than having to iterate over the result set again.

Unfortunately, I'm not sure there's an easy way to do what you need to.

"...but there is no way that we are allowed to store client identifying or security relevant information in the OpenEdge database unless it supports strong authentication. Full stop."

Does 'strong authentication' mean 'not 3rd party' (i.e. the platform itself must be able to authenticate)?

-- peter

Posted by Håvard Danielsen on 19-Jun-2012 08:45

A transferRows override seems to be the place to access and manipulate the RowObject temp-table after it has been populated. This should in theory even work in local (client/server) mode or with a state-aware AppServer, but I would consider myself lucky if I only needed to deal with a stateless AppServer.

There are basically two problems with this approach, one that should be easy to resolve and one that is difficult to resolve.

The first problem is that the batch/context settings get messed up if you filter out the first or last record. This should be possible to resolve by adjusting the corresponding properties when the deletion of the first or last record occurs. If you delete the first record then you need to adjust FirstRowNum and FirstResultRow, and if you delete the last record you need to adjust LastRowNum and LastResultRow. Note that you must NOT change these if the first or last record is already on the client, so if the request is appending then you need to know in which direction it is appending and avoid changing the properties at the other end. The transferRows plAppend parameter identifies this. Note that NEXT and PREV requests are appending even if RebuildOnRepos is true. (You may also take a look at deleteRow, since it has to resolve the same issues on the client.) A sketch of this bookkeeping follows below.
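Assuming flags such as lDeletedFirst and lDeletedLast were set by your filtering code, and that the request is not appending at the affected end, the adjustment could look like this - the direction of the adjustment is an assumption on my part, so verify it against deleteRow:

    DEFINE VARIABLE iRow AS INTEGER NO-UNDO.

    IF lDeletedFirst THEN DO:   /* first record of the batch was removed */
        {get FirstRowNum iRow}.
        iRow = iRow + 1.
        {set FirstRowNum iRow}.
        {get FirstResultRow iRow}.
        iRow = iRow + 1.
        {set FirstResultRow iRow}.
    END.
    IF lDeletedLast THEN DO:    /* last record of the batch was removed */
        {get LastRowNum iRow}.
        iRow = iRow - 1.
        {set LastRowNum iRow}.
        {get LastResultRow iRow}.
        iRow = iRow - 1.
        {set LastResultRow iRow}.
    END.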

The second problem is that the SDO will return fewer rows than requested. The worst-case scenario would probably be one where you end up filtering out all records. This may confuse users, but the main problem is that UIs that use browsers may end up with fewer rows than are available in the viewport and disable scrollbars, which in turn prevents users from using the scrollbar to fetch another batch. Note that this problem is not limited to the ABL browser, as many .NET and web UIs have the same issue with scrolling components. transferRows has a fill parameter that is set to true to tell the SDO to always return a full batch, even at the end of the data, whenever this problem may occur.

I suspect the safest way to resolve this is to add a loop, so that you can read another batch whenever necessary. A simpler approach may be to ensure that you use a large batch size. (Dynamics default batch sizes are typically way too small.)

---

Another approach may be to move the filtering into transferRowsFromDB. This would allow you to create the temp-table record only when it passes your secondary test. The filtering would need to be done before the hRowObject:BUFFER-CREATE(). You should make sure that any information you need is requested before the loop, since any call to functions or procedures inside this loop will have a very negative effect on performance - probably more so than looping through the temp-table after the fact. The biggest difference with this approach is that you would need to filter on the database buffer handles, so the logic would likely be somewhat complex, since the database query can have many buffers. (The procedure is defined as private, so there are some issues in figuring out how to use your own version.) A sketch of the extra test is shown below.
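Here hQuery stands for the procedure's database query handle (name assumed), the client table is assumed to be the first buffer of the query, and ttVerdict is a pre-fetched verdict cache like the one in the earlier sketch - a per-row web service call inside this loop would be far too slow:

    /* fragment to be placed inside the transferRowsFromDB row loop,
       just before the original hRowObject:BUFFER-CREATE() */
    DEFINE VARIABLE hClient AS HANDLE NO-UNDO.   /* declare outside the loop */

    hClient = hQuery:GET-BUFFER-HANDLE(1).
    FIND ttVerdict
         WHERE ttVerdict.STID = hClient:BUFFER-FIELD("STID"):BUFFER-VALUE
         NO-ERROR.
    IF AVAILABLE ttVerdict AND ttVerdict.Verdict = "NOK" THEN
        NEXT.   /* skip the restricted row; no RowObject record is created */
    /* ...the original BUFFER-CREATE() and buffer copy follow here... */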

---

I believe the best approach is to use the DataView as mentioned by Peter Judge. The drawback of this is that you need to implement all server-side logic yourself, since we do not provide any server-side code out of the box. The visual components should still work as long as you change the containers to use DataViews instead of SDOs.


Posted by 302218 on 19-Jun-2012 09:02

Strong authentication means the authentication is performed based on something you have and something you know. The something you know would be a password, and the something you have, in my case, is a personalized smart card holding an SSL client certificate that the user uses to log into his machine (a Windows PC). That SSL client certificate must be presented to and verified by the backend service -- be it a web server, application server or database server -- during the SSL handshake, and any communication between the client and the backend service must not leave that secure SSL channel unless strong authentication is performed again.

The call to the web service that performs the data room authorization costs 0.5 seconds per call. Therefore it is not efficient to call the service for every row fetched from the database (with the standard batch size of 50 rows, that would be 50 x 0.5 s = 25 seconds per batch). That's why my current thinking is to send the whole batch in one request, adding the latency only once per fetched batch.

Flagging the data is practically impossible. The only way to get the flags would be via the end-of-day processing during the night - but the entitlements may change during the day, plus I would need to take into account the location (2-digit country code) from which the request originated and whether the customer has signed a waiver for that country.

Right now I am doing a faked strong authentication. By faked I mean that within the 4GL I use the Internet Explorer OLE automation object to request a specific web page, thus sending the SSL client certificate to that web server, which in turn verifies it. As soon as I have the necessary verification I connect to the OpenEdge AppServer. On the OpenEdge AppServer I can only trust that a strong authentication has been performed in the first place ... A sketch of this workaround is shown below.
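A minimal sketch of that trick - the URL is of course made up, and error handling is omitted:

    DEFINE VARIABLE chIE AS COM-HANDLE NO-UNDO.

    CREATE "InternetExplorer.Application" chIE.
    chIE:Visible = FALSE.
    chIE:Navigate("https://strongauth.example.bank/verify").

    /* wait until the page - and with it the SSL handshake that presents
       the client certificate - has completed */
    DO WHILE chIE:Busy OR chIE:ReadyState <> 4:   /* 4 = READYSTATE_COMPLETE */
        PAUSE 1 NO-MESSAGE.
    END.

    /* the web server has now verified the certificate; pick up whatever
       confirmation the page exposes, then clean up */
    MESSAGE chIE:Document:body:innerText VIEW-AS ALERT-BOX.
    chIE:Quit().
    RELEASE OBJECT chIE.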

In an ideal world the OpenEdge runtime and the AppServer would support the SSL client certificate, plus I would have the possibility to access the HTTP header when the AppServer is connected via the AIA. Then I would not have to go down that rocky road ...

Thanks and Best Regards, Richard.

Posted by Peter Judge on 19-Jun-2012 10:51


Have you looked at the client-principal object at all? It was introduced in 10.1A as part of the Auditing functionality, but you can use it outside of that context too. It's basically the ABL's equivalent of a token. There's an Identity Management book in the docset which may be useful.
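A minimal sketch of the round trip, assuming the user id and the domain access code match what is configured in your database (the names here are made up):

    DEFINE VARIABLE hCP    AS HANDLE NO-UNDO.
    DEFINE VARIABLE rToken AS RAW    NO-UNDO.

    /* client side, once the external strong authentication has succeeded */
    CREATE CLIENT-PRINCIPAL hCP.
    hCP:INITIALIZE("rkoehler@bank").     /* hypothetical user@domain */
    hCP:SEAL("domain-access-code").      /* must match the domain configuration */
    rToken = hCP:EXPORT-PRINCIPAL().     /* opaque token to ship to the server */

    /* server side */
    DEFINE VARIABLE hSrvCP AS HANDLE NO-UNDO.
    CREATE CLIENT-PRINCIPAL hSrvCP.
    hSrvCP:IMPORT-PRINCIPAL(rToken).
    IF NOT SET-DB-CLIENT(hSrvCP) THEN    /* validates the seal */
        RETURN ERROR "Authentication failed.".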

I mention this to avoid you having to schlep the data to yet another service for filtering.

-- peter

Posted by 302218 on 20-Jun-2012 00:04

I do use the client-principal object. I have customized the AppServer connection manager accordingly. Via the Internet Explorer OLE automation object I navigate to my navigation web site, which is actually hosted on a WebSpeed broker, where I can extract the relevant properties from the HTTP header, which has been enriched by the standard authentication infrastructure. There I build the CPO and pass it back to the AppServer connection manager, which then connects to the AppServer passing in the CPO. I then use the CPO on every request to authenticate against the database in the Activate procedure and log out in the Deactivate procedure.
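The per-request part looks roughly like this - a sketch with made-up names, where grToken is the sealed token kept from connect time:

    DEFINE VARIABLE grToken AS RAW    NO-UNDO.  /* sealed CPO from the connect */
    DEFINE VARIABLE ghCP    AS HANDLE NO-UNDO.

    PROCEDURE serverActivate:    /* AppServer Activate procedure */
        CREATE CLIENT-PRINCIPAL ghCP.
        ghCP:IMPORT-PRINCIPAL(grToken).
        IF NOT SET-DB-CLIENT(ghCP) THEN
            RETURN ERROR "Database authentication failed.".
    END PROCEDURE.

    PROCEDURE serverDeactivate:  /* AppServer Deactivate procedure */
        ghCP:LOGOUT().           /* invalidate the login between requests */
        DELETE OBJECT ghCP.
    END PROCEDURE.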

The problem with this approach is that after the authentication has taken place I close the strongly authenticated SSL channel and open another one to the AppServer, which is not strongly authenticated anymore, as Progress does not support the SSL client certificate. Although this approach might seem tight, it doesn't hold water: since the CPO now lives in the memory of the client machine, somebody could hack the machine, steal the CPO and use it with malicious intent.

The root problem is that whatever I do before connecting to the AppServer, I can't strongly authenticate the connection to the AppServer or the database, and that is the reason why we don't develop any new GUI running on the Progress runtime anymore - instead we are using a GWT-based GUI served by a Java servlet that -- for the time being -- uses the OpenEdge AppServer. So far we are pretty happy with that, but we have already been questioned by technical architecture as to why we still use Progress here, and so far we manage to keep the OpenEdge flag high. But when it comes to security -- especially client data confidentiality -- there is not much to negotiate with operational risk management once the CEO has read in the newspaper about yet another case where client data was stolen and a CD of it was offered to foreign tax authorities ...

Of course legacy applications are a different story and are not under that much pressure; nevertheless, we must tighten the Dynamics environment they use as much as possible.

Thanks for your response and Best Regards,

Richard.

Posted by 302218 on 20-Jun-2012 00:40

Thanks! That is very useful information to me!

I will try a few variants and will keep you updated.

Thanks and Best Regards, Richard.

Posted by Mikael Hjerpe on 05-Feb-2016 06:59

302218 - Richard, how did it go with the strong authentication? (i.e. smart card client certificate, over to the ABL client, then over to the ABL server side, and verifying the user there)

This thread is closed