Sunday, November 27, 2011

A handshake_failure on ClientHello and TLS vulnerability

Recently, while integrating web services between two servers, we ran into a problem with the TLS handshake when our server acted as the client. To establish a TLS session, the client first sends a ClientHello message. The server immediately terminated the handshake with the alert handshake_failure. The solution involved a bit of guesswork on our side, since there was no support on the server side who could debug their end and provide feedback. The following log snippet shows the situation (line #32) with debug enabled (on Tomcat/JBoss).
Studying the debug output, we realized that perhaps the cipher suites used to start the handshake (line #21) were not acceptable to the server. In TLS, the handshake_failure alert is used when no common cipher suite can be negotiated [5]. The developers of the server sent us a SOAPUI web service client project to prove that their web services were up and running and could be tested remotely over transport-level security. Enabling SSL debug in SOAPUI, we found that the cipher suite list it used was indeed different, and was also a superset of the one our HTTPS client used to start the negotiation. Indeed, our cipher suite list was composed with the lowest common denominator in mind, down to 40-bit encryption, on the assumption that negotiation would bring the client and the server to an agreement.
However, if the server terminates the handshake with the alert handshake_failure in response to the ClientHello, the server might be looking for different cipher suites, a different compression algorithm, or a different protocol version. There could be a reason for such behavior. Doing a bit of digging, we found that a couple of years back a vulnerability was found in the TLS protocol that could be exploited during the renegotiation phase [2][3]. When this vulnerability was detected, as a first line of defense it was suggested to turn off renegotiation, and a solution [4] was proposed to fix the problem in the protocol. Our server used a version of Java with the fix, as listed in the JSSE Reference Guide [1].

Our suspicion was that perhaps the other server was not set up for renegotiation. So, we changed our HTTPS client code to offer a much larger cipher suite list, as follows.
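Since the original snippet was not preserved here, the following is only a sketch of the idea, assuming plain JSSE: intersect the desired suites with what the runtime actually supports before calling setEnabledCipherSuites, which throws IllegalArgumentException for unsupported names. The suite names below are examples, not the exact list we used.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class CipherSuiteSelector {

    // Keep only the desired suites the runtime supports; passing an
    // unsupported suite name to setEnabledCipherSuites() throws
    // IllegalArgumentException.
    static String[] select(String[] desired, String[] supported) {
        List<String> supportedList = Arrays.asList(supported);
        List<String> result = new ArrayList<String>();
        for (String suite : desired) {
            if (supportedList.contains(suite)) {
                result.add(suite);
            }
        }
        return result.toArray(new String[0]);
    }

    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        SSLSocket socket = (SSLSocket) factory.createSocket(); // not yet connected
        String[] desired = {
            "SSL_RSA_WITH_RC4_128_MD5",        // the suite the server eventually picked
            "TLS_RSA_WITH_AES_128_CBC_SHA",
            "SSL_RSA_EXPORT_WITH_RC4_40_MD5"   // a 40-bit export suite, likely gone today
        };
        socket.setEnabledCipherSuites(select(desired, socket.getSupportedCipherSuites()));
        System.out.println(Arrays.toString(socket.getEnabledCipherSuites()));
        socket.close();
    }
}
```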
This cipher suite list was accepted by the server. In fact, it was SSL_RSA_WITH_RC4_128_MD5 that the server picked.
  1. JSSE Reference Guide 
  2. Authentication Gap in TLS negotiation
  3. Understanding the TLS Renegotiation Attack
  4. RFC 5746 - TLS Renegotiation Extension 
  5. SSL and TLS: Designing and Building Secure Systems, Eric Rescorla

Tuesday, November 22, 2011

Bypassing security on web application by using unlisted HTTP method

Typically, web environments allow you to configure authentication and access control over a web resource for a list of HTTP methods. For example, a Java Servlet deployment descriptor (web.xml) for a J2EE web application allows security constraints to be configured as follows. Here, the web application developer would expect that HTTP GET and POST methods invoked on any web resource (/*) of the application are subject to the defined authentication constraints, and that the remaining HTTP methods are not allowed at all. In other words, any GET or POST request without proper authentication against the JBossAdmin realm would be blocked with a 401 error.
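A typical constraint of this form, modeled on the jmx-console descriptor discussed below, looks like this:

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>HtmlAdaptor</web-resource-name>
    <url-pattern>/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>JBossAdmin</role-name>
  </auth-constraint>
</security-constraint>
```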

Vulnerability 

The HTTP protocol also supports other methods such as HEAD, PUT, DELETE, etc. Here is the catch: since these methods are not listed in the security constraint shown in the example above, HTTP requests using them are still allowed to pass through (by vulnerable web environments) without any authentication or authorization(!). Such requests are typically handled by the default GET handler if no method-specific handler is defined. Exactly this vulnerability is exploited by a worm to launch a denial of service (DoS) attack, and it seems to have an affinity for JBoss's jmx-console :(. It uses the HTTP HEAD method on the jmx-console to plant the seeds of one or more DoS attacks on the infected environment as well as out on the net.


What is (known) to be affected?

According to RedHat/JBoss advisory CVE-2010-0738, this vulnerability is applicable to the following JBoss software.

  • JBoss Application Server (AS) 4.0.x, 5.1.x
  • JBoss Communications Platform 1.2
  • JBoss Enterprise Application Platform (EAP) 4.2, 4.3, 5.0
  • JBoss Enterprise Portal Platform (EPP) 4.3
  • JBoss Enterprise Web Platform (EWP) 5.0
  • JBoss SOA-Platform (SOA-P) 4.2, 4.3, 5.0

The advisory says that the default setup of out-of-the-box web applications such as jmx-console and web-console in the JBoss products listed above enforces incomplete security constraints for authenticated access by the administrative user id defined within these products. A remote attacker, without access to usernames and passwords, could misuse this setting to trigger arbitrary actions in the context of the operating system user running the Java process, and potentially harm confidentiality, integrity, and availability.
According to Bypassing Web Authentication and Authorization with HTTP Verb Tampering from Aspect Security, this vulnerability might also exist in the following, since these are known to silently route HTTP HEAD requests to the GET handler.
  • IIS 6.0
  • WebSphere 6.1
  • WebLogic 8.2
  • JBoss 4.2.2
  • Tomcat 6.0
  • Apache 2.2.8

Solution

The best solution for any Java web application is either to remove all http-method elements from the security-constraint, which makes the security constraints apply to all HTTP methods, or to list every HTTP method in the security-constraint, as shown in the following example.
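A sketch of the first option (resource and realm names follow the jmx-console example above); with no http-method elements present, the constraint covers every HTTP method:

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>HtmlAdaptor</web-resource-name>
    <url-pattern>/*</url-pattern>
    <!-- no <http-method> elements: the constraint applies to ALL methods -->
  </web-resource-collection>
  <auth-constraint>
    <role-name>JBossAdmin</role-name>
  </auth-constraint>
</security-constraint>
```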

Thursday, November 25, 2010

Fall 2010



Colors, Fall 2010, Vermont and Massachusetts.

Monday, November 15, 2010

Avoiding browser popup for 401

If you are writing a web application that consumes RESTful web services which enforce HTTP basic authentication, you may face a problem where the browser pops up a login dialog on authentication failure (HTTP status 401) before your error handler code is even called. This happens especially when the web application does not have any controller on the server side, for example, an application written using a client-side JavaScript framework such as Ext JS.

The browser, as an HTTP user agent, follows the HTTP protocol, which says the following about 401.
The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource.

14.47 WWW-Authenticate

The WWW-Authenticate response-header field MUST be included in 401 (Unauthorized) response messages. The field value consists of at least one challenge that indicates the authentication scheme(s) and parameters applicable to the Request-URI.

where the contents of a challenge may themselves contain a comma-separated list of authentication parameters. The authentication parameter realm is defined for all authentication schemes:


With some trial and error, we found that the popup is triggered not by the 401 status itself but by the presence of the recognized challenge in the WWW-Authenticate header.



So, as a web service developer, if you want to help service consumers avoid the popup and still send 401, you can use a trick: replace Basic with your own scheme, e.g. xBasic, as shown below.
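The 401 response would then carry a challenge with the non-standard scheme, which browsers do not recognize and therefore do not prompt for (the realm name here is illustrative):

```
HTTP/1.1 401 Unauthorized
WWW-Authenticate: xBasic realm="myRealm"
```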



To do this with Spring Security, you would override the commence method of the default BasicAuthenticationEntryPoint.



Write your own entry point, e.g. MyBasicAuthenticationEntryPoint as shown below.
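The original listing was not preserved, so here is a sketch of what such an entry point might look like (the class name MyBasicAuthenticationEntryPoint and the xBasic scheme are ours; the API shown is the Spring Security 3-era AuthenticationEntryPoint contract):

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.authentication.www.BasicAuthenticationEntryPoint;

public class MyBasicAuthenticationEntryPoint extends BasicAuthenticationEntryPoint {

    @Override
    public void commence(HttpServletRequest request, HttpServletResponse response,
            AuthenticationException authException) throws IOException, ServletException {
        // Advertise a non-standard scheme so the browser skips its login popup,
        // while still returning 401 to the calling JavaScript.
        response.addHeader("WWW-Authenticate", "xBasic realm=\"" + getRealmName() + "\"");
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED, authException.getMessage());
    }
}
```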



Then plugin your entry point into the basic authentication filter as follows:
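A hedged sketch of the Spring bean wiring (bean ids, the package name, and the authenticationManager reference are assumptions about your application context):

```xml
<bean id="myBasicAuthenticationEntryPoint"
      class="com.example.MyBasicAuthenticationEntryPoint">
  <property name="realmName" value="myRealm"/>
</bean>

<bean id="basicAuthenticationFilter"
      class="org.springframework.security.web.authentication.www.BasicAuthenticationFilter">
  <property name="authenticationManager" ref="authenticationManager"/>
  <property name="authenticationEntryPoint" ref="myBasicAuthenticationEntryPoint"/>
</bean>
```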



Thanks to Venkat Mantirraju for helping figure this out.

Friday, October 29, 2010

Bulk delete

Sometimes there is a need to send an entity body with HTTP DELETE. The use case is that of applying DELETE to one or more entities of a resource in one shot. For example, a web GUI (consumer of a web service) might list one or more entities of a resource and allow the user to select several of them for deletion. Gmail, for instance, allows you to select one or more emails and offers a delete action on the selection.

Typically, DELETE works as follows:
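For instance (the emails resource path is illustrative):

```
DELETE /emails/{id} HTTP/1.1
```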


where id is the id of the email entity.

If you are using Apache HTTP Client, it does not allow sending an entity body with a DELETE operation (correct me if I am wrong). In that case, you have to overload POST with a query parameter such as the following:
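A sketch of the overloaded request (resource path and payload element names are illustrative):

```
POST /emails?_method=delete HTTP/1.1
Content-Type: application/xml

<emails>
  <email><id>1</id></email>
  <email><id>2</id></email>
</emails>
```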



and then you can send an entity body with the request.

On the server side, if you are using a JAX-RS container, the annotations you use differ depending on the container.

Apache CXF JAX-RS

Apache CXF automatically dispatches requests with query _method=delete to a method on the resource that is annotated with @DELETE. More on this could be found at Overriding HTTP methods.

RESTEasy
If you are using JBoss RESTEasy, you would need to write the POST handler as follows.
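The original listing is missing, so here is a sketch using standard JAX-RS annotations (the resource path, class name, and the check on _method are our assumptions; EmailCollection is the entity type mentioned below):

```java
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/emails")
public class EmailResource {

    // The POST handler inspects the _method query parameter itself
    // and performs the delete for the entities named in the body.
    @POST
    public Response bulkDelete(@QueryParam("_method") String method,
                               EmailCollection emails) {
        if (!"delete".equals(method)) {
            return Response.status(Response.Status.BAD_REQUEST).build();
        }
        // delete each email identified in the entity body ...
        return Response.ok().build();
    }
}
```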



Note that EmailCollection could very well contain just the ids of the emails to be deleted. It is received as the entity body.

Response

Typically, for POST, a Location header is sent back with a URI that points to the id of the newly created resource. There is a deliberate departure from that style in this case: because this is a bulk delete operation tunneled over POST, we do not send any Location header. Also, we do not respond with 201 on success; instead we send 200, assuming a transaction was applied to delete all the entities. If the transaction fails, 500 should be sent and the transaction rolled back. More on this later...

Monday, August 23, 2010

SEDA in Streametics

Recently, I came across Matt Welsh's "A Retrospective on SEDA". Matt writes
If I were to design SEDA today, I would decouple stages (i.e., code modules) from queues and thread pools (i.e., concurrency boundaries). Stages are still useful as a structuring primitive, but it is probably best to group multiple stages within a single "thread pool domain" where latency is critical.

SEDA was one of the research projects that influenced Streametics' technology. Streametics was the company I had co-founded in 2006. To jump-start the SEDA-based implementation, I started with the code base of Commons Pipeline and made several modifications to it. One change relevant to the above discussion was the introduction of an application handler. This handler was introduced to separate the application logic from the queue and thread pool of a stage. The stage driver would call out to the application and pass on the event.

The second modification allowed the stage designer to receive events synchronously from the previous stage, bypassing the queue -> thread pool route. This was done to let the application designer group multiple stages such that an event passing through them is processed in the same thread without any context switch.

We had also externalized the pipeline (or route) configuration in XML. One could design a whole Streametics application by designing pipelines of stages and connecting them with endpoints and adapters. This way, one could form a graph of stages in the application using a tool. We had a library of stages, endpoints, and adapters, and application developers could also use simple APIs to write these components on their own. We used typed XML-Java binding (XMLBeans) at deployment time. In hindsight, we could have used the Spring beans framework to wire in these components.

When we were developing Streametics' event engine, Apache Camel was at a very early stage. Camel supports SEDA; however, at the time it required writing routes in Java. It now allows the wiring to be done using Spring beans. Oh well, there goes a little retrospective on Streametics...

 

Wednesday, August 4, 2010

RESTful web services in CollectionSpace

I have left the CollectionSpace project. However, I occasionally check out what my former colleagues there are up to. We had started work on an early draft of the RESTful API documentation a few months ago. Subsequent versions are getting better; you can find it here. Because we had to support schema extensibility, you may find multipart payloads in the examples cited. However, the draft does a nice job of describing the request and response payloads as well as the error codes.

Thanks to Aron Roberts, who has been diligently maintaining and updating it regularly, and to Patrick Schmitz for shepherding the work. I hope it becomes a good reference for other projects building RESTful web services.

Saturday, July 17, 2010

Considering Apache CXF ...

Recently, I completed my engagement with the CollectionSpace project and joined a small company in the south bay. Here I am again where we have to choose a web service framework to implement RESTful web services.

In CollectionSpace, we had selected JBoss's RESTEasy. One reason was that our deployment environment included JBoss, due to requirements driven by a major third-party software package we had to use. I liked the simplicity of RESTEasy and also its quality; there were enough plugins to extend the framework, and the proxy-based client-side library was very useful for writing service consumers for testing purposes. Bill Burke and I had worked on a project more than 14 years ago. He then went the JBoss route and I joined BEA. Anyway, it was good to catch up with him, and it was easy to rely on his work.

However, unlike CollectionSpace, here there are legacy SOAP-based web services used for B2B transactions. These services are implemented using Metro, Sun's reference implementation of JAX-WS. Metro does not support JAX-RS; for that, one would have to use Jersey. These are two different web service frameworks.

I have been investigating whether Apache CXF could be used. Here are my impressions so far... I will keep you posted. The following is not necessarily a comparison; these are just my notes, with some perspective on choosing a web service framework.
  1. CXF supports both JAX-WS and JAX-RS in the same stack.
  2. It is also possible to use the same service implementation with both JAX-WS and JAX-RS annotations. I don't know about the complexities until I play with such an implementation.
  3. Spring integration. This is something I missed in RESTEasy. It is good to describe the metadata for a service outside of the Java code.
  4. CXF's advanced search capability looks really interesting. It has built-in support for FIQL, and I look forward to using it. It would be interesting to figure out whether a SearchCondition could be represented as a resource to which access could be controlled.
  5. There is nice built-in support for sub-resource locators. Not very difficult to do without it, but nice to have in a framework.
  6. Schema validation can be applied at the service or provider level, again configurable with Spring beans.
  7. CXF supports three types of clients: proxy-based, HTTP-centric, and XML-centric. It is often good to have these choices, as proxy-based clients and JAXB can hide some real scenarios; e.g., using JAXB on the client does not require one to Base64-encode content if the proper XML type is used.
  8. It is possible to configure the service implementation as a singleton or a prototype.
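As a taste of the FIQL support mentioned above, a search request against CXF's search extension might look like this (the books resource and its properties are illustrative; _s is CXF's default search query parameter):

```
GET /books?_s=name==CXF*;date=lt=2010-01-01
```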

Overall, CXF looks promising. We will know more when we actually start using it.

Saturday, June 12, 2010

Tunneling DELETE over POST for deleting specific relationships

One of the problems with the relationship management approach described in Part II of my earlier post was that DELETE was very coarse-grained. The following would delete all the relationships between the given permission (id:e62d2306-518b-414b-9591) and roles.
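A sketch of that call (the path shape follows the permroles sub-resource from Part II; the trailing 123 is just a filler id, as the relationship is not a first-class resource):

```
DELETE /permissions/e62d2306-518b-414b-9591/permroles/123
```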



This might not be desirable. However, as I mentioned, DELETE does not take a payload, so it is not possible to provide a list of relationships using DELETE. Here, the approach of tunneling DELETE over POST could be useful. The overloaded POST would look as follows:
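A sketch of the overloaded request (the exact path is a guess based on the permroles sub-resource from Part II):

```
POST /permissions/e62d2306-518b-414b-9591/permroles?_method=delete
```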



Now we can send a payload for deleting specific relationships. Here, ?_method=delete indicates to the service that the DELETE operation should be performed for the given payload, which might look as follows:
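A sketch of such a payload (element names are illustrative; only the ids come from the text):

```xml
<permroles>
  <permission>
    <permissionId>e62d2306-518b-414b-9591</permissionId>
  </permission>
  <role>
    <roleId>814ed9d4-3315-44fc-993b</roleId>
  </role>
</permroles>
```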



This operation deletes only the relationship between the given permission (id:e62d2306-518b-414b-9591) and the given role (id:814ed9d4-3315-44fc-993b).

Thursday, May 27, 2010

User managed access

While attending the 10th Internet Identity Workshop last week, I came across an interesting initiative that I had absolutely no idea about. I am glad that I attended. It is called User Managed Access (UMA). It is about letting a user choose the access control policies for resources offering his/her personal data (personal information, address book, bookmarks, blog posts, comments, etc.). It is explained much better in UMA Explained.

"... a web user (authorizing user) can authorize a web app (requester) to gain one-time or ongoing access to a resource containing his home address stored at a "personal data store" service (host), by telling the host to act on access decisions made by his authorization decision-making service (authorization manager)."

If you are familiar with enterprise access control, the following might help.

"In enterprise settings, application access management often involves letting back-office applications serve only as policy enforcement points (PEPs), depending entirely on access decisions coming from a central policy decision point (PDP) to govern the access they give to requesters. This separation eases auditing and allows policy administration to scale in several dimensions. UMA makes use of this separation, letting the authorizing user serve as a policy administrator crafting authorization strategies on his or her own behalf."

The UMA work group has protocol spec, scenarios and reference implementation.

I plan to follow this work. Having just checked in the default set of permissions for resources for CollectionSpace, I wonder how and when a Host (such as CollectionSpace) would make permissions for a user's data available to an Authorization Manager (AM). UMA envisions that the AM could be a third party web-based service, but would that be practical from a performance perspective?

 




Sunday, May 23, 2010

Relationship management in RESTful web services - Part II

In Part I on this topic, I described an approach that makes the relationship between two resources a first-class resource. In this post, I describe the second approach mentioned in Part I: the relationship as a sub-resource.

Relationship as a sub-resource
Here, the relationship is managed as a sub-resource of the resource that acts as the object in the relationship. Let's take an example.

Role ROLE_COLLECTIONMANAGER (id: 814ed9d4-3315-44fc-993b) has permission (id: e62d2306-518b-414b-9591) to access a resource named intakes.

Permission as object in relationship

If the object of the relationship is a permission and the subject is a role, i.e. a permission to access the resource named intakes is given to the role ROLE_COLLECTIONMANAGER, the relationship could be created using the following API. We assume here that both the permission and the role have already been created using their respective POST operations, and we are only relating them here.
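A sketch of the call (the ids come from the example above; the path shape follows the permroles sub-resource described below):

```
POST /permissions/e62d2306-518b-414b-9591/permroles
```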



Here, e62d2306-518b-414b-9591 is the id of the permission. The sub-resource is permroles, which indicates the context of the relationship, i.e. the relationship is between a permission and a role, and not, for example, between a permission and a user.
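The request body might look something like this (element names are illustrative; the ids, resourceName, and roleName values come from the example):

```xml
<permroles>
  <permission>
    <permissionId>e62d2306-518b-414b-9591</permissionId>
    <resourceName>intakes</resourceName>
  </permission>
  <role>
    <roleId>814ed9d4-3315-44fc-993b</roleId>
    <roleName>ROLE_COLLECTIONMANAGER</roleName>
  </role>
</permroles>
```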



As you might notice, we have added some additional data about the permission (resourceName) as well as the role (roleName). This is optional and for convenience only. Also, it is possible to associate one or more roles with the same permission in a single POST request. So, associating both ROLE_COLLECTIONMANAGER and ROLE_CURATOR (id: 3772624d-1ab3-4e47-a26d-191fc6437410) with the same permission would look like the following.
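A sketch of the two-role payload (element names are illustrative; the ids and names come from the example):

```xml
<permroles>
  <permission>
    <permissionId>e62d2306-518b-414b-9591</permissionId>
    <resourceName>intakes</resourceName>
  </permission>
  <role>
    <roleId>814ed9d4-3315-44fc-993b</roleId>
    <roleName>ROLE_COLLECTIONMANAGER</roleName>
  </role>
  <role>
    <roleId>3772624d-1ab3-4e47-a26d-191fc6437410</roleId>
    <roleName>ROLE_CURATOR</roleName>
  </role>
</permroles>
```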



Role as object in the relationship

The same data structure could also be used with a role as the object of the relationship. That is, ROLE_COLLECTIONMANAGER has permission(s) to access the intakes (and collectionobjects) resources. This could be accomplished using the following API. Note that the id 814ed9d4-3315-44fc-993b represents ROLE_COLLECTIONMANAGER.





Here, POST does return an id to comply with the RESTful architecture; however, because the relationship is not treated as a first-class resource, this id is meaningless for subsequent operations such as GET and DELETE. Let's assume we received id 123 for now. Also, more subjects can be associated with the same object by using the POST operation again, as follows. That means POST also acts as PUT.





The GET operation uses the id of the object in the relationship (permission:e62d2306-518b-414b-9591 or role:814ed9d4-3315-44fc-993b) to access its relationships with all subjects. The GET operation returns the same data as was posted, but with the union of all subjects.
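A sketch of the call, with 123 as the filler id discussed below:

```
GET /permissions/e62d2306-518b-414b-9591/permroles/123
```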



Note the last element of the URI, 123; it is just a filler for an id, as the relationship is not a first-class RESTful resource.



All 3 roles associated with a permission are returned.

According to the RESTful architecture, DELETE should take only the id of the resource to be deleted. The following operation would delete the relationships between the given object and all its subjects.
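A sketch of the call, again with 123 as a filler id:

```
DELETE /permissions/e62d2306-518b-414b-9591/permroles/123
```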



This is both good and bad. It is good for efficiency, in the sense that all the relationships between the subjects and a given object can be deleted in one shot. However, it is bad in that individual relationships between a subject and the given object cannot be deleted, as there is no way to uniquely identify such a relationship.

Overall, I like the sub-resource based approach, as it is more convenient to navigate to relationship(s) from an object in the relationship. It also supports bulk operations: associating more than one subject with an object in a single request is possible, and GET returns the relationships between an object and all the subjects it is related to in the context of that relationship.

However, this is a somewhat un-RESTful approach, as the identifier returned from POST is meaningless for the GET and DELETE operations. Also, DELETE is not fine-grained, i.e. deleting a relationship between an object and a single subject is not possible. It would be better if there was a way to make that possible (see an alternative). The approach described in Part I does not have this problem. Perhaps a hybrid is possible? Would a hybrid approach be more complex? I am eager to hear your viewpoints. Feel free to send me comments if you have taken an approach that does not suffer from the limitations I have described.



Monday, May 17, 2010

would you use webfinger?

I was at the Internet Identity Workshop #10 today. I have been to an unconference only once before. It looked like chaos initially, but it was organized chaos. I liked the format. Moreover, I liked how open the participation was, both from the presenters and the listeners.

Anyway, while I was slightly familiar with OpenID and OAuth, I am just getting familiar with some of the problems of the initial versions of OpenID and OAuth 1.0a. I came across several initiatives... one of which is WebFinger.

"WebFinger is about making email addresses more
valuable, by letting people attach public metadata to them. That
metadata might include:
Eran Hammer-Lahav describes the rationale for the same over here. I like number of arguments he makes, however, I am stuck on the following ...

"The arguments against email as identifiers usually include concerns
over spam and privacy
..."

At least with an HTTP URI, I don't have to worry about spam. Indeed, there is a phishing problem, but as long as one knows how to protect against it, it might be manageable. How do I know that the email address I give to some site, in order to let it fetch my public metadata, won't be misused? Am I missing something here?





Monday, May 10, 2010

User profile management in a social application

In my opinion, user profile management is one of the most important aspects of any social application. Social graphs are made of people and the relationships between them. Not only does the user profile allow an application developer to capture information about a person, it is also an important dimension in deriving various kinds of user- and usage-focused analytics. Such analytics are very important for a social application, and if designed well, they could also help increase the application's user base. I will explain this further later.

User profile management should include the following functionality.
  1. Account management including lifecycle management
  2. Support for multiple identity providers (IdP)
  3. Login / Logout
  4. Single sign-on (OpenID, Facebook Connect)
  5. Profile management
  6. Authorization (Native, OAuth)
  7. ?
Account management
From an application developer's perspective, every user of the application must have an account, regardless of how the user logs in (username/password, OpenID, or Facebook auth). An account could hold minimal information such as display name, email, status, and timestamps for creation and modification. An account may not necessarily hold the user's profile information.

The lifecycle events associated with an account could be registration, activation, deactivation and finally deletion.

Support multiple IdP
Any social application should, in my opinion, support at least three types of identity providers: a local IdP, OpenID IdPs, and Facebook.

The local IdP comes in handy when potential users of the application do not have an OpenID, or when they do want to create and maintain a profile with the application. A local IdP is usually implemented by identifying users with a username and password, with a database generally used as the realm.

The application should also support one or more OpenID IdPs. Many users may not want to create yet another online identity to log in to the application; they may want to use one of their existing identities managed by a third-party identity provider (Yahoo!, Google, myOpenID, etc.) via the OpenID protocol. The application would act as a relying party (RP) that relies on identities asserted by the third-party IdPs.

Lastly, any social application developer may want to tap into Facebook's 400MM user base. Unfortunately, Facebook does not support open standards such as OpenID, so the application has to support the proprietary FB authentication protocol (FB Connect).

See Stackoverflow or Plaxo login screens to check how these applications support multiple IdPs.

Login / Logout
This is an obvious feature of any web application that wants to offer personalized services. There is no need to describe much here, except that a user should be able to log in using an id managed by the local IdP, or by a foreign IdP after due assertion of that id. Other functions would include the ability to reset passwords (for the local IdP only) and "remember me", among other things. Session management (session expiration, persistent sessions, etc.) would be required, as well as protection against session fixation attacks.

Logout for a user logged in using OpenID or FB Connect would require logging out locally from the application and destroying the session context relevant to the user logging out. Note that for a social application, the shortcomings of a logout feature would expose not only the user but the user's social activities and social graph as well.

Single sign-on
For a social application, user experience is very important, and offering single sign-on can improve it right away. If the application user has logged into some other web application using OpenID or FB Connect, that user may not have to sign in again to your application (within the timeframe set by the OpenID IdP or FB) if you support OpenID or FB. Again, check out Stackoverflow or Plaxo.

Profile management

The profile data of a user is important for any social application, as it acts as a very important dimension in the various analytical services the application could provide to its users, in insights to improve its own services, and for interested third parties. The account may hold minimal information about the user; an application may have a user profile to hold other user-specific information. This depends on the application, but it could include information such as first and last name, nickname, address(es) (postal and web), landline and mobile phone(s), email(s), instant messenger id(s), and other demographic information as required by your application.

If the user logs in using OpenID, it is possible to populate some of these fields at login time using OpenID's attribute exchange protocol. FB Connect also has APIs to retrieve an FB user's profile data, subject to the user's privacy settings. Such data could also be retrieved securely, with the user's consent, after login using the OAuth protocol.

Authorization
Authorization deserves its own post. I will cover authorization in my subsequent post.

If you are using a Java-based platform on the server side of your application, you may want to look at the Apache Shiro-based Nimble project. Nimble is a Grails plugin that uses Shiro underneath. It provides most of the features I have mentioned here except OAuth. It also provides a customizable user interface and security tags to insert into the user interface.

Thursday, May 6, 2010

Social identity theft - protecting your personal brand

I recently attended a talk given by an OpenID Foundation member; I will not say here where, or who gave the talk. What I came to learn is that the big web-based identity providers (IdPs) such as Google and Yahoo! have embraced OpenID for almost a couple of years now, and they want you to use your id/account in as many places as OpenID is supported. Facebook wants you to do the same, but their protocol is very proprietary. Anyway, this is good news!
  1. This helps a lot in increasing user registration at a relying party (a social media application) web site, because users can sign in using their OpenID.
  2. It also helps those folks who want to use OpenID to sign in wherever they can, because they don't want to keep track of and maintain various social identities and profiles at various social media web sites.
  3. Lastly, it helps in further service authorization and sharing of content using OAuth.
However, this also makes these folks vulnerable to social identity theft. If your social identity is stolen, not only are you vulnerable, but your social graph might be vulnerable too (through no fault of its own). This is more dangerous.
I came across an article, "How to Combat Social Identity Theft and Strengthen Your Online Personal Brand", where the author recommends creating a separate profile and identity at each social media site. This is like having a separate password for each web site: hard to remember and maintain, but it restricts the damage to a single profile/identity/website if one is stolen. Some folks even provide a service to manually go and create profiles in your name at some 150 web sites! Indeed, what guarantee do they give that they won't misuse this information? Disgruntled employees can be found everywhere, right? And finally there are tools like KnowEm, which automate the process by checking the availability of a username at more than 350 websites and also help create profiles (as a subscription service). Many other such tools are listed here.

I would think that one should carefully use the identities maintained by the big guys. I would also keep more than one OpenID handy to use at various social media web sites, so that if any one of them is stolen, at least the vulnerability is contained to the sites where it was used. However, there could well be a better solution... I am looking forward to your comments, thoughts, and suggestions.

Monday, May 3, 2010

Relationship management in RESTful web services - Part I

CollectionSpace offers various RESTful web services for managing metadata for physical/digital objects. Defining RESTful interfaces for various entities in the system is straightforward. Create, Read, Update, Delete, List and Search (CRUDLS) operations on these entities can easily be mapped to the POST (create), GET (read, list and search), PUT (update) and DELETE (delete) methods of HTTP. For example, see the RESTful APIs for the Role service.
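The CRUDLS-to-HTTP mapping just described can be sketched as a small table. This is my own illustrative sketch, not CollectionSpace code; the class name and the example `/roles` paths in the comments are assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: maps each CRUDLS operation to the HTTP method
// a RESTful service would typically use for it.
public class CrudlsMapping {
    public static Map<String, String> httpMethods() {
        Map<String, String> m = new LinkedHashMap<>();
        m.put("create", "POST");   // POST /roles
        m.put("read",   "GET");    // GET /roles/{id}
        m.put("update", "PUT");    // PUT /roles/{id}
        m.put("delete", "DELETE"); // DELETE /roles/{id}
        m.put("list",   "GET");    // GET /roles
        m.put("search", "GET");    // GET /roles?query=...
        return m;
    }

    public static void main(String[] args) {
        httpMethods().forEach((op, method) -> System.out.println(op + " -> " + method));
    }
}
```

Note that read, list and search all map onto GET; they differ only in whether an id or a query string is present in the URI.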

CollectionSpace also has a requirement to support relationships between several entities. For example, there could be a one-to-many relationship between a collection object and a loan object. On the more domain-agnostic side of the services, a permission might apply to one or more roles, and a role might be related to one or more permissions. Implementing relationships between RESTful resources is not so straightforward and needs some due diligence. Let's take each of the above two use cases, model the relationships in two different ways, and talk about the pros and cons of each. The two approaches are:
  1. Relationship as a first-class RESTful resource
  2. Relationship as a sub-resource
I'll cover only the first approach in this entry. Feel free to comment on the blog if you have implemented relationships in a different and better way.

Relationship as a first-class RESTful resource

With this approach, a relationship becomes a RESTful resource that supports CRUDLS operations. It is a bit unintuitive to think about relationships this way, but it is the most flexible approach. Here, each relationship's metadata has the following components:
  • Object of the relationship
  • Subject of the relationship
  • Type of the relationship
For example, consider a relationship between a collection object entity and a loan entity. The object in the relationship is the collection object entity, and the subject of the relationship is the loan entity. The types should be namespace-qualified. This is derived from the RDF model, but perhaps only in part. The relationship web service would support the standard CRUDLS methods.
The advantage of such a data structure is that one can relate anything to anything without worrying about semantics. That is also its biggest disadvantage. Since a relationship is a first-class RESTful resource, each relationship has a distinct id of its own, by which it can be retrieved and updated. However, there are several problems.
  1. Unlike the RDF model, the predicate is missing, so it is hard for a machine to learn how entities are related. The web service has to do a lot of validation to make sure the relationships between the given entities are semantically possible and correct.
  2. It is not possible to determine the cardinality of the relationship.
  3. If RESTful URIs are used, there is no need to provide a separate type and id. For example, http://collectionspace.org/services/collectionobjects/814ed9d4-3315-44fc-993b would uniquely identify the object in this relationship.
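The relationship data structure described above might be sketched as follows. This is my own illustration, not CollectionSpace code; the class name, field names, and the URIs and relationship type in `main` are all invented for the example.

```java
// Illustrative sketch of a relationship-as-a-first-class-resource data
// structure: an id of its own, plus subject, object and a
// namespace-qualified type (but, unlike RDF, no predicate).
public class Relationship {
    private final String id;          // distinct id of the relationship itself
    private final String subjectUri;  // e.g. a loan entity URI
    private final String objectUri;   // e.g. a collection object entity URI
    private final String type;        // namespace-qualified relationship type

    public Relationship(String id, String subjectUri, String objectUri, String type) {
        this.id = id;
        this.subjectUri = subjectUri;
        this.objectUri = objectUri;
        this.type = type;
    }

    public String getId() { return id; }
    public String getSubjectUri() { return subjectUri; }
    public String getObjectUri() { return objectUri; }
    public String getType() { return type; }

    public static void main(String[] args) {
        // Nothing here prevents relating two semantically unrelated
        // entities: the flexibility, and the weakness, discussed above.
        Relationship r = new Relationship(
            "rel-1",
            "http://example.org/services/loans/42",
            "http://example.org/services/collectionobjects/7",
            "http://example.org/relations#relatedTo");
        System.out.println(r.getId() + ": " + r.getSubjectUri() + " -> " + r.getObjectUri());
    }
}
```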

I'll cover the 2nd approach in my subsequent blog entry.

Thursday, April 29, 2010

Account/user management live in CollectionSpace

I am glad to see the Account service getting a face on the CollectionSpace UI. Thanks to the TorontoU and CambridgeU (UK) teams for pulling this off. It is available to play with on a test server in the upcoming 0.6 release, and will eventually move to the demo. The project release documentation has a nice walkthrough with screenshots.

Wednesday, April 28, 2010

RESTful management interfaces for security services

In the past, while at BEA, I used WebLogic Security extensively. We were building security into the "layered" products: WebLogic Integration and AquaLogic Service Bus. The runtime APIs at our disposal were the Java APIs in weblogic.security.*, and, for management, the JMX APIs in weblogic.management.security.*. The JMX APIs are remotable (only from behind the firewall, admittedly, but remotable).

We had to build a security console using JSP/JSF/Struts/etc. so that the security administrators of these products could manage users, accounts, roles, user-role mappings, permissions/policies, keys and certificates, etc. The console implementation used the JMX APIs underneath. Alternatively, application developers could build their own administration consoles by using the JMX APIs directly from behind the firewall.

In open source, there are good options available for enterprise security, such as Spring Security and Apache Shiro. These have non-remotable management APIs in Java. However, I could not find any remotable management interfaces that could be easily accessed from a web-based console over HTTP. So, for CollectionSpace, we built management interfaces using REST. These include

3 entity resources
  1. Account (also manages a simple IdP using DB realm)
  2. Role 
  3. Permission 
and 2 relationship resources

  1. AccountRole, a sub-resource accessed from the account service
  2. PermissionRole, a sub-resource accessed from the permission service

Your feedback

If you think these management interfaces would be useful in other projects or if you have suggestions, please send me an email at [sanjay dot dalal at gmail dot com]. We could perhaps extract these out from CollectionSpace and make them available through a separate open source project with Apache 2 license.

Tuesday, April 20, 2010

Authorization service RESTful interface

Yesterday, I checked in the last piece of functionality to complete the first pass at the RESTful interface of the authorization service in CollectionSpace. There is a short description of these APIs on the wiki. Yes, I am aware of the confusion behind the term authorization. I have described the terms used on the wiki.
  1. Role
  2. Permission
  3. Role - Permission relationship (available from Permission service, /permissions/{id}/permroles)
  4. Account - Role relationship (available from Account service, /accounts/{id}/accountroles)
Indeed, there is a separate web service for account management with its own RESTful interface. The CollectionSpace security runtime exposes APIs to enforce access control from the CollectionSpace services runtime and SPIs to plug in various providers. I will be using Spring Security ACLs as the underlying service provider.

More on the enforcement of the permissions in future entries. I will also write more about my experiences and approaches I took in implementing relationships between RESTful resources, e.g. relationship between Role and Permission.

Saturday, March 27, 2010

Security Resource, Permission and Administration

First, let's clarify what a resource is in terms of security. According to XACML, a resource is data, a service, or a system component for which access is requested. WebLogic Security defines a resource as any software component, such as a server, a service, an application, or an application artifact, that can be secured using security roles and policies. There are subtle differences between the two definitions; e.g., a method to be secured on an EJB is considered a resource in WebLogic, while for XACML that method is an action on an EJB resource.

While Spring Security supports protecting several kinds of resources, such as a URL, a method invocation and a (domain) object, it lacks a uniform abstraction for representing them in its architecture. It also lacks an abstraction to represent a permission on a resource. Let me give examples of what I mean.

Defining and enforcing permissions for URL resources requires configuring FilterSecurityInterceptor with permissions defined in the form of intercept-url elements including URL patterns and allowed roles.

1:    <bean id="filterInvocationInterceptor" class="org.springframework.security.web.access.intercept.FilterSecurityInterceptor">  
2:      ...  
3:      <!--property name="securityMetadataSource" ref="cspaceMetadataSource"/-->  
4:      <property name="securityMetadataSource">  
5:        <sec:filter-security-metadata-source>  
6:          <sec:intercept-url pattern="/**" access="ROLE_USERS"/>  
7:        </sec:filter-security-metadata-source>  
8:      </property>  
9:    </bean>  

Defining and enforcing permissions for method resources using MethodSecurityInterceptor requires listing permissions as a set of properties, with qualified method names as keys and allowed roles as values.

1:  <bean id="bankManagerSecurity"  
2:    class="org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor">  
3:  ...  
4:   <property name="securityMetadataSource">  
5:    <value>  
6:     com.mycompany.BankManager.delete*=ROLE_SUPERVISOR  
7:     com.mycompany.BankManager.getBalance=ROLE_TELLER,ROLE_SUPERVISOR  
8:    </value>  
9:   </property>  
10:  </bean>   

Spring ACL for domain object security has an abstraction for Permission, but the permission is required to be configured programmatically using abstractions such as Permission, ObjectIdentity and Sid (security identity). It offers no management interfaces to configure or administer permissions declaratively.

1:  <security:global-method-security pre-post-annotations="enabled">  
2:    <security:expression-handler ref="expressionHandler"/>  
3:   </security:global-method-security>  
4:   <bean id="expressionHandler"  
5:     class="org.springframework.security.access.expression.method.DefaultMethodSecurityExpressionHandler">  
6:      <property name="permissionEvaluator" ref="myPermissionEvaluator"/>  
7:   </bean>  

The first two examples above show that Spring Security neatly lets you define permissions exactly where those permissions will be enforced. This makes it really easy to configure permissions. However, it also makes it really difficult to administer permissions independently, as there are no distinct, unified administrative interfaces to do so. Now, if you had abstractions for Resource and Permission, you would be able to administer them separately and enforce them from FilterInvocation, MethodInvocation and PermissionEvaluator alike.

A typical simple non-hierarchical resource would have the following information:
  • Type of the resource (URL, method, object, class, db table, etc.)
  • Id to uniquely identify the resource
  • Pattern (e.g. com.mycompany.BankManager.delete* or /**)
A typical permission might have the following
  • Resource on which access is defined
  • A set of principals (roles and users) it is applicable to
  • An effect (permit or deny)
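The Resource and Permission abstractions just listed might be sketched as below. This is an illustrative sketch only; all class, field and enum names are mine, not Spring Security or CollectionSpace APIs.

```java
import java.util.List;

// Hypothetical Resource and Permission abstractions, matching the
// bullet lists above: a typed, identifiable, pattern-based resource,
// and a permission tying a resource to principals with an effect.
public class PermissionModel {

    public enum ResourceType { URL, METHOD, OBJECT, CLASS, DB_TABLE }
    public enum Effect { PERMIT, DENY }

    public static class Resource {
        public final ResourceType type; // type of the resource
        public final String id;         // unique id of the resource
        public final String pattern;    // e.g. com.mycompany.BankManager.delete* or /**
        public Resource(ResourceType type, String id, String pattern) {
            this.type = type; this.id = id; this.pattern = pattern;
        }
    }

    public static class Permission {
        public final Resource resource;       // resource on which access is defined
        public final List<String> principals; // roles and users it applies to
        public final Effect effect;           // permit or deny
        public Permission(Resource resource, List<String> principals, Effect effect) {
            this.resource = resource; this.principals = principals; this.effect = effect;
        }
    }

    public static void main(String[] args) {
        // A single permission could then be administered on its own and
        // enforced from a filter, a method interceptor, or an evaluator.
        Resource url = new Resource(ResourceType.URL, "accounts-url", "/accounts/**");
        Permission p = new Permission(url, List.of("ROLE_ADMINISTRATOR"), Effect.PERMIT);
        System.out.println(p.resource.pattern + " -> " + p.principals + " (" + p.effect + ")");
    }
}
```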
How can this help in administering and enforcing permissions? I will write about that in my next post...

Saturday, March 20, 2010

Spring ACL for RESTful web services

I mentioned before that the easiest way to provision permissions for URIs using Spring Security is by adding intercept-urls and corresponding expressions to the security metadata for a filter in a web application. It is also possible to define the security metadata for these URIs in a file or database, so that permissions can be changed dynamically after the webapp is deployed. Here is a sample using a property file named urls.properties (credits: custom security metadata).

1:  <bean id="filterInvocationInterceptor" class="org.springframework.security.web.access.intercept.FilterSecurityInterceptor">  
2:      <property name="authenticationManager" ref="authenticationManager"/>  
3:      <property name="accessDecisionManager" ref="httpRequestAccessDecisionManager"/>  
4:      <property name="securityMetadataSource" ref="cspaceMetadataSource"/>  
5:    </bean>  
6:    <bean id="cspaceMetadataSource" class="org.collectionspace.services.authorization.spring.CSpaceSecurityMetadataSource">  
7:      <property name="urlProperties">  
8:        <util:properties location="classpath:urls.properties" />  
9:      </property>  
10:    </bean>  

The urls.properties file contains a list of permissions. It might look something like the following.

1:  /accounts/**=ROLE_ADMINISTRATOR  

And the CSpaceSecurityMetadataSource looks as follows:


1:  import java.util.Collection;  
2:  import java.util.Properties;  
3:  import org.springframework.security.access.ConfigAttribute;  
4:  import org.springframework.security.access.SecurityConfig;  
5:  import org.springframework.security.web.FilterInvocation;  
6:  import org.springframework.security.web.access.intercept.FilterInvocationSecurityMetadataSource;  
7:  public class CSpaceSecurityMetadataSource implements FilterInvocationSecurityMetadataSource {  
8:    private Properties urlProperties;  
9:    public Collection<ConfigAttribute> getAllConfigAttributes() {  
10:      return null;  
11:    }  
12:    public Collection<ConfigAttribute> getAttributes(Object filter)  
13:        throws IllegalArgumentException {  
14:      FilterInvocation filterInvocation = (FilterInvocation) filter;  
15:      String url = filterInvocation.getRequestUrl();  
16:      String method = filterInvocation.getHttpRequest().getMethod();  
17:      //get the roles for requested page from the property file  
18:      String urlPropsValue = urlProperties.getProperty(url);  
19:      StringBuilder rolesStringBuilder = new StringBuilder();  
20:      //append the roles from the property file that may invoke the requested url  
21:      //note: the http method is not used in the lookup yet; see the limitations below  
22:      if (urlPropsValue != null) {  
23:        rolesStringBuilder.append(urlPropsValue);  
24:      }  
25:      return SecurityConfig.createListFromCommaDelimitedString(rolesStringBuilder.toString());  
26:    }  
27:    public boolean supports(Class<?> arg0) {  
28:      return true;  
29:    }  
30:    public void setUrlProperties(Properties urlProperties) {  
31:      this.urlProperties = urlProperties;  
32:    }  
33:    public Properties getUrlProperties() {  
34:      return urlProperties;  
35:    }  
36:  }  

Limitations:


HTTP Method

Unlike a plain web application, for a RESTful web service it is important to know the HTTP method (e.g. POST is used to create a representation of a resource) in addition to the URI (e.g. /accounts/) from the invocation context. Using these two pieces of information, you should retrieve the list of (comma-separated) roles that are allowed to invoke the given HTTP method on the given URI. Note that, for this reason, the example urls.properties file shown earlier is not enough, as it does not include any HTTP method in its permission definitions.
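One way to make the lookup method-aware is to key the properties on both the HTTP method and the URI. The sketch below uses a "METHOD:uri" key format that I made up for illustration; it is not CollectionSpace's actual format.

```java
import java.util.List;
import java.util.Properties;

// Sketch of a method-aware role lookup: keys combine the HTTP method
// and the URI pattern, e.g. "POST:/accounts/**". The key format and
// the sample entries are hypothetical.
public class MethodAwareRoles {
    private final Properties urlProperties = new Properties();

    public MethodAwareRoles() {
        // Only administrators may create accounts; any user may read them.
        urlProperties.setProperty("POST:/accounts/**", "ROLE_ADMINISTRATOR");
        urlProperties.setProperty("GET:/accounts/**", "ROLE_ADMINISTRATOR,ROLE_USERS");
    }

    // Returns the comma-separated roles for the method+URI pair, split
    // into a list; empty if no permission is defined for that pair.
    public List<String> rolesFor(String httpMethod, String url) {
        String value = urlProperties.getProperty(httpMethod + ":" + url);
        return value == null ? List.of() : List.of(value.split(","));
    }

    public static void main(String[] args) {
        MethodAwareRoles roles = new MethodAwareRoles();
        System.out.println(roles.rolesFor("POST", "/accounts/**"));
        System.out.println(roles.rolesFor("GET", "/accounts/**"));
    }
}
```

A real implementation would load the properties from the classpath, as the filter configuration shown earlier does, instead of hard-coding them.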

Sub-resources and permission inheritance

According to the HATEOAS principle, it is very common for a representation received from a resource to include additional URIs for retrieving representations of related resources. For example, to retrieve the user profile associated with an account, one could invoke a GET request on the URI /accounts/{accountid}/users/{userid}. A security administrator may want the users resource to inherit permissions from the accounts resource. Using a flat permission model such as the one shown above in urls.properties, it is not possible to define such inheritance.

Therefore, I am exploring if a URI could be considered a domain object and Spring ACL could be used to define permissions. Will keep you posted ...