OAuth 2.0 and the Road to Hell

By Eran Hammer, Thursday, July 26, 2012

They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdrew my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard-fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 in the hands of a developer with a deep understanding of web security will likely result in a secure implementation. However, in the hands of most developers – as has been the experience of the past two years – 2.0 is likely to produce insecure implementations.

How did we get here?

At the core of the problem is the strong and unbridgeable conflict between the web and the enterprise worlds. The OAuth working group at the IETF started with strong web presence. But as the work dragged on (and on) past its first year, those web folks left along with every member of the original 1.0 community. The group that was left was largely all enterprise… and me.

The web community was looking for a protocol very much in line with 1.0, with small improvements in areas that proved lacking: simplifying signatures, adding a light identity layer, addressing native applications, adding more flows to accommodate new client types, and improving security. The enterprise community was looking for a framework they could use with minimal changes to their existing systems, and for some, a new source of revenue through customization. To understand the depth of the divide – in an early meeting the web folks wanted a flow optimized for in-browser clients while the enterprise folks wanted a flow using SAML assertions.

The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn’t actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

Under the Hood

To understand the issues in 2.0, you need to understand the core architectural changes from 1.0:

  • Unbounded tokens - In 1.0, the client has to present two sets of credentials on each protected resource request, the token credentials and the client credentials. In 2.0, the client credentials are no longer used. This means that tokens are no longer bound to any particular client type or instance. This has introduced limits on the usefulness of access tokens as a form of authentication and increased the likelihood of security issues.
  • Bearer tokens - 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and, as the current proposals demonstrate, the group is solely focused on enterprise use cases.
  • Expiring tokens - 2.0 tokens can expire and must be refreshed. This is the most significant change for client developers from 1.0 as they now need to implement token state management. The reason for token expiration is to accommodate self-encoded tokens – encrypted tokens which can be authenticated by the server without a database look-up. Because such tokens are self-encoded, they cannot be revoked and therefore must be short-lived to reduce their exposure. Whatever is gained from the removal of the signature is lost twice in the introduction of the token state management requirement.
  • Grant types - In 2.0, authorization grants are exchanged for access tokens. A grant is an abstract concept representing the end-user's approval. It can be a code received after the user clicks ‘Approve’ on an access request, or the user’s actual username and password. The original idea behind grants was to enable multiple flows. 1.0 provides a single flow which aims to accommodate multiple client types. 2.0 adds a significant amount of specialization for different client types.
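
The token state management burden that expiring tokens place on 2.0 clients can be sketched as follows. This is a minimal illustration, not code from any specification; the refresh callback, the attribute names, and the 30-second early-refresh margin are all assumptions of the sketch:

```python
import time


class TokenStore:
    """Minimal client-side state management for expiring OAuth 2.0 tokens."""

    def __init__(self, access_token, refresh_token, expires_in, clock=time.time):
        self.access_token = access_token
        self.refresh_token = refresh_token
        self.clock = clock
        # Refresh slightly early so we never race the server-side expiry.
        self.expires_at = clock() + expires_in - 30

    def is_expired(self):
        return self.clock() >= self.expires_at

    def current_token(self, refresh_fn):
        """Return a usable access token, refreshing it first if needed.

        refresh_fn(refresh_token) -> (access_token, refresh_token, expires_in)
        stands in for the actual HTTP refresh-grant request to the
        authorization server, which is provider-specific.
        """
        if self.is_expired():
            new_access, new_refresh, expires_in = refresh_fn(self.refresh_token)
            self.access_token = new_access
            # Some servers rotate the refresh token; keep the old one otherwise.
            self.refresh_token = new_refresh or self.refresh_token
            self.expires_at = self.clock() + expires_in - 30
        return self.access_token
```

None of this bookkeeping exists in a 1.0 client, where a token, once issued, simply works until revoked; every 2.0 client has to carry some version of it.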

Indecision Making

These changes are all manageable if put together in a well-defined protocol. But as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide. Here is a very short sample of the working group’s inability to agree:

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

On the other hand, 2.0 defines four new registries for extensions, along with additional extension points via URIs. The result is a flood of proposed extensions. But the real issue is that the working group could not define the real security properties of the protocol. This is clearly reflected in the security considerations section, which is largely an exercise in hand waving. It is barely useful to security experts beyond a bullet list of things to pay attention to.

In fact, the working group has also produced a 70-page document describing the 2.0 threat model, which does attempt to provide additional information but suffers from the same fundamental problem: there isn’t an actual protocol to analyze.

Reality

In the real world, Facebook is still running on draft 12 from a year and a half ago, with absolutely no reason to update their implementation. After all, an updated 2.0 client written to work with Facebook’s implementation is unlikely to be useful with any other provider, and vice versa. OAuth 2.0 offers little to no code reusability.

What 2.0 offers is a blueprint for an authorization protocol. As defined, it is largely useless and must be profiled into a working solution – and that is the enterprise way. The WS-* way. 2.0 provides a whole new frontier to sell consulting services and integration solutions.

The web does not need yet another security framework. It needs simple, well-defined, and narrowly suited protocols that will lead to improved security and increased interoperability. OAuth 2.0 fails to accomplish anything meaningful over the protocol it seeks to replace.

To Upgrade or Not to Upgrade

Over the past few months, many asked me if they should upgrade to 2.0 or which version of the protocol I recommend they implement. I don’t have a simple answer.

If you are currently using 1.0 successfully, ignore 2.0. It offers no real value over 1.0 (I’m guessing your client developers have already figured out 1.0 signatures by now).

If you are new to this space, and consider yourself a security expert, use 2.0 after careful examination of its features. If you are not an expert, either use 1.0 or copy the 2.0 implementation of a provider you trust to get it right (Facebook’s API documents are a good place to start). 2.0 is better for large scale, but if you are running a major operation, you probably have some security experts on site to figure it all out for you.

Now What?

I’m hoping someone will take 2.0 and produce a 10 page profile that’s useful for the vast majority of web providers, ignoring the enterprise. A 2.1 that’s really 1.5. But that’s not going to happen at the IETF. That community is all about enterprise use cases and if you look at their other efforts like OpenID Connect (which too was a super simple proposal turned into almost a dozen complex specifications), they are not capable of simple.

I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.

At the same time, I am expecting multiple new communities to come up with something else that is more in the spirit of 1.0 than 2.0, and where one use case is covered extremely well. OAuth 1.0 was all about small web startups looking to solve a well-defined problem they needed to solve fast. I honestly don’t know what use cases OAuth 2.0 is trying to solve any more.

Final Note

This is a sad conclusion to a once promising community. OAuth was the poster child of small, quick, and useful standards, produced outside standards bodies without all the process and legal overhead.

Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete.

Bringing OAuth to the IETF was a huge mistake. Not that the alternative (WRAP) would have been a better outcome, but at least it would have taken three fewer years to figure that out. I stuck around as long as I could stand it, to fight for what I thought was best for the web. I had nothing personally to gain from the decisions being made. In the end, one voice in opposition can slow things down, but can’t make a difference.

I failed.

We failed.


Some more thoughts…



  1. Mark Atwood says:
    July 26, 2012 at 2:31 am

    I can’t decide if I should feel guilty for dropping out immediately after IETF San Francisco, or if I should feel grateful I didn’t waste any time on the OAuth 2.0 fight. Every now and then I would see it discussed at an IIW, and be invited back into looking at the mailing list again, but the energy and focus I loved about the OAuth 1.0 experience just wasn’t there, and I could see all the crap forming (what you call the WS-* way) that we intentionally kept chopping away when writing 1.0.

    Eran, when we meet again next, I will buy you a beer for your heroic efforts.

    To all that come after, who have to implement 2.0, we are deeply sorry. It wasn’t our fault.

  2. Johannes Wagener says:
    July 26, 2012 at 3:05 am

    Sad to hear! Thanks for all the hard work.

  3. Peter Verhas says:
    July 26, 2012 at 4:24 am

    >a new source of revenues through customization

    This is your key point and the driver factor to fabricate this standard the way it gets as per your article. This goal is perfectly aimed and hit with the mix of details and the lack of that and with the anticipated coming different standards and non-standard solutions.

    • Jan Henkins says:
      July 28, 2012 at 1:10 am

      Peter Verhas:

      What are you saying? Your points are very unclear…

      • Paul Leddy says:
        July 29, 2012 at 12:59 am

        Ditto.

        • ceteco says:
          August 3, 2012 at 8:20 am

          Could be enterprise language?

    • JHaag says:
      August 6, 2012 at 11:47 am

      Wow. I hope this is not an indication of how the new documentation will read.

    • Blessed Geek says:
      August 11, 2012 at 7:44 pm

      Peter, I have no idea what you are trying to say.
      Are you writing in human-readable encrypted code that only your intended audience would understand?

    • Ankit Soni says:
      August 17, 2012 at 7:16 pm

      I think I managed to decode it:

      Literal translation:
      “>a new source of revenues through customization
      The OAuth2 specification aimed to help enterprises get moar money, and it succeeded by writing a specification the way enterprises usually do: incredibly detailed while managing to impose no concrete restrictions/specifications”. I think the implication here is that OAuth2 is essentially meant to help enterprises cover their own ass should they get caught with their pants down. The specification is vague enough that it can be implemented shoddily, preferring time and flexibility over actual security if required, but the specification exists so that it can be blamed when something goes wrong. Excellent point.

      • Matt Tagg says:
        September 2, 2012 at 7:12 am

        lol’d so hard at Peter Verhas’s comment, trying to understand wtf he was saying! He’s either an epic troll… or…

  4. Jean-Jacques Dubray says:
    July 26, 2012 at 4:46 am

    Eran,

    we are in 2012, and there are two things we are sure of, a) large companies have a vested interest to fail standards (ebXML, WS-*, UML, BPMN/BPEL…), they mastered the process to achieve that goal, b) WYOS (write your own spec) works, I don’t see why a well written spec, with the feedback of its users/implementers can’t take off. You no longer need “them”

    • platypusfriend says:
      July 28, 2012 at 12:18 pm

      I loved this article, by the way. Just curious… what’s the benefit of failing UML?

  5. Jonathan S. says:
    July 26, 2012 at 4:54 am

    First, I would like to commend you for admitting that, despite the hard work of yourself and others, OAuth 2.0 is a failure. It takes a strong person to admit failure these days.

    But we also should all acknowledge the biggest failure: the entire web stack. Today, it is nothing more than one hack layered upon another, from the very bottom to the very top. HTTP is full of miserable kludges, from its support for caching to its support for compression. Cookies are an ugly hack. SSL/TLS is another hack. But none of those are as bad as JavaScript, which is among the worst scripting languages ever devised, even if people who only know JavaScript and PHP claim otherwise. HTML5 is a severe regression from the rigor and sensibility of XHTML, and CSS has always made practical layout and styling far more difficult than it ever should be. Given that it’s a patchwork of hacks, the security is horrid.

    Security can’t be tacked on 15 or 20 years later, like has been attempted with OAuth. It is, of course, possible to semi-successfully add another hack to the existing pile of hacks that make up the web, like was done with OAuth pre-2.0. But it’s damn near impossible to go beyond mere hacks, as we’ve seen with OAuth 2.0. To make something like OAuth 2.0 succeed, you need to throw away essentially all of the existing web stack, and rebuild it from scratch, but taking security into account from the very beginning. If you did that, however, you’d find that hacks like OAuth wouldn’t even be needed in the end!

    • Eran Medan says:
      July 26, 2012 at 10:57 am

      I agree with most of what you say, and the feeling is that the web stack is something that went out of control. I agree that CSS is not the simple joy I would expect it to be (try to align a div horizontally, or god forbid vertically, without 3 visits to Stack Overflow and jsFiddle). HTML5 is not answering many things we would hope it would, there are too many JS frameworks (most of them are great), and even though I disagree about JavaScript (well, I’m just reading JavaScript: The Good Parts, so I’m biased), I agree that having to console.log everything just to learn what an Ajax call returns (or reading the documentation, if there is any) is a pain, which doesn’t really exist in Java / Scala or other strongly typed languages.

      I was a strong supporter of Flex, and hoped JavaFX would succeed, but it turns out the web has a life of its own. The developer community, the Rails one especially in my opinion, has created something very elegant in a world without elegance: things like SASS/SCSS/LESS, CoffeeScript, Ruby on Rails / Grails / Django / Play framework, and let’s not forget jQuery, and even Dojo and the commercial but great Sencha / ExtJS. The people who did the web standards might have made errors, but the enterprise and the standards organizations are just a little behind, that’s all; they have much more legacy stuff (people and machines) to satisfy.

      I think what needs to happen is to have the “hacker community” meet the “software engineer II” community and share ideas and concepts, and not just be “us” and “them”. Corporate software developers solve hard, real problems; they just use XML and SOAP and XSLT and all that WS-* nonsense because they are a little behind, that’s all, and also because no one has shown them the right way. I work in a corporate, a financial corporate, and people do want to learn. They do get excited hearing about Node.js, about Thrift and protocol buffers, they like seeing the benefits of MongoDB, they appreciate the speed of Ruby on Rails / Grails, and in some rare cases they eventually adopt new things. I think instead of “hating Java” and overengineered software, hating Windows, SOAP, and relational databases (except MySQL), people should just show the enterprise engineers that there are other ways; most of them simply don’t know, as they have no time or simply think the world revolves around Java EE / Spring / Hibernate…

      • Anonymous says:
        July 26, 2012 at 4:22 pm

        Oh god, my kingdom for a few paragraph markers!

      • David Lawrence says:
        July 26, 2012 at 5:53 pm

        Just a helpful nudge with your js debugging (assuming you’re debugging in browser and not on a nodejs server app). Webkit developer tools, available in both Safari and Chrome, allow you to set breakpoints and debug your js much as you might debug compiled code in an IDE. No need to console.log all that stuff. I believe Firebug in Firefox provides the same functionality but it’s been a long time since I’ve used it.

        • Scott M. says:
          July 28, 2012 at 5:32 pm

          If you are an Eclipse user then there are packages you can install which give you all the bells and whistles of other IDEs while coding in javascript. Including syntax coloring, smart indenting, and best of all it includes an AST parser which gives you context-sensitive information, aka IntelliSense.

      • platypusfriend says:
        July 28, 2012 at 12:20 pm

        (try to align a div horizontally or god forbid vertically without 3 visits to stackoverflow and jsfiddle)

        …HA! Been there… so true

      • kjs3 says:
        July 28, 2012 at 2:37 pm

        Was there a point in there?

        • Robert Folkerts says:
          July 30, 2012 at 6:04 am

          This thread is about ‘web stack as one big hack’. The parent comment illustrates one hackish aspect of the stack. At least that is my inference.

        • Fletch says:
          August 6, 2012 at 12:38 am

          Well, it’s a big square of text, so I guess there is one point at each corner. So four.

        • MG says:
          September 1, 2012 at 12:43 am

          Apparently not for you.
          Do you have any?

      • Chris Scott says:
        July 29, 2012 at 10:33 pm

        Well-said.

    • Chris Adams says:
      July 28, 2012 at 9:33 am

      I used to feel similarly to you until I thought more about why so many heralded better technologies failed:

      XHTML failed because that “rigor and sensibility” never existed: in practice, XHTML was unusable for any real web site unless you gave up all hope of validation – due entirely to the various standards committees in the XML ecosystem trying to build in enormous amounts of complexity in an attempt to solve every problem in advance. In practice, this caused no problems except for people who wasted time trying to produce valid XHTML, and the rest of the web converged on HTML5 because it solves problems authors actually have. It turns out that not penalizing people for moving ahead of a committee is a great way to make new things which normal people want to use.

      HTTP might be miserably kludgey in your opinion but it’s solved more real problems than any other protocol in existence because of its simplicity and extensibility. Remember CORBA? Remember SOAP and the million WS-* protocols? All heralded as better ways to build Serious Applications but so complex that in practice no two implementations worked while simple HTTP applications work routinely on harder problems with an insane number of different clients.

      JavaScript: again, not great but it had enough flexibility that people have been able to bootstrap more advanced constructs than most critics thought were possible. Various self-proclaimed better languages tried to claim that position but again, they all failed because they added huge amounts of complexity to every application with little real benefit. Remember client-side Java, with 10-100x the code for any example, littered with AbstractFactoryFactory and developers at Sun full of hubris deciding that all existing graphics APIs were unusable so they needed to invent their own?

      As far as security goes, again, there are some areas for improvement but no other system in the history of computing has offered millions of people anything remotely as secure as the current web, warts and all.

      The simple fact is that building massive heterogenous distributed systems is hard: it’s easy to build castles in the air but actually building them takes longer and costs more than expected with generally little to show at the end. Much as people like to think otherwise, we’re simply not smart enough to build enormous systems top-to-bottom without constant real world feedback. You get that by making designs as simple as possible to solve a problem which real people are demonstrated to have and iterating quickly. Trying to redesign the web stack top to bottom is our field’s quest for perpetual motion.

      • Eric Liu says:
        August 2, 2012 at 1:50 am

        Well said.

      • Bill Burke says:
        August 2, 2012 at 4:04 pm

        Javascript doesn’t count as there is no other feasible option in the browser.

      • KatieP says:
        August 6, 2012 at 9:25 am

      Well said. People keep on looking for a silver bullet for a complicated universe. And with the security not even built into the in
