Explaining the OAuth Session Fixation Attack

By Eran Hammer Thursday April 23, 2009

There is a pretty good story behind this: how we found and managed the OAuth protocol security threat identified last week. In many ways, the story is much more important and interesting than the actual technical details of the exploit.

For everyone involved, this was a first-of-a-kind experience: managing a specification security hole (as opposed to a software bug) in an open specification, with an open community, and no clear governance model. Where do you even begin?

But right now, I know you want the technical details.

If you are reading this, I assume you have a basic understanding of how OAuth works. If you don’t, please take a few minutes to read at least the first two parts of my Beginner’s Guide to OAuth.

The first part will give you a general overview and the second will take you through the user workflow. It is the workflow that is important here. The rest of the guide deals with security and signatures, but the content of those posts, surprisingly, is not involved in this attack.

I’ll start with what this attack is not about: it does not involve stealing usernames or passwords; it does not involve stealing tokens, secrets, or Consumer Keys; in fact, no part of the OAuth signature workflow is involved. This is not a cryptographic attack. And it does not violate the principle of a user granting access to a specific application. All that remains intact.

You might think I am stalling, but it is critical for people to understand both what is broken and what is still secure (that is, has no known exploits). It is important because even after this is made public, there will be plenty of OAuth-powered services up and running, and I want you to feel safe using them.

Understanding the exact details of security threats is important in order to prevent exploits and fix the specification. But it is equally important to know what is working well and can be trusted.

For example, many applications use OAuth for 2-legged requests that do not involve user authorization and are unaffected by this threat. As a community effort, a big part of our work is to build adoption and support for the protocol. We need to fully disclose threats, and also make sure the reputation of the protocol is not damaged by misunderstandings and speculations.

The identified threat is a session fixation attack, empowered by a social engineering attack. The basic idea is that an attacker tricks an application using an OAuth API (a Consumer) into giving it access to someone else’s resources (via an Access Token). The attacker never gets the Access Token itself, just the ability to use the application. But since the application has the Access Token, the attacker can use it to access the victim’s resources.

The OAuth authorization workflow consists of three steps:

  1. The user initiates the flow by requesting the application to access his resources. This triggers the application to request a Request Token from the provider and redirect the user to the provider for authorization.
  2. The user signs into the provider and grants access to the application. If the user is educated about phishing attacks, he can verify that he is in fact giving his password to the provider and no one else.
  3. After granting access, the user is redirected back to the application to complete the flow and give it access to his data.

The only thing that binds these three steps together is the Request Token in the authorization URI and callback URI. In both cases, the call is not signed and is easy to guess. The problem is that there is nothing at all to enable the Consumer or Provider to ensure that these three steps are performed by the same user. While the application can use cookies and other session management tools to ensure that the first and last steps are performed by the same user, it has no way to make sure the middle step is performed by the same person as the first or last.
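
To make the three steps concrete, here is a minimal sketch (in Python) of the Consumer side of the flow as it stood at the time. The endpoint URLs and the fetch_request_token() helper are made-up placeholders rather than any real provider’s API; a real Consumer would sign the Request Token call with an OAuth library.

    from urllib.parse import urlencode

    # Illustrative placeholder endpoints, not a real provider's API.
    REQUEST_TOKEN_URL = "https://provider.example/oauth/request_token"
    AUTHORIZE_URL = "https://provider.example/oauth/authorize"
    CALLBACK_URL = "https://consumer.example/oauth/callback"

    def fetch_request_token():
        """Placeholder for step 1: the signed call in which the Consumer
        obtains an unauthorized Request Token (signed with its Consumer Key
        and secret)."""
        return "abc123", "request-token-secret"

    def start_flow(session):
        token, secret = fetch_request_token()
        session["request_token"] = token
        session["request_token_secret"] = secret

        # Step 2: redirect the user to the Provider for authorization. Note
        # that this URI is NOT signed -- anyone who knows the Request Token
        # can reconstruct it.
        return AUTHORIZE_URL + "?" + urlencode({
            "oauth_token": token,
            "oauth_callback": CALLBACK_URL,
        })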

A normal OAuth Flow looks like this:

[Diagram: the normal OAuth authorization flow]

An attacker starts the process by using an application account and asking to initiate the flow, which results in the application obtaining a Request Token and redirecting the attacker to the provider. At this point, the attacker stops and takes note of the Request Token included in the redirection URI. The attacker may also modify the callback, but we will get to that later.

The attacker then uses social engineering to trick a victim into following that link (the authorization URI from the redirection). This can be as simple as a blog post with a review of the application, inviting people to try it out. When someone clicks on that link, they are sent to the provider to authorize access.

Since visiting the provider is what he wanted to do anyway, the victim will not realize that he should have started at the application itself, and will go ahead and sign into the provider. Because of how we train people to look for phishing attacks, even an educated user will check and find that he is indeed at the right place.

The provider will then ask the user to grant access, identifying the right application. This will increase the comfort level of the user, since so far everything checks out: the right provider and the right application. But as soon as the user grants access, the attacker can construct the callback URI (either from learning how the application works or from the callback parameter provided in the redirection URI) and return to the application.

At this point, the attacker’s application account is associated with a valid and authorized victim Access Token. Remember: the attacker does not have the Access Token or any of the secrets. It was never in possession of the Request Token secret, and it does not make signed OAuth requests. All it has is an application account associated with the victim’s resources.
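
To make the attack mechanics concrete, here is a rough Python sketch of the two URIs the attacker actually constructs. All URLs are hypothetical; the point is that both the authorization link and the callback are unsigned and built entirely from the Request Token.

    from urllib.parse import urlencode

    request_token = "abc123"  # captured from the Consumer's redirect to the Provider

    # 1. The link the attacker plants for the victim (blog post, e-mail, ...):
    victim_link = "https://provider.example/oauth/authorize?" + urlencode({
        "oauth_token": request_token,
    })

    # 2. The callback the attacker loads in its own session with the Consumer
    #    once it believes the victim has authorized -- either guessed from how
    #    the Consumer works or copied from the oauth_callback parameter in the
    #    original redirect:
    attacker_callback = "https://consumer.example/oauth/callback?" + urlencode({
        "oauth_token": request_token,
    })

    # Visiting attacker_callback makes the Consumer exchange the now-authorized
    # Request Token for the victim's Access Token and attach it to the
    # attacker's application account.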

The Attack Flow looks like this (simplified):

[Diagram: the attack flow (simplified)]

Let’s talk about callbacks a bit. OAuth includes an optional callback parameter allowing Consumers to set a callback when they send the user to the provider for authorization. Providers can implement callback support in one of several ways:

  1. Allow any callback at all.
  2. Allow any callback within a pre-registered (and verified) domain.
  3. Allow only a static pre-registered callback and ignore the parameter.
  4. Not allow callbacks at all, requiring manual user action.

If the provider uses the first option, the attacker can simply change the callback. Remember, the authorization step is not signed, allowing the attacker to change the callback parameter. This will send the user to a page controlled by the attacker, letting the attacker know when it can go back to the application and finish the job.
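
For example (hypothetical URLs), the attacker only has to swap the callback before passing the link along:

    # The authorization URI handed to the victim, with the callback pointed at
    # a page the attacker controls (all URLs are hypothetical):
    tampered_link = (
        "https://provider.example/oauth/authorize"
        "?oauth_token=abc123"
        "&oauth_callback=https%3A%2F%2Fattacker.example%2Fnotify"
    )
    # When the victim grants access, the Provider redirects his browser to
    # attacker.example, which tells the attacker it is now time to visit the
    # Consumer's real callback and finish the job.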

If the provider uses the second or third option, this becomes a race condition as to who will make it back to the application first: the attacker or the victim. Now, since the attacker has no way of knowing exactly when the user grants access, it will have to try multiple times until it works. It is up to the Consumer application to handle such brute force attacks properly by voiding the entire transaction. But this requires the application to expect and handle this case.

Another point to consider in the second option is that sometimes domains have an open redirector which will redirect a request based on a given parameter. For example, this is done to count how many people click on ads or outbound links. An attacker can use such a redirector to set the callback to a URI at the pre-registered domain, which in turn will send the victim’s browser to an attacker-controlled site.
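
As a hypothetical illustration, a click-tracking redirector on the registered domain is enough to defeat the domain check:

    # Passes a "callback must be on consumer.example" check, but the open
    # redirector forwards the victim's browser on to the attacker:
    callback = "https://consumer.example/out?url=https%3A%2F%2Fattacker.example%2Fnotify"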

If the provider uses the fourth option, it is similar to the previous two, but the race condition becomes a lot easier for the attacker to win, since it will take the victim longer to go back to the application (there is no automatic redirection back). Also, in manual ‘press here when done’ cases, applications tend to allow multiple attempts because users make more mistakes. In such cases, the application will simply try to exchange the Request Token for an Access Token and will fail until access is authorized by the user. If the provider doesn’t void the transaction on the first failed attempt, it increases the exposure.

Now, we talked about the attack and we talked about callbacks. But that is not all.

Even if the attacker doesn’t get to be first to return to the application or find out when access has been granted, the application may have a critical design flaw. Some applications use the identity of the user who started the flow when they associate the Access Token with the local account. What this means is that even if the victim is the one to return to the application, the application will still attach his Access Token to the attacker’s account, because it will look up the Request Token in the database and use that information to continue.
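
Here is a sketch of that flawed callback handler. The storage layer (db) and the exchange helper are hypothetical stand-ins, not any real framework’s API.

    def exchange_for_access_token(request_token_record):
        """Placeholder for the signed Request Token -> Access Token exchange."""
        return "victim-access-token"

    def handle_callback(request_args, session, db):
        record = db.lookup_request_token(request_args["oauth_token"])
        access_token = exchange_for_access_token(record)

        # FLAWED: attach the Access Token to whoever *started* the flow (the
        # attacker), based only on the stored Request Token record.
        db.attach_access_token(user=record.initiating_user, token=access_token)

        # Safer (though still not sufficient -- see the timing attack below):
        # attach it to the user whose session is actually completing the callback.
        # db.attach_access_token(user=session["current_user"], token=access_token)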

If we put it all together, even if an application:

  • Makes sure to use the account of the user coming back from the callback and not that of the one who started
  • Checks that the user who starts the process is the same as the one who finishes it
  • Uses an immutable callback set at registration

It is still exposed to a timing attack, in which the attacker tries to slide in between the victim authorizing access and his browser making the redirection call. There is simply no way to know that the person who authorizes is the same person coming back. None whatsoever.

And that’s pretty much it.

To get you started thinking about a solution: a few community members are thinking about fixing this by adding an unpredictable parameter, generated at the Service Provider, to the callback URI on the way back. The Consumer will need this new extra parameter to exchange the Request Token for an Access Token, ensuring that the real user has to return to the application to complete the flow.
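
A rough Python sketch of that idea on the Provider side, with illustrative names (nothing here is final; it is just one way the unpredictable value could work):

    import secrets

    verifiers = {}  # request_token -> verifier, kept by the Provider

    def on_user_authorized(request_token):
        # Generate an unguessable value the moment the user grants access.
        verifier = secrets.token_urlsafe(16)
        verifiers[request_token] = verifier
        # Appended to the callback: ...?oauth_token=<token>&oauth_verifier=<verifier>
        # (or shown to the user to type into the Consumer manually).
        return verifier

    def exchange_access_token(request_token, presented_verifier):
        # Refuse the Access Token exchange unless the Consumer presents the
        # value that only the returning user could have delivered.
        if verifiers.get(request_token) != presented_verifier:
            raise PermissionError("missing or wrong verifier")
        return "access-token-for-the-authorizing-user"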

We’ll meet back here tomorrow to continue talking about ideas on how to fix it with a protocol revision. Meanwhile, think about this for a bit, join the conversation on the OAuth mailing list, talk about it with your coworkers, or blog your thoughts.

Be safe.

Brian Eaton, Dirk Balfanz, and Chris Messina provided valuable feedback on this post.



  1. Mike Cohen says:
    April 23, 2009 at 7:22 am

    Wouldn’t verifying that all requests come from the same IP address and having them expire after a certain length of time offer some level of protection?

  2. Eran Hammer-Lahav says:
    April 23, 2009 at 8:18 am

    Whose IP are you going to track? The attacker is performing the first and last step, so his IP will match. The provider has no way of knowing the IP of the person at the application side. Nor do they have a way of communicating it.

  3. Praveen Alavilli says:
    April 23, 2009 at 11:15 am

    If the Consumer explicitly checks whether a callback has been received for a given {Request Token + user session} combination before it tries to exchange the Request Token for an Access Token, doesn’t that help in reducing the race conditions? Of course the Consumer must disregard the Request Token if a callback is received for a Request Token that has not yet been authorized – since the callback shouldn’t have been called until the user authorizes the token in normal cases.
    But again this requires callback to be made mandatory, which doesn’t work for non-browser based applications.

  4. Eran Hammer-Lahav says:
    April 23, 2009 at 11:25 am

    But how will the Consumer know that whoever is making the callback (which is known and easy to imitate) is the same person who authorized access? If the Consumer invalidates everything, including any Access Token already issued in connection with the transaction, when something goes wrong, it will help. But that means it has to keep Request Tokens around longer to look them up.
    The thing about the race condition is that you can make it much harder to exploit, but you can’t close it.

  5. David Recordon says:
    April 23, 2009 at 3:25 pm

    Thanks for writing this post and the attacker flow drawing is definitely useful!

  6. Erhan Kartaltepe says:
    April 24, 2009 at 1:30 pm

    Your post here and the followup on the OAuth site explain the session fixation attack quite well. The diagrams go a long way! As a silver lining, it is worth pointing out that the fact that this attack was even detected shows OAuth is gaining traction (and the scrutiny that comes along for the ride).
    For a “quick fix” to this issue, developers may want to consider using the MashSSL protocol underneath OAuth. There’s a post on blog.ics.uta.edu that covers how this particular attack can be mitigated in that way.
    Erhan

  7. bernd.eckenfels.net says:
    April 25, 2009 at 1:30 pm

    The Flickr API seems to not suffer from this problem:
    a) it redirects only to a static (APIKey specific) callback
    b) the callback receives an unpredictable token (frob) from the provider
    c) only the consumer can sign an API request to turn that small token into a full one
    There is no such thing as a session started with the victim in this protocol (only with the consumer, of course).
    Greetings
    Bernd

  8. Erhan Kartaltepe says:
    April 25, 2009 at 9:00 pm

    Ha, that should be blog.ics.utsa.edu. Silly typos!

  9. Drummond Reed says:
    April 26, 2009 at 6:16 pm

    Eran, I agree, the diagrams are very helpful. But shouldn’t the second-to-last step of your first diagram say, “User Redirected Back to Consumer – consumer/callback/oauth_token=abc123”?
    =Drummond

  10. www.google.com/accounts/o8/id?id=AItOawmTD3ExjQLPWQ2aSBAY8JjCAA1FrUqv-Ws says:
    April 27, 2009 at 8:00 am

    Comparing the client IP who requested “request token” and the client IP who requested authorization by log-in would help, wouldn’t it?

  11. Eran Hammer-Lahav says:
    April 27, 2009 at 8:14 am

    Nope. All it will do is make sure the user who started the process is the same as the user who is finishing the process. Depending on how you implement the callbacks, this might be limited to a small time window, but still not fully preventable. Also, asking the user to log in again upon coming back is bad user experience, and assumes there is a local sign-in to begin with.

  12. Eran Hammer-Lahav says:
    April 29, 2009 at 10:09 am

    Yes, you are correct. Diagrams have been updated. Thanks!

  13. ersin.myopenid.com says:
    May 19, 2009 at 7:25 am

    Hey y’all!
    We’re using OAuth for RESTful Doodle ( doodle.com/about/APIs.html ) and are thinking about potential solutions as well.
    In the attack, the consumer is interacting with user Eve (the attacker) and the service provider is interacting with user Alice (the victim).
    Is the following a viable defense?
    1. When the consumer asks for a request token, it MUST include who this token is going to be for in a human-readable fashion (“requesting access to user who is known to me as eve@consumer.com“).
    2. When the service provider asks for user Alice’s authorization, it MUST show her who this token is going to be for in a human-readable fashion (“By authorizing this request, you not only grant access to consumer, but also to eve@consumer.com. If you are not eve@consumer.com, please abort!”).
    3. Hopefully, Alice will check whether she would grant the consumer access for herself (“alice@consumer.com”) or not and act accordingly.
    Thanks,
    Paul

  14. Eran Hammer-Lahav says:
    May 19, 2009 at 7:41 am

    Hi Paul,
    This will just make the social engineering part of the attack more complex (it still assumes the user pays attention to the text of the authorization page which isn’t always true). It also requires that the consumer maintain some sort of identity for its users which isn’t always the case.
    We now have a draft proposal for solving the attack called Revision A. Check it out: oauth.googlecode.com/svn/spec/core/1.0a/drafts/3/oauth-core-1_0a.html

  15. ersin.myopenid.com says:
    May 30, 2009 at 6:55 am

    Thanks a lot, Eran!
    I’m going to point our Jazoon audience ( jazoon.com/en/conference/presentationdetails.html?type=sid&detail=7860 ) to the draft proposal as well.
    Hope to see you there,
    Paul
    PS: We came up with the talk last year, when there was not as much good material on OAuth available as there is now. Hence the abstract …

  16. www.google.com/accounts/o8/id?id=AItOawlpJrD2TnKPHjGzY0MZPaDlnJKrbBSjvuA says:
    June 23, 2009 at 1:23 am

    When is the final draft going to be released with the fix to this problem? I am trying to promote/support OAuth through my company’s web services. thx.

  17. steelman says:
    October 23, 2009 at 3:11 am

    As far as I remember and associate right, the first (the simpler) version of OpenID had a similar flaw. So now there is a cryptographic signature employed to ensure the authenticity of the returning request. This is the “check that the user who starts the process is the same as the one who finishes it” part. Am I right?

  18. Rodrigo Gama says:
    March 26, 2010 at 8:17 am

    Eran, I was thinking a little about this and got really amazed by the simplicity of the attack. Then I thought of something even simpler (maybe somebody realized this before and I’m being just silly, but I guess it doesn’t harm to ask).
    What if, in the callback URL, I cloned the provider’s login page with a ‘Username/Password didn’t match’? This would surely trick the user into re-typing the username/password.

    I began writing about it, but thought it would be better to talk about it with you beforehand.

  19. Andrés G. Aragoneses says:
    December 17, 2010 at 7:21 am

    Is there a typo here? Shouldn’t “the attacker of the victim” be “the attacker OR the victim”?

    • Eran Hammer-Lahav says:
      February 3, 2011 at 11:19 am

      Thanks. Fixed.

  20. Dave Bacher says:
    May 26, 2011 at 3:59 pm

    LOL again…

    For service-to-service transactions, you have an originating party, a relying party and an authorizing party. The OP, RP and AP. For service-to-service transactions, referrer tracking will eliminate the attack (referrer will be wrong 100% of the time if the user clicks a link in e-mail… if an application is sitting in the middle, then the attack is irrelevant since the application most likely controls the browser being used to perform the OAUTH in the first place, since it doesn’t save access tokens as files).

    for user-to-service transactions, the IP address is “one I’ve seen the user on before.” If I’ve not seen the user on an IP address before, then I have to ask them “hey, is this OK” out of band. That’s how you protect data. If an access occurs from an IP address you’ve not seen before, getting out-of-band confirmation does not require a lot of effort — send an e-mail, SMS, IM, etc. — and makes sure that it’s OK to release the information.

    The whole issue with Google, Yahoo, Facebook, etc. is that they don’t get this fundamental thing. Any time you authorize anything to access any data, you’re intentionally creating a leak. What you are doing with OAUTH — or similar protocols — is trying to route the leak away from the structure over to a drain, so you don’t ruin the carpeting. You’re attempting to leak out the data, intentionally.

    If you leak out data when you’re not supposed to, that’s your fault.

    The worst case is an application under the user’s control, running on their computer, because that’s the easiest target. The second worst case is a network like Google or Facebook that’s popular, but has staff/administrators who would just as soon publish all their data on the public Internet with no controls at all, and then apologize for it afterwards.

    The deal is — just because you’re issuing this OAUTH token does not imply that any application that gets it should be able to use it. It is perfectly reasonable — given that effectively this is just a password — that you would combine this with some second factor.

    For example, if you require a client certificate to be used when issuing the OAUTH token, you could easily bind that certificate to that token, and require the two to always be provided together. Or for service-to-service, you could require a client certificate, and require it to match the RDNS.

    user-to-service will forever be insecure just due to how it works, and users’ abundant happiness to install whatever software you point them at. O.o

    But seriously — if the attacker grabs the link and tosses it in an e-mail, the referrer won’t match up. Since I had to hit the OAUTH endpoint to get the temporary token already, you know what client ID I’m using to make the request — and so you know what the referrer should be. The attacker’s referrer will be some other value.

    For the user-to-application case, the link won’t have the right user-agent — it’ll have a browser user agent instead of the application’s user agent — and so again, you should be able to catch that as well. It seems to me like it would be reasonable to require (for user-to-app cases) some PPID in the HTTP headers as well, which the attacker would not possess and therefore would prevent them from using the resultant token. Browser fingerprinting could be used instead, and is reasonably accurate.

    I mean the whole thing here is that the protocol’s job is really just to issue the token. Renewals, expiration policies, link expirations, point of origination restrictions, etc. are all reasonable features — but aren’t OAUTH’s responsibility to cover.

    For example, when I issue OAUTH tokens, they always (100% of the time) have an expiration date. I don’t allow the user an option of “forever,” and I don’t allow over 365 days. I do allow “renew and tell me” — which means they get notified at least once a year, and get reminded of who has tokens that can act on their behalf.

    As with all remote access — doesn’t matter the technology — I generate an out-of-band authorization if I see a new IP address, and in some cases even if it is an IP address I’ve seen before. Additionally, for service-to-service transactions, when I establish the client ID, I require RDNS. And the shared secret only works from that RDNS.

    And yes, it means if someone is in a shared hosting environment, other servers might be able to use their client ID. But that’s still less than “any server on the public Internet that happens to have stolen the token.”

    Especially since for the 90% case, if someone breaks in to the server and gets the stored tokens, they’ll also get the other stuff.

    We use an alternative technique for our client app, incidentally, because we do not want to open a web browser, because users have enough to worry about without us popping up some skinned browser that doesn’t give them enough information. It’s like the irresponsible use of iframes to collect OpenID information. Sure, it looks great — but where the hell does that form submit to? Think I can’t fake the Facebook authorization prompt with a great deal of accuracy, or the pop up login box? Because I’m pretty sure that once you’re in an iframe, none of the major browsers give you any indication as to where your data is going.

    (for this same reason, I hate login forms that point at https URL’s embedded in http pages, displaying locks on forms, etc. — redirect the user to the other site, so they have an address bar… or use a pop-up window and open a whole new browser window… but this idea of rendering prompts inside, from some third party website, just rubs me wrong — I don’t see any way that it could ever end well)
