Friday, 5 May 2017

OAuth Worm II - The revenge

We all know about the massive Google Docs phishing attack that hit about 1 million accounts, right?

Well, this really "sophisticated attack" (really??) was based on a widely deployed Internet protocol named OAuth. It also turns out that during the early stages of standardization someone reported this very attack vector, but as often happens he was ignored. 

Back in 2015 I also reported another "hidden feature" of OAuth that turns an OAuth server into an open redirector, which makes phishing a piece of cake.

Yesterday I was tweeting about this, and today I decided to imitate Eugene Pupov (lol) and work on my master's thesis project. This is the resulting mail. Have fun:

Fret not, go ahead and click and you will be redirected to:

which is not a Github page but rather one controlled by me.

Well that's all folks. For more OAuth phishing follow me on Twitter

P.S. For the record, some other folks from the OAuth working group and I proposed a draft (3 years ago!!!) that should close this open redirector. Let's see.


Thursday, 20 April 2017

Meh : CSRF in Facebook Delegated Account Recovery

Note: this is going to be a quick post.

This year, at the Enigma 2017 conference, Facebook introduced a way to move account recovery beyond email and the "secret" question.
After the presentation they quickly moved from talk to practice and presented the first integration partner: Github.

These days I have seen a lot of press around this, and both Facebook and Github open sourced their implementation and specification (also presented at F8).
Well, it turned out that the Facebook side was susceptible to Cross Site Request Forgery.
Really simple explanation:

<img src="">

Then it is enough for the victim to visit and they will end up with a new Github token of the attacker under

You might say: nice, but what's the threat here?
Indeed, that is exactly what Facebook replied. Nevertheless they fixed the issue by adding an additional confirmation page.

For the record, the threat here is a login CSRF against a Github account, which is kind of

That's all folks. For more Meh follow me on Twitter.

Monday, 10 April 2017

CSRF in Facebook/Dropbox - "Mallory added a file using Dropbox"

tl;dr  Facebook Groups offers the option to upload files directly from the Dropbox account. This integration is done using the OAuth 2.0 protocol and suffered from a variant of the classic OAuth CSRF (defined by Egor Homakov as the Most Common OAuth2 Vulnerability); see the video below:


 Facebook Groups offers the option to upload files directly from the Dropbox account:

This allows browsing the Dropbox account via the browser

and posting a specific file to the group.
This integration is done using a variant of the OAuth 2.0 protocol seen on this blog many, many times. Once more: OAuth is an access delegation protocol standardized under the IETF umbrella. A typical OAuth flow looks like this:
From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyright 2017

Usually the client initiates the OAuth flow in the following way:

From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyright 2017

then, after the resource owner has authorized the client, the authorization server redirects the resource owner back to the client with an authorization code:
From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyright 2017

Then the OAuth dance continues....

Facebook/Dropbox integration

In the Facebook/Dropbox integration, Dropbox is the client while Facebook is the Authorization/Resource Server.

The flow is a pretty standard OAuth flow with one exception. Since Dropbox is the client, it should be in charge of initiating the dance, but the reality is:

Indeed it is Facebook that initiates the flow, doing:

Everything else is as it is supposed to be:

CSRF in OAuth 2

The eagle-eyed reader will surely notice that the initiation link, aka

lacks one really important piece (in OAuth-land), namely the state parameter. According to the OAuth core specification, this parameter is:

An opaque value used by the client to maintain state between the request and callback. The authorization server includes this value when redirecting the user-agent back to the client. The parameter SHOULD be used for preventing cross-site request forgery (CSRF).
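
To make this concrete, here is a minimal sketch (plain Python, with made-up names, not from any real library) of how a client can mint and later check the state parameter:

```python
import secrets
import hmac

# Hypothetical client-side handling of the OAuth state parameter.
# The session dict and function names are illustrative.

def build_authorization_url(session, authz_endpoint, client_id, redirect_uri):
    # Generate an unguessable state value and bind it to the user's session.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    return (f"{authz_endpoint}?response_type=code"
            f"&client_id={client_id}"
            f"&redirect_uri={redirect_uri}"
            f"&state={state}")

def validate_callback(session, returned_state):
    # Pop makes the state single-use; compare in constant time.
    expected = session.pop("oauth_state", None)
    return expected is not None and hmac.compare_digest(expected, returned_state)
```

The point is simply that the authorization server echoes state back unchanged, so a forged callback (which cannot know the victim's session-bound value) fails the check.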

The best way to see this CSRF attack in action is through a picture:

From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyright 2017
You can also find a great introduction to this attack in the Most Common OAuth2 Vulnerability by Egor Homakov.

CSRF in Facebook/Dropbox integration

Before describing the specific attack we need to highlight one really important thing. The classic protection against CSRF in OAuth (aka the use of the state parameter) would not work in this case. The reason is that, as we have already seen, the flow is "weirdly" initiated by Facebook and not by Dropbox, so there is no way for Dropbox to check that the right state parameter is bounced back. So wazzup? The attacker will forge a page with a malicious link (containing his own authorization code) in

<img src="
#_=_" />

and after the victim visits this address, his Dropbox file upload post will be done in the name of the attacker!! See:

But wait a second, why is this actually the case? Well, it turns out there was a strange issue in Dropbox: the access token was cached indefinitely. So once the crafted authorization code was bound to the victim resource owner, no matter whether a legit authorization code was later employed, Dropbox would not trade it and would continue to use the old malicious access token to post the file to Facebook!!

Disclosure timeline

Little rant: reporting integration issues is always a challenge. It is not always clear who the culprit is. In this case the culprit was clearly Dropbox while the victim was Facebook. The paradox was that, since Dropbox was not affected by the issue, it was not extremely interested in hearing about it. On the Facebook side, even if they were clearly the target, they could not do much without the help of Dropbox. And me? Well, I was right in the middle :)

13-01-2017 - Reported to Facebook security team.
14-01-2017 - Reported to Dropbox security team via  Hackerone.

Dropbox part I 

15-01-2017 - Dropbox replied: "This is a bug in Facebook's use of our API rather than the Dropbox API itself."
15-01-2017 - I replied to Dropbox saying: "It is not Facebook using the Dropbox API, it is quite the opposite."
15-01-2017 - Dropbox replied: "I will take a look again and reopen if we decide its valid." and -5 points!!!!!!!!
15-01-2017 - While I do not care too much about those points, I replied to Dropbox saying: having -5 reputation points for this is rather frustrating.....
15-01-2017 - Dropbox reopened the report and closed it as Informative (so I got +5 points back :))


from 20-01-2017 to 25-02-2017 - Back and forth between me and Facebook in order to have them reproduce the issue.
25-02-2017 - Facebook closed the issue saying: "We're able to reproduce the behavior you described, but this may be an issue on the Dropbox side (in particular the /fb/filepicker endpoint) which we do not control."
04-03-2017 - Asked Facebook if there was any chance they could contact Dropbox and explain the situation.

Dropbox part II 

07-03-2017 - Reported (once more) to Dropbox security team via Hackerone.
22-03-2017 - Dropbox rewarded asanso with a $1,331 bounty.

10-04-2017 - Public disclosure. 


This was quite a ride with a happy ending eventually! I would like to thank the Facebook and Dropbox security teams and especially Neal Poole from Facebook Security.

That's all folks. For more OAuthy goodies, follow me on Twitter.

If you like OAuth 2.0 and/or you want to know more about it here you can find a book on OAuth that Justin Richer and myself have been writing on the subject.

Monday, 13 March 2017

Critical vulnerability in JSON Web Encryption (JWE) - RFC 7516

tl;dr If you are using go-jose, node-jose, jose2go, Nimbus JOSE+JWT or jose4j with ECDH-ES, please update to the latest version. Many software libraries implementing RFC 7516 aka JSON Web Encryption (JWE) used to suffer from a classic Invalid Curve Attack. This would allow an attacker to completely recover the secret key of a party using JWE with Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES): the sender could extract the receiver’s private key.


In this blog post I assume you are already knowledgeable about elliptic curves and their use in cryptography. If not, Nick Sullivan's A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography or Andrea Corbellini's series Elliptic Curve Cryptography: finite fields and discrete logarithms are great starting points. If you then want to climb further up the elliptic learning curve, including the related attacks, you might also want to visit . Also, DJB and Tanja's talk at 31c3 comes with an explanation of this very attack (see minute 43), and Juraj Somorovsky et al.'s research can come in handy for learners.
Note that this research was started and inspired by Quan Nguyen from Google and then refined by Antonio Sanso from Adobe.


JSON Web Token (JWT) is a JSON-based open standard (RFC 7519) defined in the OAuth specification family used for creating access tokens. The Javascript Object Signing and Encryption (JOSE) IETF expert group was then formed to formalize a set of signing and encryption methods for JWT that led to the release of  RFC 7515 aka  JSON Web Signature (JWS) and RFC 7516 aka JSON Web Encryption (JWE). In this post we are going to focus on JWE.
A typical JWE is a dot-separated string that contains five parts:
  • The JWE Protected Header
  • The JWE Encrypted Key
  • The JWE Initialization Vector
  • The JWE Ciphertext
  • The JWE Authentication Tag
An example of a JWE taken from the specification would look like:


This JWE employs RSA-OAEP for key encryption and A256GCM for content encryption:

This is only one of the many possibilities JWE provides. A separate specification called RFC 7518 aka JSON Web Algorithms (JWA) lists all the possible available algorithms that can be used. The one we are discussing today is the Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES).  This algorithm allows deriving an ephemeral shared secret (this blog post from Neil Madden shows a concrete example on how to do ephemeral key agreement).
In this case the JWE Protected Header also lists the elliptic curve used for the key agreement:

Once the shared secret is calculated the key agreement result can be used in one of two ways:

1. directly as the Content Encryption Key (CEK) for the "enc" algorithm, in the Direct Key Agreement mode, or

2. as a symmetric key used to wrap the CEK with the A128KW, A192KW, or A256KW algorithms, in the Key Agreement with Key Wrapping mode.

This is out of scope for this post, but as for the other algorithms, the JOSE Cookbook contains examples of ECDH-ES usage in combination with AES-GCM or AES-CBC plus HMAC.


As highlighted by Quan during his talk at RWC 2017:

Decryption/Signature verification’ input is always under attacker’s control

As we will see throughout this post, this simple observation will be enough to fully recover the receiver’s private key. But first we need to dig a bit into elliptic curve bits and pieces.

Elliptic Curves

An elliptic curve is the set of solutions defined by an equation of the form

y^2 = x^3 + ax + b

Equations of this type are called Weierstrass equations. An elliptic curve would look like:

y^2 = x^3 + 4x + 20

In order to apply the theory of elliptic curves to cryptography we need to look at elliptic curves whose points have coordinates in a finite field Fq. The same curve then looks like this over the finite field of size 191:

y^2 = x^3 + 4x + 20 over Finite Field of size 191
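
If you want to play with this tiny curve yourself, here is a quick plain-Python sketch (illustrative only) that enumerates its points:

```python
# Sketch: the affine points of y^2 = x^3 + 4x + 20 over F_191,
# found by brute force (feasible only because the field is tiny).
p, a, b = 191, 4, 20

def is_on_curve(x, y):
    return (y * y - (x ** 3 + a * x + b)) % p == 0

points = [(x, y) for x in range(p) for y in range(p) if is_on_curve(x, y)]
# By Hasse's theorem the count stays within 2*sqrt(p) of p + 1.
```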

For JWE, the elliptic curves in scope are the ones defined in Suite B and (only recently) DJB's curve.
Among those, the curve that has so far seen the most usage is the famous P-256 (defined in Suite B).
Time to open Sage. Let's define P-256:

The order of the curve is a really huge number, hence there isn't much an attacker can do with this curve (if the software implements ECDH correctly) in order to guess the private key used in the agreement. This brings us to the next section:

The Attack

The attack described here is really the classical Invalid Curve Attack. It is as simple as it is powerful, and it takes advantage of the mere fact that the Weierstrass addition formulas used for scalar multiplication do not take into consideration the coefficient b of the curve equation:

y^2 = x^3 + ax + b

The original P-256 equation is

As we mentioned above, the order of this curve is really big. So we now need to find a more convenient curve for the attacker. Easy peasy with Sage:

As you can see from the image above, we just found a nicer curve (from the attacker's point of view) whose order has many small factors. We then found a point P on that curve that has a really small order (2447 in this example).
Now we can build malicious JWEs (see the Demo Time section below) and extract the value of the secret key modulo 2447 with complexity O(2447).
A crucial part for the attack to succeed is having the victim repeat his own contribution to the resulting shared key. In other words, the victim's private key should be the same for each key agreement. Conveniently enough, this is exactly how Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES) works. Indeed, ES stands for Ephemeral-Static, where Static is the contribution of the victim!
At this stage we can repeat these operations (find a new curve, craft malicious JWEs, recover the secret key modulo the small order) many, many times, collecting information about the secret key modulo many, many small orders.
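
To get a feeling for the recovery step, here is a toy sketch in plain Python with made-up tiny parameters (p = 191 instead of the P-256 prime). Note how the addition formulas never touch the coefficient b, which is exactly what the attacker exploits:

```python
# Toy invalid-curve sketch. The "honest" curve is y^2 = x^3 + 4x + 20
# over F_191, but the add/double formulas below only ever use a, not b.
p, a = 191, 4

def ec_add(P, Q):
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def order_of(P):
    n, Q = 1, P
    while Q is not None:
        Q, n = ec_add(Q, P), n + 1
    return n

# The attacker's point is NOT on the honest curve (it satisfies
# y^2 = x^3 + 4x + 127 mod 191), yet the formulas accept it happily.
P = (5, 9)
n = order_of(P)

d = 77                 # the victim's static private key (toy value)
shared = ec_mul(d, P)  # what the decryption oracle effectively leaks
recovered = next(k for k in range(n) if ec_mul(k, P) == shared)
# recovered equals d modulo the small order n
```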
And finally Chinese Remainder Theorem for the win!
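
A sketch of that final CRT step (pure Python; the moduli are made-up small point orders, with 2447 being the one from the example above):

```python
from math import prod

# Combine residues d mod n_i into d mod (n_1 * ... * n_k).
# The moduli must be pairwise coprime.

def crt(residues, moduli):
    N = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        m = N // n
        x += r * m * pow(m, -1, n)
    return x % N

moduli = [2447, 2503, 2579]
secret = 123456789                      # pretend private key (illustrative)
residues = [secret % n for n in moduli]
# Once prod(moduli) exceeds the secret, crt(residues, moduli) == secret.
```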
At the end of the day, the issue here is that the specification, and consequently all the libraries I checked, failed to validate that the received public key (contained in the JWE Protected Header) is on the curve. See the Vulnerable Libraries section below for how the various libraries fixed the issue.
Again you can find details of the attack in the original paper.

Demo Time




In order to show how the attack would work in practice I set up a live demo on Heroku. In it, a Node.js server app is up and running that will act as the victim. The assumption is this: in order to communicate with this web application you need to encrypt a token using Key Agreement with Elliptic Curve Diffie-Hellman Ephemeral Static (ECDH-ES). The server's static public key needed for the key agreement is in

An application that wants to POST data to this server first needs to do a key agreement using the server's public key above and then encrypt the payload with the derived shared key using the JWE format. Once the JWE is in place, it can be posted to . The web app will respond with status 200 if all went well (namely if it can decrypt the payload content) and with status 400 if for some reason the received token is missing or invalid. This acts as an oracle for any potential attacker, in the way shown in The Attack section above.
I set up an attacker application in .
You can visit it, click the 'Recover Key' button, and observe how the attacker is able to recover the server's secret key piece by piece. Note that this is only a demo application, so the recovered secret key is really small in order to reduce the waiting time. In practice the secret key will be significantly larger (hence it will take a bit longer to recover).
In case you experience problems with the live demo, or simply want to see the code under the hood, you can find the demo code on Github:

Vulnerable Libraries

Here you can find a list of libraries that were vulnerable to this particular attack:
Some of the libraries were implemented in a programming language that already protects against this attack by checking that the result of the scalar multiplication is on the curve:

* Latest version of Node.js is immune to this attack. It was still possible to be vulnerable when using  browsers without web crypto support.

** Affected was the default Java SUN JCA provider that comes with Java prior to version 1.8.0_51. Later Java versions and the BouncyCastle JCA provider are not affected.

Improving the JWE standard

I reported this issue to the JOSE working group via a mail to the appropriate mailing list. We all seem to agree that an errata listing the problem is at least welcome. This post is a direct attempt to raise awareness about this specific problem.


The author would like to thank the maintainers of go-jose, node-jose, jose2go, Nimbus JOSE+JWT and jose4j for their responsiveness in fixing the issue, Francesco Mari for helping out with the development of the demo application, and Tommaso Teofili and Simone Tripodi for troubleshooting. Finally, as mentioned above, I would like to thank Quan Nguyen from Google: this research would not have been possible without his initial spark.

That's all folks. For more crypto goodies, follow me on Twitter.

Monday, 28 November 2016

All your Paypal OAuth tokens belong to me - localhost for the win

tl;dr  I was able to hijack the OAuth tokens of EVERY Paypal OAuth application with a really simple trick.


If you have been following this blog you might have gotten tired of how many times I have stressed the importance of the redirect_uri parameter in the OAuth flow.
This simple parameter can be the source of many headaches for any maintainer of an OAuth installation, be it a client or a server.
At the risk of repeating myself, here are two simple suggestions that may help you stay out of trouble (you can always skip this part and go directly to the Paypal Vulnerability section):

If you are building an OAuth client,  

Thou shalt register a redirect_uri as specific as you can

i.e. if your OAuth client callback is then

  • DO register 
  • NOT JUST or
If you are still not convinced here is how I hacked Google leveraging this mistake.

Second suggestion is

The ONLY safe validation
method for redirect_uri the
authorization server should
adopt is exact matching

Although other methods offer client developers desirable flexibility in managing their application’s deployment, they are exploitable.
From “OAuth 2 In Action” by Justin Richer and Antonio Sanso, Copyrights 2016
Again, here you can find examples of providers that were vulnerable to this attack.
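
To make the difference concrete, here is a hypothetical sketch (not code from any real authorization server) contrasting exact matching with two flawed policies, including a localhost "magic word" of the kind discussed in the Paypal Vulnerability section:

```python
from urllib.parse import urlparse

# Three redirect_uri validation policies. Only the first one is safe.
REGISTERED = "https://client.example/oauth/callback"

def validate_exact(redirect_uri):
    # Exact matching: byte-for-byte comparison with the registered value.
    return redirect_uri == REGISTERED

def validate_prefix(redirect_uri):
    # Flawed: any path under the domain passes, so an open redirect
    # anywhere on the site becomes a token-stealing gadget.
    return redirect_uri.startswith("https://client.example/")

def validate_with_localhost_exception(redirect_uri):
    # Flawed: a "developer convenience" that waves through anything whose
    # host merely starts with "localhost" -- including localhost.attacker.example.
    host = urlparse(redirect_uri).hostname or ""
    return redirect_uri == REGISTERED or host.startswith("localhost")
```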

Paypal Vulnerability

So after this long premise, the legitimate question is: what was wrong with Paypal?
Basically, like many online internet services, Paypal offers the option to register your own Paypal application via a Dashboard. So far so good :). The even better news (for Paypal) is that they actually employ an exact matching policy for redirect_uri :)
So what was wrong?
While testing my own OAuth client I noticed something a bit fishy. The easiest way to describe it is using an OAuth application from Paypal itself (remember, the vulnerability I found is universal, aka it worked with every client!). Basically, Paypal has set up a Demo Paypal application to showcase their OAuth functionality. The initial OAuth request looked like:
As you can see the registered redirect_uri for this application is

What I found out is that the Paypal Authorization Server was also accepting localhost as redirect_uri. So

was still a valid request, and the authorization code was then delivered back to localhost.
Cute right? But still not a vulnerability :(
Well, the next natural step was to create a DNS entry for my website looking like and try:

and you know what? BINGO :

So it really looks like even though Paypal did perform exact matching validation, localhost was a magic word that overrode the validation completely!!!
Worth repeating: this vulnerability worked for any Paypal OAuth client, hence it was universal, making my initial claim

All your Paypal tokens belong to me - localhost for the win  

not so crazy anymore.
For more follow me on Twitter.

Disclosure timeline

08-09-2016 - Reported to Paypal security team.
26-09-2016 - Paypal replied this is not a vulnerability!!
26-09-2016 - I replied to Paypal saying ok no problem. Are you sure you do not want to give an extra look into it ?
28-09-2016 - Paypal replied they will give it another try.
07-11-2016 - Paypal fixed the issue (bounty awarded)
28-11-2016 - Public disclosure.


I would like to thank the Paypal Security team for the constant and quick support.

Thursday, 20 October 2016

The RFC 5114 saga

Back in January I posed a question "to the Internet": what the heck is RFC 5114?
It looks like a lot has happened around it since then. I would like to use this post to recollect some of the stuff around RFC 5114.

Chapter 0: October 2007

The RFC 5114 draft was submitted to the IETF.

Chapter I: January 2016

In short, RFC 5114 is an IETF Informational RFC that "describes eight Diffie-Hellman groups that can be used in conjunction with IETF protocols to provide security for Internet communications."
One of the things about this RFC that attracted the attention of many (mine included) is that it violates the nothing-up-my-sleeve principle.
The other peculiar thing about this RFC (that caught my attention) was that the Ps specified for groups 22/23/24 were not safe primes but were DSA primes adapted to Diffie-Hellman. So far so good. Except that all the p-1 values specified for those groups factored in a really nice way! So I decided to intensify my research a bit and found something here (emphasis mine):

...a semi-mysterious RFC 5114 – Additional Diffie-Hellman Groups document. It introduces new MODP groups not with higher sizes, but just with different primes. 

the odd thing is that when I talked to people in the IPsec community, no one really knew why this document was started. Nothing triggered this document, no one really wanted these, but no one really objected to it either, so the document (originating from Defense contractor BBN) made it to RFC status. 

It was then that I posted this question in my blog post and other places on the web (including randombit), hoping for an answer. Well, it turned out I got a pretty decent one (thanks again Paul Wouters BTW!!). This answer pointed to an old IETF mailing list thread that contained a really interesting part (emphasis mine):

    Longer answer: FIPS 186-3 was written about generating values for DSA,
    not DH.  Now, for DSA, there is a known weakness if the exponents you
    use are biased; these algorithms used in FIPS 186-3 were designed to
    make sure that the exponents are unbiased (or close enough not to
    matter).  DH doesn't have similar issues, and so these steps aren't
    required (although they wouldn't hurt either).


    For these new groups, (p-1)/q is quite large, and in all three cases,
    has a number of small factors (now, NIST could have defined groups where
    (p-1)/q has 2 as the only small factor; they declined to do so).  For
    example, for group 23 (which is the worse of the three), (p-1)/q ==  2 *
    3 * 3 * 5 * 43 * 73 * 157 * 387493 * 605921 * 5213881177 * 3528910760717
    * 83501807020473429349 * C489 (where C489 is a 489 digit composite
    number with no small factors). 
    The attacker could use this (again, if
    you don't validate the peer value) to effectively cut your exponent size
    by about 137 bits using only O(2**42) time; if you used 224 bit
    exponents, then the attacker would cut the work used to find the rest
    of the exponent to about O(2**44) time.  Obviously, this is not

NOTE:  it turned out that this factorization listed here is actually wrong (more about it below).
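
The confinement trick described in the quote can be sketched with toy numbers (illustrative only; a real attack on group 23 would use the factors listed above):

```python
# Toy small-subgroup confinement sketch. Here p - 1 = 2 * 3 * 5 * 7 * 11,
# so the multiplicative group mod p has subgroups of each small order;
# a real RFC 5114 group has a much larger p whose p - 1 also happens to
# carry many small factors.

p = 2311   # prime; p - 1 = 2310 = 2 * 3 * 5 * 7 * 11
q = 11     # one small factor of p - 1

def element_of_order(q, p):
    # h = x^((p-1)/q) has order dividing q; since q is prime,
    # any h != 1 has order exactly q.
    for x in range(2, p):
        h = pow(x, (p - 1) // q, p)
        if h != 1:
            return h

h = element_of_order(q, p)
d = 1234             # victim's secret exponent (unknown to the attacker)
leak = pow(h, d, p)  # victim exponentiates the unvalidated peer value
recovered = next(k for k in range(q) if pow(h, k, p) == leak)
# recovered == d mod 11; each small factor leaks another chunk of d
```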

At this point we started to look for usage of the specification in the wild and, surprisingly, we found it was kind of commonly used!! In turn it was:
  • the default choice for Bouncy Castle and Exim
  • OpenSSL has built-in support for RFC5114 in OpenSSL 1.0.2 
  • and much more...
One of the outcomes of this analysis was OpenSSL Key Recovery Attack on DH small subgroups (CVE-2016-0701) (an easy explanation is in this ArsTechnica article). In turn we had:

Interlude: February 2016- June 2016

In the meantime more news came into the game. It was discovered that Socat (a versatile command line utility that builds bi-directional communication) contained a hard-coded 1024-bit Diffie-Hellman prime number that was NOT prime!! This story is covered here. All this led David Wong to write "How to Backdoor Diffie-Hellman".

Chapter II: October 2016

All this happened in the first half of the year, and the situation was kind of quiet until really recently, when Fried et al. released "A kilobit hidden SNFS discrete logarithm computation", which made some people wake up. What is so special about this paper, you might ask? An easy explanation can be found in this article. In a nutshell, the authors were able to reuse some theory from the '90s and introduce a backdoor into a 1024-bit prime such that:

  1. it would be feasible for the creator of the backdoor to calculate discrete log 
  2. it would be impossible for anybody else to prove that this particular number was actually backdoored!
As we said at the beginning of the post, RFC 5114 violates the nothing-up-my-sleeve principle, making it a possible backdoor candidate (but this is where the speculation starts). Anyway, this paper did not go unobserved by the crypto community and led to some actions:

At this point you might wonder: how much is RFC 5114 actually used, in the end? If you are curious you can find a pretty decent answer in the paper we just released: "Measuring small subgroup attacks against Diffie-Hellman".
The paper contains a detailed analysis of RFC 5114 usage in various protocols (HTTPS, POP3S, IKE, etc.) and analyzes over 20 open-source cryptographic libraries. For the sake of correctness, the paper doesn't focus only on RFC 5114 but also includes analysis of non-safe prime usage in the wild. For example, Amazon ELB was found to be partially vulnerable even though it was not using RFC 5114: "...We were able to use a small-subgroup key recovery attack to compute 17 bits of our load balancer’s private Diffie-Hellman exponent...".

Another thing present in the paper is a complete factorization of group 22 and improved factorizations for the other groups:

Chapter III:  ...What's next? and When ?

Funnily enough, one of the authors of RFC 5114 was invited to express his point of view, and here is his answer! So what is going on with RFC 5114? Well, it is still unknown. So far there are only speculations and no facts, but we all know what happened with Dual_EC_DRBG, right?

That's all folks. For more, follow me on twitter.

Monday, 20 June 2016

Native applications with OAuth 2

By Justin Richer and Antonio Sanso 
This article was excerpted from the book OAuth 2 in Action.

The OAuth core specification specifies four different grant types: Authorization Code, Implicit, Resource Owner Password Credentials and Client Credentials. Each grant type is designed with different security and deployment aspects in mind and should be used accordingly. 
For example, the Implicit grant flow is to be used by OAuth clients where the client code executes within the user agent environment. Such clients are generally JavaScript-only applications, which have, of course, limited capability of hiding the client_secret in client-side code running in the browser. At the other end of the spectrum there are classic server-side applications that can use the authorization code grant type and can safely store the client_secret somewhere on the server. What about native applications, then? 
Native applications are those that run directly on the end user’s device, be it a computer or mobile device. The software for the application is generally compiled or packaged externally then installed onto the device. These applications can easily make use of the back channel by making a direct HTTP call outbound to the remote server. Since the user is not in a web browser, as he would be with a web application or a browser client, the front channel is more problematic. 
To make a front channel request, the native application needs to be able to reach out to the system web browser or an embedded browser view to get the user to the authorization server directly. To listen for front channel responses, the native application needs to be able to serve a URI that the browser can be redirected to by the authorization server. This usually takes one of the following forms:
  • an embedded web server running on localhost 
  • a remote web server with some type of out-of- band push notification capability to the application
  • a custom URI scheme such as com.oauthinaction.mynativeapp:// that is registered with the operating system such that the application is called when URIs with that scheme are accessed
For mobile applications, the custom URI scheme is the most common. Native applications are capable of using the authorization code, client credentials, or assertion flows easily, but since they can keep information out of the web browser, it is not recommended that native applications use the implicit flow.
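
The first of those options, an embedded web server on localhost, can be sketched with the Python standard library (hypothetical code, not from any real SDK):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# A throwaway loopback server that catches the authorization code
# when the browser is redirected back to the native app.

class CallbackHandler(BaseHTTPRequestHandler):
    code = None  # the captured authorization code

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        CallbackHandler.code = params.get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You may close this window.")

    def log_message(self, *args):
        pass  # keep the console quiet

def wait_for_code():
    # Port 0 lets the OS pick a free port; the app would then use a
    # redirect_uri such as http://127.0.0.1:<port>/callback
    server = HTTPServer(("127.0.0.1", 0), CallbackHandler)
    port = server.server_address[1]
    worker = threading.Thread(target=server.handle_request)
    worker.start()
    return server, port, worker
```
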
Historically, one of the weaknesses of OAuth was a poor end-user experience on mobile devices. To help smooth the user experience, it was common for native OAuth clients to leverage a “web-view” component when sending the user to the authorization server’s authorization endpoint (interacting with the front channel). 
A web-view is a system component that allows applications to display web content within the UI of an application. The web-view acts as an embedded user-agent, separate from the system browser. Unfortunately, the web-view has a long history of security vulnerabilities and concerns that come with it. Most notably, the client applications can inspect the contents of the web-view component, and would therefore be able to eavesdrop on the end-user credentials when they authenticated to the authorization server.
Since a major focus of OAuth is keeping the user’s credentials out of the hands of the client applications entirely, this is counterproductive. The usability of the web-view component is far from ideal. Since it’s embedded inside the application itself, the web-view doesn’t have access to the system browser’s cookies, memory, or session information. Accordingly, the web-view doesn’t have access to any existing authentication sessions, forcing users to sign in multiple times. 
One thing that native OAuth clients can do is to make HTTP requests exclusively through external user-agents. A great advantage of using a system browser is that it lets the resource owner see the URI address bar, which acts as a great anti-phishing defense. It also helps train users to put their credentials only into trusted websites and not into any application that asks for them. 
In recent mobile operating systems, a third option has been added that combines the best of both of these approaches. In this mode, a special web-view style component is made available to the application developer. This component can be embedded within the application just like a traditional web-view. However, this new component shares the same security model as the system browser itself, allowing single sign-on style user experiences. Furthermore, it is not inspectable by the host application, leading to greater security separation on par with using an external system browser. 
In order to capture this and other security and usability issues that are unique to native applications, the OAuth working group is working on a new document called "OAuth 2.0 for Native Apps". Other recommendations listed in the document include: 
  • For custom redirect URI schemes, pick a scheme that is globally unique and which you can assert ownership over. One way of doing this is to use reversed DNS notation as we have done in the example application: com.oauthinaction.mynativeapp://. This approach is a good way to avoid clashing with schemes used by other applications that could lead to a potential authorization code interception attack.
  • In order to mitigate some of the risk associated with authorization code interception attack, it is a good idea to use Proof Key for Code Exchange (PKCE).
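As a minimal sketch of the PKCE mechanics in Python (function and variable names are mine, not from the book's example app): the client generates a high-entropy code_verifier, sends its S256 code_challenge in the authorization request, and later proves possession by sending the verifier to the token endpoint.

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    # High-entropy verifier: 32 random bytes, base64url-encoded without padding
    # gives 43 characters from the allowed unreserved character set.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # The challenge is the base64url-encoded SHA-256 digest of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# code_challenge (plus code_challenge_method=S256) goes in the authorization
# request; code_verifier goes in the token request. An attacker who intercepts
# only the authorization code cannot produce the matching verifier.
```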
One last important thing to remember is that, for a native application, even if the client_secret is somehow hidden in the compiled code, it must not be considered a secret. Even the most arcane artifact can be decompiled, and then the client_secret is not so secret anymore. The same principle applies to both mobile clients and desktop native applications. Failing to remember this simple principle might lead to disaster.
One way to mitigate the danger of storing the client_secret in a native application is to use one of the OAuth family of specifications, namely Dynamic Client Registration. This solves the problem of having the client_secret shipped with the native application artifact. A production instance of such a native application would, of course, store the issued credentials so that each installation of the client software registers itself once on first startup, but not every time it is launched by the user. No two instances of the client application will have access to each other's credentials. 
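The "register once on first startup, then reuse" logic can be sketched as follows. This is an illustration only: the `register` callable stands in for the HTTP POST of an RFC 7591 registration request, and the metadata values are hypothetical placeholders.

```python
import json
import os

def register_if_needed(store_path, register):
    """Register this client installation once and reuse the stored credentials.

    `register` is a callable that would POST a dynamic registration request
    (RFC 7591) to the authorization server and return the issued credentials
    (client_id and possibly client_secret). Network code is omitted here.
    """
    if os.path.exists(store_path):
        with open(store_path) as f:
            return json.load(f)  # already registered at a previous startup
    creds = register({
        "client_name": "My Native App",  # illustrative metadata
        "redirect_uris": ["com.oauthinaction.mynativeapp://oauth/callback"],
        "grant_types": ["authorization_code"],
    })
    with open(store_path, "w") as f:
        json.dump(creds, f)  # persist, so the client never re-registers on launch
    return creds
```

Because each installation obtains its own credentials at runtime, no secret ever ships inside the application artifact.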
These simple considerations can substantially improve the security and usability of native applications that use OAuth.

For source code, sample chapters, the Online Author Forum, and other resources, go to in-action

Monday, 9 May 2016

Holy redirect_uri Batman!

If you bought the book I have been writing with Justin Richer, namely OAuth 2 in Action, you might have noticed that we never get tired of stressing how important the redirect_uri is in the OAuth 2 universe.
Failing to understand this (rather simple) concept might lead to disaster. The redirect_uri is central to the two most common OAuth flows (authorization code and implicit grant). I have blogged about redirect_uri related vulnerabilities several times, in both the OAuth client and the OAuth server context. 
An OAuth client is notoriously easier to develop than its server counterpart.
That said, the OAuth client implementer should still take care to master some concepts. 
If I were limited to a single warning for OAuth client implementers, it would be: 

If you are building an OAuth client,  
Thou shalt register a redirect_uri as specific as thou canst

or simply less formally "The registered redirect_uri must be as specific as it can be".
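To make the server-side counterpart of this warning concrete, here is a minimal sketch (with hypothetical URIs) of exact redirect_uri matching, as opposed to the loose prefix matching that makes these attacks possible:

```python
# The redirect_uris registered at the authorization server: exact and specific.
REGISTERED_REDIRECT_URIS = {
    "https://client.example.com/oauth/callback",
}

def redirect_uri_allowed(requested_uri):
    """Exact string matching of the requested redirect_uri.

    A loose prefix match (e.g. accepting anything that starts with
    "https://client.example.com/") is what lets an attacker steer the
    authorization code to some other, leaky page on the same host.
    """
    return requested_uri in REGISTERED_REDIRECT_URIS
```

If the client registers the most specific URI it can, even a server that only does exact matching leaves the attacker no room to play with.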

If you wonder why, and you do not want to buy our book :p, have a read of this blog post I wrote some time ago. That post describes a vulnerability I found in an integration between Google+ and Microsoft Live.
In another blog post I described a quasi-vulnerability found in an integration between Google and Github. In that case Google was good enough to prevent the leakage of the authorization code, since they stripped the code parameter (namely the authorization code) from the URI before redirecting. 
It turned out that Google changed the domain name (from to offering the same service, which led them to register a new OAuth client in Github and consequently a new redirect_uri. And guess what? They made the same mistake again: the registered redirect_uri was not specific enough, and this time it might have led to an authorization code leakage. Here are the details (I will spare you the details about Github's loose redirect_uri validation, see here for a refresher, but in a nutshell Github does not use exact matching for the redirect_uri):
  • Google registered in Github a new OAuth client (namely 70087bd6f8a55ecca2a1) with a registered redirect_uri that is too open:
  • An attacker might forge a URI like;;response_type=code&scope=repo+user:email&state=AFE_nuMqEhmFe6MswLJdRX785yLQSyscMQ 
  • The victim, clicking on the link, ends up at;project=antoniosanso&state=AFE_nuMqEhmFe6MswLJdRX785yLQSyscMQ&authuser=0, which contains the authorization code. N.B. Github adopts the TOFU (Trust On First Use) approach for OAuth applications.
  • This page contains a link to a page controlled by the attacker, namely
What Google did in order not to leak the Referer header (which would contain the code parameter with the authorization code: 09a8363e37f1197dc5ad) is to add <a rel="noreferrer" target="_blank">, which works well but is not supported by Internet Explorer (IE). In IE, if the victim clicks that link, it will leak the authorization code via the Referer.

What I suggested to Google via the Vulnerability Reward Program (VRP) is to register a more specific redirect_uri, à la . This also kills the attack for IE users.

One last note: thanks to the fact that the authorization code grant flow is safer than the implicit grant flow (which is not even supported by Github), this attack fails to gain a valid access token (even if the authorization code might leak), due to:

...ensure that the "redirect_uri" parameter is present if the "redirect_uri" parameter was included in the initial authorization request as described in Section 4.1.1, and if included ensure that their values are identical.
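The quoted requirement from RFC 6749 (Section 4.1.3) can be sketched as a token-endpoint check; this is a simplified illustration, not Github's actual code:

```python
def redirect_uri_check_passes(authz_redirect_uri, token_redirect_uri):
    """Token-endpoint rule: if a redirect_uri was sent in the authorization
    request, the token request must repeat the identical value, otherwise
    the authorization code exchange is refused.
    """
    if authz_redirect_uri is None:
        return True  # parameter was never used in the authorization request
    return token_redirect_uri == authz_redirect_uri
```

So an attacker who obtains a leaked code but cannot reproduce the victim's original redirect_uri value at the token endpoint still cannot redeem it for an access token.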

I would also like to take the chance to thank once more the Google security team. Kudos.

Thursday, 14 April 2016

Google Chrome Potential leak of sensitive information to malicious extensions (CVE-2016-1658)

The latest Google Chrome release, 50.0.2661.75, contains the fix for a low-severity security bug I found (CVE-2016-1658).
When I first found this bug I was under the impression it could be a UXSS (universal XSS). Soon after reporting it, though, I started to realize that it wasn't that exploitable.
The issue per se was extremely easy to reproduce:

  • Create an HTML file that looks like the following and save it (e.g. as chrome.html)

<script> alert(document.domain)</script>
  • Now, supposing the file is saved under /Users/xxx/Downloads/chrome.html (on macOS), open the file from the hard disk in this way:


     Note: the hostname is arbitrary. It can be any domain (hence the bug is universal) 

  • Observe the document.domain alerted is!

  • Observe that the cookies transported are the ones associated with the * domain:

Now this looked really weird to me, and I reported it as a UXSS. Pretty quickly, though, it was clear that a file: URL has a unique origin, hence:
  • it doesn't gain access to the things that it frames
  • it doesn't gain access to cookies on the hostname it asserts (even if the Cookies extension shows them!!)
  • the cookies are NOT even transmitted over the wire!
On top of that, it looks like hostnames are a legitimate part of file: URLs (spec-wise)!
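The point that a file: URL can legitimately carry a hostname, even though that hostname grants no web-origin authority, can be illustrated with a quick parse (the attacker.example hostname here is a hypothetical placeholder):

```python
from urllib.parse import urlparse

# A file: URL may legitimately carry a hostname component (see RFC 8089),
# but the hostname does not grant the page that host's web origin, its
# cookies, or anything transmitted over the wire.
u = urlparse("file://attacker.example/Users/xxx/Downloads/chrome.html")

print(u.scheme)    # file
print(u.hostname)  # attacker.example
print(u.path)      # /Users/xxx/Downloads/chrome.html
```

The confusion arises only when other components (such as the browser UX or extension APIs) treat that parsed hostname as if it were a real web origin.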
So no UXSS :(
Said that the Google Chrome Team thought that there is still something weird going on (at least with the extensions). Indeed was clear that UX and Extensions API got confused when file: URLs have hostnames. Now I am not a big expert of Chrome codebase but the reason behind it seemed to be that stuff outside of WebKit used GURL::GetOrigin() to get the security origin rather than SecurityOrigin. This is not the case anymore and fixed in  Chrome 50.0.2661.75.
So, as Mathias Karlsson said some time ago, do not shout hello before you cross the pond :)