Monday, 20 June 2016

Native applications with OAuth 2

By Justin Richer and Antonio Sanso 
This article was excerpted from the book OAuth 2 in Action.

The OAuth core specification specifies four different grant types: Authorization Code, Implicit, Resource Owner Password Credentials and Client Credentials. Each grant type is designed with different security and deployment aspects in mind and should be used accordingly. 
For example, the Implicit grant flow is to be used by OAuth clients where the client code executes within the user agent environment. Such clients are generally JavaScript-only applications, which have, of course, limited capability of hiding the client_secret in client-side code running in the browser. At the other end of the spectrum there are classic server-side applications that can use the authorization code grant type and can safely store the client_secret somewhere on the server. What about native applications then? 
Native applications are those that run directly on the end user’s device, be it a computer or mobile device. The software for the application is generally compiled or packaged externally then installed onto the device. These applications can easily make use of the back channel by making a direct HTTP call outbound to the remote server. Since the user is not in a web browser, as he would be with a web application or a browser client, the front channel is more problematic. 
To make a front channel request, the native application needs to be able to reach out to the system web browser or an embedded browser view to get the user to the authorization server directly. To listen for front channel responses, the native application needs to be able to serve a URI that the browser can be redirected to by the authorization server. This usually takes one of the following forms:
  • an embedded web server running on localhost 
  • a remote web server with some type of out-of-band push notification capability to the application
  • a custom URI scheme such as com.oauthinaction.mynativeapp:// that is registered with the operating system such that the application is called when URIs with that scheme are accessed
For mobile applications, the custom URI scheme is the most common. Native applications are capable of using the authorization code, client credentials, or assertion flows easily, but since they can keep information out of the web browser, it is not recommended that native applications use the implicit flow.
Historically, one of the weaknesses of OAuth was a poor end-user experience on mobile devices. To help smooth the user experience, it was common for native OAuth clients to leverage a “web-view” component when sending the user to the authorization server’s authorization endpoint (interacting with the front channel). 
A web-view is a system component that allows applications to display web content within the UI of an application. The web-view acts as an embedded user-agent, separate from the system browser. Unfortunately, the web-view has a long history of security vulnerabilities and concerns that come with it. Most notably, the client applications can inspect the contents of the web-view component, and would therefore be able to eavesdrop on the end-user credentials when they authenticated to the authorization server.
Since a major focus of OAuth is keeping the user’s credentials out of the hands of the client applications entirely, this is counterproductive. The usability of the web-view component is far from ideal. Since it’s embedded inside the application itself, the web-view doesn’t have access to the system browser’s cookies, memory, or session information. Accordingly, the web-view doesn’t have access to any existing authentication sessions, forcing users to sign in multiple times. 
One thing that native OAuth clients can do is to make HTTP requests exclusively through external user-agents. A great advantage of using a system browser is that it lets the resource owner see the URI address bar, which acts as a great anti-phishing defense. It also helps train users to put their credentials only into trusted websites and not into any application that asks for them. 
In recent mobile operating systems, a third option has been added that combines the best of both of these approaches. In this mode, a special web-view style component is made available to the application developer. This component can be embedded within the application just like a traditional web-view. However, this new component shares the same security model as the system browser itself, allowing single sign-on style user experiences. Furthermore, it is not inspectable by the host application, leading to greater security separation on par with using an external system browser. 
In order to capture these and other security and usability issues that are unique to native applications, the OAuth working group is working on a new document called "OAuth 2.0 for Native Apps". Other recommendations listed in the document include: 
  • For custom redirect URI schemes, pick a scheme that is globally unique and which you can assert ownership over. One way of doing this is to use reversed DNS notation as we have done in the example application: com.oauthinaction.mynativeapp://. This approach is a good way to avoid clashing with schemes used by other applications that could lead to a potential authorization code interception attack.
  • In order to mitigate some of the risk associated with an authorization code interception attack, it is a good idea to use Proof Key for Code Exchange (PKCE); a sketch follows this list.
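To make the PKCE recommendation concrete, here is a minimal sketch (Python; the parameter names follow RFC 7636, everything else is purely illustrative) of how a native client could derive the code_challenge it sends on the front channel from the code_verifier it keeps locally:

import base64
import hashlib
import secrets

def generate_pkce_pair():
    # code_verifier: a high-entropy random string (43-128 chars per RFC 7636)
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), method "S256"
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return code_verifier, code_challenge

verifier, challenge = generate_pkce_pair()
# The client sends code_challenge (plus code_challenge_method=S256) in the
# front-channel authorization request and later proves possession by sending
# code_verifier in the back-channel token request, so an intercepted
# authorization code alone is not enough to obtain a token.
print("code_challenge=%s&code_challenge_method=S256" % challenge)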
One last important thing to remember is that, for a native application, even if the client_secret is somehow hidden in the compiled code it must not be considered a secret. Even the most arcane artifact can be decompiled, and the client_secret is then not so secret anymore. The same principle applies to mobile clients and desktop native applications. Failing to remember this simple principle might lead to disaster.
One way to avoid shipping the client_secret inside the native application artifact at all is to use one of the OAuth family of specifications called Dynamic Registration. With it, each installation of the client software registers itself once on first startup (but not every time it is launched by the user) and then stores the resulting credentials. No two instances of the client application will have access to each other’s credentials. 
These simple considerations can substantially improve the security and usability of native applications that use OAuth.

For source code, sample chapters, the Online Author Forum, and other resources, go to https://www.manning.com/books/oauth-2-in-action

Monday, 9 May 2016

Holy redirect_uri Batman!

If you bought the book I have been writing with Justin Richer, namely OAuth 2 in Action (https://www.manning.com/books/oauth-2-in-action), you might have noticed that we never get tired of stressing how important the redirect_uri is in the OAuth 2 universe.
Failing to understand this (rather simple) concept might lead to disasters. The redirect_uri is really central in the two most common OAuth flows (authorization code and implicit grant). I have blogged about redirect_uri-related vulnerabilities several times, both in the OAuth client and in the OAuth server context. 
An OAuth client is notoriously easier to develop than its server counterpart.
That said, the OAuth client implementer should still take care and master a few concepts. 
If I were limited to giving a single warning to OAuth client implementers, it would be this: 

If you are building an OAuth client,  
Thou shalt register a redirect_uri that is as specific as you can make it

or, less formally: "The registered redirect_uri must be as specific as it can be".
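To illustrate why specificity (and exact matching on the server side) matters, here is a tiny sketch in Python; the registered URI and the helper names are made up for illustration, and the loose prefix matching mimics what GitHub effectively allows:

registered_redirect_uri = "https://console.cloud.google.com/"   # hypothetical registration, too open

def loose_match(registered, presented):
    # Prefix matching: anything "under" the registered URI is accepted,
    # so an attacker can pick an arbitrary page on that host.
    return presented.startswith(registered)

def exact_match(registered, presented):
    # Exact string comparison: the safe default.
    return presented == registered

attacker_choice = registered_redirect_uri + "start/appengine?project=attacker"
print(loose_match(registered_redirect_uri, attacker_choice))   # True  -> code delivered to an attacker-influenced page
print(exact_match(registered_redirect_uri, attacker_choice))   # False -> authorization request rejected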

If you wonder why and you do not want to buy our book :p, have a read of this blog post I wrote some time ago. It describes a vulnerability I found in an integration between Google+ and Microsoft Live.
In another blog post I described a quasi-vulnerability in an integration between Google and Github. In that case Google was good enough to prevent the leakage of the authorization code, since they stripped the code parameter (namely the authorization code) from the URI before redirecting. 
It turned out that Google changed the domain name (from console.developers.google.com to console.cloud.google.com) while offering the same service, which led them to register a new OAuth client in Github and consequently a new redirect_uri. And guess what? They made the same mistake again: the registered redirect_uri was not specific enough, and this time it might have led to an authorization code leakage. Here are the details (I will spare you the details about Github's loose redirect_uri validation, see here for a refresher, but in a nutshell Github doesn't use exact matching for the redirect_uri):
  • Google registered a new OAuth client in Github (namely 70087bd6f8a55ecca2a1) with a registered redirect_uri that is too open: https://console.cloud.google.com/
  • An attacker might forge a URI like https://github.com/login/oauth/authorize?client_id=70087bd6f8a55ecca2a1&redirect_uri=https%3A%2F%2Fconsole.cloud.google.com%2Fstart%2Fappengine%3Fproject%3Dantoniosanso&response_type=code&scope=repo+user:email&state=AFE_nuMqEhmFe6MswLJdRX785yLQSyscMQ 
  • The victim, clicking on the link, ends up at https://console.cloud.google.com/start/appengine?code=09a8363e37f1197dc5ad&project=antoniosanso&state=AFE_nuMqEhmFe6MswLJdRX785yLQSyscMQ&authuser=0, which contains the authorization code. N.B. Github adopts the TOFU (Trust On First Use) approach for OAuth applications.
  • This page contains a link to a page controlled by the attacker, namely antoniosanso.appspot.com.
What Google did in order not to leak the Referer (which would contain the code parameter with the authorization code, 09a8363e37f1197dc5ad) is to add <a rel="noreferrer" target="_blank">, which works well but is not supported by Internet Explorer (IE). In IE, if the victim clicks that link it will leak the authorization code via the Referer.

What I suggested to Google via the Vulnerability Reward Program (VRP) is to register a more specific redirect_uri à la https://console.cloud.google.com/project/. This would kill the attack for IE users as well.

One last note: thanks to the fact that the authorization code grant flow is safer than the implicit grant flow (not even supported by Github), this attack fails to gain a valid access token (even if the authorization code might leak), due to:

...ensure that the "redirect_uri" parameter is present if the "redirect_uri" parameter was included in the initial authorization request as described in Section 4.1.1, and if included ensure that their values are identical.

I would also like to take the chance to thank once more the Google security team. Kudos.

Thursday, 14 April 2016

Google Chrome Potential leak of sensitive information to malicious extensions (CVE-2016-1658)


The latest Google Chrome release, 50.0.2661.75, contains the fix for a low-severity security bug I found (CVE-2016-1658).
When I first found this bug I was under the impression it could be a UXSS. Soon after I reported it, though, I started to realize that it wasn't that exploitable.
The issue per se was extremely easy to reproduce:

  • Create an HTML file that looks like the following and save it (e.g. as chrome.html)

<h1>Hi</h1> 
<script> alert(document.domain)</script>
  • Now, supposing the file is saved (on macOS) under /Users/xxx/Downloads/chrome.html, open the file from the hard disk in this way:

     file://mail.google.com/Users/xxx/Downloads/chrome.html

     Note: mail.google.com is arbitrary. This can be any domain (hence it is universal). 

  • Observe the document.domain alerted is mail.google.com!


  • Observe the cookies transported are the ones associated with the *.google.com domain:


Now this looked really weird to me, and I reported it as a UXSS. Pretty quickly, though, it was clear that a file: URL has a unique origin, hence it:
  • doesn't gain access to things that it frames
  • doesn't gain access to cookies on the hostname it asserts (even if the Cookies extension shows them!!)
  • the cookies are NOT even transmitted over the wire!
On top of that, it looks like hostnames are a legitimate part of file: URLs (spec-wise)!
So no UXSS :(
That said, the Google Chrome team thought there was still something weird going on (at least with the extensions). Indeed, it was clear that the UX and the Extensions API got confused when file: URLs have hostnames. Now, I am not a big expert on the Chrome codebase, but the reason behind it seemed to be that code outside of WebKit used GURL::GetOrigin() to get the security origin rather than SecurityOrigin. This is not the case anymore, and it is fixed in Chrome 50.0.2661.75.
So, as Mathias Karlsson said some time ago, do not shout hello before you cross the pond :)



Thursday, 28 January 2016

OpenSSL Key Recovery Attack on DH small subgroups (CVE-2016-0701)

Usual Mandatory Disclaimer: IANAC (I am not a cryptographer) so I might likely end up writing a bunch of mistakes in this blog post...

tl;dr The OpenSSL 1.0.2 releases suffer from a key recovery attack on DH small subgroups. This issue was assigned CVE-2016-0701 with a severity of High, and OpenSSL 1.0.2 users should upgrade to 1.0.2f. If an application uses DH configured with parameters based on primes that are not "safe" or not Lim-Lee (such as the ones in RFC 5114), and either static DH ciphersuites are used or DHE ciphersuites are used with the default OpenSSL configuration (in particular, SSL_OP_SINGLE_DH_USE is not set), then it is vulnerable to this attack. It is believed that many popular applications (e.g. Apache mod_ssl) do set the SSL_OP_SINGLE_DH_USE option and would therefore not be at risk for DHE ciphersuites; they still might be for static DH ciphersuites.

Introduction

So, if you are still here it means you want to know more. And here is the thing. In my last blog post I was literally wondering: what the heck is RFC 5114? In a nutshell, RFC 5114 was described here (emphasis mine) as 

...a semi-mysterious RFC 5114 – Additional Diffie-Hellman Groups document. It introduces new MODP groups not with higher sizes, but just with different primes. 
 
and 

the odd thing is that when I talked to people in the IPsec community, no one really knew why this document was started. Nothing triggered this document, no one really wanted these, but no one really objected to it either, so the document (originating from Defense contractor BBN) made it to RFC status. 

The things that caught my attention back then, and that I was trying to get an answer about, were:
  • Why are the generators g (defined in this spec) so big? Often the generator is 2. Now, I am aware that the generator g=2 leaks one bit, but AFAIK this is still considered safe.
  • Why is (p-1)/2 (defined in this spec) not prime, i.e. why is p not a safe prime?
I posted those questions in my blog post and in other places on the web (including randombit), hoping for an answer. Well, it turned out I got a pretty decent one (thanks again Paul Wouters BTW!!). The answer pointed to an old IETF mailing list thread that contained a really interesting part (emphasis mine):

    Longer answer: FIPS 186-3 was written about generating values for DSA,
    not DH.  Now, for DSA, there is a known weakness if the exponents you
    use are biased; these algorithms used in FIPS 186-3 were designed to
    make sure that the exponents are unbiased (or close enough not to
    matter).  DH doesn't have similar issues, and so these steps aren't
    required (although they wouldn't hurt either).

    [...]

    For these new groups, (p-1)/q is quite large, and in all three cases,
    has a number of small factors (now, NIST could have defined groups where
    (p-1)/q has 2 as the only small factor; they declined to do so).  For
    example, for group 23 (which is the worse of the three), (p-1)/q ==  2 *
    3 * 3 * 5 * 43 * 73 * 157 * 387493 * 605921 * 5213881177 * 3528910760717
    * 83501807020473429349 * C489 (where C489 is a 489 digit composite
    number with no small factors).  The attacker could use this (again, if
    you don't validate the peer value) to effective cut your exponent size
    by about 137 bits with using only O(2**42) time; if you used 224 bit
    exponents, then the attacker would cut the work used to find the rest
    of the exponent to about O(2**44) time.  Obviously, this is not
    acceptable.

Reading this answer, and knowing that OpenSSL does use RFC 5114, my immediate thought was: I am going to try this against OpenSSL. And you know what? I actually did...

The Attack

The actual attack I performed is literally a verbatim application of a classical paper published in 1997: A Key Recovery Attack on Discrete Log-based Schemes Using a Prime Order Subgroup. The attack is as beautiful as it is simple. Here I will try to sketch it; for details please refer to the original paper. For the record, this attack is not the type where the other party merely forces the shared secret value to be "weak" (i.e. drawn from a small set of possible values) without attempting to compromise the private key (like the one I previously reported for Mozilla NSS).

I will refer to the classic Diffie-Hellman nomenclature:
  • p: the prime modulus
  • g: the generator
  • q: the order of the prime-order subgroup generated by g
  • y: the public key
  • x: the private key
In order for the attack to succeed, two prerequisites need to hold:

  1. The attacker must be able to complete multiple handshakes in which the peer (OpenSSL in this case) uses the same private DH exponent. This is true for the default configuration of OpenSSL for the DHE ciphersuites (namely, SSL_OP_SINGLE_DH_USE is not set) and is always true for static DH ciphersuites. As mentioned above, it is believed that many popular applications (e.g. Apache mod_ssl) do set the SSL_OP_SINGLE_DH_USE option and would therefore not be at risk for DHE ciphersuites; they still might be for static DH ciphersuites.
  2. DH must be configured with parameters based on primes that are not "safe" or not Lim-Lee. This is precisely the case for RFC 5114, where p-1 = 2 * 3 * 3 * 5 * 43 * 73 * 157 * 387493 * 605921 * 5213881177 * 3528910760717 * 83501807020473429349 * C489 (where C489 is a 489-digit composite number with no small factors). But the problem is not limited to RFC 5114 (although it is a perfect example). Note that in order to generate the RFC 5114 parameter file in X9.42 style using openssl, just do: 
 openssl genpkey -genparam -algorithm DH -pkeyopt dh_rfc5114:2

This will generate something like 

-----BEGIN X9.42 DH PARAMETERS-----
MIICKQKCAQEArRB+HpEjqdDWYPqnlVnFH6INZOVoO5/RtUsVl7YdCnXm+hQd+VpW
26+aPEB7od8V6z1oijCcGA4d5rhaEnSgpm0/gVKtasISkDfJ7e/aTfjZHo/vVbc5
S3rVt9C2wSIHyfmNEe002/bGugssi7wnvmoA4KC5xJcIs7+KMXCRiDaBKGEwvImF
2xYC5xRBXZMwJ4Jzx94x79xzEPcSH9WgdBWYfZrcCkhtzfk6zEQyg4cxXXXhmMZB
pIDNhqG55YfovmDmnMkosrnFIXLkEwQumyPxCw4W55djybU9z0uoCinj+3PBa451
uX7zY+L/ox9xz53lOE5xuBwKxN/+DBDmTwKCAQEArEAy708tmuOd8wtcj/2sUGze
vnuJmYyvdIZqCM/k/+OmgkpOELmm8N2SHwGnDEr6q3OddwDCn1LFfbF8YgqGUr5e
kAGo1mrXwXZpEBmZAkr00CcnWsE0i7inYtBSG8mK4kcVBCLqHtQJk51U2nRgzbX2
xrJQcXy+8YDrNBGOmNEZUppF1vg0Vm4wJeMWozDvu3eobwwasVsFGuPUKMj4rLcK
gTcVC47rEOGD7dGZY93Z4mPkdwWJ72qiHn9fL/OBtTnM40CdE81Wavu0jWwBkYHh
vP6UswJp7f5y/ptqpL17Wg8ccc//TBnEGOH27AF5gbwIfypwZbOEuJDTGR8r+gId
AIAcDTTFjZP+mXF3EB+AU1pHOM68vziambNjces=
-----END X9.42 DH PARAMETERS-----

that defines the following hexadecimals numbers:

   p =  AD107E1E 9123A9D0 D660FAA7 9559C51F A20D64E5 683B9FD1
        B54B1597 B61D0A75 E6FA141D F95A56DB AF9A3C40 7BA1DF15
        EB3D688A 309C180E 1DE6B85A 1274A0A6 6D3F8152 AD6AC212
        9037C9ED EFDA4DF8 D91E8FEF 55B7394B 7AD5B7D0 B6C12207
        C9F98D11 ED34DBF6 C6BA0B2C 8BBC27BE 6A00E0A0 B9C49708
        B3BF8A31 70918836 81286130 BC8985DB 1602E714 415D9330
        278273C7 DE31EFDC 7310F712 1FD5A074 15987D9A DC0A486D
        CDF93ACC 44328387 315D75E1 98C641A4 80CD86A1 B9E587E8
        BE60E69C C928B2B9 C52172E4 13042E9B 23F10B0E 16E79763
        C9B53DCF 4BA80A29 E3FB73C1 6B8E75B9 7EF363E2 FFA31F71
        CF9DE538 4E71B81C 0AC4DFFE 0C10E64F
 
   g =  AC4032EF 4F2D9AE3 9DF30B5C 8FFDAC50 6CDEBE7B 89998CAF
        74866A08 CFE4FFE3 A6824A4E 10B9A6F0 DD921F01 A70C4AFA
        AB739D77 00C29F52 C57DB17C 620A8652 BE5E9001 A8D66AD7
        C1766910 1999024A F4D02727 5AC1348B B8A762D0 521BC98A
        E2471504 22EA1ED4 09939D54 DA7460CD B5F6C6B2 50717CBE
        F180EB34 118E98D1 19529A45 D6F83456 6E3025E3 16A330EF
        BB77A86F 0C1AB15B 051AE3D4 28C8F8AC B70A8137 150B8EEB
        10E183ED D19963DD D9E263E4 770589EF 6AA21E7F 5F2FF381
        B539CCE3 409D13CD 566AFBB4 8D6C0191 81E1BCFE 94B30269
        EDFE72FE 9B6AA4BD 7B5A0F1C 71CFFF4C 19C418E1 F6EC0179
        81BC087F 2A7065B3 84B890D3 191F2BFA
 
   q =  801C0D34 C58D93FE 99717710 1F80535A 4738CEBC BF389A99
        B36371EB

As mentioned above, the thing that the peers need to take into consideration is the fact that for this particular group (p-1)/2 has many small factors, hence a validation of the peer public value (we will see how this can be done later) is required. Well, it turns out that OpenSSL did not do this extra step, probably for a couple of reasons (historically OpenSSL only ever generated DH parameters based on "safe" primes, although this changed lately, and the validation has a certain cost in terms of performance). So here is how the attack looks:
  • Assume the server (OpenSSL in this case) chooses its DH private key to be xb
  • It then transmits yb = g^xb (mod p)
Then the attacker:
  • chooses B where ord(B) is small and equal to one of the small factors of p-1 (e.g. for RFC 5114, ord(B) = 2 or 3 or 5 or 43 or 73 or 157 or 387493...)
  • chooses xa
  • calculates ya = g^xa * B (mod p)
  • with the received yb, tries yb^xa * B^j (mod p) by exhaustive search (for TLS this means handshaking many sessions), once for each j with 0 <= j < ord(B)
  • at this point the attacker has found j = xb (mod ord(B))
  • once this is done, the attacker may repeat the same steps above with a different, computationally feasible B' where ord(B') is small
  • the resulting partial secrets can then be combined using the Chinese Remainder Theorem
  • for the remaining bits, Shanks' method or Pollard's lambda method can be used.

Again, for details please refer to the original paper. But to be more down to earth, this means that for RFC 5114 group 23 (the one shown in this blog post), a 2048-bit MODP group with a 256-bit prime order subgroup, the attacker would cut the work needed to find the rest of the exponent to about O(2**44) time. Now, this is for sure not feasible work for many common PCs (isn't it :S?), but it is for sure not safe.
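To make the mechanics a bit more tangible, here is a toy sketch in Python. The 12-bit prime and the private exponent are invented purely for illustration (they have nothing to do with the RFC 5114 groups), but the steps are the ones listed above: confine the shared secret into a small subgroup, learn xb modulo each small factor, then glue the residues together with the Chinese Remainder Theorem.

from functools import reduce

# Toy group: p is prime and p - 1 = 2 * 3 * 5 * 7 * 11 has only small factors.
p = 2311
small_factors = [2, 3, 5, 7, 11]

xb = 1234                      # the server's private exponent, REUSED across handshakes

def server_handshake(ya):
    # What the server computes from the peer's public value.
    return pow(ya, xb, p)

def element_of_order(r):
    # h^((p-1)/r) has order 1 or r; pick any base for which it is not 1.
    for h in range(2, p):
        b = pow(h, (p - 1) // r, p)
        if b != 1:
            return b

def crt(residues, moduli):
    # Chinese Remainder Theorem for pairwise coprime moduli.
    M = reduce(lambda a, b: a * b, moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
    return x % M, M

residues = []
for r in small_factors:
    B = element_of_order(r)
    s = server_handshake(B)    # in real TLS the attacker only learns this indirectly,
                               # by seeing which guess makes the handshake succeed
    j = next(j for j in range(r) if pow(B, j, p) == s)
    residues.append(j)         # j == xb mod r

recovered, modulus = crt(residues, small_factors)
assert recovered == xb % modulus
print("recovered xb mod", modulus, "=", recovered)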

The fix

As mentioned before, in order to work safely with DH parameters such as the ones in RFC 5114, two options are possible:

  1. Never reuse the key for DHE ciphersuites. This has been fixed in OpenSSL in https://git.openssl.org/?p=openssl.git;a=commit;h=ffaef3f1526ed87a46f82fa4924d5b08f2a2e631. Curiously enough, when I reported the issue to OpenSSL (12-01-2016) this particular fix was already committed (indeed, it had been done on 23-12-2015) but it was not yet in the release branches. 
  2. Validate the peer public value. This can easily be done by just checking that ya^q (mod p) = 1 (a sketch follows this list). The fix done by OpenSSL for this issue adds such an additional check where a "q" parameter is available (as is the case with X9.42-based parameters). This detects the only known attack, and it is the only possible defense for static DH ciphersuites. It could have some performance impact.
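A minimal sketch of that check (Python; p, q and the peer's value are assumed to be already parsed as integers):

def validate_peer_public_value(y_peer, p, q):
    # Reject the trivially degenerate values first.
    if y_peer <= 1 or y_peer >= p - 1:
        return False
    # The peer value must lie in the prime-order-q subgroup: y^q mod p == 1.
    # This is the kind of extra check OpenSSL's fix performs when a "q"
    # parameter is available (as with X9.42-style parameters).
    return pow(y_peer, q, p) == 1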

 Fork Status

  • BoringSSL got rid of SSL_OP_SINGLE_DH_USE support some months ago. I am not sure about the Static DH ciphersuites situation.
  • I gave a heads-up to the LibreSSL folks as well. I know they also assigned a CVE and deprecated SSL_OP_SINGLE_DH_USE this week. Again, I am not sure about the static DH ciphersuites situation.

Disclosure timeline

12-01-2016 - Reported to OpenSSL security team.
13-01-2016 - Vendor confirmation, CVE-2016-0701 assigned.
15-01-2016 - Disclosure scheduled.
25-01-2016 - Release publicly announced. 
28-01-2016 - Public release and disclosure. 

Acknowledgement

I would like to thank the OpenSSL team for their constant and quick support. The same goes for the LibreSSL team.


That's all folks. For more, follow me on twitter.

Tuesday, 5 January 2016

What the heck is RFC 5114?

Mandatory Disclaimer: IANAC (I am not a cryptographer) so I might likely end up writing a bunch of mistakes in this blog post...

I already talked about Diffie–Hellman (DH from now on) in TLS in my previous post: Small subgroup attack in Mozilla NSS.
As mentioned there, FWIW, I strongly agree with Google Chrome's decision to deprecate DHE.
The reason is mainly the Weak Diffie-Hellman attack and the related paper. If you are interested in this topic, there is a really nice presentation about it from 32C3.
It shows a really nice potential attack that anyone with enough computational power (let's say the NSA) can perform against 1024-bit DHE (details in the paper).
That said, for some reason I have been looking at DHE for a while now, and one day I hit RFC 5114.

Now what the heck is this specification about :S ?

I found only a few references to it. One funny one, from here, says (emphasis mine):



There is a semi-mysterious RFC 5114 – Additional Diffie-Hellman Groups document. It introduces new MODP groups not with higher sizes, but just with different primes.
...
Nothing triggered this document, no one really wanted these, but no one really objected to it either, so the document (originating from Defense contractor BBN) made it to RFC status.

Now let's look, for example, at the 1024-bit numbers.

The hexadecimal value of the prime is:

   p = B10B8F96 A080E01D DE92DE5E AE5D54EC 52C99FBC FB06A3C6
       9A6A9DCA 52D23B61 6073E286 75A23D18 9838EF1E 2EE652C0
       13ECB4AE A9061123 24975C3C D49B83BF ACCBDD7D 90C4BD70
       98488E9C 219A7372 4EFFD6FA E5644738 FAA31A4F F55BCCC0
       A151AF5F 0DC8B4BD 45BF37DF 365C1A65 E68CFDA7 6D4DA708
       DF1FB2BC 2E4A4371


   The hexadecimal value of the generator is:

   g = A4D1CBD5 C3FD3412 6765A442 EFB99905 F8104DD2 58AC507F
       D6406CFF 14266D31 266FEA1E 5C41564B 777E690F 5504F213
       160217B4 B01B886A 5E91547F 9E2749F4 D7FBD7D3 B9A92EE1
       909D0D22 63F80A76 A6A24C08 7A091F53 1DBF0A01 69B6A28A
       D662A4D1 8E73AFA3 2D779D59 18D08BC8 858F4DCE F97C2A24
       855E6EEB 22B3B2E5


   The generator generates a prime-order subgroup of size:

   q = F518AA87 81A8DF27 8ABA4E7D 64B7CB9D 49462353

Some straightforward questions come to my mind:

  • Why is the generator g so big? Often the generator is 2. Now, I am aware that the generator g=2 leaks one bit, but AFAIK this is still considered safe.
  • Why is (p-1)/2 not prime, i.e. why is p not a safe prime? (p-1)/2 is divisible by at least 7, 8, ... (what is going on here :S?). AFAIK having (p-1)/2 prime is important for DH. Maybe in this case it would not matter due to the "weird" g? (Comments are welcome; see also the sanity-check sketch after this list.)
  • Is 160 bits a big enough size for q, given the fact that a true generator (of the whole group) would have a much bigger order?
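Not being able to answer these analytically, one quick sanity check anybody can run over a (p, q, g) triple like the one above is sketched below (Python; the function name and the trial-division bound are mine, and trial division only surfaces the small factors, which is exactly what matters for the small subgroup concern):

def small_subgroup_factors(p, q, g, bound=10**6):
    # g should generate the prime-order-q subgroup: g^q mod p == 1 (and g != 1).
    assert (p - 1) % q == 0 and g != 1 and pow(g, q, p) == 1
    # Each small prime factor r of the cofactor (p-1)/q gives an order-r
    # subgroup that a malicious peer can confine the shared secret into.
    cofactor = (p - 1) // q
    factors, d = [], 2
    while d <= bound and d * d <= cofactor:
        while cofactor % d == 0:
            factors.append(d)
            cofactor //= d
        d += 1 if d == 2 else 2
    return factors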
That said, is RFC 5114 used at all? A quick search showed that:

  • Bouncy Castle changed the default to 2048 bits just a few months ago (but still uses RFC 5114)
  • OpenSSL has built-in support for these parameters since OpenSSL 1.0.2 
  • maybe more....

It would be really nice if someone could comment and answer my questions :)

Tuesday, 22 December 2015

Small subgroup attack in Mozilla NSS

tl;dr While attacks on TLS servers have been studied and fixed quite extensively (see e.g. https://www.secure-resumption.com/ and https://weakdh.org/), the situation with TLS clients is (was) not ideal and can be improved. Here I report a small subgroup attack on TLS clients that I performed against various browsers and reported.

Whoever reads this blog is used to reading about OAuth.
For once (and maybe more in the future) let's hijack the usual topic and talk about my new "passion": TLS, and in particular Diffie–Hellman (DH from now on).

Now, before starting I need to clarify one thing: IANAC (I am not a cryptographer), so I might well end up writing a bunch of mistakes in this blog post...

Diffie-Hellman is used in SSL/TLS as "ephemeral Diffie-Hellman" (EDH), and it is probably going to be killed soonish (or at least that is the intent of Google Chrome). FWIW, I personally agree with this unless EDH implements the Negotiated Finite Field specification.

Now, in the last few years there have been at least a couple of issues that affected EDH.
What I am going to describe here is by far less severe than those issues. Indeed, it has been rated by Mozilla NSS as security-moderate, and Google Chrome did not consider it harmful at all (and since Adam Langley is one of the people on that side, I've got to agree with him :) ).

But here are the details:

When using TLS_DHE_RSA_WITH_AES_128_CBC_SHA, Firefox/Chrome do not accept a degenerate public key of value 0, 1 or -1, since such a key leads to a pms in {0, 1, -1}.
This (the -1 case) is probably a consequence of CVE-2014-1491 (raised as part of the Triple Handshake Attack).

I will refer to the classic Diffie-Hellman nomenclature:
  • p: the prime modulus
  • g: the generator, with order p-1 = q
  • y: the public key
  • x: the private key

Observation

If p-1 = 0 (mod 4), then if I choose my private key x = (p-1)/4, my public key
y = g^x will generate a subgroup of order 4.

This means that Mozilla/Chrome will agree on a pms = 1 one time out of 4.
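A toy version of the observation (Python; a deliberately tiny prime stands in for the 512-bit one used against the real clients):

p, g = 13, 2                    # toy group: 2 is a primitive root mod 13, so it has order p - 1 = 12
assert (p - 1) % 4 == 0

x = (p - 1) // 4                # the "malicious" private key
y = pow(g, x, p)                # the public key sent to the browser

# y generates a subgroup of order 4, so whatever private key xc the client
# picks, the premaster secret pow(y, xc, p) can only take one of 4 values:
subgroup = sorted({pow(y, j, p) for j in range(4)})
print("possible pms values:", subgroup)              # 1 is always among them
assert all(pow(y, xc, p) in subgroup for xc in range(1, p - 1))
# ... and pms == 1 whenever xc % 4 == 0, i.e. for roughly one client key out of four.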

The issue

I set up a server with

p = 13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084241
g = 3
q =1

and TLS_DHE_RSA_WITH_AES_128_CBC_SHA as cipher.

During the negotiation with Chrome I always choose

x= (p-1)/4 = 3351951982485649274893506249551461531869841455148098344430890360930441007518386744200468574541725856922507964546621512713438470702986642486608412251521060

and pass

y = 11130333445084706427994000041243435077443611277989851635896953056790400956946719341695219235480436483595595868058263313228038179294276393680262837344694991

Chrome/Firefox will happily "agree" on one of these 4 pms values:
  • 1
  • 2277474484857890671580024956962411050035754542602541741826608386931363073126827635106655062686466944094435990128222737625715703517670176266170811661389250
  • 13407807929942597099574024998205846127479365820592393377723561443721764030073546976801874298166903427690031858186486050853753882811946569946433649006084240
  • 11130333445084706427994000041243435077443611277989851635896953056790400956946719341695219235480436483595595868058263313228038179294276393680262837344694991

Of course the "worse" one is 1 and happens to be 1 time out of 4 (according to Adam Langley though "here's nothing special about sending an odd DH value, it could equally well make its DH private key equal to 42"). So not big deal :(

Just for the record, even the simpler suggestion given in [1], i.e.

"Make sure that g^x,g^y and g^xy do not equal to 1"


is not followed, and this happens with very high probability (25%).

The Summary



[1] http://crypto.cs.mcgill.ca/~stiglic/Papers/dhfull.pdf

Thursday, 17 December 2015

Top 10 OAuth 2 Implementation Vulnerabilities

Some time ago I posted a blog post about the Top 5 OAuth 2 Implementation Vulnerabilities.
This week I have extended the list while presenting Top X OAuth 2 Hacks at OWASP Switzerland.

This blog post (like the presentation) is just a collection of interesting OAuth-related attacks.

#10 The Postman Always Rings Twice 

I introduced this 'attack' in last year's post. This one is for provider implementers; it is not extremely severe but, hey, it is better to follow the spec. Specifically:

The client MUST NOT use the authorization code  more than once.  If an authorization code is used more than once, the authorization server MUST deny the request and SHOULD revoke (when possible) all tokens previously issued based on that authorization code.

It turned out that even Facebook and Google did it wrong... :)
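For provider implementers, a sketch of what the quoted paragraph asks for (Python; the in-memory store and the two stubbed helpers are made up for illustration):

issued_codes = {}   # authorization code -> {"client_id": ..., "used": bool, "tokens": [...]}

def issue_access_token(client_id):
    # stand-in for the real token issuance logic
    return "token-for-" + client_id

def revoke_token(token):
    # stand-in for the real revocation logic
    print("revoked", token)

def redeem_authorization_code(code, client_id):
    entry = issued_codes.get(code)
    if entry is None or entry["client_id"] != client_id:
        raise PermissionError("invalid_grant")
    if entry["used"]:
        # Second use of the same code: deny the request AND revoke everything
        # previously issued from it, as the spec requires.
        for token in entry["tokens"]:
            revoke_token(token)
        raise PermissionError("invalid_grant")
    entry["used"] = True
    token = issue_access_token(client_id)
    entry["tokens"].append(token)
    return token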

#9 Match Point

To all OAuth providers: be sure to follow section 4.1.3 of the spec, in particular

...if the "redirect_uri" parameter was included in the initial authorization request as described in Section 4.1.1, and if included ensure that their values are identical.

Should you fail to do it, this in combination with Lassie Come Home below is game over (even for implementers that support only the Authorization Code Grant flow).

#8 Open redirect in rfc6749 

If you want to implement an OAuth Authorization Server and follow the OAuth core spec verbatim, you might end up with an Open Redirect. Full story here. Interesting attack here.

#7 Native apps - Which OAuth flow ?

In a nutshell

  • It is NOT recommended that native applications use the implicit flow.
  • Native clients CANNOT protect a client_secret unless it is configured at runtime, as in the dynamic registration case (RFC 7591).
If you do not follow these suggestions then you risk this.

#6 Cross-site request forgery for OAuth Clients

Defined as the Most Common OAuth2 Vulnerability. So do use the state anti-CSRF parameter, as long as you use the right library to check it and not a broken one :)
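A minimal sketch of doing it on the client side (Python; the session object, URLs and client_id are made up, while the state parameter name is the one from RFC 6749):

import hmac
import secrets

def start_authorization(session):
    # Bind a fresh, unguessable state value to the user's session before redirecting.
    session["oauth_state"] = secrets.token_urlsafe(32)
    return ("https://authorization-server.example/authorize"
            "?response_type=code&client_id=my-client"
            "&state=" + session["oauth_state"])

def handle_callback(session, params):
    expected = session.pop("oauth_state", None)
    # Constant-time comparison; reject if the value is missing or different.
    if not expected or not hmac.compare_digest(expected.encode(), params.get("state", "").encode()):
        raise PermissionError("state mismatch: possible CSRF")
    return params["code"]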

#5 Cross-site request forgery for Authorization Servers

As with any other website, it is important not to forget Cross-Site Request Forgery (aka CSRF) protection in your OAuth provider implementation. Some examples are:

#4 On Bearer Tokens

DO NOT (if you can avoid it) pass the access_token as a URI parameter à la

GET /resource?access_token=mF_9.B5f-4.1Jq HTTP/1.1                  
Host: server.example.com


since:

#3 The Devil Wears Prada

If you are an OAuth client that uses OAuth for authentication: do NOT. If you absolutely have to, you'd better read User Authentication with OAuth 2.0, especially if you are using the OAuth Implicit Grant flow (aka client-side).
More about the topic here and here.

#2 Lassie Come Home for OAuth clients

If you are building an OAuth client,  
Thou shalt register a redirect_uri that is as specific as you can make it

#1 Lassie Come Home for Authorization Server

...and the winner is (again) 'Lassie Come Home'. Well, this is a hell of a danger.
There are way too many examples of providers vulnerable to this attack. Just listing a few here:

At least the mitigation for this issue is damn simple: use exact matching against the registered redirect uri to validate the redirect_uri parameter.

BTW the slides are here.

<snip>
//SHAMELESS SELF ADVERTISEMENT
If you like OAuth 2.0 and/or you want to know more about it, here you can find a book on OAuth that Justin Richer and I have been writing on the subject.
https://www.manning.com/books/oauth-2-in-action

</snip>

Monday, 7 December 2015

A Quick Glance at Modern Browsers' Protections Part #1

tl;dr In this blog post we are going to take a look at modern browsers' protections, with some hands-on examples available at https://github.com/asanso/browsers-security and deployed on Heroku. This blog post is NOT about the Same-Origin Policy.

Introduction

In this blog post we are going to take a look at modern browsers' protections. More specifically, if you are designing a REST API where the response is driven by some user input, why not get some help from the browser rather than brewing some ad hoc protection?
I am going to provide some demos deployed on Heroku.
If you prefer running them on your machine, you might want to clone https://github.com/asanso/browsers-security and drill down into the specific example.

Mind your content type

By definition, the Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.
It turns out that returning the proper Content-Type might save a lot of headaches. 




Or check out the corresponding example in the browsers-security repository.
So let's dig into the response using curl:

curl -v -L "https://mysterious-ocean-4724.herokuapp.com/?name=<script>alert(document.domain)</script>"

>
< HTTP/1.1 200 OK
* Server Cowboy is not blacklisted
< Server: Cowboy
< Connection: keep-alive
< X-Powered-By: Express
< Content-Type: text/html; charset=utf-8
< Content-Length: 42
< Etag: W/"2a-QK3v/EQbwe/c0QdPJrXydw"
< Date: Wed, 02 Dec 2015 15:16:31 GMT
< Via: 1.1 vegur

{"helloWorld": "<script>alert(
document.domain)</script>"}

As you can see we are returning some JSON payload in the response but using the "wrong" Content-Type (aka text/html). This, in combination with a malicious input provided by an attacker, will make the browser happily execute the provided JavaScript snippet.
Now, of course, output sanitization (which is always good BTW) would have stopped this attack, but it would have required some effort. On the other hand, just returning the right Content-Type (application/json in this example) will make the browser display the JSON text content, as in this example

https://mysterious-ocean-4724.herokuapp.com/?surname=%3Cscript%3Ealert%28document.domain%29%3C/script%3E

curl -v -L "https://mysterious-ocean-4724.herokuapp.com/?surname=<script>alert(document.domain)</script>" 

< HTTP/1.1 200 OK
* Server Cowboy is not blacklisted
< Server: Cowboy
< Connection: keep-alive
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 56
< Etag: W/"38-AEX4mYlsmzOHSw8oOicxiQ"
< Date: Mon, 07 Dec 2015 09:39:53 GMT
< Via: 1.1 vegur
<
{"helloWorld":"<script>alert(document.domain)</script>"}


Bonus Part:  
The examples above were targeting a stored XSS. Those are cross-browser: if successful (namely, some stored JavaScript is displayed in some unsanitized output), every browser will happily execute the JavaScript. For reflected XSS (where the input is bounced directly into the output), some browsers (Chrome, Safari, IE) ship with an XSS filter. E.g. try to hit the following link with Google Chrome


which gives the result 

The XSS Auditor refused to execute a script in 'https://mysterious-ocean-4724.herokuapp.com/?title=%3Cscript%3Ealert%28document.domain%29%3C/script%3E' because its source code was found within the request. 

and the XSS is then stopped by the browser. On the other hand, Firefox would still be vulnerable.

Re-mind your content type

As returning a "wrong" content type you might imagine that not returning a Content-Type AT ALL is NOT a so great idea :) Indeed there are some browsers (did I say IE :)?) that trying to be extra clever and try to  intelligently interpret the response content in order to guess the right Content-Type. In the netsec jargon this is call sniffing. But let's the example talking on its own, using IE ONLY

https://protected-garden-1595.herokuapp.com?name=<script>alert(document.domain)</script>

Again, if you prefer running it locally, clone https://github.com/asanso/browsers-security/tree/master/noContentType

Inspecting the response, we can see the total lack of a Content-Type:

curl -v -L "https://protected-garden-1595.herokuapp.com?name=<script>alert(document.domain)</script>"

< HTTP/1.1 200 OK
* Server Cowboy is not blacklisted
< Server: Cowboy
< Connection: keep-alive
< Date: Mon, 07 Dec 2015 10:52:54 GMT
< Content-Length: 51
< Via: 1.1 vegur
<
Hello World <script>alert(document.domain)</script>


The solution, obviously, is to return the correct Content-Type, hence

TIL: mind your Content-Type

Coming soon...

This concludes part #1. If you like this stuff, you might watch this space for:
  • more about Content-Type
  • nosniff 
  • X-XSS-Protection
  • Content-Disposition
  • Content Security Policy (CSP)
  • Cross-origin resource sharing (CORS)
  • HTTP Strict Transport Security (HSTS)
  • Subresource Integrity (SRI)

Wednesday, 14 October 2015

On (OAuth) token hijacks for fun and profit part #2 (Microsoft/xxx integration)

In a previous blog post we already analyzed a token hijack in an OAuth integration between some Microsoft and Google services and saw what went wrong.
Now it is time to look at yet another integration, between Microsoft and xxxx (unluckily I can't disclose the name of the other company due to the fact that they still haven't fixed a related issue...), and see what went wrong there too.
But before focusing on the attack, we need a bit of introduction.


HTTP referrer


An HTTP referrer (misspelled as "referer" in the spec) is a special HTTP header field that browsers (and HTTP clients in general) attach when navigating from one page to another. In this way the new webpage can see where the request originated. One extra thing to point out, as per section 15.1.3 (Encoding Sensitive Information in URI's) of the HTTP RFC [RFC 2616]:

Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.

In short: the Referer header is sent when navigating from HTTPS to HTTPS, from HTTP to HTTP, and from HTTP to HTTPS, but it should not be sent when going from an HTTPS page to an HTTP one.


The issue


Microsoft (of course) offers a service that allows you to have your Office documents displayed online (similar to Google Docs). For Microsoft Word the address is https://word.office.live.com/.
Now, this service is also integrated with other partners, and you can display a document hosted on a partner website by doing something like

https://word.office.live.com/wv/WordView.aspx?FBsrc=http%3A%2F%2PARTNER_WEBSITE%2Fattachments%2Fdoc_preview.php%3Fmid%3Dmid.1426701639299%253A78532202c0996b8097%26id%3D10152839561617017%26metadata&access_token=AQD2GswFGnDGl28A&title=sanso-test

The two things to note in the link above are (in bold):

  • The partner website address (http://PARTNER_WEBSITE)
  • The access_token contained in the URI (access_token=AQD2GswFGnDGl28A)
Anybody bearing this URI can access a Word document uploaded to PARTNER_WEBSITE via the Microsoft service https://word.office.live.com/.

 

The bad part is that if the document contains a link and the victim clicks on it, the above-mentioned referrer will leak the access token.

 

 The attack


The attack might look like this:

- The attacker crafts a special Word document containing a link to a website he owns (it MUST be https, though)
- The attacker uploads the file to the PARTNER_WEBSITE 
- The attacker shares this document with the victim
- The attacker waits for the victim to access the document and click on the link

And yep, the Referer will contain the victim's access token, leaking it.

You might argue that the attacker would not gain anything by stealing this access token, since it would only grant access to a resource the attacker can already see. This indeed might be put in the bucket of privacy issues rather than security vulnerabilities. On the other hand, it really is a matter of how good the implementation at the PARTNER_WEBSITE is and how granular the hijacked access token is.
In any case, Microsoft promptly fixed the issue (fixing the referrer leakage) and rewarded me with a bounty (thanks MSFT).

<snip>
//SHAMELESS SELF ADVERTISEMENT
If you like OAuth 2.0 and/or you want to know more about it, here you can find a book on OAuth that Justin Richer and I have been writing on the subject.
https://www.manning.com/books/oauth-2-in-action

</snip>

 

The bonus


While looking at this Microsoft endpoint I also found a stored XSS vulnerability (now also fixed :)), and Microsoft rewarded me for it as well (thanks MSFT).




Wednesday, 30 September 2015

Apple Safari URI spoofing (CVE-2015-5764)

tl;dr Apple Safari for OS X was prone to a URI spoofing vulnerability (and, more generally, a user interface spoofing). Apple released security updates for Safari 9 on OS X and assigned CVE-2015-5764. Incidentally, this vulnerability was also present in iOS.

Instant demo

In Safari up to 8.0.8 :
  • go to https://asanso.github.io/CVE-2015-5764/file0.html
  • click "click me!"
  • notice the address bar being "data:text/html,%3CH1%3EHi!!%3C/H1%3E"
  • go back using the browser button
  • click "click me!"
  • notice the address bar being http://www.intothesymmetry.com/CVE-2015-5764/file0.php !!!! 

Well, this looks like a clear caching problem to me, right :)?

The Introduction (Oldie but goldie)

Several months ago (almost a year!!) I was reading the great book written by lcamtuf (aka Michal Zalewski) named The Tangled Web. I know, I know, I was a bit late to the party :)
That said, this book contains a really interesting chapter (for the record, Chapter 10) that is dedicated almost entirely to pseudo-URLs (such as about:, javascript:, or data:). 
As with almost all parts of this book, I wanted to try what was written, so I started to poke around a bit. Followers of this blog know that I kind of like to "play" with OAuth, hence I combined the two things and started to see what I could do.

The Issue

The first issue I found was the one mentioned in the Instant demo section above.
Now,  in order to understand the issue we need to look at the code of https://asanso.github.io/CVE-2015-5764/file0.html

<html>
<a href="http://www.intothesymmetry.com/CVE-2015-5764/file0.php">click me! </a>
</html>


As you can see this is simply pointing to a PHP page in my website (http://www.intothesymmetry.com/CVE-2015-5764/file0.php). So let's take a look at it:

<?php
header("Location: data:text/html,<H1>Hi!!</H1>");
exit();
?>


This is simply an HTTP 302 redirect to data:text/html,<H1>Hi!!</H1>

Now, when clicking on the link, all browsers but Safari properly showed the address bar as data:text/html,%3CH1%3EHi!!%3C/H1%3E. Safari instead, from the second visit onward, would show the original website, namely http://www.intothesymmetry.com/CVE-2015-5764/file0.php, but with the HTML contained in the data:text/html pseudo-URI!!!

Well, well, this looks like URI spoofing according to my book ;)
Safari was no stranger to this kind of vulnerability in 2015, and the same goes for Google Chrome.

The Vulnerability(ies)

At this point your question might easily be: why on earth should there exist a website that allows an attacker to steer a 302 redirect towards a data:text/html URI, and how did you find this weird vulnerability :D? The answer is: because of some OAuth 2.0 implementations!!!
One of the steps in order to obtain an OAuth client is to register a client application, providing a client name and a list of redirect_uris. 

<snip>
//SHAMELESS SELF ADVERTISEMENT
If you are not too familiar with OAuth 2.0, here you can find a book on OAuth that Justin Richer and I have been writing on the subject.
</snip>

Below is a little reminder of how a typical OAuth flow looks:


It turns out that there exist some OAuth Authorization Servers that allow the registration of a redirect_uri of the form data:text/html. One example is (or was, as Facebook fixed this in the meantime) Moves, one of Facebook's acquisitions.



The final piece of the puzzle is an Open Redirect vulnerability that exists in RFC 6749, aka 'The OAuth 2.0 Authorization Framework', and some of its implementations. You can see an example of it by clicking one of the links below:
The link will redirect to the registered redirect_uri (without any user interaction). Before Facebook fixed the data:text/html redirect_uri, clicking the following URL


 would have redirected you to  data:text/html,a&state=<script>alert('hi')</script>

The Attack

So far so good. Now let's sum up. Indeed, we have all we need for a real attack. For an attacker it would be enough to:

- find a website that offers OAuth support
- this website needs to allow registration of redirect_uri also of the type data:text/html
- this website implements OAuth 2 verbatim hence has an open redirect
- craft a URI of the form https://api.moves-app.com/oauth/v1/authorize?response_type=code&client_id=bc88FitX1298KPj2WS259BBMa9_KCfL3&redirect_uri=data%3Atext%2Fhtml%2Ca&state=<script>alert('hi')</script>

And here we go, you would have a spoofed website (thanks to CVE-2015-5764)



The Fix

Apple released two security updates this month that include a fix for this issue:

Beyond URI spoofing

It looks like the Apple security team broke the fix for this issue down into two tranches. Indeed, if you carefully read the description of CVE-2015-5764:

Description: Multiple user interface inconsistencies may have allowed a malicious website to display an arbitrary URL. These issues were addressed through improved URL display logic.

This means that other user interface elements were vulnerable as well, not only the address bar. One of the vulnerable components was the title of the alert box (which is commonly used for anti-phishing).

But it looks like Apple fixed this part of the vulnerability in the previous security update.
If you want to take a look at it with Safari 8.0.7:

Visit

https://asanso.github.io/CVE-2015-5764/file.html
https://asanso.github.io/CVE-2015-5764/file2.html
https://asanso.github.io/CVE-2015-5764/file3.html

Friday, 18 September 2015

New OAuth book: OAuth 2 in Action

Justin Richer and I have been writing a book about OAuth.

OAuth 2 in Action

It takes a deep look at the OAuth 2.0 protocol, including hands-on examples and practical implementation vulnerabilities to avoid. You can preorder the book today, or you can download the first chapter for free on the publisher’s website:

https://www.manning.com/books/oauth-2-in-action

Happy reading!!

P.S. For the next few days, you can order it at half off with the code mloauth2.