
The Ugly Duckling in factoring aka the filtering steps part I

People who know me well are aware that prime numbers have been an obsession of mine since childhood, and they remain a source of continuing interest. Actually, thanks to cryptography, they are a relevant part of my everyday life.
One of the most important problems in cryptography since the discovery of RSA is factoring.
The factoring problem consists of finding the prime numbers p and q given a large number N = p x q.
In case you are still convinced that factoring is an easy peasy problem, you should know that, while probably not NP-complete, factoring is indeed really hard.
The fastest known method for factoring large general integers is currently NFS (the Number Field Sieve), and if you are interested in the topic I suggest you read the beautiful article by the great Carl Pomerance titled "A Tale of Two Sieves". But that is not what I want to talk about today, mainly because the complete algorithm and all its shades go well beyond my current knowledge.
Today instead I want to talk about one of the, IMHO, too often underappreciated (but crucial) steps of NFS, which goes under the name of filtering. I became more familiar with this topic while reading Topics in Computational Number Theory Inspired by Peter L. Montgomery, where I discovered that the legendary Peter L. Montgomery played a big role in the concrete development of these techniques.
One of the most intriguing parts of filtering is that at first sight these methods look trivial (because, well, they are), but my main point, which drove me to write this blog post, is that they are probably one of the most important parts of the whole NFS algorithm.
The current NFS record factorization is RSA-768, an integer of 768 bits (232 digits). At the beginning of the filtering step, the matrix had about 47 billion rows and 35 billion columns. After the first part of the filtering step, the matrix had about 2.5 billion rows and 1.7 billion columns. At the end of the filtering step, the matrix used in the linear algebra had about 193 million rows and columns. So the filtering step reduced the size of the matrix by more than 99%!!!
But let's go in order...

What is this filtering about?

All the modern factorization procedures consist basically of 3 steps:
  1. Relation Building 
  2. Elimination 
  3. GCD Computation 
Filtering is part of step 2 (Elimination). The main goal of the filtering step is to reduce the size of a large sparse matrix over a finite field in order to be able to compute its kernel (we will see that for the factorization problem what we actually need is the left kernel). Filtering per se is a four-step process:
  1. Removing duplicates 
  2. Removing singletons 
  3. Removing cliques 
  4. Merging 
As we said, in this post we focus on filtering, which is part of Elimination, so in order to showcase it we borrow a simple Relation Building example from "An Introduction to Mathematical Cryptography"

from "An Introduction to Mathematical Cryptography"

Again, without going too much into detail, the goal here is to find a subset of all these relations whose product is a square on each side of the equality. This is equivalent to requiring that, for each prime, the sum of its exponents over all chosen relations is even. If you wonder why, it is because we are trying to leverage one of the simplest identities in all of mathematics:

X^2 - Y^2 = (X+Y)(X-Y)
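To see why this identity matters: if we find X and Y with X^2 ≡ Y^2 (mod N) and X ≢ ±Y (mod N), then gcd(X - Y, N) reveals a factor of N (this is the GCD Computation step from the list above). Here is a toy example with hand-picked numbers, not coming from an actual sieve:

```python
from math import gcd

N = 8051
# 90^2 = 8100 = 8051 + 49, so 90^2 ≡ 7^2 (mod N)
X, Y = 90, 7
assert (X * X - Y * Y) % N == 0

p = gcd(X - Y, N)   # gcd(83, 8051)
q = gcd(X + Y, N)   # gcd(97, 8051)
print(p, q)         # 83 97, and indeed 83 * 97 = 8051
```

Of course, finding such X and Y for a real N is the whole point of the Relation Building and Elimination steps.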

But let's not stray from today's topic and let's focus on filtering. The problem above can be translated into a linear algebra problem. If one sees the relations as rows of a matrix where each column corresponds to an ideal, the coefficients of the matrix are the exponents of the ideals in the relations. As we are looking for even exponents, one can consider the matrix over GF(2).

Finding a linear combination of relations such that every exponent is even is equivalent to computing the left kernel of the matrix. So let's translate A to B = A'
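On a tiny matrix the left kernel can be found by brute force. Here is a minimal Python sketch (the 3x3 matrix below is made up for illustration, it is not the B from the book); in a real NFS run the kernel is computed with iterative methods such as Montgomery's Block Lanczos or Block Wiedemann:

```python
import itertools

# Rows are relations, columns are ideals; entries are exponents mod 2.
B = [
    [1, 0, 1],   # relation 1
    [0, 1, 1],   # relation 2
    [1, 1, 0],   # relation 3
]

def left_kernel_gf2(M):
    """Brute force: try every non-trivial 0/1 row-selection vector v
    and keep the ones with v * M == 0 (mod 2)."""
    rows, cols = len(M), len(M[0])
    kernel = []
    for v in itertools.product([0, 1], repeat=rows):
        if any(v) and all(
            sum(v[i] * M[i][j] for i in range(rows)) % 2 == 0
            for j in range(cols)
        ):
            kernel.append(v)
    return kernel

print(left_kernel_gf2(B))   # [(1, 1, 1)]: multiplying all three relations gives a square
```

The exponential enumeration is obviously only viable for toy matrices, which is exactly why shrinking the matrix via filtering (and then using an iterative solver) matters so much.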

As we mentioned before, the matrix that comes out of a factoring problem is huge (on the order of billions of rows/columns). Our goal is to minimize the size of the matrix before the kernel computation in order to make this operation possible. So let's filter

Removing duplicates

The first step of filtering is extremely trivial. It simply consists of removing the duplicate rows in the matrix. As weird as it sounds, having those rows is inevitable in the factorization context (because lattice sieving with many distinct special-q primes will produce identical relations).
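In code, duplicate removal is essentially just hashing. A minimal Python sketch, with relations represented as made-up exponent-vector tuples:

```python
# Each relation is an exponent vector mod 2 (toy data, not real sieve output).
relations = [
    (1, 0, 1),
    (0, 1, 1),
    (1, 0, 1),   # duplicate: produced again by sieving with a different special-q
    (1, 1, 0),
]

seen = set()
unique = []
for r in relations:
    if r not in seen:    # hash-set membership test, O(1) on average
        seen.add(r)
        unique.append(r)

print(unique)   # [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
```

Real implementations work on billions of relations stored on disk, so they hash compact encodings of the relations rather than keeping everything in memory, but the idea is the same.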

Removing singletons 

The rule for removing singletons is as simple as the duplicate removal above: if there is a column in B that contains only a single entry that is non-zero modulo 2, then that column cannot occur in a dependency. Such a column, called a singleton, can be removed from B (together with the respective row). In order to give an example, let's use a simpler matrix than B above, called M

Now let's apply the removing singleton filter:

So we removed the first row and column of M. But we are not done yet: as you can see, this removal generated a new singleton (the first column of the new matrix), so several passes are normally required before all singletons are removed from M. Continuing until the very end may not be worth the effort though (we will see in the next post how to handle this part).
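The singleton pass can be sketched in a few lines of Python (the matrix below is a made-up toy, not the M above). Note how removing one singleton exposes another, which is exactly why several passes are needed:

```python
def remove_singletons(rows):
    """Repeatedly drop any column with exactly one entry that is non-zero
    mod 2, together with the row containing that entry."""
    rows = [list(r) for r in rows]
    changed = True
    while changed and rows:
        changed = False
        for j in range(len(rows[0])):
            hits = [i for i, r in enumerate(rows) if r[j] % 2 == 1]
            if len(hits) == 1:
                i = hits[0]
                rows = [r[:j] + r[j + 1:] for k, r in enumerate(rows) if k != i]
                changed = True
                break   # column indices shifted, rescan from the start
    return rows

M = [
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Column 0 is a singleton; removing it (with row 0) makes the new
# column 0 a singleton too, so a second pass fires as well.
print(remove_singletons(M))   # [[1, 1], [1, 1]]
```

A production implementation would never physically reslice the matrix like this; it keeps per-column counts and updates them incrementally, but the toy version shows the cascading behaviour.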


Filtering is a really important step in the Number Field Sieve, and it is implemented in important integer factorization tools such as CADO-NFS, Msieve and GGNFS. In this blog post we covered the first two phases of filtering: removing duplicates and removing singletons. Stay tuned for the last two steps, removing cliques and merging, coming in part II of this blog post series.

For more crypto goodies, follow me on Twitter.

