Reinventing Quick Sort by improving Stupid Sort.

March 14, 2018

Stupid sort is a sort that just randomly shuffles the array until it eventually becomes sorted.

Although stupid sort is not exactly popular in production environments, it may come up as an opening question at a programming interview of any level. A discussion about optimizing this kind of sort can be very interesting and useful for a better understanding of some sorting techniques.

In a nutshell, stupid sort may be written like this:
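A minimal Python sketch of the idea (the exact code is a reconstruction; `is_sorted` and `shuffle` are the helper names referred to later in this post):

```python
import random

def is_sorted(arr):
    # The array is sorted when no element is greater than its right neighbor.
    return all(arr[i] <= arr[i + 1] for i in range(len(arr) - 1))

def shuffle(arr):
    # One "shuffle" step: swap two randomly chosen elements in place.
    i, j = random.randrange(len(arr)), random.randrange(len(arr))
    arr[i], arr[j] = arr[j], arr[i]

def stupid_sort(arr):
    # Keep shuffling until the array happens to come out sorted.
    while not is_sorted(arr):
        shuffle(arr)
```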

The average running time of this algorithm is terribly long. For each permutation (out of n!) we need to verify the order of all n elements in the array, which gives us O(n·n!). On my hexa-core Xiaomi Redmi 3 phone this process is very slow even for n=10, ending up with more than 36 million operations. In the worst case, the running time of the algorithm is unbounded, for the same reason that a tossed coin might turn up heads any number of times in a row.

The best case occurs if the list as given is already sorted; in this case the expected time complexity is O(n).

For any collection of fixed size, the expected running time of the algorithm is finite for much the same reason that the infinite monkey theorem holds: there is a non-zero probability of getting the right permutation, so given an unbounded number of tries it will almost surely eventually be chosen.

Let us demonstrate how we can decrease the average running time by adding a couple of simple rules. In our example, each shuffle swaps two random elements. What if we do not swap pairs that are already in the right order (the left element smaller than the right)? This will immediately save us a lot of swaps and significantly decrease the running time.

Let us write down this “directional shuffle”:
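A possible sketch in Python, reusing the helpers above (the name `directional_shuffle` follows the post's wording; the details are an assumption):

```python
import random

def directional_shuffle(arr):
    # Pick two random positions, ordered so that i < j.
    i, j = sorted(random.sample(range(len(arr)), 2))
    # Swap only if the pair is out of order; pairs that are already
    # in the right order are left alone, so progress is never undone.
    if arr[i] > arr[j]:
        arr[i], arr[j] = arr[j], arr[i]

def stupid_sort(arr):
    while not is_sorted(arr):
        directional_shuffle(arr)
```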

Now, after adding these new rules, my phone was able to sort 100 elements in less than a second.

Although the running time has decreased, it still grows very fast as the number of elements in the array increases. For example, sorting 1000 elements takes about 100 seconds: we increased the array size 10x and got a 1000x slowdown, which suggests our algorithm is roughly O(n^3). It turns out that if we could only split our 1000-element array into 10 sub-arrays of 100 elements and sort each of them separately, this would take us less than 10*(1 sec) = 10 seconds (instead of about 100). The only thing we need to figure out is how to split the array in such a way that merging it back is trivial. It turns out we can change our shuffling function a little bit so that after shuffling, the elements in the left part of the array are smaller than the elements in the right part. A shuffle written in this way may be called many times on the same part of the array until that part eventually becomes sorted (the stupid-sort technique). Then, once all parts of the array are sorted, the entire array is trivially sorted.

Let us modify our “shuffle” to swap elements with respect to some randomly chosen element of the array (pivot):
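A sketch of such a pivot-based shuffle in Python (this particular Lomuto-style arrangement and the `l`/`r` parameters are assumptions; what matters is the post-condition described below):

```python
import random

def shuffle(arr, l, r):
    # Choose a random pivot within arr[l..r] and move it out of the way.
    p = random.randint(l, r)
    arr[p], arr[r] = arr[r], arr[p]
    pivot = arr[r]
    # Swap elements so that everything smaller than the pivot ends up
    # on the left, everything greater or equal on the right.
    i = l
    for j in range(l, r):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    # Place the pivot between the two parts and report its position.
    arr[i], arr[r] = arr[r], arr[i]
    return i
```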

The result of one run of this function is that the original array is divided into two parts, left and right, separated by the pivot. Each element on the left is smaller than the pivot, and each element on the right is greater than or equal to the pivot. Please notice that the function has O(n) time complexity and works in place.

Now our stupid_sort can be written as follows:
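A reconstruction of what this step likely looked like, keeping the `is_sorted(arr)` check mentioned below:

```python
def stupid_sort(arr):
    # Keep partitioning the whole array around fresh random pivots
    # until it happens to come out sorted; each call is O(n), in place.
    while not is_sorted(arr):
        shuffle(arr, 0, len(arr) - 1)
```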

We can actually achieve the same functionality using recursion:
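Roughly (the parameter names other than `l` and `r`, which the post refers to, are assumptions):

```python
def stupid_sort(arr, l, r):
    # A partition of at most one element is already sorted.
    if l >= r:
        return
    p = shuffle(arr, l, r)       # split around a random pivot
    stupid_sort(arr, l, p - 1)   # sort the left part
    stupid_sort(arr, p + 1, r)   # sort the right part
```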

Pay attention that the is_sorted(arr) call is replaced with if l >= r:, since once a partition is down to a single element (or none), it is trivially sorted. Then the original array, which consists of sorted partitions, becomes sorted as well.

Here is the full code, summing up all of the above:
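A complete, runnable version of the sketches above (the usage example at the bottom is illustrative):

```python
import random

def shuffle(arr, l, r):
    """Partition arr[l..r] around a random pivot, in place.
    Returns the pivot's final index."""
    p = random.randint(l, r)
    arr[p], arr[r] = arr[r], arr[p]
    pivot = arr[r]
    i = l
    for j in range(l, r):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[r] = arr[r], arr[i]
    return i

def stupid_sort(arr, l=0, r=None):
    """Recursively 'stupid sort' arr[l..r]: partition, then recurse."""
    if r is None:
        r = len(arr) - 1
    if l >= r:
        return
    p = shuffle(arr, l, r)
    stupid_sort(arr, l, p - 1)
    stupid_sort(arr, p + 1, r)

if __name__ == "__main__":
    data = [random.randint(0, 999) for _ in range(1000)]
    stupid_sort(data)
    assert data == sorted(data)
```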

Written in this way, the shuffling function has a special name, partitioning, and the algorithm is called Quick Sort. Its expected complexity is bounded by the number of partitioning levels, log(n), multiplied by the O(n) cost of shuffling each level, which gives O(n log n).
Thus, we have just demonstrated how, by gradually improving the most basic and simple function, our stupid sort has been transformed into one of the most efficient sorting algorithms known today.

Extracting textual time-based content from blog pages using the LCA technique on a DOM tree.

January 20, 2014

Today there is increasing interest in scraping the latest data from the internet, especially textual data. There are a lot of content-providing sites, such as blogs, news sites, forums, etc. Their content is time-based (periodically updated over time). Extracting time-based content from millions of sites is not a trivial task: the main difficulty is that we don't know beforehand the format of the HTML pages we are going to scrape.

In this post I will describe a method for extracting time-based textual content from blog pages in an automatic manner.

Let us look at how posts are organized in the vast majority of blogs:

Posts consist of repeating DOM elements such as divs, spans, and so on. Their common feature is that every post contains a time stamp. The time stamp might be at the beginning of the post or at the end, but it is almost always a separate DOM element.

So it should not be too hard to detect the dates using some sort of date-parsing library, such as dateutil.

The date detection process would go like this:
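A sketch of such a detection pass (assuming lxml for parsing the HTML; note that dateutil's parser is quite permissive, so in practice you would add stricter filtering):

```python
from dateutil import parser as date_parser
from lxml import html

def find_date_elements(page_source):
    """Return the DOM elements whose own text parses as a date."""
    tree = html.fromstring(page_source)
    date_elements = []
    for element in tree.iter():
        text = (element.text or "").strip()
        if not text:
            continue
        try:
            date_parser.parse(text)
        except (ValueError, OverflowError):
            continue   # not a date, skip this element
        date_elements.append(element)
    return date_elements
```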

Now that we have the dates, we assume each of them is part of a post we need to collect, but we don't know where each post starts and where it ends.

Fortunately, there is an efficient method to identify the post boundaries: finding the Lowest Common Ancestor (LCA) of the given date elements in the DOM tree. Having the dates' LCA, we can be pretty sure that its children provide the required entry points to the posts themselves. Below is a picture that visualizes the idea:

[Figure: a DOM tree in which the date elements share a Lowest Common Ancestor whose children are the individual posts]

Here is another, more complex case: multiple post threads on the same page:

[Figure: a DOM tree with several independent post threads, each with its own LCA]

All these cases are supported by the following code:
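A simplified sketch of the idea (the original relies on pattern.graph; here graph.shortest_path(a, b) is assumed to return the list of nodes on the path between two elements, as described below):

```python
from itertools import combinations
from collections import Counter

def find_lca_candidates(graph, date_nodes):
    # In a tree, the shortest path between two date elements always
    # passes through their lowest common ancestor, so the nodes where
    # many pairwise paths intersect are good LCA candidates. Counting
    # path hits also covers pages with several independent post threads.
    counts = Counter()
    for a, b in combinations(date_nodes, 2):
        for node in graph.shortest_path(a, b) or []:
            if node not in date_nodes:
                counts[node] += 1
    # Most frequently crossed nodes first.
    return [node for node, _ in counts.most_common()]
```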

Here graph is the pattern.graph.Graph instance representing the HTML DOM tree. We use its shortest_path method to find the shortest paths between the date elements; the intersection points of these shortest paths provide the LCA candidates.

Then we iterate through the LCA's children and extract the text from them. This may be accomplished with something similar to the following:
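For instance, if the LCA has been mapped back to an lxml element (an assumption of this sketch), each direct child can be treated as a post entry point:

```python
def extract_posts(lca_element):
    """Treat each direct child of the LCA as one post and pull the
    text out of its subtree."""
    posts = []
    for child in lca_element:                  # direct children only
        text = child.text_content().strip()    # all text in the subtree
        if text:
            posts.append(text)
    return posts
```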

And, yippee! We have got the posts!

This was a high-level overview of the method. Obviously, more optimization is needed to make use of it in a production environment. Hopefully this will give some insights to those who are interested in data mining, and especially in time-based content extraction.

Thanks to my sister Natalia Vishnevsky for editing this and for the moral support.