An Introduction to Fundamental Web Crawling Strategies

Generally, any search engine architecture consists of four core elements: a crawler, an indexer, a retrieval engine, and a user interface that interacts with end users. In this post, I’ll give an introduction to crawlers, crawling strategies, and the main challenges search engines face as the Web grows.

What is a Web Crawler?

A web crawler is an automated web page collector that builds a local cache of copies of the pages it finds. A crawler starts with an initial set of known URLs, also called seed URLs. It then extracts the links inside the known documents and is responsible for downloading these newly found pages in some order. In the crawling field, there are two major crawler types:

  1. Periodic or Snapshot Crawling: The crawler keeps finding new pages until the collection hits a desired size. Then, periodically, it runs the same process and replaces the existing collection with the new one. There are typically very large intervals between two crawls. This method doesn’t use any knowledge gained from previous crawls.
  2. Incremental Crawling: These crawlers keep searching for new links even after the collection becomes as large as desired. Old pages are revisited on a schedule to keep the collection up to date. It’s very hard to crawl a large source (such as the Web) this way: documents need to be refreshed one by one, and newly discovered links must also be folded into the local collection. According to the statistics, the Web grows exponentially, which means there are even more newly added pages than updates to existing documents. Since processing power is limited, it becomes critical to decide which page to (re)visit next from the queue.

The figure above shows that snapshot strategies are more useful for large-scope search engines. On the other hand, if rapid change of documents is guaranteed for a small-scale corpus, an incremental crawler is more effective.
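To make the snapshot approach concrete, here is a toy crawl loop in Python. It is only a sketch of the idea: it ignores robots.txt, politeness delays and URL normalization, and the regex-based link extraction is a stand-in for a real HTML parser.

import re
import urllib.request
from collections import deque

def snapshot_crawl(seed_urls, target_size):
    # Fetch pages breadth-first, starting from the seed URLs,
    # until the local collection reaches the desired size.
    collection = {}              # url -> raw HTML
    frontier = deque(seed_urls)
    seen = set(seed_urls)
    while frontier and len(collection) < target_size:
        url = frontier.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue             # skip pages that fail to download
        collection[url] = html
        # Naive link extraction; newly found pages join the download queue.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return collection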

Scheduling the Download Queue

Although I stated that snapshot crawling is the more likely choice for a large number of documents, the large gaps between updates make it an utterly impractical solution for Web search engines, while competitors such as Google pick up even the least significant change within hours. Incremental crawling looks (and actually is) very costly if we cannot predict when a page is updated or removed. Re-downloading the whole Web at short intervals is also impossible. But what if we could guess how often a page changes? What if we could prioritize each URL and schedule our download queue?
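One simple way to act on that idea, assuming we already have an estimate of each page’s change rate (for example, from past observations), is to keep the download queue as a priority queue ordered by the next expected change time. The sketch below is purely illustrative, not a description of any particular search engine’s scheduler:

import heapq
import time

def schedule(queue, url, changes_per_day):
    # Revisit sooner the pages we expect to change more often.
    revisit_at = time.time() + 86400.0 / max(changes_per_day, 1e-6)
    heapq.heappush(queue, (revisit_at, url))

def next_url(queue):
    # Pop the URL whose scheduled (re)visit time comes first.
    revisit_at, url = heapq.heappop(queue)
    return url

queue = []
schedule(queue, "http://example.com/news", changes_per_day=24)    # changes hourly
schedule(queue, "http://example.com/about", changes_per_day=0.1)  # changes rarely
print(next_url(queue))   # the frequently changing news page comes out first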

Functional Programming for Beginners

Recently, I have been getting many questions about functional programming. Instead of answering everybody one by one, I decided to write a blog post about it. In this article, I’ll try to introduce you to the FP concept. If you are interested, I advise you to get some hands-on experience. There are many widely used functional languages available today: LISP, Haskell, Erlang and F# (new but promising), to name a few.

Firstly, a Brief History…

A long time ago, in the 1930s, when the world was stuck in another economic recession, the lives of four extraordinary mathematicians crossed in Princeton, NJ. These men were not interested in the physical world they were living in, but were trying to create their own universe to find answers about the limits of computation – a word not yet heard by many. The area they worked in was called formal systems, and their main problem was to determine which problems would be solvable if processing power and memory were infinite. One of them was a true materialist, a slave of questioning and curiosity, a British man who decided to move to the new world after graduating from Trinity College. The second was a super brain whose Ph.D. dissertation was accepted when he was just 23 years old, nicknamed “Mr. Why”, a close friend of Albert Einstein. The other two were recent Princeton graduates who decided to go on to graduate school. Respectively, these men were Alan Turing, Kurt Gödel, Alonzo Church and Stephen Kleene. In 1936, Turing extended Gödel’s study on the limits of proof and computation, replacing Gödel’s universal arithmetic-based formal language with formal devices called Turing machines. At the same time, the two young grad students, Church and Kleene, were designing a universal model of computation that turned out to be identical to Turing machines in power. Their formal system was called lambda calculus. To say it in a clearer and less scientific way: they invented a language, lambda calculus, capable of serving as the smallest universal programming language in the world.

Lambda Calculus

Lambda calculus is, in this sense, the common programming language of the world. The main aim of its inventors was to prove that any computable function can be expressed and evaluated using this formalization. In the universe of lambda calculus, the key elements are <name>, <expression> and <application>, where:

<expression>  := <name> | <function> | <application>
<function>    := λ<name>.<expression>
<application> := <expression> <expression>

A <name> in lambda calculus cannot be associated with different values, so it is not called a “variable.” Imagine that your favourite imperative programming language didn’t let you change the values of variables by default. Yes, it sounds like a headache at first, but the whole concept stands on these rules. Now, let’s move on to a more practical example: a function that multiplies its input by 2.

λx. x*2

Applied to an argument, (λx. x*2) 3 reduces to 3*2 = 6.
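Purely as an analogy (Python is not lambda calculus, but the shape is the same), the example above can be written with an anonymous function:

double = lambda x: x * 2   # corresponds to λx. x*2
print(double(3))           # (λx. x*2) 3  →  6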

For great examples, I suggest reading “A Tutorial Introduction to the Lambda Calculus” by Rojas.

BigTable Concept: Why do the World’s Smartest People Ignore Relational DBs?

In the era of the Internet, the key problem is scalability. As the cloud’s popularity climbs, we are hearing more about its constraints. So far, I have only had time to play with Google’s App Engine and Microsoft’s Azure Services Platform. Cloud developers are mostly shocked by the new non-relational databases that cloud services offer as the only alternative. Google calls it BigTable, and Microsoft finds a new place in its own terminology dictionary for the BLOB. Many start to wonder what the hype about relational databases over the past 30 years was all about. First of all, let’s be clear that this is not a replacement, but a more efficient way to store data by eliminating the not-so-fundamental, over-engineered functionality layers of current relational database management systems. Yes, good news for people who make a living designing super large, highly normalized databases to ensure data integrity.

In a relational database, everything is under control; you can add constraints to ensure nobody will be able to enter a duplicate row. On deletion, you can program the DBMS to handle useless orphan rows. Best of all, a relational DBMS will pre-process your SQL query before executing it, to avoid silly performance mistakes you might make. Now think of the environment: constraints over constraints, query execution strategies, a high level of dependence, and complex indexing methods. This package works great unless you want to distribute the tables across different machines. Can you imagine joining two tables that are distributed over 100,000 nodes? For Google, this is an everyday problem (or better, call it an every-millisecond issue). Luckily, Google’s data has particular characteristics; according to Jeffrey Dean, they are able to handle at the application level the constraints a DBMS would normally enforce while processing data. Consequently, Google keeps data in a very basic form, as <key, value, timestamp> tuples.

BigTable looks like a very large B+ tree with three levels of hierarchy. All tables are sorted, and they are split into pieces called tablets. The first two levels are metadata tables that locate the right tablet for you. The root tablet is not distributed, but thanks to prefetching and aggressive caching, it is not actually the bottleneck of the system. The final-level tablets point to physical files (managed by the Google File System). GFS keeps three copies of each file in the system, so even if a machine goes down, there are still two other copies somewhere else. In the second figure, a row of a tablet is illustrated. com.cnn.www is the key in this case, and the value has three different columns: contents, anchor:cnnsi.com and anchor:my.look.ca. Notice the timestamps: these fields may contain more than one version of an entry. As the Google crawler finds updated content on www.cnn.com, a new layer is added. This leads BigTable to provide a three-dimensional presentation of data.
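To illustrate that three-dimensional (row, column, timestamp) model, here is a toy in-memory sketch in Python. It only mirrors the addressing scheme described above; tablets, GFS replication, compression and everything else that makes BigTable actually scale are left out.

class ToyBigTable:
    def __init__(self):
        self._cells = {}   # (row, column) -> {timestamp: value}

    def put(self, row, column, timestamp, value):
        # Each (row, column) cell keeps every timestamped version.
        self._cells.setdefault((row, column), {})[timestamp] = value

    def get(self, row, column, timestamp=None):
        # Return the latest version, or the latest one at/before `timestamp`.
        versions = self._cells.get((row, column), {})
        if not versions:
            return None
        if timestamp is None:
            return versions[max(versions)]
        older = [t for t in versions if t <= timestamp]
        return versions[max(older)] if older else None

# The row described above, with two versions of its contents:
table = ToyBigTable()
table.put("com.cnn.www", "contents", 1, "<html>first crawl</html>")
table.put("com.cnn.www", "contents", 2, "<html>updated page</html>")
table.put("com.cnn.www", "anchor:cnnsi.com", 1, "CNN")
print(table.get("com.cnn.www", "contents"))      # latest version
print(table.get("com.cnn.www", "contents", 1))   # older version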

At the end of the day, BigTable is not rocket science. It is compact, easy to adopt and very straightforward. Many friends know that I came up with a very similar concept while designing Rootapi two years ago, back when I hadn’t yet heard of BigTable. Additionally, I was saving values as JSON (an equality operation was enough for querying) in blocks that were multiples of the sector size of my physical hard drives. IO operations were super fast, the JSON-based web services were super fast, and it was highly distributable, although I never found a good environment to explore the severe situations in depth.
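The sector-alignment trick itself is easy to illustrate. The snippet below is a reconstruction of the general idea, not Rootapi’s actual code, and assumes a 4096-byte sector size:

import json

SECTOR_SIZE = 4096   # assumed sector size in bytes

def pack_record(value):
    # Serialize to JSON and pad so the record spans whole sectors,
    # keeping every record aligned to a sector boundary on disk.
    data = json.dumps(value).encode("utf-8")
    padded_len = -(-len(data) // SECTOR_SIZE) * SECTOR_SIZE   # round up
    return data.ljust(padded_len, b" ")

record = pack_record({"key": "com.cnn.www", "value": "...", "timestamp": 2})
print(len(record))   # 4096 — exactly one sector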

As we move onto the cloud, this is how we are going to look at data storage. If you need more technical details, I highly recommend taking a look at the following references:

  1. BigTable: A Distributed Structured Storage System
  2. Bigtable: A Distributed Storage System for Structured Data – the original BigTable paper, which appeared in OSDI ’06.
  3. Google File System – an introduction to GFS by Aaron Kimball.