Thursday, January 14, 2010

80legs Crawling

80legs is a web crawling service that runs on a distributed grid of 50,000 computers, spidering the web at a rate of 2 billion pages per day and analyzing the content it finds.

The service is accessed on demand by setting up a job and executing it. As with any crawling process, a job needs a seed list, which can be supplied as a text file of up to 1 GB. The other job parameters are listed below, followed by a sketch of what such a job definition might look like:

- Outgoing links – which of the links found on a page should be followed
- Depth level – how many links away from a seed URL the crawler will go
- Crawling type – crawl several depth levels at the same time or only one depth level at a time
- Number of URLs – the maximum number of URLs to crawl
- MIME types – the page types to crawl
- Analyze options – several analysis options such as keyword matching, regular expressions, or running custom code
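
To make these parameters concrete, here is a minimal sketch of how such a job definition might look; the field names are illustrative assumptions, not the actual 80legs configuration format, which is set up through the portal or the paid API:

    # Illustrative sketch only: these field names are assumptions, not the
    # real 80legs job format (jobs are configured via the portal or the API).
    job = {
        "seed_list": "seeds.txt",         # text file of starting URLs, up to 1 GB
        "outgoing_links": "same_domain",  # which links found on a page to follow
        "depth_level": 3,                 # how many links away from a seed to go
        "crawl_type": "one_depth_at_a_time",  # or all depths at the same time
        "max_urls": 100_000,              # hard cap on pages crawled
        "mime_types": ["text/html"],      # page types to fetch
        "analysis": {"keywords": ["crawler", "grid"]},  # or regex / custom code
    }

    # The seed list itself is just one URL per line.
    with open("seeds.txt", "w") as f:
        f.write("http://example.com/\nhttp://example.org/\n")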

When a job runs, the crawler starts from the seed pages, follows links according to the outgoing-links settings, and analyzes the content of each page it reads. Simple analysis is available by specifying keywords to match or by selecting information with regular expressions, while more complex analysis can be performed with a custom application or a pre-built 80legs application. Analysis applications must be written in Java. 80legs plans to open an application store where developers can sell their applications at a price of their choosing and keep all of the revenue, and it has launched a contest to attract developers.
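
As a rough illustration of what the simple keyword and regular-expression analysis does on a fetched page, here is a generic Python sketch; it is not an 80legs application (those must be written in Java for the 80legs platform), just the same idea in standalone form:

    import re
    from urllib.request import urlopen

    def analyze(url, keywords, pattern):
        """Fetch a page, count keyword occurrences, and collect regex matches."""
        html = urlopen(url).read().decode("utf-8", errors="replace")
        hits = {kw: html.lower().count(kw.lower()) for kw in keywords}
        matches = re.findall(pattern, html)
        return hits, matches

    # Example: count mentions of "crawler" and pull out mailto: addresses.
    hits, emails = analyze("http://example.com/",
                           keywords=["crawler", "grid"],
                           pattern=r"mailto:([\w.+-]+@[\w.-]+)")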

Paid subscriptions offer access to a Python API for interacting with the crawling engine, and a Perl API is planned. Free subscribers create and control their jobs through the 80legs Portal.

There is a free plan with some limitations: 1 job at a time, up to 100,000 pages of at most 100 KB each, a 10 MB analysis application (Java JAR), no API access, and 1 hit per second per crawled domain. There are two paid subscriptions; the top one offers 5 concurrent, repeatable jobs with 10 million pages per job, 10 MB per page, a 10 MB JAR, and 10 hits per second per domain, priced at $2 per million pages crawled plus 3 cents per CPU-hour used.
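
As a back-of-the-envelope check on that pricing, the cost of a paid job is simply the pages crawled times the per-page rate plus the CPU time used for analysis; a hedged sketch:

    def estimate_cost(pages, cpu_hours, per_million=2.00, per_cpu_hour=0.03):
        """Rough cost from the published rates: $2 per million pages crawled
        plus $0.03 per CPU-hour of analysis."""
        return (pages / 1_000_000) * per_million + cpu_hours * per_cpu_hour

    # A full 10-million-page job using 40 CPU-hours of analysis (assumed figure):
    print(estimate_cost(10_000_000, 40))   # -> 21.2 dollars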
