Having already reviewed the classic web harvesting software, we want to sum up some other scraping services and crawlers, scrape plugins and other scrape-related tools.

Web scraping can be applied to a vast variety of fields, and in turn it often requires other technologies to be involved. SEO relies on scraping. Proxying is one method that helps you stay masked while doing heavy web data extraction. Crawling is another sub-technology, indispensable for scraping unordered information sources. Data refining follows the scrape, dealing with the unavoidable inconsistency of harvested data.
In addition, we will consider fast scrape tools that make our lives easier, and some services and handy scrapers that deliver freshly extracted data or images.

Web Scraping directory (classified by function)

Crawling
Proxy for scrape
Scrape services
Scrape plugins
Anti-scrape services
Tracking for change
Scrape for SEO
Fast scrape
Handy scrape
Scrape legal issues

Fast Scrape

Often I need to get something fast from the screen into my pocket. How can I do that without launching a full web scraping application? What can help?

Scraper, a Google Chrome extension, is what makes my life easy. Once installed in Chrome ( :-) ), the tool is always available in the right-click menu. I highlight a sample area and right-click; the matching page content appears on screen, and with one more click it lands in a Google spreadsheet. It could hardly be easier: no applications to run, no data samples to define, no target folders, and so on.

Another fast data extraction tool lives in the cloud: Get By Sample by TheWebMiner. This cloud scraper lets you manually enter data samples from the target site; it then automatically identifies similar data and harvests them. The result is downloadable in CSV, XML and JSON formats.

Scrape services and tools

Among the scrape services we take note of:

  • Grepsr scraping service. The service's administrators set up a scrape project for you, while you retain control over the scrape scheduling and the other data extraction steps.
  • Inspyder, an application for scraping and crawling. It is good for first crawling as many pages as possible, then scraping them by applying a predefined pattern.
  • The A1 Website Scraper extracts text, URLs and so on using only regexes, saving the output to a CSV file. The scraper allows multifaceted tuning for web scraping; however, for mass data gathering it consumes a lot of time.
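The regex-plus-CSV approach that A1 takes can be sketched in a few lines of Python. The HTML sample and the pattern below are illustrative only, not A1's actual configuration:

```python
import csv
import io
import re

# Sample page content; a real scraper would fetch this over HTTP.
html = '<a href="https://example.com/a">A</a> <a href="https://example.com/b">B</a>'

# A deliberately simple regex for href attributes.
links = re.findall(r'href="([^"]+)"', html)

# Write the extracted URLs to CSV, as A1 does with its output.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["url"])
for url in links:
    writer.writerow([url])

print(buf.getvalue())
```

This also illustrates A1's weakness: a regex sees only the literal markup, so any variation in the HTML requires a new pattern.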

Anti-scrape services

Since web scraping methods are now in common use, many site owners are concerned about malicious scrapers stealing website data, mirroring proprietary databases or eating up a site's bandwidth. Why not have some protection against these invasions?

  • We’ve reviewed an anti-scrape service called Distil that proved to be robust and trustworthy. The service is also quite user friendly.
  • Another anti-scrape service is ScrapeShield. It works by replacing your site's usual DNS provider with CloudFlare's DNS, which then becomes responsible for tracking and filtering undesired web robot activity.
  • BotDefender is an anti-scrape service that leverages JavaScript techniques to hide sensitive web page data (e.g. prices) and retrieve them from its servers, instead of exposing those data openly in an unprotected HTML page.
  • There are also some WordPress anti-scrape plugins.

Crawling tools

Then there are cases when users or companies do not need to extract much data from the web; they just need to crawl some web pages and index them by certain criteria. What tools can help here? How about the 80legs service, which does web crawling using the power of thousands of widely distributed consumer computers while they sit idle? The claimed crawling speed rivals that of modern search engines.

Another tool for crawling and scraping is Crawlera by ScrapingHub. It’s not a visual tool, but it gives developers all the convenience of setting up and running Python scrapers.
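Routing a Python scraper through a proxy service like Crawlera boils down to the standard proxy-handler pattern. A minimal sketch with the standard library; the host, port and credentials below are placeholders, not Crawlera's actual endpoint:

```python
import urllib.request

# Placeholder proxy endpoint; substitute your provider's host and API key.
proxy = urllib.request.ProxyHandler({
    "http": "http://user:apikey@proxy.example.com:8010",
    "https": "http://user:apikey@proxy.example.com:8010",
})

# Every request made through this opener is routed via the proxy.
opener = urllib.request.build_opener(proxy)
# opener.open("http://example.com/")  # live network call, shown but not run

print(sorted(proxy.proxies))
```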

Scrape plugins

Need to acquire some fluctuating data to insert into your WordPress-driven web page? The Web Scraper Shortcode plugin is good for that. Just insert it into the HTML with the target URL and the desired element notation, and your page gets enriched with elements of the extracted pages, within the limits you set.

Another geeky tool is WP Web Scraper, a WordPress plugin that extracts web data into custom WordPress pages. The scraper uses the cURL library for fetching and phpQuery for parsing HTML. It is a highly flexible plugin with plenty of optional arguments: regex replacement, adding a base href to links, cache data timeout, target page decoding and others.

Scrape for SEO

How can scraping help your website’s SEO? Fixing broken links to your website first requires identifying them. In the SEOMoz video you can watch how to do it, and also learn more about XPath and regex techniques. A link to a simple Twitter scraper is available there as a bonus.
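Finding broken links, as in the SEOMoz video, is essentially "fetch each URL and keep the ones that fail". A minimal sketch with the HTTP layer stubbed out so the logic stays visible; a real version would plug in a urllib-based fetcher to get live status codes:

```python
def find_broken(urls, get_status):
    """Return the URLs whose HTTP status indicates a broken link."""
    return [u for u in urls if get_status(u) >= 400]

# Stub in place of a real HTTP fetch; swap in a function that performs
# an actual request to check live links.
fake_statuses = {
    "https://example.com/ok": 200,
    "https://example.com/gone": 404,
    "https://example.com/error": 500,
}

broken = find_broken(fake_statuses, fake_statuses.get)
print(broken)
```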

Sometimes you need to gather together all your blog’s posts as they are indexed by Google. How to build a custom Google search results scraper (based on OutWit Hub) is really interesting to watch in this video.

Tracking a web page for changes

Web scraping is often needed in conjunction with tracking particular information. Why harvest the whole content if nothing, or only a tiny part, has changed? In that case you do not need to scrape the page; you only need to be aware of changes on the monitored sites. Such tools, which keep track of target page changes, both free and paid, are reviewed in this post: Web Page Change Tracking.
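The core of most change trackers is simply "fingerprint the page, compare with last time". A minimal sketch, assuming you already have the page content as a string; the sample HTML is invented for illustration:

```python
import hashlib

def fingerprint(page_text):
    """Stable digest of a page's content for change comparison."""
    return hashlib.sha256(page_text.encode("utf-8")).hexdigest()

# Yesterday's snapshot vs. today's fetch (both hardcoded here).
old = fingerprint("<html><body>Price: 10</body></html>")
new = fingerprint("<html><body>Price: 12</body></html>")

changed = old != new
print(changed)
```

Real trackers refine this by hashing only a selected page region, so that rotating ads or timestamps do not trigger false alarms.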

For how to apply one of the free change tracking tools to a particular target page, see this post.

Proxy for scrape

How do I set up my own scraper behind a proxy without programming, or without signing up for and tuning a sophisticated proxy service? ScraperWiki, a toolset and platform, makes this possible. This free service lets you load and run any scraper written in PHP, Python or Ruby. Yes, its original purpose is to let people write or adapt scrapers for non-profit data gathering, but, in my experience, I’ve run my custom scraper on ScraperWiki for the sake of proxying.

Handy scrape

Why spend extra time and effort visiting the same page just to check a few small elements? If you want to look over a picture of the week or the news of the day, use Handy Web Extractor, which sits comfortably in your PC tray. This tiny handy tool makes life easier, freeing you from opening the same pages every day.

Scrape legal issues

The legal issues around scraping and employee monitoring have always been an important consideration, worthy of careful attention from most law-abiding web users. So we call to your attention two posts: How to tell if your website is being scraped illegally, and Ethical issues of using employee monitoring software.

Summary

Web scraping, web mining, data extraction and website scraping indeed encompass a wide range of application technology. Despite some malicious uses, web data scraping serves business intelligence well in the following areas (but is not limited to them):

  • web crawling services
  • data scrape services
  • SEO improvement
  • changes tracking
  • fast scrape

An adjacent area of web scraping is website change tracking and monitoring.