Development

Various topics related to Web Scraper, Web Crawler and Data Processing development

JavaScript rendering library for scraping JavaScript sites

Can you imagine how many scraping instruments are at our service? Though it has a long history, scraping has finally become a multi-language and accessible technique. Unfortunately, there is still a list of non-trivial tasks that can't be solved in a snap.

One of these tasks is scraping JavaScript sites – sites that render their data with JavaScript. Faced with this task, classic scrapers (though not all of them) ignore the JS-generated data and continue their own life cycle. But when this little defect becomes a big problem, developers all over the world take measures. And they did! Today we consider one of the most awesome tools for scraping JS-generated data – Splash.

The elegant scraping of JS-generated data – Splash

Installing Docker

Trying the examples from this article is not possible without Docker. The installation process has some pitfalls, so below is the right way to install Docker on Windows and macOS.

Windows

The best way to install Docker on Windows is to install Docker Toolbox. The installation is a simple “next -> next” process. The package contains the Docker Quickstart program (you will run it to launch Docker), Oracle VM VirtualBox (a helper tool that runs Docker) and Kitematic (also a helper tool, but you won't use it). After launching the Docker Quickstart Terminal you will be ready to work!

Mac OS

Mac users should download Docker from the official site. After that, a simple installation cycle starts:

  1. Open the dmg to start the installation
  2. Drag the app to your Applications folder
  3. Open the app. At this step you should authorize with your system password
  4. If everything went well, you will see the whale icon in the top status bar, and a greeting message will be shown. Run the docker info command to check basic information about your Docker installation

Quick start

Splash is a JavaScript rendering service with an HTTP API, implemented in Python. It is rarely used alone – usually it works in tandem with Scrapy, which provides the main functionality for scraping web pages.

There are two main ways to start using Splash immediately:

Docker

  1. Run docker run -p 8050:8050 scrapinghub/splash
  2. Visit http://127.0.0.1:8050. If you installed Docker Toolbox, try http://192.168.99.100:8050 instead

There you should see the Splash welcome page with its script editor in your browser.

DataFlowkit offering Splash
Eager to test it, but Docker is not installed? Simply visit http://dataflowkit.org:8050/ to play around with Splash (note that this open access to Splash is only temporary – soon you will have to authorize to visit it). The scripting language used is Lua. It's fine if you don't know anything about it – you can use the built-in examples or study Lua in 15 minutes.

How it works

Here we will create a simple scraping project that scrapes JS-generated data. As an example, we will use the site http://quotes.toscrape.com/. The data there is organized in tags, and there is nothing difficult at all about scraping it.

To begin our Scrapy-Splash trip, we need to install Scrapy. To make this process comfortable, it's recommended to install conda first. After that, simply type into your command line:
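    conda install -c conda-forge scrapy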

Alternatively, you can use pip:
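    pip install scrapy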

But for beginners in Python we strongly recommend using the first approach.

After successful installation we can create our first scraping spider:
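    scrapy genspider quotes toscrape.com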

This command creates a quotes.py file with basic functionality, where quotes is the name of the spider (and of the quotes.py file) and toscrape.com is the site to be scraped. Using toscrape.com as the domain allows us to work with its subdomains (quotes.toscrape.com, including pages like quotes.toscrape.com/js/).

If everything is all right, your code should look like the following:
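Something like this (the exact template varies slightly between Scrapy versions):

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        allowed_domains = ['toscrape.com']
        start_urls = ['http://toscrape.com/']

        def parse(self, response):
            pass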

Let's try to scrape the site http://quotes.toscrape.com. As it's a subdomain of toscrape.com, we are able to work with the quotes site.

The first thing we need to do is analyze the site. For example, suppose we simply want to get all quotes with their authors and tags. Press F12 to see where the needed information is located; in Chrome this opens the developer tools with the page markup.

Pointing at a piece of markup, you can see which part of the page it renders. Using this simple method we can see that authors' names are located in <small class="author">, the quote text in <span class="text">, and the tags in <a class="tag">. In general, we have everything we need to start scraping.

The scraping class for the http://quotes.toscrape.com site is below:
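A minimal sketch of such a spider, built on the selectors we just found:

    import scrapy


    class QuotesSpider(scrapy.Spider):
        name = 'quotes'
        allowed_domains = ['toscrape.com']
        start_urls = ['http://quotes.toscrape.com/']

        def parse(self, response):
            # Each quote on the page sits in a <div class="quote">
            for quote in response.css('div.quote'):
                yield {
                    'text': quote.css('span.text::text').get(),
                    'author': quote.css('small.author::text').get(),
                    'tags': quote.css('a.tag::text').getall(),
                }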

In order to create understandable output, we will put the results into a quotes.json file. To launch the scraper and create the JSON file with the output we will use:
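    scrapy runspider quotes.py -o quotes.json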

The results will be saved in JSON format.
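For the quotes site, the beginning of the file should look something like this:

    [
        {
            "text": "“The world as we have created it is a process of our thinking. It cannot be changed without changing our thinking.”",
            "author": "Albert Einstein",
            "tags": ["change", "deep-thoughts", "thinking", "world"]
        },
        ...
    ]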

(code source)

Scrape JS-generated data

Everything is perfect. In the example above we simply got data from the source HTML, but what should we do if we need to scrape JS-generated data? Let's look at a site similar to the one above: http://quotes.toscrape.com/js/. At first glance everything looks like the previous example, so let's try to scrape it – change the value of the start_urls variable to http://quotes.toscrape.com/js/ and run the command once more… and we can't see any data! Strange, isn't it? Let's take a look at the page's source code: press Ctrl + U and you will see code like the following inside the page's HTML in a new tab:
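The body markup is nearly empty; the quotes sit in a JavaScript array that a loop writes into the page. Schematically, it looks like this (the real page stores more fields per quote):

    <script>
        var data = [
            {
                "tags": ["change", "deep-thoughts", "thinking", "world"],
                "author": {"name": "Albert Einstein"},
                "text": "“The world as we have created it is...”"
            },
            ...
        ];
        for (var i in data) {
            var d = data[i];
            document.write("<div class='quote'><span class='text'>"
                           + d['text'] + "</span>...</div>");
        }
    </script>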

At the end of the page there is a loop that generates the HTML with JavaScript. Scrapy can't resolve this trouble solo. Fortunately, Splash is our rescue ranger!

First, we should set up Splash via Docker:
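    docker pull scrapinghub/splash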

Then, run Splash:
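    docker run -p 8050:8050 scrapinghub/splash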

Now we need to install the scrapy-splash plugin with pip:
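    pip install scrapy-splash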

Good, now we have everything to solve the dynamically generated data problem. The code for scraping JS-generated data is below:
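A sketch of such a spider (the middleware values come from the scrapy-splash README; adjust SPLASH_URL to wherever your Splash instance runs):

    import scrapy
    from scrapy_splash import SplashRequest


    class QuotesJSSpider(scrapy.Spider):
        name = 'quotesjs'

        # Middleware wiring as recommended by the scrapy-splash documentation
        custom_settings = {
            'SPLASH_URL': 'http://localhost:8050',  # 192.168.99.100 for Docker Toolbox
            'DOWNLOADER_MIDDLEWARES': {
                'scrapy_splash.SplashCookiesMiddleware': 723,
                'scrapy_splash.SplashMiddleware': 725,
                'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
            },
            'SPIDER_MIDDLEWARES': {
                'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
            },
            'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
        }

        def start_requests(self):
            # Ask Splash to render the page and give the JS time to run
            yield SplashRequest('http://quotes.toscrape.com/js/',
                                self.parse, args={'wait': 1})

        def parse(self, response):
            # The same selectors as for the static site now find the data
            for quote in response.css('div.quote'):
                yield {
                    'text': quote.css('span.text::text').get(),
                    'author': quote.css('small.author::text').get(),
                    'tags': quote.css('a.tag::text').getall(),
                }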

In custom_settings we add the middlewares that are needed for scraping with Splash. All in all, the code is easy enough.

Try to launch the quotes.py file now. The results will be displayed in the logs, or in the .json file if you use one to save the output.

What can we say about Splash after some practice?

  • Easy to use
  • Can be driven from Python or scripted in Lua
  • Open-source project

Today, Splash is on its way to gaining huge popularity, and that's great – the Scrapy ecosystem deserves to be popular!

Scrapy and its awesome features

Instead of an ordinary conclusion, we want to introduce a cool Scrapy feature that will motivate you to study this tool more deeply: scraping infinite-scrolling pages, which lets you get information from long-page websites.

As usual, we will practice on the quotes site. Visit its infinite-scroll version at http://quotes.toscrape.com/scroll, press F12, open the Network tab and start scrolling the page. What's going on in the developer console?

New events appear as the page is scrolled down: new quotes are loaded through AJAX (XMLHttpRequest) requests. All we need to do is retrieve them. The simple code below will help us get all the quotes from such a long page.
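Rather than rendering the page, we can call the same JSON endpoint the browser polls. A sketch, assuming the site's /api/quotes?page=N endpoint with its quotes, page and has_next fields:

    import json

    import scrapy


    class QuotesScrollSpider(scrapy.Spider):
        name = 'quotes-scroll'
        api_url = 'http://quotes.toscrape.com/api/quotes?page={}'
        start_urls = [api_url.format(1)]

        def parse(self, response):
            data = json.loads(response.text)
            for quote in data['quotes']:
                yield {
                    'text': quote['text'],
                    'author': quote['author']['name'],
                    'tags': quote['tags'],
                }
            # The API tells us when there are no more pages to "scroll"
            if data['has_next']:
                yield scrapy.Request(self.api_url.format(data['page'] + 1))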

Nothing complex at all, thanks to such convenient instruments as Scrapy and, of course, Python. We hope this article will be the first step in your scraping adventure with Scrapy.

Design patterns for hierarchical data storage and effective processing

The hierarchical data storage problem is a non-trivial task in the context of relational databases. For example, your online shop has goods of different categories and subcategories, forming a tree five levels deep. How should they be stored in a database?

Luckily, there are several approaches (design patterns) that help the developer design a database structure without redundant tables or code. As a result, the site will work faster, and any changes, even at the database layer, won't cause trouble. We study these approaches below. more…

SQL-injection: how to use it and how to defend against it

SQL (Structured Query Language) is a powerful language for working with relational databases, but quite a few people are in fact ignorant of the dark side of this language, called SQL-injection. Anyone who knows the language well enough can extract the needed data from your site by means of SQL – unless developers build defenses against SQL-injection, of course. Let's discuss how data gets hacked and how to secure your web resource from these kinds of data leaks! more…

Web parsing PHP tools

Almost all developers have faced a data parsing task. The needs differ – from product catalogs to stock prices. Parsing is a very popular area of back-end development; there are specialists who create quality parsers and scrapers. Besides, this theme is very interesting and appeals to everyone who enjoys the web. Today we review PHP tools used for parsing web content. more…

PHPExcel for importing and exporting data when working with Excel

Sometimes when you are developing a project, it might be necessary to parse xls documents. To give an example: you synchronize xls worksheets with a website database, need to convert the xls data to MySQL, and want to do it completely automatically.

If you work with Windows it is simple enough – you just use COM objects. It is another thing, however, if you work with PHP and need it to run under UNIX systems. Fortunately there are many classes and libraries for this purpose. One of them is the PHPExcel class. This library is completely cross-platform, so you will not have portability problems. more…

Web Scraping with Java and HtmlUnit

Web scraping or crawling is the act of fetching data from a third-party website by downloading and parsing the HTML code to extract the data you want. It can be done manually, but generally the term refers to the automated process of downloading the HTML content of a page, parsing/extracting the data, and saving it into a database for further analysis or use. more…
