Scrapy and JSON Data: A Simple Spider

How easy is it to get JSON data with Scrapy?

The answer: very easy, even with only basic knowledge of Scrapy.


The most common question I get asked is: which is the best tool for getting data from web pages?

It is difficult to give a one-size-fits-all answer, because use cases differ widely. I wrote about these differences in an earlier post.

Using Scrapy

Scrapy is perceived as difficult simply because it can do so many things.

It is actually very easy to get started if you follow the correct approach.

Getting Dynamic Data

Let’s look at an example problem and try to solve it in the easiest way possible.

Understanding How the Web Page Works

Open this web page in Chrome and open the developer tools. Go to the Network tab and filter by XHR.

[Screenshot: dynamic page with JSON. The data is populated from a JSON file.]

After examining this, we can see that the data is actually being loaded from a JSON file.

This makes our work very easy: we don’t need to worry about selecting elements or writing complex extraction logic.

Ready to use Templates in Scrapy

Let’s create our scrapy spider.

First, install Scrapy (ideally in a virtual environment):

pip install scrapy

Now, create a simple spider using the default template. You can run the following command to see the list of available templates:

scrapy genspider -l

The output of this command is like this:

Available templates:
  basic
  crawl
  csvfeed
  xmlfeed

Now we can either use the -t basic switch to specify the basic template, or skip the -t switch entirely. The default template is basic, so the result is the same.

scrapy genspider live <domain>

This will create a file named live.py with the skeleton of a Scrapy spider:

import scrapy

class LiveSpider(scrapy.Spider):
    name = 'live'
    allowed_domains = ['']
    start_urls = ['']

    def parse(self, response):
        pass

We know that the request will return a JSON response. We can use Python’s json module to parse it into a plain Python dictionary.
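As a quick illustration, json.loads turns a JSON string into ordinary Python dictionaries and lists, so fields can be read with plain indexing. The payload shape here is an assumption, modeled on the fields the spider extracts below:

```python
import json

# Hypothetical sample payload; a real endpoint would return many more rows.
payload = '{"data": [{"symbol": "ABC", "open": 10.5, "high": 11.2, "low": 10.1}]}'

parsed = json.loads(payload)           # a plain dict
first = parsed["data"][0]              # list/dict indexing works as usual
print(first["symbol"], first["high"])  # ABC 11.2
```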

Scraping the JSON Data

import scrapy
import json

class LiveSpider(scrapy.Spider):
    name = 'live'
    start_urls = ['']

    def parse(self, response):
        # The response body is JSON, so parse it instead of using selectors
        json_response = json.loads(response.text)
        listings = json_response['data']
        for listing in listings:
            yield {
                "symbol": listing['symbol'],
                "open": listing['open'],
                "high": listing['high'],
                "low": listing['low'],
            }

Finally, we can run the spider with the -o switch to export the output to a CSV file.

scrapy runspider live.py -o stocks.csv
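Under the hood, the -o switch simply serializes each yielded dict as a CSV row. Here is roughly what the CSV feed export does, sketched with the standard csv module (the rows and field order are illustrative):

```python
import csv
import io

# Items as the spider would yield them (illustrative values)
items = [
    {"symbol": "ABC", "open": 10.5, "high": 11.2, "low": 10.1},
    {"symbol": "XYZ", "open": 98.0, "high": 99.4, "low": 97.3},
]

buf = io.StringIO()  # stands in for stocks.csv
writer = csv.DictWriter(buf, fieldnames=["symbol", "open", "high", "low"])
writer.writeheader()
writer.writerows(items)

print(buf.getvalue().splitlines()[0])  # symbol,open,high,low
```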

Easy, isn’t it?
