Scrapy and JSON Data: A Simple Spider


The most common question I get asked is: which is the best tool for extracting data from web pages?

It is difficult to give a one-size-fits-all answer, as use cases vary widely. I wrote about these differences in an earlier post.

Scrapy is perceived to be difficult just because it can do so much.

It is actually very easy to get started with if you follow the right approach.

Let’s look at an example problem — getting live stock quotes from the NSE stock watch page — and solve it in the easiest way possible.

Open this web page in Chrome and open Developer Tools. Go to the Network tab and filter by XHR.

[Screenshot: Dynamic page — the data is populated from a JSON file]

After examining the requests, we can see that the data is actually being loaded from a JSON file:

https://www.nseindia.com/live_market/dynaContent/live_watch/stock_watch/niftyStockWatch.json

This makes our work very easy: we don’t need to worry about selecting elements or any of the more complex extraction techniques.

Let’s create our scrapy spider.

First, install Scrapy (ideally in a virtual environment):

pip install scrapy

Now, create a simple spider skeleton:

scrapy genspider live nseindia.com

This will create a live.py file containing the skeleton of a Scrapy spider:

import scrapy

class LiveSpider(scrapy.Spider):
    name = 'live'
    allowed_domains = ['nseindia.com']
    start_urls = ['http://nseindia.com/']

    def parse(self, response):
        pass

We know that the request will return a JSON response. We can use Python’s json module to parse it and yield one dictionary per listing.
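As a quick sketch of that parsing step outside of Scrapy, here is the same logic applied to a small made-up fragment that mimics the structure of the feed (the real payload also nests its listings under a top-level "data" key; the values below are invented for illustration):

```python
import json

# Hypothetical fragment mimicking the feed's structure:
# listings sit under a top-level "data" key.
sample = '''{
  "data": [
    {"symbol": "INFY", "open": "1450.00", "high": "1462.50", "low": "1448.10"},
    {"symbol": "TCS", "open": "3301.00", "high": "3333.00", "low": "3290.00"}
  ]
}'''

json_response = json.loads(sample)
rows = [
    {
        'symbol': listing['symbol'],
        'open': listing['open'],
        'high': listing['high'],
        'low': listing['low'],
    }
    for listing in json_response['data']
]
print(rows[0]['symbol'])  # INFY
```

The parse method in the spider below does exactly this, except that it yields each dictionary so Scrapy can collect them as items.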

Finally, we can run Scrapy with the -o switch to write the output to a CSV file:

scrapy runspider live.py -o stocks.csv
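Once the file is written, you can load it back with Python’s csv module. A minimal sketch, using made-up rows in the shape of the spider’s stocks.csv output (real values come from the live feed):

```python
import csv
import io

# Made-up sample in the shape of the stocks.csv output.
sample_csv = (
    "symbol,open,high,low\n"
    "INFY,1450.00,1462.50,1448.10\n"
    "TCS,3301.00,3333.00,3290.00\n"
)

# DictReader maps each row to a dict keyed by the header line.
reader = csv.DictReader(io.StringIO(sample_csv))
rows = list(reader)
print(rows[0]['symbol'])  # INFY
```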

Here is the complete code:

import scrapy
import json


class LiveSpider(scrapy.Spider):
    name = 'live'
    start_urls = ['https://www.nseindia.com/live_market/dynaContent/live_watch/stock_watch/niftyStockWatch.json']

    def parse(self, response):
        # The response body is JSON, not HTML, so parse it directly.
        json_response = json.loads(response.text)
        listings = json_response['data']
        for listing in listings:
            yield {
                'symbol': listing['symbol'],
                'open': listing['open'],
                'high': listing['high'],
                'low': listing['low'],
            }

Easy, isn’t it?

