Crawl a JavaScript Website with Python
Web-scraping JavaScript page with Python – Stack Overflow
I personally prefer using Scrapy and Selenium and dockerizing both in separate containers. That way you can install both with minimal hassle and crawl modern websites, almost all of which contain JavaScript in one form or another. Here's an example:
Use scrapy startproject to create your scraper and write your spider; the skeleton can be as simple as this:
import scrapy


class MySpider(scrapy.Spider):
    name = 'my_spider'
    start_urls = ['']  # put the URL(s) you want to crawl here

    def start_requests(self):
        yield scrapy.Request(self.start_urls[0])

    def parse(self, response):
        # do stuff with results, scrape items etc.
        # for now we're just checking everything worked
        print(response.body)
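For reference, when you do want parse to emit scraped items rather than just print, it can yield plain dictionaries; here is a minimal sketch, where the .product, .name::text and .price::text selectors are placeholders rather than anything from the original post:

    def parse(self, response):
        # placeholder selectors: adjust them to the markup of the site you crawl
        for product in response.css('.product'):
            yield {
                'name': product.css('.name::text').get(),
                'price': product.css('.price::text').get(),
            }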
The real magic happens in middlewares.py. Overwrite two methods in the downloader middleware, __init__ and process_request, in the following way:
# import some additional modules that we need
import os
from copy import deepcopy
from time import sleep

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver


class SampleProjectDownloaderMiddleware(object):

    def __init__(self):
        SELENIUM_LOCATION = os.environ.get('SELENIUM_LOCATION', 'NOT_HERE')
        SELENIUM_URL = f'http://{SELENIUM_LOCATION}:4444/wd/hub'
        chrome_options = webdriver.ChromeOptions()
        # chrome_options.add_experimental_option("mobileEmulation", mobile_emulation)
        self.driver = webdriver.Remote(command_executor=SELENIUM_URL,
                                       desired_capabilities=chrome_options.to_capabilities())

    def process_request(self, request, spider):
        self.driver.get(request.url)

        # sleep a bit so the page has time to load
        # or monitor items on page to continue as soon as page ready
        sleep(4)

        # if you need to manipulate the page content like clicking and scrolling, you do it here
        # e.g. self.driver.find_element_by_css_selector('...').click()

        # you only need the now properly and completely rendered html from your page to get results
        body = deepcopy(self.driver.page_source)

        # copy the current url in case of redirects
        url = deepcopy(self.driver.current_url)

        return HtmlResponse(url, body=body, encoding='utf-8', request=request)
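If the fixed sleep(4) is too blunt, the "monitor items on page" idea can be implemented with Selenium's explicit waits. Here is a minimal sketch of that variant of process_request; the .results selector is a placeholder for whatever element signals that the page is ready:

    # additional imports for explicit waits
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    # inside SampleProjectDownloaderMiddleware:
    def process_request(self, request, spider):
        self.driver.get(request.url)
        # wait up to 10 seconds for a placeholder '.results' element instead of sleeping blindly
        WebDriverWait(self.driver, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, '.results'))
        )
        body = deepcopy(self.driver.page_source)
        url = deepcopy(self.driver.current_url)
        return HtmlResponse(url, body=body, encoding='utf-8', request=request)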
Don't forget to enable this middleware by uncommenting the next lines in the settings.py file:
DOWNLOADER_MIDDLEWARES = {
    # adjust the module path prefix to match your own project
    'sample_project.middlewares.SampleProjectDownloaderMiddleware': 543,
}
Next, dockerization. Create your Dockerfile from a lightweight image (I'm using Python Alpine here), copy your project directory to it, and install the requirements:
# Use an official Python runtime as a parent image
FROM python:3.6-alpine

# install some packages necessary to scrapy and then curl because it's handy for debugging
RUN apk --update add linux-headers libffi-dev openssl-dev build-base libxslt-dev libxml2-dev curl python-dev

WORKDIR /my_scraper

ADD requirements.txt /my_scraper/

RUN pip install -r requirements.txt

ADD . /scrapers
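The Dockerfile installs from a requirements.txt that isn't shown in the post; for this particular setup it would need at least the two libraries used above (exact versions are your choice):

scrapy
selenium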
And finally bring it all together in docker-compose.yml:
version: '2'

services:
  selenium:
    image: selenium/standalone-chrome
    ports:
      - "4444:4444"
    shm_size: 1G

  my_scraper:
    build: .
    depends_on:
      - "selenium"
    environment:
      - SELENIUM_LOCATION=samplecrawler_selenium_1
    volumes:
      - .:/my_scraper
    # use this command to keep the container running
    command: tail -f /dev/null
Run docker-compose up -d. If you're doing this for the first time it will take a while to fetch the latest selenium/standalone-chrome image and to build your scraper image as well.
Once it’s done, you can check that your containers are running with docker ps and also check that the name of the selenium container matches that of the environment variable that we passed to our scraper container (here, it was SELENIUM_LOCATION=samplecrawler_selenium_1).
Enter your scraper container with docker exec -ti YOUR_CONTAINER_NAME sh (for me the command was docker exec -ti samplecrawler_my_scraper_1 sh), cd into the right directory and run your scraper with scrapy crawl my_spider.
The entire thing is on my GitHub page and you can get it from there.
Data Science Skills: Web scraping javascript using python
There are different ways of scraping web pages using Python. In my previous article, I gave an introduction to web scraping using the libraries requests and BeautifulSoup. However, many web pages are dynamic and use JavaScript to load their content. These websites often require a different approach to gather the data.

In this tutorial, I will present several different ways of gathering the content of a webpage that contains JavaScript. The techniques used will be the following:

- Using Selenium with the Firefox web driver
- Using a headless browser with PhantomJS
- Making an API call using a REST client or the Python requests library

TL;DR: for examples of scraping JavaScript web pages in Python you can find the complete code as covered in this tutorial over on GitHub.

Update, November 7th 2019: please note, the HTML structure of the webpage being scraped may be updated over time and this article initially reflected the structure at the time of publication in November 2018. The article has now been updated to run with the current webpage, but in the future this may again change.

To start the tutorial, I first needed to find a website to scrape. Before proceeding with your web scraper, it is important to always check the Terms & Conditions and the Privacy Policy on the website you plan to scrape to ensure that you are not breaking any of their terms of use.

When trying to find a suitable website to demonstrate, many of the examples I first looked at explicitly stated that web crawlers were prohibited. It wasn't until reading an article about sugar content in yogurt, and wondering where I could find the latest nutritional information, that another train of thought led me to a suitable website: online retailers often have dynamic web pages that load content using JavaScript, so the aim of this tutorial is to scrape the nutritional information of yogurts from the web page of an online retailer.

First, we will be using some new Python libraries to access the content of the web pages and also to handle the data; these libraries will need to be installed using your usual Python package manager, pip. If you don't already have BeautifulSoup then you will need to install it here too:

pip install beautifulsoup4
pip install selenium
pip install pandas

To use Selenium as a web driver, there are a few additional requirements:

Firefox
I will be using Firefox as the browser for my web driver, so this means you will either need to install Firefox to follow this tutorial, or alternatively you can use Chromium with chromedriver.

geckodriver
To use the web driver we need to install a web browser engine, geckodriver. You will need to download geckodriver for your OS, extract the file and set the executable path. You can do this in several ways:

(i) move geckodriver to a directory of your choice and define this as the executable path in your Python code (see the short sketch after these setup notes, and the later example),

(ii) move geckodriver to a directory which is already set as a location for executable files; this is known as your PATH environment variable. You can find out which directories are in your $PATH as follows:
Windows
Go to: Control Panel > Environmental Variables > System Variables > Path
Mac OSX / Linux
In your terminal use the command: echo $PATH

(iii) add the geckodriver location to your PATH environment variable:
Windows
Go to: Control Panel > Environmental Variables > System Variables > Path > Edit
Add the directory containing geckodriver to this list and save.
Mac OSX / Linux
Add a line to your .bash_profile (Mac OSX) or .bashrc (Linux):

# add geckodriver to your PATH
export PATH="$PATH:/path/to/your/directory"

Restart your terminal and use the command from (ii) to check that your new path has been added.
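For option (i), here is a minimal sketch of pointing Selenium at the geckodriver binary directly from Python. The path is a placeholder, and note that Selenium 4+ passes the driver path via a Service object instead of the executable_path argument used in older releases:

from selenium import webdriver

# placeholder path: point this at wherever you extracted geckodriver
driver = webdriver.Firefox(executable_path='/path/to/your/directory/geckodriver')
driver.get('https://www.example.com')
print(driver.title)
driver.quit()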
PhantomJS
Similar to the steps for geckodriver, we also need to download PhantomJS. Once downloaded, unzip the file and move it to a directory of choice, or add it to your executable path, following the same instructions as above.

REST Client
In the final part of this blog, we will make a request to an API using a REST client. I will be using Insomnia, but feel free to use whichever client you prefer! The same kind of call can also be made directly from Python with the requests library, as sketched below.
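As a rough preview of that last technique, here is a generic sketch of calling a JSON API with the requests library; the URL is a placeholder, not the endpoint used later in the article:

import requests

# placeholder endpoint: substitute the API URL found via the browser's network tools
url = 'https://www.example.com/api/products'
response = requests.get(url)
response.raise_for_status()
# parse the JSON payload returned by the API
data = response.json()
print(data)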
Following the standard steps outlined in my introductory tutorial into web scraping, I have inspected the webpage and want to extract the repeated HTML element.

As a first step, you might try using BeautifulSoup to extract this information using the following script.

# import libraries
import urllib.request
from bs4 import BeautifulSoup

# specify the url of the product listing page (insert the address you want to scrape)
urlpage = ''
print(urlpage)

# query the website and return the html to the variable 'page'
page = urllib.request.urlopen(urlpage)

# parse the html using beautiful soup and store in variable 'soup'
soup = BeautifulSoup(page, 'html.parser')

# find product items
# at time of publication, Nov 2018:
# results = soup.find_all('div', attrs={'class': 'listing category_templates clearfix productListing'})
# updated Nov 2019:
results = soup.find_all('div', attrs={'class': 'co-product'})
print('Number of results', len(results))

Unexpectedly, when running the Python script, the number of results returned is 0 even though I see many results on the web page!

Number of results 0

When further inspecting the page, there are many dynamic features on the web page which suggests that JavaScript is used to present these results. By right-clicking and selecting View Page Source there are many