• April 29, 2024

Python Web Scraping Example

A Practical Introduction to Web Scraping in Python

Web scraping is the process of collecting and parsing raw data from the Web, and the Python community has come up with some pretty powerful web scraping tools.
The Internet hosts perhaps the greatest source of information—and misinformation—on the planet. Many disciplines, such as data science, business intelligence, and investigative reporting, can benefit enormously from collecting and analyzing data from websites.
In this tutorial, you’ll learn how to:
Parse website data using string methods and regular expressions
Parse website data using an HTML parser
Interact with forms and other website components
Scrape and Parse Text From Websites
Collecting data from websites using an automated process is known as web scraping. Some websites explicitly forbid users from scraping their data with automated tools like the ones you’ll create in this tutorial. Websites do this for two possible reasons:
The site has a good reason to protect its data. For instance, Google Maps doesn’t let you request too many results too quickly.
Making many repeated requests to a website’s server may use up bandwidth, slowing down the website for other users and potentially overloading the server such that the website stops responding entirely.
Let’s start by grabbing all the HTML code from a single web page. You’ll use a page on Real Python that’s been set up for use with this tutorial.
Your First Web Scraper
One useful package for web scraping that you can find in Python’s standard library is urllib, which contains tools for working with URLs. In particular, the urllib.request module contains a function called urlopen() that can be used to open a URL within a program.
In IDLE’s interactive window, type the following to import urlopen():
>>> from urllib.request import urlopen
The web page that we’ll open is at the following URL:
>>> url = "http://olympus.realpython.org/profiles/aphrodite"
To open the web page, pass url to urlopen():
>>> page = urlopen(url)
urlopen() returns an HTTPResponse object:
>>> page
<http.client.HTTPResponse object at 0x105fef820>
To extract the HTML from the page, first use the HTTPResponse object’s .read() method, which returns a sequence of bytes. Then use .decode() to decode the bytes to a string using UTF-8:
>>> html_bytes = page.read()
>>> html = html_bytes.decode("utf-8")
Now you can print the HTML to see the contents of the web page:
>>> print(html)
<html>
<head>
<title>Profile: Aphrodite</title>
</head>
<body bgcolor="yellow">
<center>
<br><br>
<img src="/static/aphrodite.gif" />
<h2>Name: Aphrodite</h2>
<br><br>
Favorite animal: Dove
<br><br>
Favorite color: Red
<br><br>
Hometown: Mount Olympus
</center>
</body>
</html>
Once you have the HTML as text, you can extract information from it in a couple of different ways.
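If you prefer to see all of the above as one runnable program rather than interactive steps, here is a minimal sketch that puts the download together, assuming the same Aphrodite profile URL used above:
from urllib.request import urlopen

url = "http://olympus.realpython.org/profiles/aphrodite"  # tutorial sample page
page = urlopen(url)                 # returns an HTTPResponse object
html = page.read().decode("utf-8")  # bytes -> str
print(html)                         # raw HTML of the profile page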
A Primer on Regular Expressions
Regular expressions—or regexes for short—are patterns that can be used to search for text within a string. Python supports regular expressions through the standard library’s re module.
To work with regular expressions, the first thing you need to do is import the re module:
>>> import re
Regular expressions use special characters called metacharacters to denote different patterns. For instance, the asterisk character (*) stands for zero or more of whatever comes just before the asterisk.
In the following example, you use findall() to find any text within a string that matches a given regular expression:
>>> re.findall("ab*c", "ac")
['ac']
The first argument of re.findall() is the regular expression that you want to match, and the second argument is the string to test. In the above example, you search for the pattern "ab*c" in the string "ac".
The regular expression "ab*c" matches any part of the string that begins with an "a", ends with a "c", and has zero or more instances of "b" between the two. re.findall() returns a list of all matches. The string "ac" matches this pattern, so it’s returned in the list.
Here’s the same pattern applied to different strings:
>>> re.findall("ab*c", "abcd")
['abc']
>>> re.findall("ab*c", "acc")
['ac']
>>> re.findall("ab*c", "abcac")
['abc', 'ac']
>>> re.findall("ab*c", "abdc")
[]
Notice that if no match is found, then findall() returns an empty list.
Pattern matching is case sensitive. If you want to match this pattern regardless of the case, then you can pass a third argument with the value re.IGNORECASE:
>>> re.findall("ab*c", "ABC")
[]
>>> re.findall("ab*c", "ABC", re.IGNORECASE)
['ABC']
You can use a period (.) to stand for any single character in a regular expression. For instance, you could find all the strings that contain the letters "a" and "c" separated by a single character as follows:
>>> re.findall("a.c", "abc")
['abc']
>>> re.findall("a.c", "abbc")
[]
>>> re.findall("a.c", "ac")
[]
>>> re.findall("a.c", "acc")
['acc']
The pattern .* inside a regular expression stands for any character repeated any number of times. For instance, "a.*c" can be used to find every substring that starts with "a" and ends with "c", regardless of which letter—or letters—are in between:
>>> re.findall("a.*c", "abc")
['abc']
>>> re.findall("a.*c", "abbc")
['abbc']
>>> re.findall("a.*c", "ac")
['ac']
>>> re.findall("a.*c", "acc")
['acc']
Often, you use re.search() to search for a particular pattern inside a string. This function is somewhat more complicated than re.findall() because it returns an object called a MatchObject that stores different groups of data. This is because there might be matches inside other matches, and re.search() returns every possible result.
The details of the MatchObject are irrelevant here. For now, just know that calling .group() on a MatchObject will return the first and most inclusive result, which in most cases is just what you want:
>>> match_results = re.search("ab*c", "ABC", re.IGNORECASE)
>>> match_results.group()
'ABC'
There’s one more function in the re module that’s useful for parsing out text. re.sub(), which is short for substitute, allows you to replace text in a string that matches a regular expression with new text. It behaves sort of like the .replace() string method.
The arguments passed to re.sub() are the regular expression, followed by the replacement text, followed by the string. Here’s an example:
>>> string = "Everything is <replaced> if it's in <tags>."
>>> string = re.sub("<.*>", "ELEPHANTS", string)
>>> string
'Everything is ELEPHANTS.'
Perhaps that wasn’t quite what you expected to happen.
re.sub() uses the regular expression "<.*>" to find and replace everything between the first < and the last >, which spans from the beginning of <replaced> to the end of <tags>. This is because Python’s regular expressions are greedy, meaning they try to find the longest possible match when characters like * are used.
Alternatively, you can use the non-greedy matching pattern *?, which works the same way as * except that it matches the shortest possible string of text:
>>> string = "Everything is <replaced> if it's in <tags>."
>>> string = re.sub("<.*?>", "ELEPHANTS", string)
>>> string
"Everything is ELEPHANTS if it's in ELEPHANTS."
This time, re.sub() finds two matches, <replaced> and <tags>, and substitutes the string "ELEPHANTS" for both matches.
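To tie this back to web scraping, here is a small sketch that applies these functions to the html string you downloaded earlier; the exact output shown assumes the Aphrodite profile page from above:
>>> match = re.search("<title.*?>.*?</title.*?>", html, re.IGNORECASE)
>>> title = match.group()
>>> title
'<title>Profile: Aphrodite</title>'
>>> re.sub("<.*?>", "", title)
'Profile: Aphrodite'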
Check Your Understanding
Try the following exercise to check your understanding.
Write a program that grabs the full HTML from the following URL:
url = "http://olympus.realpython.org/profiles/dionysus"
Then use .find() to display the text following "Name:" and "Favorite Color:" (not including any leading spaces or trailing HTML tags that might appear on the same line).
Here’s one way to solve it.
First, import the urlopen function from the urllib.request module:
from urllib.request import urlopen
Then open the URL and use the .read() method of the HTTPResponse object returned by urlopen() to read the page’s HTML:
url = "http://olympus.realpython.org/profiles/dionysus"
html_page = urlopen(url)
html_text = html_page.read().decode("utf-8")
.read() returns a byte string, so you use .decode() to decode the bytes using the UTF-8 encoding.
Now that you have the HTML source of the web page as a string assigned to the html_text variable, you can extract Dionysus’s name and favorite color from his profile. The structure of the HTML for Dionysus’s profile is the same as Aphrodite’s profile that you saw earlier.
You can get the name by finding the string "Name:" in the text and extracting everything that comes after the first occurrence of the string and before the next HTML tag. That is, you need to extract everything after the colon (:) and before the first angle bracket (<). You can use the same technique to extract the favorite color.
The following for loop extracts this text for both the name and favorite color:
for string in ["Name: ", "Favorite Color:"]:
    string_start_idx = html_text.find(string)
    text_start_idx = string_start_idx + len(string)
    next_html_tag_offset = html_text[text_start_idx:].find("<")
    text_end_idx = text_start_idx + next_html_tag_offset
    raw_text = html_text[text_start_idx:text_end_idx]
    clean_text = raw_text.strip(" \r\n\t")
    print(clean_text)
It looks like there’s a lot going on in this for loop, but it’s just a little bit of arithmetic to calculate the right indices for extracting the desired text. Let’s break it down:
You use html_text.find() to find the starting index of the string, either "Name:" or "Favorite Color:", and then assign the index to string_start_idx.
Since the text to extract starts just after the colon in "Name:" or "Favorite Color:", you get the index of the character immediately after the colon by adding the length of the string to string_start_idx and assign the result to text_start_idx.
You calculate the ending index of the text to extract by determining the index of the first angle bracket (<) relative to text_start_idx and assign this value to next_html_tag_offset. Then you add that value to text_start_idx and assign the result to text_end_idx.
You extract the text by slicing html_text from text_start_idx to text_end_idx and assign this string to raw_text.
You remove any whitespace from the beginning and end of raw_text using .strip() and assign the result to clean_text.
At the end of the loop, you use print() to display the extracted text. The final output looks like this:
Dionysus
Wine
This solution is one of many that solves this problem, so if you got the same output with a different solution, then you did great! When you’re ready, you can move on to the next section.
Use an HTML Parser for Web Scraping in Python
Although regular expressions are great for pattern matching in general, sometimes it’s easier to use an HTML parser that’s explicitly designed for parsing out HTML pages. There are many Python tools written for this purpose, but the Beautiful Soup library is a good one to start with.
Install Beautiful Soup
To install Beautiful Soup, you can run the following in your terminal:
$ python3 -m pip install beautifulsoup4
Run pip show to see the details of the package you just installed:
$ python3 -m pip show beautifulsoup4
Name: beautifulsoup4
Version: 4.9.1
Summary: Screen-scraping library
Home-page:
Author: Leonard Richardson
Author-email:
License: MIT
Location: c:\realpython\venv\lib\site-packages
Requires:
Required-by:
In particular, notice that the latest version at the time of writing was 4.9.1.
Create a BeautifulSoup Object
Type the following program into a new editor window:
from bs4 import BeautifulSoup
from urllib.request import urlopen

url = "http://olympus.realpython.org/profiles/dionysus"
page = urlopen(url)
html = page.read().decode("utf-8")
soup = BeautifulSoup(html, "html.parser")
This program does three things:
Opens the URL using urlopen() from the urllib.request module
Reads the HTML from the page as a string and assigns it to the html variable
Creates a BeautifulSoup object and assigns it to the soup variable
The BeautifulSoup object assigned to soup is created with two arguments.
The first argument is the HTML to be parsed, and the second argument, the string "html.parser", tells the object which parser to use behind the scenes. "html.parser" represents Python’s built-in HTML parser.
Use a BeautifulSoup Object
Save and run the above program. When it’s finished running, you can use the soup variable in the interactive window to parse the content of html in various ways. For example, BeautifulSoup objects have a .get_text() method that can be used to extract all the text from the document and automatically remove any HTML tags.
Type the following code into IDLE’s interactive window:
>>> print(soup.get_text())
Profile: Dionysus
Name: Dionysus
Favorite animal: Leopard
Favorite Color: Wine
There are a lot of blank lines in this output. These are the result of newline characters in the HTML document’s text. You can remove them with the string .replace() method if you need to.
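For example, one quick way to collapse the doubled blank lines, shown here only as an illustration rather than as part of the tutorial’s program, is:
>>> print(soup.get_text().replace("\n\n", "\n"))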
Often, you need to get only specific text from an HTML document. Using Beautiful Soup first to extract the text and then using the .find() string method is sometimes easier than working with regular expressions.
However, sometimes the HTML tags themselves are the elements that point out the data you want to retrieve. For instance, perhaps you want to retrieve the URLs for all the images on the page. These links are contained in the src attribute of <img> HTML tags.
In this case, you can use find_all() to return a list of all instances of that particular tag:
>>> soup.find_all("img")
[<img src="/static/dionysus.jpg"/>, <img src="/static/grapes.png"/>]
This returns a list of all <img> tags in the HTML document. The objects in the list look like they might be strings representing the tags, but they’re actually instances of the Tag object provided by Beautiful Soup. Tag objects provide a simple interface for working with the information they contain.
Let’s explore this a little by first unpacking the Tag objects from the list:
>>> image1, image2 = soup.find_all("img")
Each Tag object has a .name property that returns a string containing the HTML tag type:
>>> image1.name
'img'
You can access the HTML attributes of the Tag object by putting their name between square brackets, just as if the attributes were keys in a dictionary.
For example, the <img src="/static/dionysus.jpg"/> tag has a single attribute, src, with the value "/static/dionysus.jpg". Likewise, an HTML tag such as the link <a href="https://realpython.com" target="_blank">Real Python</a> has two attributes, href and target.
To get the source of the images in the Dionysus profile page, you access the src attribute using the dictionary notation mentioned above:
>>> image1["src"]
'/static/dionysus.jpg'
>>> image2["src"]
'/static/grapes.png'
Certain tags in HTML documents can be accessed by properties of the Tag object. For example, to get the <title> tag in a document, you can use the .title property:
>>> soup.title
<title>Profile: Dionysus</title>
If you look at the source of the Dionysus profile by navigating to the profile page, right-clicking on the page, and selecting View page source, then you’ll notice that the <title> tag as written in the document looks like this:
<title >Profile: Dionysus</title/>
Beautiful Soup automatically cleans up the tags for you by removing the extra space in the opening tag and the extraneous forward slash (/) in the closing tag.
You can also retrieve just the string between the title tags with the .string property of the Tag object:
>>> soup.title.string
‘Profile: Dionysus’
One of the more useful features of Beautiful Soup is the ability to search for specific kinds of tags whose attributes match certain values. For example, if you want to find all the <img> tags that have a src attribute equal to the value /static/dionysus.jpg, then you can provide the following additional argument to .find_all():
>>> soup.find_all("img", src="/static/dionysus.jpg")
[<img src="/static/dionysus.jpg"/>]
This example is somewhat arbitrary, and the usefulness of this technique may not be apparent from the example. If you spend some time browsing various websites and viewing their page sources, then you’ll notice that many websites have extremely complicated HTML structures.
When scraping data from websites with Python, you’re often interested in particular parts of the page. By spending some time looking through the HTML document, you can identify tags with unique attributes that you can use to extract the data you need.
Then, instead of relying on complicated regular expressions or using .find() to search through the document, you can directly access the particular tag you’re interested in and extract the data you need.
In some cases, you may find that Beautiful Soup doesn’t offer the functionality you need. The lxml library is somewhat trickier to get started with but offers far more flexibility than Beautiful Soup for parsing HTML documents. You may want to check it out once you’re comfortable using Beautiful Soup.
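If you’re curious what that looks like, here is a minimal, hypothetical sketch that uses lxml to pull the same image sources out of the Dionysus page. It assumes lxml is installed (python3 -m pip install lxml) and reuses the html string from earlier:
from lxml import html as lxml_html

tree = lxml_html.fromstring(html)          # parse the HTML string into an element tree
image_sources = tree.xpath("//img/@src")   # XPath: the src attribute of every <img> tag
print(image_sources)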
BeautifulSoup is great for scraping data from a website’s HTML, but it doesn’t provide any way to work with HTML forms. For example, if you need to search a website for some query and then scrape the results, then BeautifulSoup alone won’t get you very far.
Write a program that grabs the full HTML from the page at the URL http://olympus.realpython.org/profiles. Using Beautiful Soup, print out a list of all the links on the page by looking for HTML tags with the name a and retrieving the value taken on by the href attribute of each tag.
The final output should look like this:
http://olympus.realpython.org/profiles/aphrodite
http://olympus.realpython.org/profiles/poseidon
http://olympus.realpython.org/profiles/dionysus
Here’s one possible solution.
First, import the urlopen function from the urllib.request module and the BeautifulSoup class from the bs4 package:
from urllib.request import urlopen
from bs4 import BeautifulSoup
Each link URL on the /profiles page is a relative URL, so create a base_url variable with the base URL of the website:
base_url = "http://olympus.realpython.org"
You can build a full URL by concatenating base_url with a relative URL.
Now open the /profiles page with urlopen() and use .read() to get the HTML source:
html_page = urlopen(base_url + "/profiles")
html_text = html_page.read().decode("utf-8")
With the HTML source downloaded and decoded, you can create a new BeautifulSoup object to parse the HTML:
soup = BeautifulSoup(html_text, "html.parser")
soup.find_all("a") returns a list of all links in the HTML source. You can loop over this list to print out all the links on the webpage:
for link in soup.find_all("a"):
    link_url = base_url + link["href"]
    print(link_url)
The relative URL for each link can be accessed through the “href” subscript. Concatenate this value with base_url to create the full link_url.
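Putting those pieces together, the complete solution is a short script. This sketch assumes the same olympus.realpython.org practice site used throughout the tutorial:
from urllib.request import urlopen
from bs4 import BeautifulSoup

base_url = "http://olympus.realpython.org"
html_page = urlopen(base_url + "/profiles")
html_text = html_page.read().decode("utf-8")
soup = BeautifulSoup(html_text, "html.parser")

for link in soup.find_all("a"):
    # Each href is relative, so prepend the base URL to get a full link
    link_url = base_url + link["href"]
    print(link_url)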
Interact With HTML Forms
The urllib module you’ve been working with so far in this tutorial is well suited for requesting the contents of a web page. Sometimes, though, you need to interact with a web page to obtain the content you need. For example, you might need to submit a form or click a button to display hidden content.
The Python standard library doesn’t provide a built-in means for working with web pages interactively, but many third-party packages are available from PyPI. Among these, MechanicalSoup is a popular and relatively straightforward package to use.
In essence, MechanicalSoup installs what’s known as a headless browser, which is a web browser with no graphical user interface. This browser is controlled programmatically via a Python program.
Install MechanicalSoup
You can install MechanicalSoup with pip in your terminal:
$ python3 -m pip install MechanicalSoup
You can now view some details about the package with pip show:
$ python3 -m pip show mechanicalsoup
Name: MechanicalSoup
Version: 0.12.0
Summary: A Python library for automating interaction with websites
Home-page:
Author: UNKNOWN
Author-email: UNKNOWN
Requires: requests, beautifulsoup4, six, lxml
In particular, notice that the latest version at the time of writing was 0.12.0. You’ll need to close and restart your IDLE session for MechanicalSoup to load and be recognized after it’s been installed.
Create a Browser Object
Type the following into IDLE’s interactive window:
>>> import mechanicalsoup
>>> browser = mechanicalsoup.Browser()
Browser objects represent the headless web browser. You can use them to request a page from the Internet by passing a URL to their .get() method:
>>> url = "http://olympus.realpython.org/login"
>>> page = browser.get(url)
page is a Response object that stores the response from requesting the URL from the browser:
>>> page
<Response [200]>
The number 200 represents the status code returned by the request. A status code of 200 means that the request was successful. An unsuccessful request might show a status code of 404 if the URL doesn’t exist or 500 if there’s a server error when making the request.
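In a script, you might want to check that code before doing anything else. Here is a small sketch; it relies on the fact that the response exposes the same status_code attribute shown above:
page = browser.get(url)
if page.status_code != 200:
    raise RuntimeError(f"Request failed with status code {page.status_code}")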
MechanicalSoup uses Beautiful Soup to parse the HTML from the request. page has a .soup attribute that represents a BeautifulSoup object:
>>> type(page.soup)
<class 'bs4.BeautifulSoup'>
You can view the HTML by inspecting the .soup attribute:
>>> page.soup
<html>
<head>
<title>Log In</title>
</head>
<body bgcolor="yellow">
<center>
<br><br>
<h2>Please log in to access Mount Olympus:</h2>
<br><br>
<form action="/login" method="post" name="login">
Username: <input name="user" type="text"/><br>
Password: <input name="pwd" type="password"/><br><br>
<input type="submit" value="Submit"/>
</form>
</center>
</body>
</html>
Notice this page has a <form> element on it with <input> elements for a username and a password.
Submit a Form With MechanicalSoup
Open the /login page from the previous example in a browser and look at it yourself before moving on. Try typing in a random username and password combination. If you guess incorrectly, then the message “Wrong username or password! ” is displayed at the bottom of the page.
However, if you provide the correct login credentials (username zeus and password ThunderDude), then you’re redirected to the /profiles page.
In the next example, you’ll see how to use MechanicalSoup to fill out and submit this form using Python!
The important section of HTML code is the login form, that is, everything inside the <form> tags. The <form> on this page has the name attribute set to login. This form contains two <input> elements, one named user and the other named pwd. The third <input> element is the Submit button.
Now that you know the underlying structure of the login form, as well as the credentials needed to log in, let’s take a look at a program that fills the form out and submits it.
In a new editor window, type in the following program:
import mechanicalsoup
# 1
browser = mechanicalsoup.Browser()
url = "http://olympus.realpython.org/login"
login_page = browser.get(url)
login_html = login_page.soup

# 2
form = login_html.select("form")[0]
form.select("input")[0]["value"] = "zeus"
form.select("input")[1]["value"] = "ThunderDude"

# 3
profiles_page = browser.submit(form, login_page.url)
Save the file and press F5 to run it. You can confirm that you successfully logged in by typing the following into the interactive window:
>>> profiles_page.url
'http://olympus.realpython.org/profiles'
Let’s break down the above example:
You create a Browser instance and use it to request the URL. You assign the HTML content of the page to the login_html variable using the .soup property.
login_html.select("form") returns a list of all <form> elements on the page. Since the page has only one <form> element, you can access the form by retrieving the element at index 0 of the list. The next two lines select the username and password inputs and set their value to "zeus" and "ThunderDude", respectively.
You submit the form with browser.submit(). Notice that you pass two arguments to this method, the form object and the URL of the login_page, which you access via login_page.url.
In the interactive window, you confirm that the submission successfully redirected to the /profiles page. If something had gone wrong, then the value of profiles_page.url would still be "http://olympus.realpython.org/login".
Now that we have the profiles_page variable set, let’s see how to programmatically obtain the URL for each link on the /profiles page.
To do this, you use .select() again, this time passing the string "a" to select all the <a> anchor elements on the page:
>>> links = profiles_page.soup.select("a")
Now you can iterate over each link and print the href attribute:
>>> for link in links:
...     address = link["href"]
...     text = link.text
...     print(f"{text}: {address}")
...
Aphrodite: /profiles/aphrodite
Poseidon: /profiles/poseidon
Dionysus: /profiles/dionysus
The URLs contained in each href attribute are relative URLs, which aren’t very helpful if you want to navigate to them later using MechanicalSoup. If you happen to know the full URL, then you can assign the portion needed to construct a full URL.
In this case, the base URL is just http://olympus.realpython.org. Then you can concatenate the base URL with the relative URLs found in the href attribute:
>>> base_url = "http://olympus.realpython.org"
>>> for link in links:
...     address = base_url + link["href"]
...     text = link.text
...     print(f"{text}: {address}")
...
Aphrodite: http://olympus.realpython.org/profiles/aphrodite
Poseidon: http://olympus.realpython.org/profiles/poseidon
Dionysus: http://olympus.realpython.org/profiles/dionysus
You can do a lot with just .get(), .select(), and .submit(). That said, MechanicalSoup is capable of much more. To learn more about MechanicalSoup, check out the official docs.
Here’s another exercise to check your understanding.
Use MechanicalSoup to provide the correct username (zeus) and password (ThunderDude) to the login form located at the URL http://olympus.realpython.org/login. Once the form is submitted, display the title of the current page to determine that you’ve been redirected to the /profiles page.
Your program should print the text All Profiles.
First, import the mechanicalsoup package and create a Browser object:
import mechanicalsoup
browser = mechanicalsoup.Browser()
Point the browser to the login page by passing the URL to browser.get() and grab the HTML with the .soup attribute:
login_url = "http://olympus.realpython.org/login"
login_page = browser.get(login_url)
login_html = login_page.soup
login_html is a BeautifulSoup instance. Since the page has only a single form on it, you can access the form via login_html.form. Using .select(), select the username and password inputs and fill them with the username "zeus" and the password "ThunderDude":
form = login_html.form
form.select("input")[0]["value"] = "zeus"
form.select("input")[1]["value"] = "ThunderDude"
Now that the form is filled out, you can submit it with browser.submit():
profiles_page = browser.submit(form, login_page.url)
If you filled the form with the correct username and password, then profiles_page should actually point to the /profiles page. You can confirm this by printing the title of the page assigned to profiles_page:
print(profiles_page.soup.title.text)
You should see the following text displayed:
All Profiles
If instead you see the text Log In or something else, then the form submission failed.
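For reference, here is the whole exercise as a single script, a sketch that assumes the olympus.realpython.org practice site and the Browser API used above:
import mechanicalsoup

browser = mechanicalsoup.Browser()

login_url = "http://olympus.realpython.org/login"
login_page = browser.get(login_url)
login_html = login_page.soup

# Fill in the username and password inputs
form = login_html.form
form.select("input")[0]["value"] = "zeus"
form.select("input")[1]["value"] = "ThunderDude"

# Submit the form and print the title of the page you land on
profiles_page = browser.submit(form, login_page.url)
print(profiles_page.soup.title.text)  # Expected: All Profiles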
Interact With Websites in Real Time
Sometimes you want to be able to fetch real-time data from a website that offers continually updated information.
In the dark days before you learned Python programming, you had to sit in front of a browser, clicking the Refresh button to reload the page each time you wanted to check if updated content was available. But now you can automate this process using the .get() method of the MechanicalSoup Browser object.
Open your browser of choice and navigate to http://olympus.realpython.org/dice. This /dice page simulates a roll of a six-sided die, updating the result each time you refresh the browser. Below, you’ll write a program that repeatedly scrapes the page for a new result.
The first thing you need to do is determine which element on the page contains the result of the die roll. Do this now by right-clicking anywhere on the page and selecting View page source. A little more than halfway down the HTML code is an <h2> tag that looks like this:
<h2 id="result">3</h2>
The text of the <h2> tag might be different for you, but this is the page element you need for scraping the result.
Let’s start by writing a simple program that opens the /dice page, scrapes the result, and prints it to the console:
import mechanicalsoup

browser = mechanicalsoup.Browser()

page = browser.get("http://olympus.realpython.org/dice")
tag = page.soup.select("#result")[0]
result = tag.text

print(f"The result of your dice roll is: {result}")
This example uses the BeautifulSoup object’s .select() method to find the element with id=result. The string "#result" that you pass to .select() uses the CSS ID selector # to indicate that result is an id value.
To periodically get a new result, you’ll need to create a loop that loads the page at each step. So everything below the line browser = mechanicalsoup.Browser() in the above code needs to go in the body of the loop.
For this example, let’s get four rolls of the dice at ten-second intervals. To do that, the last line of your code needs to tell Python to pause running for ten seconds. You can do this with sleep() from Python’s time module. sleep() takes a single argument that represents the amount of time to sleep in seconds.
Here’s an example that illustrates how sleep() works:
import time
print("I'm about to wait for five seconds...")
time.sleep(5)
print("Done waiting!")
When you run this code, you’ll see that the “Done waiting! ” message isn’t displayed until 5 seconds have passed from when the first print() function was executed.
For the die roll example, you’ll need to pass the number 10 to sleep(). Here’s the updated program:
import time
import mechanicalsoup

browser = mechanicalsoup.Browser()

for i in range(4):
    page = browser.get("http://olympus.realpython.org/dice")
    tag = page.soup.select("#result")[0]
    result = tag.text
    print(f"The result of your dice roll is: {result}")
    time.sleep(10)
When you run the program, you’ll immediately see the first result printed to the console. After ten seconds, the second result is displayed, then the third, and finally the fourth. What happens after the fourth result is printed?
The program continues running for another ten seconds before it finally stops!
Well, of course it does—that’s what you told it to do! But it’s kind of a waste of time. You can stop it from doing this by using an if statement to run time.sleep() for only the first three requests:
    # Wait 10 seconds if this isn't the last request
    if i < 3:
        time.sleep(10)
With techniques like this, you can scrape data from websites that periodically update their data. However, you should be aware that requesting a page multiple times in rapid succession can be seen as suspicious, or even malicious, use of a website. It’s even possible to crash a server with an excessive number of requests, so you can imagine that many websites are concerned about the volume of requests to their server! Always check the Terms of Use and be respectful when sending multiple requests to a website.
Conclusion
Although it’s possible to parse data from the Web using tools in Python’s standard library, there are many tools on PyPI that can help simplify the process. In this tutorial, you learned how to:
Request a web page using Python’s built-in urllib module
Parse HTML using Beautiful Soup
Interact with web forms using MechanicalSoup
Repeatedly request data from a website to check for updates
Writing automated web scraping programs is fun, and the Internet has no shortage of content that can lead to all sorts of exciting projects. Just remember, not everyone wants you pulling data from their web servers. Always check a website’s Terms of Use before you start scraping, and be respectful about how you time your web requests so that you don’t flood a server with traffic.
Additional Resources
For more information on web scraping with Python, check out the following resources:
Beautiful Soup: Build a Web Scraper With Python
API Integration in Python
Python & APIs: A Winning Combo for Reading Public Data
Tutorial: Web Scraping with Python Using Beautiful Soup

Tutorial: Web Scraping with Python Using Beautiful Soup

Published: March 30, 2021
Learn how to scrape the web with Python! The internet is an absolutely massive source of data — data that we can access using web scraping and Python! In fact, web scraping is often the only way we can access data. There is a lot of information out there that isn’t available in convenient CSV exports or easy-to-connect APIs. And websites themselves are often valuable sources of data — consider, for example, the kinds of analysis you could do if you could download every post on a web forum. To access those sorts of on-page datasets, we’ll have to use web scraping.
Don’t worry if you’re still a total beginner! In this tutorial we’re going to cover how to do web scraping with Python from scratch, starting with some answers to frequently asked questions. Then, we’ll work through an actual web scraping project, focusing on weather data. We’ll work together to scrape weather data from the web to support a weather app. But before we start writing any Python, we’ve got to cover the basics! If you’re already familiar with the concept of web scraping, feel free to scroll past these questions and jump right into the tutorial!
The Fundamentals of Web Scraping:
What is Web Scraping in Python?
Some websites offer data sets that are downloadable in CSV format, or accessible via an Application Programming Interface (API). But many websites with useful data don’t offer these convenient options.
Consider, for example, the National Weather Service’s website. It contains up-to-date weather forecasts for every location in the US, but that weather data isn’t accessible as a CSV or via API. It has to be viewed on the NWS site.
If we wanted to analyze this data, or download it for use in some other app, we wouldn’t want to painstakingly copy-paste everything. Web scraping is a technique that lets us use programming to do the heavy lifting. We’ll write some code that looks at the NWS site, grabs just the data we want to work with, and outputs it in the format we need.
In this tutorial, we’ll show you how to perform web scraping using Python 3 and the Beautiful Soup library. We’ll be scraping weather forecasts from the National Weather Service, and then analyzing them using the Pandas library. Just to be clear, lots of programming languages can be used to scrape the web! We also teach web scraping in R, for example. For this tutorial, though, we’ll be sticking with Python.
How Does Web Scraping Work?
When we scrape the web, we write code that sends a request to the server that’s hosting the page we specified. The server will return the source code — HTML, mostly — for the page (or pages) we requested.
So far, we’re essentially doing the same thing a web browser does — sending a server request with a specific URL and asking the server to return the code for that page. But unlike a web browser, our web scraping code won’t interpret the page’s source code and display the page visually. Instead, we’ll write some custom code that filters through the page’s source code looking for specific elements we’ve specified, and extracting whatever content we’ve instructed it to extract.
For example, if we wanted to get all of the data from inside a table that was displayed on a web page, our code would be written to go through these steps in sequence:
1. Request the content (source code) of a specific URL from the server
2. Download the content that is returned
3. Identify the elements of the page that are part of the table we want
4. Extract and (if necessary) reformat those elements into a dataset we can analyze or use in whatever way we need
If that all sounds very complicated, don’t worry!
Python and Beautiful Soup have built-in features designed to make this relatively straightforward. One thing that’s important to note: from a server’s perspective, requesting a page via web scraping is the same as loading it in a web browser. When we use code to submit these requests, we might be “loading” pages much faster than a regular user, and thus quickly eating up the website owner’s server resources.
Why Use Python for Web Scraping?
As previously mentioned, it’s possible to do web scraping with many programming languages. However, one of the most popular approaches is to use Python and the Beautiful Soup library, as we’ll do in this tutorial. Learning to do this with Python will mean that there are lots of tutorials, how-to videos, and bits of example code out there to help you deepen your knowledge once you’ve mastered the Beautiful Soup basics.
Is Web Scraping Legal?
Unfortunately, there’s not a cut-and-dry answer here. Some websites explicitly allow web scraping. Others explicitly forbid it. Many websites don’t offer any clear guidance one way or the other.
Before scraping any website, we should look for a terms and conditions page to see if there are explicit rules about scraping. If there are, we should follow them. If there are not, then it becomes more of a judgement call.
Remember, though, that web scraping consumes server resources for the host website. If we’re just scraping one page once, that isn’t going to cause a problem. But if our code is scraping 1,000 pages once every ten minutes, that could quickly get expensive for the website owner.
So, in addition to following any and all explicit rules about web scraping posted on the site, it’s also a good idea to follow these best practices:
Web Scraping Best Practices:
Never scrape more frequently than you need to.
Consider caching the content you scrape so that it’s only downloaded once.
Build pauses into your code using functions like time.sleep() to keep from overwhelming servers with too many requests too quickly.
In our case for this tutorial, the NWS’s data is public domain and its terms do not forbid web scraping, so we’re in the clear to proceed.
The Components of a Web Page
Before we start writing code, we need to understand a little bit about the structure of a web page. We’ll use the site’s structure to write code that gets us the data we want to scrape, so understanding that structure is an important first step for any web scraping project.
When we visit a web page, our web browser makes a request to a web server. This request is called a GET request, since we’re getting files from the server. The server then sends back files that tell our browser how to render the page for us. These files will typically include:
HTML — the main content of the page.
CSS — used to add styling to make the page look nicer.
JS — Javascript files add interactivity to web pages.
Images — image formats, such as JPG and PNG, allow web pages to show pictures.
After our browser receives all the files, it renders the page and displays it to us. There’s a lot that happens behind the scenes to render a page nicely, but we don’t need to worry about most of it when we’re web scraping. When we perform web scraping, we’re interested in the main content of the web page, so we look primarily at the HTML.
HTML
HyperText Markup Language (HTML) is the language that web pages are created in. HTML isn’t a programming language, like Python, though. It’s a markup language that tells a browser how to display content.
HTML has many functions that are similar to what you might find in a word processor like Microsoft Word — it can make text bold, create paragraphs, and so on.
If you’re already familiar with HTML, feel free to jump to the next section of this tutorial. Otherwise, let’s take a quick tour through HTML so we know enough to scrape effectively.
HTML consists of elements called tags. The most basic tag is the <html> tag. This tag tells the web browser that everything inside of it is HTML. We can make a simple HTML document just using this tag:
<html>
</html>
We haven’t added any content to our page yet, so if we viewed our HTML document in a web browser, we wouldn’t see anything.
Right inside an html tag, we can put two other tags: the head tag, and the body tag. The main content of the web page goes into the body tag. The head tag contains data about the title of the page, and other information that generally isn’t useful in web scraping:
<html>
<head>
</head>
<body>
</body>
</html>
We still haven’t added any content to our page (that goes inside the body tag), so if we open this HTML file in a browser, we still won’t see anything.
You may have noticed above that we put the head and body tags inside the html tag. In HTML, tags are nested, and can go inside other tags.
We’ll now add our first content to the page, inside a p tag. The p tag defines a paragraph, and any text inside the tag is shown as a separate paragraph:
<html>
<head>
</head>
<body>
<p>
Here’s a paragraph of text!
</p>
<p>
Here’s a second paragraph of text!
</p>
</body>
</html>
Rendered in a browser, that HTML file will look like this:
Here’s a paragraph of text!
Here’s a second paragraph of text!
Tags have commonly used names that depend on their position in relation to other tags:
child — a child is a tag inside another tag. So the two p tags above are both children of the body tag.
parent — a parent is the tag another tag is inside. Above, the html tag is the parent of the body tag.
sibling — a sibling is a tag that is nested inside the same parent as another tag. For example, head and body are siblings, since they’re both inside html. Both p tags are siblings, since they’re both inside body.
We can also add properties to HTML tags that change their behavior. Below, we’ll add some extra text and hyperlinks using the a tag.
<html>
<head>
</head>
<body>
<p>
Here’s a paragraph of text!
<a href="https://www.dataquest.io">Learn Data Science Online</a>
</p>
<p>
Here’s a second paragraph of text!
<a href="https://www.python.org">Python</a>
</p>
</body>
</html>
Here’s how this will look:
In the above example, we added two a tags. a tags are links, and tell the browser to render a link to another web page. The href property of the tag determines where the link goes.
a and p are extremely common html tags. Here are a few others:
div — indicates a division, or area, of the page.
b — bolds any text inside.
i — italicizes any text inside.
table — creates a table.
form — creates an input form.
For a full list of tags, look here.
Before we move into actual web scraping, let’s learn about the class and id properties. These special properties give HTML elements names, and make them easier to interact with when we’re scraping.
One element can have multiple classes, and a class can be shared between elements. Each element can only have one id, and an id can only be used once on a page. Classes and ids are optional, and not all elements will have them.
We can add classes and ids to our example:
<html>
<head>
</head>
<body>
<p class="bold-paragraph">
Here’s a paragraph of text!
<a href="https://www.dataquest.io" id="learn-link">Learn Data Science Online</a>
</p>
<p class="bold-paragraph extra-large">
Here’s a second paragraph of text!
<a href="https://www.python.org" class="extra-large">Python</a>
</p>
</body>
</html>
Here’s how this will look:
As you can see, adding classes and ids doesn’t change how the tags are rendered at all.
The requests library
Now that we understand the structure of a web page, it’s time to get into the fun part: scraping the content we want! The first thing we’ll need to do to scrape a web page is to download the page. We can download pages using the Python requests library.
The requests library will make a GET request to a web server, which will download the HTML contents of a given web page for us. There are several different types of requests we can make using requests, of which GET is just one. If you want to learn more, check out our API tutorial.
Let’s try downloading a simple sample website, https://dataquestio.github.io/web-scraping-pages/simple.html. We’ll need to first import the requests library, and then download the page using the requests.get() method:
import requests

page = requests.get("https://dataquestio.github.io/web-scraping-pages/simple.html")
page
<Response [200]>
After running our request, we get a Response object. This object has a status_code property, which indicates if the page was downloaded successfully:
page.status_code
200
A status_code of 200 means that the page downloaded successfully. We won’t fully dive into status codes here, but a status code starting with a 2 generally indicates success, and a code starting with a 4 or a 5 indicates an error.
We can print out the HTML content of the page using the content property:
page.content
<!DOCTYPE html>
<html>
<head>
<title>A simple example page</title>
</head>
<body>
<p>Here is some simple content for this page.</p>
</body>
</html>
Parsing a page with BeautifulSoup
As you can see above, we now have downloaded an HTML document. We can use the BeautifulSoup library to parse this document, and extract the text from the p tag. We first have to import the library, and create an instance of the BeautifulSoup class to parse our document:
from bs4 import BeautifulSoup
soup = BeautifulSoup(page.content, 'html.parser')
We can now print out the HTML content of the page, formatted nicely, using the prettify method on the BeautifulSoup object:
print(soup.prettify())
This step isn’t strictly necessary, and we won’t always bother with it, but it can be helpful to look at prettified HTML to make the structure of the page, and where tags are nested, easier to see.
As all the tags are nested, we can move through the structure one level at a time. We can first select all the elements at the top level of the page using the children property of soup. Note that children returns a list generator, so we need to call the list function on it:
list(soup.children)
['html', '\n', <html>
<head>
<title>A simple example page</title>
</head>
<body>
<p>Here is some simple content for this page.</p>
</body>
</html>]
The above tells us that there are two tags at the top level of the page — the initial <!DOCTYPE html> tag, and the <html> tag. There is a newline character (\n) in the list as well. Let’s see what the type of each element in the list is:
[type(item) for item in list(soup.children)]
[bs4.element.Doctype, bs4.element.NavigableString, bs4.element.Tag]
As we can see, all of the items are BeautifulSoup objects:
The first is a Doctype object, which contains information about the type of the document.
The second is a NavigableString, which represents text found in the HTML document.
The final item is a Tag object, which contains other nested tags.
The most important object type, and the one we’ll deal with most often, is the Tag object.
The Tag object allows us to navigate through an HTML document, and extract other tags and text. You can learn more about the various BeautifulSoup objects here.
We can now select the html tag and its children by taking the third item in the list:
html = list(soup.children)[2]
Each item in the list returned by the children property is also a BeautifulSoup object, so we can also call the children method on html.
Now, we can find the children inside the html tag:
list(html.children)
['\n', <head>
<title>A simple example page</title>
</head>, '\n', <body>
<p>Here is some simple content for this page.</p>
</body>, '\n']
As we can see above, there are two tags here, head, and body. We want to extract the text inside the p tag, so we’ll dive into the body:
body = list(html.children)[3]
Now, we can get the p tag by finding the children of the body tag:
list(body.children)
['\n', <p>Here is some simple content for this page.</p>, '\n']
We can now isolate the p tag:
p = list(body.children)[1]
Once we’ve isolated the tag, we can use the get_text method to extract all of the text inside the tag:
p.get_text()
'Here is some simple content for this page.'
Finding all instances of a tag at once
What we did above was useful for figuring out how to navigate a page, but it took a lot of commands to do something fairly simple. If we want to extract a single tag, we can instead use the find_all method, which will find all the instances of a tag on a page:
soup = BeautifulSoup(page.content, 'html.parser')
soup.find_all('p')
[<p>Here is some simple content for this page.</p>]
Note that find_all returns a list, so we’ll have to loop through it, or use list indexing, to extract text:
soup.find_all('p')[0].get_text()
'Here is some simple content for this page.'
If you instead only want to find the first instance of a tag, you can use the find method, which will return a single BeautifulSoup object:
soup.find('p')
<p>Here is some simple content for this page.</p>
Searching for tags by class and id
We introduced classes and ids earlier, but it probably wasn’t clear why they were useful. Classes and ids are used by CSS to determine which HTML elements to apply certain styles to. But when we’re scraping, we can also use them to specify the elements we want to scrape. To illustrate this principle, we’ll work with the following page:
<html>
<head>
<title>A simple example page</title>
</head>
<body>
<div>
<p class="inner-text first-item" id="first">
First paragraph.
</p>
<p class="inner-text">
Second paragraph.
</p>
</div>
<p class="outer-text first-item" id="second">
<b>
First outer paragraph.
</b>
</p>
<p class="outer-text">
<b>
Second outer paragraph.
</b>
</p>
</body>
</html>
We can access the above document at the URL https://dataquestio.github.io/web-scraping-pages/ids_and_classes.html. Let’s first download the page and create a BeautifulSoup object:
page = requests.get("https://dataquestio.github.io/web-scraping-pages/ids_and_classes.html")
soup = BeautifulSoup(page.content, 'html.parser')
soup
<html>
<head>
<title>A simple example page</title>
</head>
<body>
...
</body>
</html>
Now, we can use the find_all method to search for items by class or by id. In the below example, we’ll search for any p tag that has the class outer-text:
soup.find_all('p', class_='outer-text')
[<p class="outer-text first-item" id="second">
<b>
First outer paragraph.
</b>
</p>, <p class="outer-text">
<b>
Second outer paragraph.
</b>
</p>]
In the below example, we’ll look for any tag that has the class outer-text:
soup.find_all(class_="outer-text")
[<p class="outer-text first-item" id="second">
<b>
First outer paragraph.
</b>
</p>, <p class="outer-text">
<b>
Second outer paragraph.
</b>
</p>]
We can also search for elements by id:
soup.find_all(id="first")
[<p class="inner-text first-item" id="first">
First paragraph.
</p>]
Using CSS Selectors
We can also search for items using CSS selectors. These selectors are how the CSS language allows developers to specify HTML tags to style. Here are some examples:
p a — finds all a tags inside of a p tag.
body p a — finds all a tags inside of a p tag inside of a body tag.
html body — finds all body tags inside of an html tag.
p.outer-text — finds all p tags with a class of outer-text.
p#first — finds all p tags with an id of first.
body p.outer-text — finds any p tags with a class of outer-text inside of a body tag.
You can learn more about CSS selectors here. BeautifulSoup objects support searching a page via CSS selectors using the select method. We can use CSS selectors to find all the p tags in our page that are inside of a div like this:
soup.select("div p")
[<p class="inner-text first-item" id="first">
First paragraph.
</p>, <p class="inner-text">
Second paragraph.
</p>]
Note that the select method above returns a list of BeautifulSoup objects, just like find and find_all.
Downloading weather data
We now know enough to proceed with extracting information about the local weather from the National Weather Service website! The first step is to find the page we want to scrape. We’ll extract weather information about downtown San Francisco from this page. Specifically, let’s extract data about the extended forecast.
As we can see from the image, the page has information about the extended forecast for the next week, including time of day, temperature, and a brief description of the conditions.
Exploring page structure with Chrome DevTools
The first thing we’ll need to do is inspect the page using Chrome DevTools. If you’re using another browser, Firefox and Safari have equivalents. You can start the developer tools in Chrome by clicking View -> Developer -> Developer Tools. You should end up with a panel at the bottom of the browser like what you see below. Make sure the Elements panel is highlighted.
The elements panel will show you all the HTML tags on the page, and let you navigate through them. It’s a really handy feature! By right clicking on the page near where it says “Extended Forecast”, then clicking “Inspect”, we’ll open up the tag that contains the text “Extended Forecast” in the elements panel.
We can then scroll up in the elements panel to find the “outermost” element that contains all of the text that corresponds to the extended forecasts. In this case, it’s a div tag with the id seven-day-forecast.
If we click around on the console, and explore the div, we’ll discover that each forecast item (like “Tonight”, “Thursday”, and “Thursday Night”) is contained in a div with the class tombstone-container.
Time to Start Scraping!
We now know enough to download the page and start parsing it. In the below code, we will:
Download the web page containing the forecast.
Create a BeautifulSoup class to parse the page.
Find the div with id seven-day-forecast, and assign it to seven_day.
Inside seven_day, find each individual forecast item.
Extract and print the first forecast item.
page = requests.get("https://forecast.weather.gov/MapClick.php?lat=37.7772&lon=-122.4168")
soup = BeautifulSoup(page.content, 'html.parser')
seven_day = soup.find(id="seven-day-forecast")
forecast_items = seven_day.find_all(class_="tombstone-container")
tonight = forecast_items[0]
print(tonight.prettify())
<div class="tombstone-container">
 <p class="period-name">
  Tonight
  <br/>
  <br/>
 </p>
 <p>
  <img alt="Tonight: Mostly clear, with a low around 49. West northwest wind 12 to 17 mph decreasing to 6 to 11 mph after midnight. Winds could gust as high as 23 mph. " class="forecast-icon" src="newimages/medium/nfew.png" title="Tonight: Mostly clear, with a low around 49. West northwest wind 12 to 17 mph decreasing to 6 to 11 mph after midnight. Winds could gust as high as 23 mph. "/>
 </p>
 <p class="short-desc">
  Mostly Clear
 </p>
 <p class="temp temp-low">
  Low: 49 °F
 </p>
</div>
Extracting information from the page
As we can see, inside the forecast item tonight is all the information we want. There are four pieces of information we can extract:
The name of the forecast item — in this case, Tonight.
The description of the conditions — this is stored in the title property of img.
A short description of the conditions — in this case, Mostly Clear.
The temperature low — in this case, 49 °F.
We’ll extract the name of the forecast item, the short description, and the temperature first, since they’re all similar:
period = tonight.find(class_="period-name").get_text()
short_desc = tonight.find(class_="short-desc").get_text()
temp = tonight.find(class_="temp").get_text()
print(period)
print(short_desc)
print(temp)
Tonight
Mostly Clear
Low: 49 °F
Now, we can extract the title attribute from the img tag. To do this, we just treat the BeautifulSoup object like a dictionary, and pass in the attribute we want as a key:
img = tonight.find("img")
desc = img['title']
print(desc)
Tonight: Mostly clear, with a low around 49.
Extracting all the information from the page
Now that we know how to extract each individual piece of information, we can combine our knowledge with CSS selectors and list comprehensions to extract everything at once.
In the below code, we will:
Select all items with the class period-name inside an item with the class tombstone-container in seven_day.
Use a list comprehension to call the get_text method on each BeautifulSoup object.
period_tags = seven_day.select(".tombstone-container .period-name")
periods = [pt.get_text() for pt in period_tags]
periods
['Tonight',
'Thursday',
'ThursdayNight',
'Friday',
'FridayNight',
'Saturday',
'SaturdayNight',
'Sunday',
'SundayNight']
As we can see above, our technique gets us each of the period names, in order. We can apply the same technique to get the other three fields:
short_descs = [sd.get_text() for sd in seven_day.select(".tombstone-container .short-desc")]
temps = [t.get_text() for t in seven_day.select(".tombstone-container .temp")]
descs = [d["title"] for d in seven_day.select(".tombstone-container img")]
print(short_descs)
print(temps)
print(descs)
[‘Mostly Clear’, ‘Sunny’, ‘Mostly Clear’, ‘Sunny’, ‘Slight ChanceRain’, ‘Rain Likely’, ‘Rain Likely’, ‘Rain Likely’, ‘Chance Rain’]
[‘Low: 49 °F’, ‘High: 63 °F’, ‘Low: 50 °F’, ‘High: 67 °F’, ‘Low: 57 °F’, ‘High: 64 °F’, ‘Low: 57 °F’, ‘High: 64 °F’, ‘Low: 55 °F’]
['Tonight: Mostly clear, with a low around 49. ', 'Thursday: Sunny, with a high near 63. North wind 3 to 5 mph. ', 'Thursday Night: Mostly clear, with a low around 50. Light and variable wind becoming east southeast 5 to 8 mph after midnight. ', 'Friday: Sunny, with a high near 67. Southeast wind around 9 mph. ', 'Friday Night: A 20 percent chance of rain after 11pm. Partly cloudy, with a low around 57. South southeast wind 13 to 15 mph, with gusts as high as 20 mph. New precipitation amounts of less than a tenth of an inch possible. ', 'Saturday: Rain likely. Cloudy, with a high near 64. Chance of precipitation is 70%. New precipitation amounts between a quarter and half of an inch possible. ', 'Saturday Night: Rain likely. Cloudy, with a low around 57. Chance of precipitation is 60%. ', 'Sunday: Rain likely. ', 'Sunday Night: A chance of rain. Mostly cloudy, with a low around 55. ']
Combining our data into a Pandas DataFrame
We can now combine the data into a Pandas DataFrame and analyze it. A DataFrame is an object that can store tabular data, making data analysis easy. If you want to learn more about Pandas, check out our free-to-start course.
In order to do this, we’ll call the DataFrame class, and pass in each list of items that we have. We pass them in as part of a dictionary. Each dictionary key will become a column in the DataFrame, and each list will become the values in the column:
import pandas as pd
weather = pd.DataFrame({
“period”: periods,
“short_desc”: short_descs,
“temp”: temps,
“desc”:descs})
weather
  desc                                              period         short_desc         temp
0 Tonight: Mostly clear, with a low around 49. W…  Tonight        Mostly Clear       Low: 49 °F
1 Thursday: Sunny, with a high near 63. North wi…  Thursday       Sunny              High: 63 °F
2 Thursday Night: Mostly clear, with a low aroun…  ThursdayNight  Mostly Clear       Low: 50 °F
3 Friday: Sunny, with a high near 67. Southeast …  Friday         Sunny              High: 67 °F
4 Friday Night: A 20 percent chance of rain afte…  FridayNight    Slight ChanceRain  Low: 57 °F
5 Saturday: Rain likely. Cloudy, with a high ne…   Saturday       Rain Likely        High: 64 °F
6 Saturday Night: Rain likely. Cloudy, with a l…   SaturdayNight  Rain Likely        Low: 57 °F
7 Sunday: Rain likely. Cloudy, with a high near…   Sunday         Rain Likely        High: 64 °F
8 Sunday Night: A chance of rain. Mostly cloudy…   SundayNight    Chance Rain        Low: 55 °F
We can now do some analysis on the data. For example, we can use a regular expression and the Series.str.extract method to pull out the numeric temperature values:
temp_nums = weather["temp"].str.extract("(?P<temp_num>\d+)", expand=False)
weather["temp_num"] = temp_nums.astype('int')
temp_nums
0 49
1 63
2 50
3 67
4 57
5 64
6 57
7 64
8 55
Name: temp_num, dtype: object
We could then find the mean of all the high and low temperatures:
weather["temp_num"].mean()
58.444444444444443
We could also only select the rows that happen at night:
is_night = weather["temp"].str.contains("Low")
weather["is_night"] = is_night
is_night
0 True
1 False
2 True
3 False
4 True
5 False
6 True
7 False
8 True
Name: temp, dtype: bool
weather[is_night]
  desc                                              period         short_desc         temp        temp_num  is_night
0 Tonight: Mostly clear, with a low around 49. W…  Tonight        Mostly Clear       Low: 49 °F  49        True
2 Thursday Night: Mostly clear, with a low aroun…  ThursdayNight  Mostly Clear       Low: 50 °F  50        True
4 Friday Night: A 20 percent chance of rain afte…  FridayNight    Slight ChanceRain  Low: 57 °F  57        True
6 Saturday Night: Rain likely. Cloudy, with a l…   SaturdayNight  Rain Likely        Low: 57 °F  57        True
8 Sunday Night: A chance of rain. Mostly cloudy…   SundayNight    Chance Rain        Low: 55 °F  55        True
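Putting the whole weather example together, here is a single-script sketch of the workflow described above. The URL and the CSS class names come from the walkthrough and may change if the NWS updates its site:
import pandas as pd
import requests
from bs4 import BeautifulSoup

# Example forecast page for downtown San Francisco used in this walkthrough
url = "https://forecast.weather.gov/MapClick.php?lat=37.7772&lon=-122.4168"

page = requests.get(url)
soup = BeautifulSoup(page.content, "html.parser")
seven_day = soup.find(id="seven-day-forecast")

periods = [pt.get_text() for pt in seven_day.select(".tombstone-container .period-name")]
short_descs = [sd.get_text() for sd in seven_day.select(".tombstone-container .short-desc")]
temps = [t.get_text() for t in seven_day.select(".tombstone-container .temp")]
descs = [d["title"] for d in seven_day.select(".tombstone-container img")]

weather = pd.DataFrame({
    "period": periods,
    "short_desc": short_descs,
    "temp": temps,
    "desc": descs,
})

# Pull the numeric part of each temperature and flag the nighttime rows
weather["temp_num"] = weather["temp"].str.extract(r"(?P<temp_num>\d+)", expand=False).astype("int")
weather["is_night"] = weather["temp"].str.contains("Low")

print(weather["temp_num"].mean())
print(weather[weather["is_night"]])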
Next Steps For This Web Scraping Project
If you’ve made it this far, congratulations! You should now have a good understanding of how to scrape web pages and extract data. Of course, there’s still a lot more to learn! If you want to go further, a good next step would be to pick a site and try some web scraping on your own. Some good examples of data to scrape are:
News articles
Sports scores
Weather forecasts
Stock prices
Online retailer prices
You may also want to keep scraping the National Weather Service, and see what other data you can extract from the page, or about your own city. Alternatively, if you want to take your web scraping skills to the next level, you can look for a course that covers both the basics of web scraping and using Python to connect to APIs. With those two skills under your belt, you’ll be able to collect lots of unique and interesting datasets from sites all over the web!
Python Web Scraping Tutorial: Step-By-Step [2021 Guide]
Getting started in web scraping is simple, except when it isn’t, which is why you are here. Python is one of the easiest ways to get started as it is an object-oriented language. Python’s classes and objects are significantly easier to use than in any other language. Additionally, many libraries exist that make building a tool for web scraping in Python an absolute breeze.
In this web scraping Python tutorial, we will outline everything needed to get started with a simple application. It will acquire text-based data from page sources, store it into a file and sort the output according to set parameters. Options for more advanced features when using Python for web scraping will be outlined at the very end with suggestions for implementation. By following the steps outlined below in this tutorial, you will be able to understand how to do web scraping.
What do we call web scraping? Web scraping is an automated process of gathering public data. Web scrapers automatically extract large amounts of public data from target websites in seconds.
This Python web scraping tutorial will work for all operating systems. There will be slight differences when installing either Python or development environments but not in anything else.
Building a web scraper: Python prepwork
Getting to the libraries
WebDrivers and browsers
Finding a cozy place for our Python web scraper
Importing and using libraries
Picking a URL
Defining objects and building lists
Extracting data with our Python web scraper
Exporting the data
More lists. More!
Web scraping with Python best practices
Conclusion
Throughout this entire web scraping tutorial, the Python 3.4+ version will be used. Specifically, we used 3.8.3 but any 3.4+ version should work just fine.
For Windows installations, when installing Python make sure to check “PATH installation”. PATH installation adds executables to the default Windows Command Prompt executable search. Windows will then recognize commands like “pip” or “python” without requiring users to point it to the directory of the executable (e. g. C:/tools/python/…/). If you have already installed Python but did not mark the checkbox, just rerun the installation and select modify. On the second screen select “Add to environment variables”.
Web scraping with Python is easy due to the many useful libraries available
One of the Python advantages is a large selection of libraries for web scraping. These web scraping libraries are part of thousands of Python projects in existence – on PyPI alone, there are over 300,000 projects today. Notably, there are several types of Python web scraping libraries from which you can choose:
Requests
Beautiful Soup
lxml
Selenium
Requests library
Web scraping starts with sending HTTP requests, such as POST or GET, to a website’s server, which returns a response containing the needed data. However, standard Python HTTP libraries are difficult to use and, for effectiveness, require bulky lines of code, further compounding an already problematic issue.
Unlike other HTTP libraries, the Requests library simplifies the process of making such requests by reducing the lines of code, in effect making the code easier to understand and debug without impacting its effectiveness. The library can be installed from within the terminal using the pip command:
pip install requests
The Requests library provides easy methods for sending HTTP GET and POST requests. For example, the function to send an HTTP GET request is aptly named get():
import requests

# Placeholder URL; substitute the page you want to scrape.
response = requests.get("https://example.com")
print(response.text)
If there is a need for a form to be posted, it can be done easily using the post() method. The form data can be sent as a dictionary as follows:
form_data = {'key1': 'value1', 'key2': 'value2'}
# Placeholder URL; substitute the form's target address.
response = requests.post("https://example.com/form", data=form_data)
The Requests library also makes it very easy to use proxies that require authentication:
# Placeholder proxy address and credentials.
proxies = {'https': 'https://user:password@proxy.example.com:8080'}
response = requests.get('https://example.com', proxies=proxies)
But this library has a limitation: it does not parse the extracted HTML data, i.e., it cannot convert the data into a more readable format for analysis. Also, it cannot be used to scrape websites whose content is rendered purely with JavaScript.
Beautiful Soup
Beautiful Soup is a Python library that works with a parser to extract data from HTML and can turn even invalid markup into a parse tree. However, this library is only designed for parsing and cannot request data from web servers in the form of HTML documents/files. For this reason, it is mostly used alongside the Python Requests library. Note that Beautiful Soup makes it easy to query and navigate the HTML, but still requires a parser. The following example demonstrates the use of the html.parser module, which is part of the Python Standard Library.
# Part 1 – Get the HTML using Requests
import requests

url = "https://oxylabs.io/blog"   # the blog page used in this example
response = requests.get(url)

# Part 2 – Find the title element
from bs4 import BeautifulSoup

soup = BeautifulSoup(response.text, "html.parser")
print(soup.title)
This will print the title element as follows:
<title>Oxylabs Blog</title>
Due to its simple ways of navigating, searching and modifying the parse tree, Beautiful Soup is ideal even for beginners and usually saves developers hours of work. For example, to print all the blog titles from this page, the findAll() method can be used. On this page, all the blog titles are in <h2> elements with the class attribute set to blog-card__content-title. This information can be supplied to the findAll method as follows:
blog_titles = soup.findAll('h2', attrs={"class": "blog-card__content-title"})

for title in blog_titles:
    print(title.text)
# Output:
# Prints all blog titles on the page
BeautifulSoup also makes it easy to work with CSS selectors. If a developer knows a CSS selector, there is no need to learn the find() or find_all() methods. The following is the same example, but uses CSS selectors:
blog_titles = soup.select('h2.blog-card__content-title')

for title in blog_titles:
    print(title.text)
While broken-HTML parsing is one of the main features of this library, it also offers numerous other functions, such as detecting page encoding, which further increases the accuracy of the data extracted from the HTML file.
What is more, it can be easily configured, with just a few lines of code, to extract any custom publicly available data or to identify specific data types. Our Beautiful Soup tutorial contains more on this and other configurations, as well as how this library works.
lxml
lxml is a parsing library. It is a fast, powerful, and easy-to-use library that works with both HTML and XML files. Additionally, lxml is ideal when extracting data from large datasets. However, unlike Beautiful Soup, this library is impacted by poorly designed HTML, making its parsing capabilities impeded.
The lxml library can be installed from the terminal using the pip command:
pip install lxml
This library contains an html module for working with HTML. However, the lxml library needs the HTML string first. This HTML string can be retrieved using the Requests library as discussed in the previous section. Once the HTML is available, the tree can be built using the fromstring method as follows:
# After response = requests.get(url)
from lxml import html

tree = html.fromstring(response.text)
This tree object can now be queried using XPath. Continuing the example discussed in the previous section, to get the title of the blogs, the XPath would be as follows:
//h2[@class="blog-card__content-title"]/text()
This XPath can be given to the tree.xpath() function, which will return all the elements matching this XPath. Notice the text() function in the XPath: it extracts the text within the <h2> elements.
blog_titles = tree.xpath('//h2[@class="blog-card__content-title"]/text()')

for title in blog_titles:
    print(title)
Suppose you are looking to learn how to use this library and integrate it into your web scraping efforts or even gain more knowledge on top of your existing expertise. In that case, our detailed lxml tutorial is an excellent place to start.
Selenium
As stated, some websites are written using JavaScript, a language that allows developers to populate fields and menus dynamically. This creates a problem for Python libraries that can only extract data from static web pages. In fact, as noted earlier, the Requests library is not an option when it comes to JavaScript. This is where Selenium web scraping comes in and thrives.
This Python library is an open-source browser automation tool (web driver) that allows you to automate processes such as logging into a social media platform. Selenium is widely used for executing test cases or test scripts on web applications. Its strength in web scraping derives from its ability to render web pages just like any browser, by running JavaScript, something standard web crawlers cannot do. Although built for testing, it is now extensively used by developers for scraping as well.
Selenium requires three components:
Web Browser – Supported browsers are Chrome, Edge, Firefox and Safari
Driver for the browser – See this page for links to the drivers
The selenium package
The selenium package can be installed from the terminal:
pip install selenium
After installation, the appropriate class for the browser can be imported. Once imported, the object of the class will have to be created. Note that this will require the path of the driver executable. Example for the Chrome browser as follows:
from selenium.webdriver import Chrome

driver = Chrome(executable_path='/path/to/driver')
Now any page can be loaded in the browser using the get() method.
driver.get("https://example.com")   # placeholder URL
Selenium allows use of CSS selectors and XPath to extract elements. The following example prints all the blog titles using CSS selectors:
blog_titles = driver.find_elements_by_css_selector('h2.blog-card__content-title')

for title in blog_titles:
    print(title.text)

driver.quit()   # closing the browser
Basically, by running JavaScript, Selenium deals with any content being displayed dynamically and subsequently makes the webpage’s content available for parsing by built-in methods or even Beautiful Soup. Moreover, it can mimic human behavior.
The only downside to using Selenium in web scraping is that it slows the process, because it must first execute the JavaScript code for each page before making it available for parsing. As a result, it is not ideal for large-scale data extraction. But if you wish to extract data at a smaller scale, or if speed is not a concern, Selenium is a great choice.
Web scraping Python libraries compared
                                       | Requests                      | Beautiful Soup                | lxml            | Selenium
Purpose                                | Simplify making HTTP requests | Parsing                       | Parsing         | Simplify making HTTP requests
Ease-of-use                            | High                          | High                          | Medium          | Medium
Speed                                  | Fast                          | Fast                          | Very fast       | Slow
Learning Curve                         | Very easy (beginner-friendly) | Very easy (beginner-friendly) | Easy            | Easy
Documentation                          | Excellent                     | Excellent                     | Good            | Good
JavaScript Support                     | None                          | None                          | None            | Yes
CPU and Memory Usage                   | Low                           | Low                           | Low             | High
Size of Web Scraping Project Supported | Large and small               | Large and small               | Large and small | Small
For this Python web scraping tutorial, we’ll be using three important libraries – BeautifulSoup v4, Pandas, and Selenium. Further steps in this guide assume a successful installation of these libraries. If you receive a “NameError: name * is not defined” it is likely that one of these installations has failed.
Every web scraper uses a browser, as it needs to connect to the destination URL. For testing purposes we highly recommend using a regular browser (i.e., not a headless one), especially for newcomers. Seeing how written code interacts with the application allows simple troubleshooting and debugging, and grants a better understanding of the entire process.
Headless browsers can be used later on as they are more efficient for complex tasks. Throughout this web scraping tutorial we will be using the Chrome web browser although the entire process is almost identical with Firefox.
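For when you do make that switch later, here is a minimal sketch of enabling headless mode through Selenium's Chrome options; the driver path is a placeholder, and on recent Selenium versions the executable_path argument may be handled differently:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')   # run Chrome without opening a visible window
driver = webdriver.Chrome(executable_path='/path/to/chromedriver', options=options)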
To get started, use your preferred search engine to find the “webdriver for Chrome” (or Firefox). Take note of your browser’s current version. Download the webdriver that matches your browser’s version.
If applicable, select the requisite package, download and unzip it. Copy the driver’s executable file to any easily accessible directory. We will only find out later on whether everything was done correctly.
One final step needs to be taken before we can get to the programming part of this web scraping tutorial: using a good coding environment. There are many options, from a simple text editor, with which simply creating a *.py file and writing the code down directly is enough, to a fully-featured IDE (Integrated Development Environment).
If you already have Visual Studio Code installed, picking this IDE would be the simplest option. Otherwise, I’d highly recommend PyCharm for any newcomer as it has very little barrier to entry and an intuitive UI. We will assume that PyCharm is used for the rest of the web scraping tutorial.
In PyCharm, right click on the project area and “New -> Python File”. Give it a nice name!
Time to put all those pips we installed previously to use:
import pandas as pd
from selenium import webdriver
PyCharm might display these imports in grey, as it automatically marks unused libraries. Don’t accept its suggestion to remove unused libs (at least not yet).
We should begin by defining our browser. Depending on the webdriver we picked back in “WebDrivers and browsers”, we should type in:
driver = webdriver.Chrome(executable_path='c:\path\to\windows\webdriver\chromedriver.exe')
OR
driver = webdriver.Firefox(executable_path='/nix/path/to/webdriver/executable')
Python web scraping requires looking into the source of websites
Before performing our first test run, choose a URL. As this web scraping tutorial is intended to create an elementary application, we highly recommend picking a simple target URL:
Avoid data hidden in JavaScript elements. These sometimes need to be triggered by performing specific actions in order to display the required data. Scraping data from JavaScript elements requires more sophisticated use of Python.
Avoid image scraping. Images can be downloaded directly with Selenium instead of being scraped.
Before conducting any scraping activities, ensure that you are scraping public data and are in no way breaching third-party rights. Also, don’t forget to check the robots.txt file for guidance.
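As a side note, Python's standard library can read robots.txt rules for you. Here is a minimal sketch using urllib.robotparser; the domain and path are placeholders:
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url('https://example.com/robots.txt')   # placeholder domain
rp.read()

# True if the rules allow any user agent ('*') to fetch this path.
print(rp.can_fetch('*', 'https://example.com/some/page'))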
Select the landing page you want to visit and input the URL into the driver.get('URL') parameter. Selenium requires that the connection protocol is provided. As such, it is always necessary to attach “http://” or “https://” to the URL.
Try doing a test run by clicking the green arrow at the bottom left or by right clicking the coding environment and selecting ‘Run’.
If you receive an error message stating that a file is missing, double-check whether the path provided in the webdriver definition (“webdriver.*”) matches the location of the webdriver executable. If you receive a message about a version mismatch, redownload the correct webdriver executable.
Python allows coders to design objects without assigning an exact type. An object can be created by simply typing its title and assigning a value.
# Object is “results”, brackets make the object an empty list.
# We will be storing our data here.
results = []
Lists in Python are ordered, mutable and allow duplicate members. Other collections, such as sets or dictionaries, can be used but lists are the easiest to use. Time to make more objects!
# Add the page source to the variable `content`.
content = driver.page_source

# Load the contents of the page, its source, into the BeautifulSoup
# class, which analyzes the HTML as a nested data structure and allows
# selecting its elements by using various selectors.
from bs4 import BeautifulSoup
soup = BeautifulSoup(content)
Before we go on, let’s recap how our code should look so far:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox(executable_path='/nix/path/to/webdriver/executable')
driver.get('URL')   # the chosen target URL

results = []
content = driver.page_source
soup = BeautifulSoup(content)
Try rerunning the application again. There should be no errors displayed. If any arise, a few possible troubleshooting options were outlined in earlier chapters.
We have finally arrived at the fun and difficult part – extracting data out of the HTML file. Since in almost all cases we are taking small sections out of many different parts of the page and we want to store it into a list, we should process every smaller section and then add it to the list:
# Loop over all elements returned by the `findAll` call. It has the filter `attrs` given
# to it in order to limit the data returned to those elements with a given class only.
for element in soup.findAll(attrs={'class': 'list-item'}):
    ...
“findAll” accepts a wide array of arguments. For the purposes of this tutorial we only use “attrs” (attributes). It allows us to narrow down the search by setting up a statement of the form “if this attribute is equal to X, then…”. Classes are easy to find and use, therefore we shall use those.
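For reference, findAll (also available as find_all) accepts several other argument styles besides attrs. A few hedged examples, assuming the soup object created above:
# By tag name only:
titles = soup.findAll('h2')

# By tag name plus class, using the class_ keyword argument:
titles = soup.findAll('h2', class_='title')

# By an attributes dictionary, as used throughout this tutorial:
titles = soup.findAll(attrs={'class': 'title'})

# Limiting the number of results returned:
links = soup.findAll('a', limit=5)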
Let’s visit the chosen URL in a real browser before continuing. Open the page source by using CTRL+U (Chrome) or right click and select “View Page Source”. Find the “closest” class where the data is nested. Another option is to press F12 to open DevTools to select Element Picker. For example, it could be nested as:

<h4 class="title">
    <a href="...">This is a Title</a>
</h4>
Our attribute, “class”, would then be “title”. If you picked a simple target, in most cases data will be nested in a similar way to the example above. Complex targets might require more effort to get the data out. Let’s get back to coding and add the class we found in the source:
# Change ‘list-item’ to ‘title’.
for element in soup.findAll(attrs={'class': 'title'}):
    ...
Our loop will now go through all objects with the class “title” in the page source. We will process each of them:
name = element.find('a')
Let’s take a look at how our loop goes through the HTML:
Our first statement (in the loop itself) finds all elements whose “class” attribute contains “title”. We then execute another search within that class: the find('a') call returns the <a> tag within that element (only exact <a> tags match, not tags whose names merely contain an “a”). Finally, the object is assigned to the variable “name”.
We could then assign the object “name” to our previously created list “results”, but doing this would bring the entire <a> tag with the text inside it into one element. In most cases, we would only need the text itself without any additional tags.
# Add the text of the object "name" to the list "results".
# `.text` extracts the text in the element, omitting the HTML tags.
results.append(name.text)
Our loop will go through the entire page source, find all the occurrences of the classes listed above, then append the nested data to our list:
for element in soup.findAll(attrs={'class': 'title'}):
    name = element.find('a')
    results.append(name.text)
Note that the two statements after the for statement are indented. Loops require indentation to denote nesting. Any consistent indentation will be considered legal. A loop without an indented body will output an “IndentationError” with the offending statement pointed out with the “arrow”.
Python web scraping requires constant double-checking of the code
Even if no syntax or runtime errors appear when running our program, there still might be semantic errors. We should check whether the data is actually assigned to the right object and moved into the array correctly.
One of the simplest ways to check if the data you acquired during the previous steps is being collected correctly is to use “print”. Since arrays have many different values, a simple loop is often used to separate each entry to a separate line in the output:
for x in results:
print(x)
Both “print” and “for” should be self-explanatory at this point. We are only initiating this loop for quick testing and debugging purposes. It is completely viable to print the results directly:
print(results)
So far our code should look like this:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox(executable_path='/nix/path/to/webdriver/executable')
driver.get('URL')   # the chosen target URL

results = []
content = driver.page_source
soup = BeautifulSoup(content)

for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

for x in results:
    print(x)
Running our program now should produce no errors and should display the acquired data in the debugger window. While “print” is great for testing purposes, it isn’t all that great for parsing and analyzing data.
You might have noticed that “import pandas” is still greyed out so far. We will finally get to put the library to good use. I recommend removing the “print” loop for now as we will be doing something similar but moving our data to a csv file.
df = pd.DataFrame({'Names': results})
df.to_csv('names.csv', index=False, encoding='utf-8')
Our two new statements rely on the pandas library. The first one creates a variable “df” and turns the data into a two-dimensional data table (a DataFrame). “Names” is the name of our column, while “results” is the list to be written out. Note that pandas can create multiple columns; we just don’t have enough lists to utilize those parameters (yet).
Our second statement moves the data of variable “df” to a specific file type (in this case “csv”). The first parameter assigns a name to our soon-to-be file along with an extension. Adding the extension is necessary, as “pandas” will otherwise output a file without one and it would have to be changed manually. “index=False” keeps pandas from writing the DataFrame’s row index as an extra column. “encoding” is used to save data in a specific format; UTF-8 will be enough in almost all cases.
No imports should now be greyed out, and running our application should output a “names.csv” file into our project directory. Note that a “Guessed At Parser” warning remains. We could remove it by installing a third-party parser, but for the purposes of this Python web scraping tutorial the default HTML option will do just fine.
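If you do want to silence that warning, a minimal sketch is to name the parser explicitly when creating the BeautifulSoup object; 'html.parser' ships with Python, while 'lxml' requires a pip install lxml beforehand:
# Built-in parser, no extra installation needed:
soup = BeautifulSoup(content, 'html.parser')

# Or, after `pip install lxml`, the faster third-party parser:
soup = BeautifulSoup(content, 'lxml')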
Python web scraping often requires many data points
Many web scraping operations will need to acquire several sets of data. For example, extracting just the titles of items listed on an e-commerce website will rarely be useful. In order to gather meaningful information and to draw conclusions from it at least two data points are needed.
For the purposes of this tutorial, we will try something slightly different. Since acquiring data from the same class would just mean appending to an additional list, we should attempt to extract data from a different class but, at the same time, maintain the structure of our table.
Obviously, we will need another list to store our data in.
other_results = []

for b in soup.findAll(attrs={'class': 'otherclass'}):
    # Assume that the data is nested in 'span' elements.
    name2 = b.find('span')
    other_results.append(name2.text)
Since we will be extracting an additional data point from a different part of the HTML, we will need an additional loop. If needed, we can also add another “if” conditional to control for duplicate entries, as sketched below.
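A minimal sketch of that second loop with the duplicate check added, assuming (as above) that the data sits in 'span' elements inside elements with the placeholder class 'otherclass':
other_results = []

for b in soup.findAll(attrs={'class': 'otherclass'}):
    name2 = b.find('span')
    # Skip entries whose text we have already collected.
    if name2.text not in other_results:
        other_results.append(name2.text)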
Finally, we need to change how our data table is formed:
df = pd.DataFrame({'Names': results, 'Categories': other_results})
So far the newest iteration of our code should look something like this:
for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

for b in soup.findAll(attrs={'class': 'otherclass'}):
    name2 = b.find('span')
    other_results.append(name2.text)

df = pd.DataFrame({'Names': results, 'Categories': other_results})
df.to_csv('names.csv', index=False, encoding='utf-8')
If you are lucky, running this code will output no errors. In some cases, “pandas” will output a “ValueError: arrays must all be the same length” message. Simply put, the lengths of the “results” and “other_results” lists are unequal, therefore pandas cannot create a two-dimensional table.
There are dozens of ways to resolve that error message, from padding the shortest list with “empty” values, to creating dictionaries, to creating two series and listing them out. We shall do the third option:
series1 = pd.Series(results, name='Names')
series2 = pd.Series(other_results, name='Categories')
df = pd.DataFrame({'Names': series1, 'Categories': series2})
Note that the data will not be matched, as the lists are of uneven length, but creating two series is the easiest fix if two data points are needed. Our final code should look something like this:
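Reconstructed from the pieces shown so far (the driver path, URL and class names are all placeholder stand-ins), the final script might look roughly like this:
import pandas as pd
from bs4 import BeautifulSoup
from selenium import webdriver

driver = webdriver.Firefox(executable_path='/nix/path/to/webdriver/executable')
driver.get('URL')   # the chosen target URL

results = []
other_results = []
content = driver.page_source
soup = BeautifulSoup(content)

for a in soup.findAll(attrs={'class': 'class'}):
    name = a.find('a')
    if name not in results:
        results.append(name.text)

for b in soup.findAll(attrs={'class': 'otherclass'}):
    name2 = b.find('span')
    other_results.append(name2.text)

series1 = pd.Series(results, name='Names')
series2 = pd.Series(other_results, name='Categories')
df = pd.DataFrame({'Names': series1, 'Categories': series2})
df.to_csv('names.csv', index=False, encoding='utf-8')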
Running it should create a csv file named “names” with two columns of data.
Our first web scraper should now be fully functional. Of course it is so basic and simplistic that performing any serious data acquisition would require significant upgrades. Before moving on to greener pastures, I highly recommend experimenting with some additional features:
Create matched data extraction by creating a loop that would make lists of an even length (see the sketch after this list).
Scrape several URLs in one go. There are many ways to implement such a feature. One of the simplest options is to simply repeat the code above and change URLs each time. That would be quite boring. Build a loop and an array of URLs to visit.
Another option is to create several arrays to store different sets of data and output it into one file with different rows. Scraping several different types of information at once is an important part of e-commerce data acquisition.
Once a satisfactory web scraper is running, you no longer need to watch the browser perform its actions. Get headless versions of either Chrome or Firefox browsers and use those to reduce load times.
Create a scraping pattern. Think of how a regular user would browse the internet and try to automate their actions. New libraries will definitely be needed. Use “import time” and “from random import randint” to create wait times between pages. Add “scrollto()” or use specific key inputs to move around the browser. It’s nearly impossible to list all of the possible options when it comes to creating a scraping pattern.
Create a monitoring process. Data on certain websites might be time (or even user) sensitive. Try creating a long-lasting loop that rechecks certain URLs and scrapes data at set intervals. Ensure that your acquired data is always fresh.
Make use of the Python Requests library. Requests is a powerful asset in any web scraping toolkit, as it allows you to optimize the HTTP requests sent to servers.
Finally, integrate proxies into your web scraper. Using location specific request sources allows you to acquire data that might otherwise be inaccessible.
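To illustrate the first few suggestions (matched extraction, several URLs, and randomized waits), here is one hedged sketch. The URLs, the 'card' container class and the nested tags are all hypothetical, and the driver, BeautifulSoup and pandas setup from earlier is assumed:
import time
from random import randint

urls = ['https://example.com/page1', 'https://example.com/page2']   # placeholder URLs
matched_results = []

for url in urls:
    driver.get(url)
    soup = BeautifulSoup(driver.page_source)

    # Pull both data points from the same parent element so the rows stay aligned.
    for card in soup.findAll(attrs={'class': 'card'}):
        name = card.find('a')
        category = card.find('span')
        if name and category:
            matched_results.append({'Names': name.text, 'Categories': category.text})

    time.sleep(randint(2, 5))   # polite, randomized pause between pages

df = pd.DataFrame(matched_results)
df.to_csv('names.csv', index=False, encoding='utf-8')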
From here onwards, you are on your own. Building web scrapers in Python, acquiring data and drawing conclusions from large amounts of information is inherently an interesting and complicated process.
If you want to find out more about how proxies or advanced data acquisition tools work, or about specific web scraping use cases, such as web scraping job postings or building a yellow page scraper, check out our blog. We have enough articles for everyone: a more detailed guide on how to avoid blocks when scraping, whether web scraping is legal, an in-depth walkthrough on what a proxy is, and many more!
Adomas Sulcas is a Content Manager at Oxylabs. Having grown up in a tech-minded household, he quickly developed an interest in everything IT and Internet related. When he is not nerding out online or immersed in reading, you will find him on an adventure or coming up with wicked business ideas.
All information on Oxylabs Blog is provided on an “as is” basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website’s terms of service or receive a scraping license.

Frequently Asked Questions About Python Web Scraping

Is web scraping with Python legal?

So is it legal or illegal? Web scraping and crawling aren’t illegal by themselves. After all, you could scrape or crawl your own website, without a hitch. … Big companies use web scrapers for their own gain but also don’t want others to use bots against them.

What is web scraping in Python with example?

Web scraping, also called web data mining or web harvesting, is the process of constructing an agent which can extract, parse, download and organize useful information from the web automatically.

How do I use Python to scrape a website?

To extract data using web scraping with Python, you need to follow these basic steps:
Find the URL that you want to scrape.
Inspect the page.
Find the data you want to extract.
Write the code.
Run the code and extract the data.
Store the data in the required format.
