How to Use BeautifulSoup
Tutorial: Web Scraping with Python Using Beautiful Soup
Published: March 30, 2021

Learn how to scrape the web with Python! The internet is an absolutely massive source of data, and we can access that data using web scraping and Python. In fact, web scraping is often the only way to get at certain data: a lot of information out there isn't available in convenient CSV exports or easy-to-connect APIs, and websites themselves are often valuable sources of data. Consider, for example, the kinds of analysis you could do if you could download every post on a web forum. To access those sorts of on-page datasets, we'll have to use web scraping.

Don't worry if you're still a total beginner! In this tutorial we're going to cover how to do web scraping with Python from scratch, starting with some answers to frequently asked questions. Then we'll work through an actual web scraping project, focusing on weather data: we'll scrape weather data from the web to support a weather app.

But before we start writing any Python, we've got to cover the basics! If you're already familiar with the concept of web scraping, feel free to scroll past these questions and jump right into the tutorial.

The Fundamentals of Web Scraping: What is Web Scraping in Python?

Some websites offer data sets that are downloadable in CSV format, or accessible via an Application Programming Interface (API). But many websites with useful data don't offer these convenient options. Consider, for example, the National Weather Service's website. It contains up-to-date weather forecasts for every location in the US, but that weather data isn't accessible as a CSV or via an API. It has to be viewed on the NWS site.

If we wanted to analyze this data, or download it for use in some other app, we wouldn't want to painstakingly copy-paste everything. Web scraping is a technique that lets us use programming to do the heavy lifting. We'll write some code that looks at the NWS site, grabs just the data we want to work with, and outputs it in the format we need.

In this tutorial, we'll show you how to perform web scraping using Python 3 and the Beautiful Soup library. We'll be scraping weather forecasts from the National Weather Service, and then analyzing them using the Pandas library. Just to be clear, lots of programming languages can be used to scrape the web (we also teach web scraping in R, for example), but for this tutorial we'll be sticking with Python.

How Does Web Scraping Work?

When we scrape the web, we write code that sends a request to the server that's hosting the page we specified. The server will return the source code (HTML, mostly) for the page or pages we requested.

So far, we're essentially doing the same thing a web browser does: sending a server request with a specific URL and asking the server to return the code for that page. But unlike a web browser, our web scraping code won't interpret the page's source code and display the page visually. Instead, we'll write some custom code that filters through the page's source code looking for the specific elements we've specified, and extracting whatever content we've instructed it to extract.

For example, if we wanted to get all of the data from inside a table that was displayed on a web page, our code would go through these steps in sequence:

1. Request the content (source code) of a specific URL from the server.
2. Download the content that is returned.
3. Identify the elements of the page that are part of the table we want.
4. Extract and (if necessary) reformat those elements into a dataset we can analyze or use in whatever way we need.

If that all sounds very complicated, don't worry!
Python and Beautiful Soup have built-in features designed to make this relatively straightforward.

One thing that's important to note: from a server's perspective, requesting a page via web scraping is the same as loading it in a web browser. When we use code to submit these requests, we might be "loading" pages much faster than a regular user, and thus quickly eating up the website owner's server resources.

Why Use Python for Web Scraping?

As previously mentioned, it's possible to do web scraping with many programming languages. However, one of the most popular approaches is to use Python and the Beautiful Soup library, as we'll do in this tutorial. Learning to do this with Python means there are lots of tutorials, how-to videos, and bits of example code out there to help you deepen your knowledge once you've mastered the Beautiful Soup basics.

Is Web Scraping Legal?

Unfortunately, there's not a cut-and-dried answer here. Some websites explicitly allow web scraping. Others explicitly forbid it. Many websites don't offer any clear guidance one way or the other.

Before scraping any website, we should look for a terms and conditions page to see if there are explicit rules about scraping. If there are, we should follow them. If there are not, then it becomes more of a judgement call.

Remember, though, that web scraping consumes server resources for the host website. If we're just scraping one page once, that isn't going to cause a problem. But if our code is scraping 1,000 pages once every ten minutes, that could quickly get expensive for the website owner. So, in addition to following any and all explicit rules about web scraping posted on the site, it's also a good idea to follow these best practices:

Web Scraping Best Practices:
- Never scrape more frequently than you need to.
- Consider caching the content you scrape so that it's only downloaded once.
- Build pauses into your code using functions like time.sleep() to keep from overwhelming servers with too many requests too quickly.

In our case for this tutorial, the NWS's data is public domain and its terms do not forbid web scraping, so we're in the clear to proceed.

Learn to scrape the web with Python, right in your browser! Our interactive APIs and Web Scraping in Python skill path will help you learn the skills you need to unlock new worlds of data with Python. (No credit card required!)

The Components of a Web Page

Before we start writing code, we need to understand a little bit about the structure of a web page. We'll use the site's structure to write code that gets us the data we want to scrape, so understanding that structure is an important first step for any web scraping project.

When we visit a web page, our web browser makes a request to a web server. This request is called a GET request, since we're getting files from the server. The server then sends back files that tell our browser how to render the page for us. These files will typically include:

- HTML: the main content of the page.
- CSS: used to add styling to make the page look nicer.
- JS: JavaScript files add interactivity to web pages.
- Images: image formats, such as JPG and PNG, allow web pages to show pictures.

After our browser receives all the files, it renders the page and displays it to us. There's a lot that happens behind the scenes to render a page nicely, but we don't need to worry about most of it when we're web scraping. When we perform web scraping, we're interested in the main content of the web page, so we look primarily at the HTML.

HTML

HyperText Markup Language (HTML) is the language that web pages are created in. HTML isn't a programming language, like Python, though. It's a markup language that tells a browser how to display content.
HTML has many functions that are similar to what you might find in a word processor like Microsoft Word: it can make text bold, create paragraphs, and so on. If you're already familiar with HTML, feel free to jump to the next section of this tutorial. Otherwise, let's take a quick tour through HTML so we know enough to scrape effectively.

HTML consists of elements called tags. The most basic tag is the <html> tag. This tag tells the web browser that everything inside of it is HTML. We can make a simple HTML document using just this tag. We haven't added any content to our page yet, so if we viewed our HTML document in a web browser, we wouldn't see anything.

Right inside an html tag, we can put two other tags: the head tag and the body tag. The main content of the web page goes into the body tag. The head tag contains data about the title of the page, and other information that generally isn't useful in web scraping:
We still haven't added any content to our page (that goes inside the body tag), so if we open this HTML file in a browser, we still won't see anything. You may have noticed above that we put the head and body tags inside the html tag. In HTML, tags are nested, and can go inside other tags.

We'll now add our first content to the page, inside a p tag. The p tag defines a paragraph, and any text inside the tag is shown as a separate paragraph:

Here's a paragraph of text!
Here's a second paragraph of text!
Rendered in a browser, that HTML file will look like this:

Here's a paragraph of text!
Here's a second paragraph of text!

Tags have commonly used names that depend on their position in relation to other tags:

- child: a child is a tag inside another tag. So the two p tags above are both children of the body tag.
- parent: a parent is the tag another tag is inside. Above, the html tag is the parent of the body tag.
- sibling: a sibling is a tag that is nested inside the same parent as another tag. For example, head and body are siblings, since they're both inside html. Both p tags are siblings, since they're both inside body.

We can also add properties to HTML tags that change their behavior. Below, we'll add some extra text and hyperlinks using the a tag.
Here's a paragraph of text! Learn Data Science Online
Here's a second paragraph of text! Python

Here's how this will look in a browser: two paragraphs, each ending in a clickable link. In the above example, we added two a tags. a tags are links, and tell the browser to render a link to another web page. The href property of the tag determines where the link goes.

a and p are extremely common HTML tags. Here are a few others:

- div: indicates a division, or area, of the page.
- b: bolds any text inside.
- i: italicizes any text inside.
- table: creates a table.
- form: creates an input form.

For a full list of tags, consult an HTML reference.

Before we move into actual web scraping, let's learn about the class and id properties. These special properties give HTML elements names, and make them easier to interact with when we're scraping.

One element can have multiple classes, and a class can be shared between elements. Each element can only have one id, and an id can only be used once on a page. Classes and ids are optional, and not all elements will have them.

We can add classes and ids to our example:
Here’s a paragraph of text! Learn Data Science Online
Here’s a second paragraph of text! Python
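The original markup for this classes-and-ids example is not reproduced in the text above, so here is a minimal sketch of what such a snippet might look like, parsed with BeautifulSoup. The class and id names (bold-paragraph, extra-large, learn-link) and the link URLs are illustrative assumptions, not taken from the original page:

from bs4 import BeautifulSoup

# A small HTML document with classes and ids (illustrative names).
html = """
<html>
  <head></head>
  <body>
    <p class="bold-paragraph">
      Here's a paragraph of text!
      <a href="https://www.dataquest.io" id="learn-link">Learn Data Science Online</a>
    </p>
    <p class="bold-paragraph extra-large">
      Here's a second paragraph of text!
      <a href="https://www.python.org" class="extra-large">Python</a>
    </p>
  </body>
</html>
"""

soup = BeautifulSoup(html, "html.parser")
print(soup.find("a", id="learn-link").get_text())  # -> Learn Data Science Online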
Here's how this will look: as you can see, adding classes and ids doesn't change how the tags are rendered at all.

The requests library

Now that we understand the structure of a web page, it's time to get into the fun part: scraping the content we want! The first thing we'll need to do to scrape a web page is to download the page. We can download pages using the Python requests library.

The requests library will make a GET request to a web server, which will download the HTML contents of a given web page for us. There are several different types of requests we can make using requests, of which GET is just one. If you want to learn more, check out our API tutorial.

Let's try downloading a simple sample website. We'll need to first import the requests library, and then download the page using the requests.get method:

import requests
page = requests.get("...")  # the sample page's URL is omitted in the original text
page
Here is some simple content for this page.
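For context, here is a short, self-contained sketch of what the download step looks like end to end. The URL below is a placeholder, not the address used in the original tutorial:

import requests

# Placeholder URL: substitute the page you actually want to download.
url = "https://example.com/simple.html"
page = requests.get(url)

# A status code of 200 means the request succeeded;
# 4xx/5xx codes indicate client or server errors.
print(page.status_code)

# page.content holds the raw bytes of the response (the page's HTML).
print(page.content)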
Parsing a page with BeautifulSoup

As you can see above, we have now downloaded an HTML document. We can use the BeautifulSoup library to parse this document, and extract the text from the p tag. We first have to import the library, and create an instance of the BeautifulSoup class to parse our document:

from bs4 import BeautifulSoup
soup = BeautifulSoup(page.content, "html.parser")

We can now print out the HTML content of the page, formatted nicely, using the prettify method on the BeautifulSoup object:

print(soup.prettify())

This step isn't strictly necessary, and we won't always bother with it, but it can be helpful to look at prettified HTML, since it makes the structure of the page and where tags are nested easier to see.

As all the tags are nested, we can move through the structure one level at a time. We can first select all the elements at the top level of the page using the children property of soup. Note that children returns a list generator, so we need to call the list function on it:

list(soup.children)
['html', '\n', <html><head>...</head><body><p>Here is some simple content for this page.</p></body></html>]

The above tells us that there are two tags at the top level of the page: the initial <!DOCTYPE html> tag, and the <html> tag. There is a newline character (\n) in the list as well. Let's see what the type of each element in the list is:

[type(item) for item in list(soup.children)]

[bs4.element.Doctype, bs4.element.NavigableString, bs4.element.Tag]

As we can see, all of the items are BeautifulSoup objects:
- The first is a Doctype object, which contains information about the type of the document.
- The second is a NavigableString, which represents text found in the HTML document.
- The final item is a Tag object, which contains other nested tags.

The most important object type, and the one we'll deal with most often, is the Tag object. The Tag object allows us to navigate through an HTML document, and extract other tags and text. You can learn more about the various BeautifulSoup objects in the library's documentation.

We can now select the html tag and its children by taking the third item in the list:

html = list(soup.children)[2]

Each item in the list returned by the children property is also a BeautifulSoup object, so we can also call the children method on html. Now, we can find the children inside the html tag:

list(html.children)
['\n', <head>...</head>, '\n', <body><p>Here is some simple content for this page.</p></body>, '\n']

As we can see above, there are two tags here: head and body. We want to extract the text inside the p tag, so we'll dive into the body:

body = list(html.children)[3]

Now, we can get the p tag by finding the children of the body tag:

list(body.children)

['\n', <p>Here is some simple content for this page.</p>, '\n']

We can now isolate the p tag:

p = list(body.children)[1]

Once we've isolated the tag, we can use the get_text method to extract all of the text inside the tag:

p.get_text()
'Here is some simple content for this page.'

Finding all instances of a tag at once

What we did above was useful for figuring out how to navigate a page, but it took a lot of commands to do something fairly simple. If we want to extract a single tag, we can instead use the find_all method, which will find all the instances of a tag on a page:

soup = BeautifulSoup(page.content, "html.parser")
soup.find_all('p')

[<p>Here is some simple content for this page.</p>]

Note that find_all returns a list, so we'll have to loop through it, or use list indexing, to extract the text:

soup.find_all('p')[0].get_text()

'Here is some simple content for this page.'

If you instead only want to find the first instance of a tag, you can use the find method, which will return a single BeautifulSoup object:

soup.find('p')

<p>Here is some simple content for this page.</p>
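Because the sample page's URL is not reproduced above, here is a self-contained sketch of the same find/find_all workflow run against an inline HTML string instead of a downloaded page:

from bs4 import BeautifulSoup

# Inline stand-in for the downloaded sample page.
html = "<html><body><p>Here is some simple content for this page.</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

print(soup.find_all('p'))                # list of every <p> tag
print(soup.find_all('p')[0].get_text())  # text of the first match
print(soup.find('p'))                    # just the first <p> tag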
Searching for tags by class and id

We introduced classes and ids earlier, but it probably wasn't clear why they were useful. Classes and ids are used by CSS to determine which HTML elements to apply certain styles to. But when we're scraping, we can also use them to specify the elements we want to scrape. To illustrate this principle, we'll work with the following page:
First paragraph.
Second paragraph.
First outer paragraph.
Second outer paragraph.
We can access the above document at a URL (the address is omitted in the original text). Let's first download the page and create a BeautifulSoup object:

page = requests.get("...")  # URL of the classes-and-ids example page, omitted in the original
soup = BeautifulSoup(page.content, "html.parser")
soup
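Since that URL is missing from the text, here is a hedged, self-contained version that builds an equivalent document inline. The exact markup of the original example page is not shown above, so treat the structure as an approximation; the class and id names match the ones used in the searches that follow:

from bs4 import BeautifulSoup

html = """
<html>
  <head></head>
  <body>
    <div>
      <p class="inner-text" id="first">First paragraph.</p>
      <p class="inner-text">Second paragraph.</p>
    </div>
    <p class="outer-text" id="second">First outer paragraph.</p>
    <p class="outer-text">Second outer paragraph.</p>
  </body>
</html>
"""

soup = BeautifulSoup(html, "html.parser")
print(soup.find_all('p', class_='outer-text'))
print(soup.find_all(id='first'))
print(soup.select("div p"))  # CSS selector: p tags inside a div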
Now, we can use the find_all method to search for items by class or by id. In the below example, we'll search for any p tag that has the class outer-text:

soup.find_all('p', class_='outer-text')

[<p class="outer-text" ...>First outer paragraph.</p>,
 <p class="outer-text" ...>Second outer paragraph.</p>]

In the below example, we'll look for any tag that has the class outer-text:

soup.find_all(class_="outer-text")

[<p class="outer-text" ...>First outer paragraph.</p>,
 <p class="outer-text" ...>Second outer paragraph.</p>]

We can also search for elements by id:

soup.find_all(id="first")

[<p class="inner-text" id="first" ...>First paragraph.</p>]
Using CSS Selectors

We can also search for items using CSS selectors. These selectors are how the CSS language allows developers to specify HTML tags to style. Here are some examples:

- p a: finds all a tags inside of a p tag.
- body p a: finds all a tags inside of a p tag inside of a body tag.
- html body: finds all body tags inside of an html tag.
- p.outer-text: finds all p tags with a class of outer-text.
- p#first: finds all p tags with an id of first.
- body p.outer-text: finds any p tags with a class of outer-text inside of a body tag.

You can learn more about CSS selectors in any CSS reference. BeautifulSoup objects support searching a page via CSS selectors using the select method. We can use CSS selectors to find all the p tags in our page that are inside of a div like this:

soup.select("div p")

[<p class="inner-text" ...>First paragraph.</p>,
 <p class="inner-text" ...>Second paragraph.</p>]

Note that the select method above returns a list of BeautifulSoup objects, just like find and find_all.

Downloading weather data

We now know enough to proceed with extracting information about the local weather from the National Weather Service website! The first step is to find the page we want to scrape. We'll extract weather information about downtown San Francisco from the NWS forecast page. Specifically, let's extract data about the extended forecast.

As we can see from the image, the page has information about the extended forecast for the next week, including time of day, temperature, and a brief description of the conditions.

Exploring page structure with Chrome DevTools

The first thing we'll need to do is inspect the page using Chrome DevTools. If you're using another browser, Firefox and Safari have equivalents. You can start the developer tools in Chrome by clicking View -> Developer -> Developer Tools. You should end up with a panel at the bottom of the browser like what you see below. Make sure the Elements panel is highlighted.

The elements panel will show you all the HTML tags on the page, and let you navigate through them. It's a really handy feature! By right clicking on the page near where it says "Extended Forecast", then clicking "Inspect", we'll open up the tag that contains the text "Extended Forecast" in the elements panel.

We can then scroll up in the elements panel to find the "outermost" element that contains all of the text that corresponds to the extended forecasts. In this case, it's a div tag with the id seven-day-forecast.

If we click around on the console and explore the div, we'll discover that each forecast item (like "Tonight", "Thursday", and "Thursday Night") is contained in a div with the class tombstone-container.

Time to Start Scraping!

We now know enough to download the page and start parsing it. In the below code, we will:

- Download the web page containing the forecast.
- Create a BeautifulSoup class to parse the page.
- Find the div with id seven-day-forecast, and assign it to seven_day.
- Inside seven_day, find each individual forecast item.
- Extract and print the first forecast item.

page = requests.get("...")  # URL of the NWS extended forecast page for downtown San Francisco (omitted in the original)
soup = BeautifulSoup(page.content, "html.parser")
seven_day = soup.find(id="seven-day-forecast")
forecast_items = seven_day.find_all(class_="tombstone-container")
tonight = forecast_items[0]
print(tonight.prettify())
Tonight
Mostly Clear
Low: 49 °F
Extracting information from the page

As we can see, inside the forecast item tonight is all the information we want. There are four pieces of information we can extract:

- The name of the forecast item: in this case, Tonight.
- A longer description of the conditions: this is stored in the title property of img.
- A short description of the conditions: in this case, Mostly Clear.
- The temperature low: in this case, 49 degrees.

We'll extract the name of the forecast item, the short description, and the temperature first, since they're all similar:

period = tonight.find(class_="period-name").get_text()
short_desc = tonight.find(class_="short-desc").get_text()
temp = tonight.find(class_="temp").get_text()
print(period)
print(short_desc)
print(temp)
Tonight
Mostly Clear
Low: 49 °F

Now, we can extract the title attribute from the img tag. To do this, we just treat the BeautifulSoup object like a dictionary, and pass in the attribute we want as a key:

img = tonight.find("img")
desc = img[‘title’]
print(desc)
Tonight: Mostly clear, with a low around 49.

Extracting all the information from the page

Now that we know how to extract each individual piece of information, we can combine our knowledge with CSS selectors and list comprehensions to extract everything at once.

In the below code, we will:

- Select all items with the class period-name inside an item with the class tombstone-container in seven_day.
- Use a list comprehension to call the get_text method on each BeautifulSoup object.

period_tags = seven_day.select(".tombstone-container .period-name")
periods = [pt.get_text() for pt in period_tags]
periods
[‘Tonight’,
‘Thursday’,
‘ThursdayNight’,
‘Friday’,
‘FridayNight’,
‘Saturday’,
‘SaturdayNight’,
‘Sunday’,
'SundayNight']

As we can see above, our technique gets us each of the period names, in order. We can apply the same technique to get the other three fields:

short_descs = [sd.get_text() for sd in seven_day.select(".tombstone-container .short-desc")]
temps = [t.get_text() for t in seven_day.select(".tombstone-container .temp")]
descs = [d["title"] for d in seven_day.select(".tombstone-container img")]
print(short_descs)
print(temps)
print(descs)
[‘Mostly Clear’, ‘Sunny’, ‘Mostly Clear’, ‘Sunny’, ‘Slight ChanceRain’, ‘Rain Likely’, ‘Rain Likely’, ‘Rain Likely’, ‘Chance Rain’]
[‘Low: 49 °F’, ‘High: 63 °F’, ‘Low: 50 °F’, ‘High: 67 °F’, ‘Low: 57 °F’, ‘High: 64 °F’, ‘Low: 57 °F’, ‘High: 64 °F’, ‘Low: 55 °F’]
['Tonight: Mostly clear, with a low around 49.', 'Thursday: Sunny, with a high near 63. North wind 3 to 5 mph.', 'Thursday Night: Mostly clear, with a low around 50. Light and variable wind becoming east southeast 5 to 8 mph after midnight.', 'Friday: Sunny, with a high near 67. Southeast wind around 9 mph.', 'Friday Night: A 20 percent chance of rain after 11pm. Partly cloudy, with a low around 57. South southeast wind 13 to 15 mph, with gusts as high as 20 mph. New precipitation amounts of less than a tenth of an inch possible.', 'Saturday: Rain likely. Cloudy, with a high near 64. Chance of precipitation is 70%. New precipitation amounts between a quarter and half of an inch possible.', 'Saturday Night: Rain likely. Cloudy, with a low around 57. Chance of precipitation is 60%.', 'Sunday: Rain likely.', 'Sunday Night: A chance of rain. Mostly cloudy, with a low around 55.']

Combining our data into a Pandas DataFrame

We can now combine the data into a Pandas DataFrame and analyze it. A DataFrame is an object that can store tabular data, making data analysis easy. If you want to learn more about Pandas, check out our free-to-start course.

In order to do this, we'll call the DataFrame class, and pass in each list of items that we have. We pass them in as part of a dictionary. Each dictionary key will become a column in the DataFrame, and each list will become the values in the column:

import pandas as pd
weather = pd.DataFrame({
    "period": periods,
    "short_desc": short_descs,
    "temp": temps,
    "desc": descs})
weather
   desc                                             period         short_desc         temp
0  Tonight: Mostly clear, with a low around 49. W…  Tonight        Mostly Clear       Low: 49 °F
1  Thursday: Sunny, with a high near 63. North wi…  Thursday       Sunny              High: 63 °F
2  Thursday Night: Mostly clear, with a low aroun…  ThursdayNight  Mostly Clear       Low: 50 °F
3  Friday: Sunny, with a high near 67. Southeast …  Friday         Sunny              High: 67 °F
4  Friday Night: A 20 percent chance of rain afte…  FridayNight    Slight ChanceRain  Low: 57 °F
5  Saturday: Rain likely. Cloudy, with a high ne…   Saturday       Rain Likely        High: 64 °F
6  Saturday Night: Rain likely. Cloudy, with a l…   SaturdayNight  Rain Likely        Low: 57 °F
7  Sunday: Rain likely. Cloudy, with a high near…   Sunday         Rain Likely        High: 64 °F
8  Sunday Night: A chance of rain. Mostly cloudy…   SundayNight    Chance Rain        Low: 55 °F
We can now do some analysis on the data. For example, we can use a regular expression and the Series.str.extract method to pull out the numeric temperature values:

temp_nums = weather["temp"].str.extract("(?P<temp_num>\d+)", expand=False)
weather["temp_num"] = temp_nums.astype('int')
temp_nums
0 49
1 63
2 50
3 67
4 57
5 64
6 57
7 64
8 55
Name: temp_num, dtype: object

We could then find the mean of all the high and low temperatures:

weather["temp_num"].mean()

58.444444444444443

We could also only select the rows that happen at night:

is_night = weather["temp"].str.contains("Low")
weather[“is_night”] = is_night
is_night
0 True
1 False
2 True
3 False
4 True
5 False
6 True
7 False
8 True
Name: temp, dtype: bool

weather[is_night]

   desc                                             period         short_desc         temp        temp_num  is_night
0  Tonight: Mostly clear, with a low around 49. W…  Tonight        Mostly Clear       Low: 49 °F  49        True
2  Thursday Night: Mostly clear, with a low aroun…  ThursdayNight  Mostly Clear       Low: 50 °F  50        True
4  Friday Night: A 20 percent chance of rain afte…  FridayNight    Slight ChanceRain  Low: 57 °F  57        True
6  Saturday Night: Rain likely. Cloudy, with a l…   SaturdayNight  Rain Likely        Low: 57 °F  57        True
8  Sunday Night: A chance of rain. Mostly cloudy…   SundayNight    Chance Rain        Low: 55 °F  55        True
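For readers who want to run the analysis end to end, here is a compact sketch that reproduces these steps on the lists extracted earlier; it assumes the periods, short_descs, temps, and descs lists already exist:

import pandas as pd

# Assumes the four lists were built in the scraping steps above.
weather = pd.DataFrame({
    "period": periods,
    "short_desc": short_descs,
    "temp": temps,
    "desc": descs,
})

# Pull the numeric part out of strings like "Low: 49 °F".
weather["temp_num"] = weather["temp"].str.extract(r"(?P<temp_num>\d+)", expand=False).astype("int")

print(weather["temp_num"].mean())        # average of all highs and lows

is_night = weather["temp"].str.contains("Low")
weather["is_night"] = is_night
print(weather[is_night])                 # only the nighttime rows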
Next Steps For This Web Scraping Project

If you've made it this far, congratulations! You should now have a good understanding of how to scrape web pages and extract data. Of course, there's still a lot more to learn!

If you want to go further, a good next step would be to pick a site and try some web scraping on your own. Some good examples of data to scrape are:

- News articles
- Sports scores
- Weather forecasts
- Stock prices
- Online retailer prices

You may also want to keep scraping the National Weather Service, and see what other data you can extract from the page, or about your own city.

Alternatively, if you want to take your web scraping skills to the next level, you can check out our interactive course, which covers both the basics of web scraping and using Python to connect to APIs. With those two skills under your belt, you'll be able to collect lots of unique and interesting datasets from sites all over the web!

Learn to scrape the web with Python, right in your browser! Our interactive APIs and Web Scraping in Python skill path will help you learn the skills you need to unlock new worlds of data with Python. (No credit card required!)
Beautiful Soup 4.9.0 documentation – Crummy
Beautiful Soup is a
Python library for pulling data out of HTML and XML files. It works
with your favorite parser to provide idiomatic ways of navigating,
searching, and modifying the parse tree. It commonly saves programmers
hours or days of work.
These instructions illustrate all major features of Beautiful Soup 4,
with examples. I show you what the library is good for, how it works,
how to use it, how to make it do what you want, and what to do when it
violates your expectations.
This document covers Beautiful Soup version 4.9.3. The examples in
this documentation should work the same way in Python 2.7 and Python 3.8.
You might be looking for the documentation for Beautiful Soup 3.
If so, you should know that Beautiful Soup 3 is no longer being
developed and that support for it will be dropped on or after December
31, 2020. If you want to learn about the differences between Beautiful
Soup 3 and Beautiful Soup 4, see Porting code to BS4.
This documentation has been translated into other languages by
Beautiful Soup users:
这篇文档当然还有中文版.
このページは日本語で利用できます(外部リンク)
이 문서는 한국어 번역도 가능합니다.
Este documento também está disponível em Português do Brasil.
Эта документация доступна на русском языке.
Getting help¶
If you have questions about Beautiful Soup, or run into problems,
send mail to the discussion group. If
your problem involves parsing an HTML document, be sure to mention
what the diagnose() function says about
that document.
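The diagnose() helper mentioned above lives in bs4.diagnose; a minimal invocation on whatever markup is giving you trouble looks roughly like this (the filename is a placeholder):

from bs4.diagnose import diagnose

# Runs Beautiful Soup's built-in diagnostics on a problematic document:
# it reports which parsers are installed and how each one sees the markup.
with open("bad.html") as fp:  # placeholder filename
    diagnose(fp.read())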
Here’s an HTML document I’ll be using as an example throughout this
document. It’s part of a story from Alice in Wonderland:
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""
Running the “three sisters” document through Beautiful Soup gives us a
BeautifulSoup object, which represents the document as a nested
data structure:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.prettify())
# (prettified output omitted: the same document printed with each tag on its own
#  line and nested tags indented, including the title "The Dormouse's story", the
#  story paragraph with the three sister links, and the closing "..." paragraph.)
Here are some simple ways to navigate that data structure:
soup.title
# <title>The Dormouse's story</title>

soup.title.name
# u'title'

soup.title.string
# u'The Dormouse's story'

soup.title.parent.name
# u'head'

soup.p
# <p class="title"><b>The Dormouse's story</b></p>

soup.p['class']
# u'title'

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

One common task is extracting all the URLs found within a page's <a> tags:

for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
Another common task is extracting all the text from a page:
print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
#
# ...
Does this look like what you need? If so, read on.
If you’re using a recent version of Debian or Ubuntu Linux, you can
install Beautiful Soup with the system package manager:
$ apt-get install python-bs4 (for Python 2)
$ apt-get install python3-bs4 (for Python 3)
Beautiful Soup 4 is published through PyPi, so if you can’t install it
with the system packager, you can install it with easy_install or
pip. The package name is beautifulsoup4, and the same package
works on Python 2 and Python 3. Make sure you use the right version of
pip or easy_install for your Python version (these may be named
pip3 and easy_install3 respectively if you’re using Python 3).
$ easy_install beautifulsoup4
$ pip install beautifulsoup4
(The BeautifulSoup package is not what you want. That’s
the previous major release, Beautiful Soup 3. Lots of software uses
BS3, so it’s still available, but if you’re writing new code you
should install beautifulsoup4. )
If you don’t have easy_install or pip installed, you can
download the Beautiful Soup 4 source tarball and
install it with
$ python setup.py install
If all else fails, the license for Beautiful Soup allows you to
package the entire library with your application. You can download the
tarball, copy its bs4 directory into your application’s codebase,
and use Beautiful Soup without installing it at all.
I use Python 2.7 and Python 3.8 to develop Beautiful Soup, but it
should work with other recent versions.
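A quick way to confirm which version ended up installed (not shown in the original document) is:

import bs4
print(bs4.__version__)  # e.g. '4.9.3' if the install above succeeded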
Problems after installation¶
Beautiful Soup is packaged as Python 2 code. When you install it for
use with Python 3, it’s automatically converted to Python 3 code. If
you don’t install the package, the code won’t be converted. There have
also been reports on Windows machines of the wrong version being
installed.
If you get the ImportError “No module named HTMLParser”, your
problem is that you’re running the Python 2 version of the code under
Python 3.
If you get the ImportError "No module named html.parser", your
problem is that you’re running the Python 3 version of the code under
Python 2.
In both cases, your best bet is to completely remove the Beautiful
Soup installation from your system (including any directory created
when you unzipped the tarball) and try the installation again.
If you get the SyntaxError “Invalid syntax” on the line
ROOT_TAG_NAME = u'[document]’, you need to convert the Python 2
code to Python 3. You can do this either by installing the package:
$ python3 setup.py install
or by manually running Python’s 2to3 conversion script on the
bs4 directory:
$ 2to3-3.2 -w bs4
Installing a parser¶
Beautiful Soup supports the HTML parser included in Python’s standard
library, but it also supports a number of third-party Python parsers.
One is the lxml parser. Depending on your setup,
you might install lxml with one of these commands:
$ apt-get install python-lxml
$ easy_install lxml
$ pip install lxml
Another alternative is the pure-Python html5lib parser, which parses HTML the way a
web browser does. Depending on your setup, you might install html5lib
with one of these commands:
$ apt-get install python-html5lib
$ easy_install html5lib
$ pip install html5lib
This table summarizes the advantages and disadvantages of each parser library:

Python's html.parser
  Typical usage: BeautifulSoup(markup, "html.parser")
  Advantages: Batteries included; decent speed; lenient (as of Python 2.7.3 and 3.2)
  Disadvantages: Not as fast as lxml, less lenient than html5lib

lxml's HTML parser
  Typical usage: BeautifulSoup(markup, "lxml")
  Advantages: Very fast; lenient
  Disadvantages: External C dependency

lxml's XML parser
  Typical usage: BeautifulSoup(markup, "lxml-xml") or BeautifulSoup(markup, "xml")
  Advantages: Very fast; the only currently supported XML parser
  Disadvantages: External C dependency

html5lib
  Typical usage: BeautifulSoup(markup, "html5lib")
  Advantages: Extremely lenient; parses pages the same way a web browser does; creates valid HTML5
  Disadvantages: Very slow; external Python dependency
If you can, I recommend you install and use lxml for speed. If you're
using a very old version of Python (earlier than 2.7.3 or 3.2.2),
it's essential that you install lxml or html5lib. Python's built-in
HTML parser is just not very good in those old versions.
Note that if a document is invalid, different parsers will generate
different Beautiful Soup trees for it. See Differences
between parsers for details.
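As a concrete illustration of that point, here is a small sketch (not from the original document) comparing how the parsers handle the same invalid fragment; the trees shown in the comments are what these parsers typically produce, though details can vary by version:

from bs4 import BeautifulSoup

broken = "<a></p>"  # invalid markup: a stray closing </p>

# html.parser typically just drops the stray tag: <a></a>
print(BeautifulSoup(broken, "html.parser"))

# lxml (if installed) typically wraps the result in html/body: <html><body><a></a></body></html>
print(BeautifulSoup(broken, "lxml"))

# html5lib (if installed) repairs it the way a browser would:
# <html><head></head><body><a><p></p></a></body></html>
print(BeautifulSoup(broken, "html5lib"))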
To parse a document, pass it into the BeautifulSoup
constructor. You can pass in a string or an open filehandle:
with open("index.html") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

soup = BeautifulSoup("<html>a web page</html>", 'html.parser')
First, the document is converted to Unicode, and HTML entities are
converted to Unicode characters:
print(BeautifulSoup("Sacr&eacute; bleu!", "html.parser"))
# Sacré bleu!
Beautiful Soup then parses the document using the best available
parser. It will use an HTML parser unless you specifically tell it to
use an XML parser. (See Parsing XML. )
Beautiful Soup transforms a complex HTML document into a complex tree
of Python objects. But you’ll only ever have to deal with about four
kinds of objects: Tag, NavigableString, BeautifulSoup,
and Comment.
Tag¶
A Tag object corresponds to an XML or HTML tag in the original document:
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>
Tags have a lot of attributes and methods, and I’ll cover most of them
in Navigating the tree and Searching the tree. For now, the most
important features of a tag are its name and attributes.
Name¶
Every tag has a name, accessible as .name:

tag.name
# 'b'

If you change a tag's name, the change will be reflected in any HTML
markup generated by Beautiful Soup:

tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>
Attributes¶
A tag may have any number of attributes. The tag <b id="boldest"> has an attribute "id" whose value is
"boldest". You can access a tag's attributes by treating the tag like
a dictionary:

tag = BeautifulSoup('<b id="boldest">bold</b>', 'html.parser').b
tag['id']
# 'boldest'

You can access that dictionary directly as .attrs:

tag.attrs
# {'id': 'boldest'}

You can add, remove, and modify a tag's attributes. Again, this is
done by treating the tag as a dictionary:

tag['id'] = 'verybold'
tag['another-attribute'] = 1
tag
# <b another-attribute="1" id="verybold">bold</b>

del tag['id']
del tag['another-attribute']
tag
# <b>bold</b>

tag['id']
# KeyError: 'id'

tag.get('id')
# None
Multi-valued attributes¶
HTML 4 defines a few attributes that can have multiple values. HTML 5
removes a couple of them, but defines a few more. The most common
multi-valued attribute is class (that is, a tag can have more than
one CSS class). Others include rel, rev, accept-charset,
headers, and accesskey. Beautiful Soup presents the value(s)
of a multi-valued attribute as a list:
css_soup = BeautifulSoup('<p class="body"></p>', 'html.parser')
css_soup.p['class']
# ['body']

css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
css_soup.p['class']
# ['body', 'strikeout']
If an attribute looks like it has more than one value, but it’s not
a multi-valued attribute as defined by any version of the HTML
standard, Beautiful Soup will leave the attribute alone:
id_soup = BeautifulSoup('<p id="my id"></p>', 'html.parser')
id_soup.p['id']
# 'my id'
When you turn a tag back into a string, multiple attribute values are
consolidated:
rel_soup = BeautifulSoup('<p>Back to the <a rel="index">homepage</a></p>', 'html.parser')
rel_soup.a['rel']
# ['index']
rel_soup.a['rel'] = ['index', 'contents']
print(rel_soup.p)
# <p>Back to the <a rel="index contents">homepage</a></p>
You can disable this by passing multi_valued_attributes=None as a
keyword argument into the BeautifulSoup constructor:
no_list_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser', multi_valued_attributes=None)
no_list_soup.p['class']
# 'body strikeout'
You can use get_attribute_list to get a value that’s always a
list, whether or not it's a multi-valued attribute:

id_soup.p.get_attribute_list('id')
# ["my id"]
If you parse a document as XML, there are no multi-valued attributes:
xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml')
xml_soup.p['class']
# 'body strikeout'

Again, you can configure this using the multi_valued_attributes argument:

class_is_multi = { '*' : 'class' }
xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml', multi_valued_attributes=class_is_multi)
xml_soup.p['class']
# ['body', 'strikeout']
You probably won’t need to do this, but if you do, use the defaults as
a guide. They implement the rules described in the HTML specification:
from bs4.builder import builder_registry
builder_registry.lookup('html').DEFAULT_CDATA_LIST_ATTRIBUTES
NavigableString¶
A string corresponds to a bit of text within a tag. Beautiful Soup
uses the NavigableString class to contain these bits of text:
tag.string
# 'Extremely bold'
type(tag.string)
# <class 'bs4.element.NavigableString'>
A NavigableString is just like a Python Unicode string, except
that it also supports some of the features described in Navigating
the tree and Searching the tree. You can convert a
NavigableString to a Unicode string with unicode() (in
Python 2) or str (in Python 3):
unicode_string = str(tag.string)
unicode_string
# 'Extremely bold'
type(unicode_string)
# <class 'str'>
You can’t edit a string in place, but you can replace one string with
another, using replace_with():
tag.string.replace_with("No longer bold")
tag
# <blockquote>No longer bold</blockquote>
NavigableString supports most of the features described in
Navigating the tree and Searching the tree, but not all of
them. In particular, since a string can’t contain anything (the way a
tag may contain a string or another tag), strings don’t support the. contents or attributes, or the find() method.
If you want to use a NavigableString outside of Beautiful Soup,
you should call unicode() on it to turn it into a normal Python
Unicode string. If you don’t, your string will carry around a
reference to the entire Beautiful Soup parse tree, even when you’re
done using Beautiful Soup. This is a big waste of memory.
BeautifulSoup¶
The BeautifulSoup object represents the parsed document as a
whole. For most purposes, you can treat it as a Tag
object. This means it supports most of the methods described in
Navigating the tree and Searching the tree.
You can also pass a BeautifulSoup object into one of the methods
defined in Modifying the tree, just as you would a Tag. This
lets you do things like combine two parsed documents:
doc = BeautifulSoup("<document><content/>INSERT FOOTER HERE</document>", "xml")
footer = BeautifulSoup("<footer>Here's the footer</footer>", "xml")
doc.find(text="INSERT FOOTER HERE").replace_with(footer)
# 'INSERT FOOTER HERE'
print(doc)
# <?xml version="1.0" encoding="utf-8"?>
# <document><content/><footer>Here's the footer</footer></document>
Since the BeautifulSoup object doesn’t correspond to an actual
HTML or XML tag, it has no name and no attributes. But sometimes it’s
useful to look at its .name, so it has been given the special .name
"[document]":
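The snippet that originally demonstrated this is missing above; assuming the soup object from the earlier examples, the check is simply:

soup.name
# '[document]'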
Here’s the “Three sisters” HTML document again:
html_doc = “””
I’ll use this as an example to show you how to move from one part of
a document to another.
Going down¶
Tags may contain strings and other tags. These elements are the tag’s
children. Beautiful Soup provides a lot of different attributes for
navigating and iterating over a tag’s children.
Note that Beautiful Soup strings don’t support any of these
attributes, because a string can’t have children.
Navigating using tag names¶
The simplest way to navigate the parse tree is to say the name of the
tag you want. If you want the <head> tag, just say soup.head:

soup.head
# <head><title>The Dormouse's story</title></head>

soup.title
# <title>The Dormouse's story</title>

You can use this trick again and again to zoom in on a certain part
of the parse tree. This code gets the first <b> tag beneath the <body> tag:

soup.body.b
# <b>The Dormouse's story</b>
Using a tag name as an attribute will give you only the first tag by that
name:
If you need to get all the tags, or anything more complicated
than the first tag with a certain name, you’ll need to use one of the
methods described in Searching the tree, such as find_all():
# Tillie]. contents and. children¶
A tag’s children are available in a list called. contents:
head_tag =
head_tag
ntents
# [
title_tag = ntents[0]
title_tag
# [‘The Dormouse’s story’]
The BeautifulSoup object itself has children. In this case, the
tag is the child of the BeautifulSoup object. :
len(ntents)
# 1
ntents[0]
# ‘html’
A string does not have. contents, because it can’t contain
anything:
text = ntents[0]
# AttributeError: ‘NavigableString’ object has no attribute ‘contents’
Instead of getting them as a list, you can iterate over a tag’s
children using the. children generator:
for child in ildren:
print(child)
# The Dormouse’s story. descendants¶
The. children attributes only consider a tag’s
direct children. For instance, the tag has a single direct
child–the
But the
story”. There’s a sense in which that string is also a child of the
tag. The. descendants attribute lets you iterate over all
of a tag’s children, recursively: its direct children, the children of
its direct children, and so on:
for child in scendants:
The tag has only one child, but it has two descendants: the
only has one direct child (the tag), but it has a whole lot of
descendants:
len(list(ildren))
len(list(scendants))
# 26
¶
If a tag has only one child, and that child is a NavigableString,
the child is made available as
# ‘The Dormouse’s story’
If a tag’s only child is another tag, and that tag has a, then the parent tag is considered to have the same
as its child:
If a tag contains more than one thing, then it’s not clear what
should refer to, so is defined to be
None:
print()
# None. strings and stripped_strings¶
If there’s more than one thing inside a tag, you can still look at
just the strings. Use the. strings generator:
for string in rings:
print(repr(string))
‘\n’
# “The Dormouse’s story”
# ‘\n’
# ‘Once upon a time there were three little sisters; and their names were\n’
# ‘Elsie’
# ‘, \n’
# ‘Lacie’
# ‘ and\n’
# ‘Tillie’
# ‘;\nand they lived at the bottom of a well. ‘
# ‘… ‘
These strings tend to have a lot of extra whitespace, which you can
remove by using the. stripped_strings generator instead:
for string in ripped_strings:
# ‘Once upon a time there were three little sisters; and their names were’
# ‘, ‘
# ‘and’
# ‘;\n and they lived at the bottom of a well. ‘
Here, strings consisting entirely of whitespace are ignored, and
whitespace at the beginning and end of strings is removed.
Going up¶
Continuing the “family tree” analogy, every tag and every string has a
parent: the tag that contains it.
You can access an element’s parent with the attribute. In
the example “three sisters” document, the tag is the parent
of the
title_tag =
The title string itself has a parent: the
it:
The parent of a top-level tag like is the BeautifulSoup object
itself:
html_tag =
#
And the of a BeautifulSoup object is defined as None:
# None. parents¶
You can iterate over all of an element’s parents with. parents. This example uses. parents to travel from an tag
buried deep within the document, to the very top of the document:
link = soup. a
link
for parent in rents:
# p
# body
# html
# [document]
Going sideways¶
Consider a simple document like this:
sibling_soup = BeautifulSoup(“text1
#
# text1
#
# text2
#
The tag and the
children of the same tag. We call them siblings. When a document is
pretty-printed, siblings show up at the same indentation level. You
can also use this relationship in the code you write.. next_sibling and. previous_sibling¶
You can use. previous_sibling to navigate
between page elements that are on the same level of the parse tree:
xt_sibling
#
evious_sibling
# text1
The tag has a. next_sibling, but no. previous_sibling,
because there’s nothing before the tag on the same level of the
tree. For the same reason, the
but no. next_sibling:
print(evious_sibling)
print(xt_sibling)
The strings “text1” and “text2” are not siblings, because they don’t
have the same parent:
# ‘text1’
In real documents, the. next_sibling or. previous_sibling of a
tag will usually be a string containing whitespace. Going back to the
“three sisters” document:
# Elsie
# Lacie
# Tillie
You might think that the. next_sibling of the first tag would
be the second tag. But actually, it’s a string: the comma and
newline that separate the first tag from the second:
# ‘, \n ‘
The second tag is actually the. next_sibling of the comma:
# Lacie. next_siblings and. previous_siblings¶
You can iterate over a tag’s siblings with. next_siblings or. previous_siblings:
for sibling in xt_siblings:
print(repr(sibling))
# Lacie
# ‘; and they lived at the bottom of a well. ‘
for sibling in (id=”link3″). previous_siblings:
Going back and forth¶
Take a look at the beginning of the “three sisters” document:
#
An HTML parser takes this string of characters and turns it into a
series of events: “open an tag”, “open a tag”, “open a
tag”, and so on. Beautiful Soup offers tools for reconstructing the
initial parse of the document.. next_element and. previous_element¶
The. next_element attribute of a string or tag points to whatever
was parsed immediately afterwards. It might be the same as. next_sibling, but it’s usually drastically different.
Here’s the final tag in the “three sisters” document. Its. next_sibling is a string: the conclusion of the sentence that was
interrupted by the start of the tag. :
last_a_tag = (“a”, id=”link3″)
last_a_tag
But the. next_element of that tag, the thing that was parsed
immediately after the tag, is not the rest of that sentence:
it’s the word “Tillie”:
xt_element
That’s because in the original markup, the word “Tillie” appeared
before that semicolon. The parser encountered an tag, then the
word “Tillie”, then the closing tag, then the semicolon and rest of
the sentence. The semicolon is on the same level as the tag, but the
word “Tillie” was encountered first.
The. previous_element attribute is the exact opposite of. next_element. It points to whatever element was parsed
immediately before this one:
evious_element
# Tillie. next_elements and. previous_elements¶
You should get the idea by now. You can use these iterators to move
forward or backward in the document as it was parsed:
for element in xt_elements:
print(repr(element))
#
…
Beautiful Soup defines a lot of methods for searching the parse tree,
but they’re all very similar. I’m going to spend a lot of time explaining
the two most popular methods: find() and find_all(). The other
methods take almost exactly the same arguments, so I’ll just cover
them briefly.
Once again, I’ll be using the “three sisters” document as an example:
By passing in a filter to an argument like find_all(), you can
zoom in on the parts of the document you’re interested in.
Kinds of filters¶
Before talking in detail about find_all() and similar methods, I
want to show examples of different filters you can pass into these
methods. These filters show up again and again, throughout the
search API. You can use them to filter based on a tag’s name,
on its attributes, on the text of a string, or on some combination of
these.
A string¶
The simplest filter is a string. Pass a string to a search method and
Beautiful Soup will perform a match against that exact string. This
code finds all the tags in the document:
soup.find_all('b')
# [<b>The Dormouse's story</b>]
If you pass in a byte string, Beautiful Soup will assume the string is
encoded as UTF-8. You can avoid this by passing in a Unicode string instead.
A regular expression¶
If you pass in a regular expression object, Beautiful Soup will filter
against that regular expression using its search() method. This code
finds all the tags whose names start with the letter “b”; in this
case, the tag and the tag:
import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b

This code finds all the tags whose names contain the letter 't':

for tag in soup.find_all(re.compile("t")):
    print(tag.name)
# html
# title
A list¶
If you pass in a list, Beautiful Soup will allow a string match
against any item in that list. This code finds all the tags
and all the tags:
soup.find_all(["a", "b"])
# [<b>The Dormouse's story</b>,
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
True¶
The value True matches everything it can. This code finds all
the tags in the document, but none of the text strings:
for tag in soup.find_all(True):
    print(tag.name)
# html
# head
# title
# body
# p
# b
# p
# a
# a
# a
# p
A function¶
If none of the other matches work for you, define a function that
takes an element as its only argument. The function should return
True if the argument matches, and False otherwise.
Here’s a function that returns True if a tag defines the “class”
attribute but doesn’t define the “id” attribute:
def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')

Pass this function into find_all() and you'll pick up all the
<p> tags:

soup.find_all(has_class_but_no_id)
# [
The Dormouse’s story
,
#
Once upon a time there were…bottom of a well.
,
#
…
]
This function only picks up the
The Dormouse’s story
]
soup.find_all("a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find_all(id="link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

import re
soup.find(string=re.compile("sisters"))
# 'Once upon a time there were three little sisters; and their names were\n'
Some of these should look familiar, but others are new. What does it
mean to pass in a value for string, or id? Why does
find_all(“p”, “title”) find a
tag with the CSS class “title”?
Let’s look at the arguments to find_all().
The name argument¶
Pass in a value for name and you’ll tell Beautiful Soup to only
consider tags with certain names. Text strings will be ignored, as
will tags whose names that don’t match.
This is the simplest usage:
Recall from Kinds of filters that the value to name can be a
string, a regular expression, a list, a function, or the value
True.
The keyword arguments¶
Any argument that’s not recognized will be turned into a filter on one
of a tag’s attributes. If you pass in a value for an argument called id,
Beautiful Soup will filter against each tag’s ‘id’ attribute:
soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

If you pass in a value for href, Beautiful Soup will filter
against each tag's 'href' attribute:

soup.find_all(href=re.compile("elsie"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
You can filter an attribute based on a string, a regular
expression, a list, a function, or the value True.
This code finds all tags whose id attribute has a value,
regardless of what the value is:
soup.find_all(id=True)
You can filter multiple attributes at once by passing in more than one
keyword argument:
soup.find_all(href=re.compile("elsie"), id='link1')
Some attributes, like the data-* attributes in HTML 5, have names that
can’t be used as the names of keyword arguments:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>', 'html.parser')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression

You can use these attributes in searches by putting them into a
dictionary and passing the dictionary into find_all() as the
attrs argument:

data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]
You can’t use a keyword argument to search for HTML’s ‘name’ element,
because Beautiful Soup uses the name argument to contain the name
of the tag itself. Instead, you can give a value to ‘name’ in the
name_soup = BeautifulSoup('<input name="email"/>', 'html.parser')
name_soup.find_all(name="email")
# []
name_soup.find_all(attrs={"name": "email"})
# [<input name="email"/>]
Searching by CSS class¶
It’s very useful to search for a tag that has a certain CSS class, but
the name of the CSS attribute, “class”, is a reserved word in
Python. Using class as a keyword argument will give you a syntax
error. As of Beautiful Soup 4.1.2, you can search by CSS class using
the keyword argument class_:

soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

As with any keyword argument, you can pass class_ a string, a regular
expression, a function, or True:

soup.find_all(class_=re.compile("itl"))
# [<p class="title"><b>The Dormouse's story</b></p>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

soup.find_all(class_=has_six_characters)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
Remember that a single tag can have multiple
values for its “class” attribute. When you search for a tag that
matches a certain CSS class, you’re matching against any of its CSS
classes:
css_soup.find_all("p", class_="strikeout")
# [<p class="body strikeout"></p>]

css_soup.find_all("p", class_="body")
# [<p class="body strikeout"></p>]

You can also search for the exact string value of the class attribute:

css_soup.find_all("p", class_="body strikeout")
# [<p class="body strikeout"></p>]

But searching for variants of the string value won't work:

css_soup.find_all("p", class_="strikeout body")
# []

If you want to search for tags that match two or more CSS classes, you
should use a CSS selector:

css_soup.select("p.strikeout.body")
# [<p class="body strikeout"></p>]
In older versions of Beautiful Soup, which don’t have the class_
shortcut, you can use the attrs trick mentioned above. Create a
dictionary whose value for “class” is the string (or regular
expression, or whatever) you want to search for:
soup.find_all("a", attrs={"class": "sister"})
The string argument¶
With string you can search for strings instead of tags. As with
name and the keyword arguments, you can pass in a string, a
regular expression, a list, a function, or the value True.
Here are some examples:
soup.find_all(string="Elsie")
# ['Elsie']

soup.find_all(string=["Tillie", "Elsie", "Lacie"])
# ['Elsie', 'Lacie', 'Tillie']

soup.find_all(string=re.compile("Dormouse"))
# ["The Dormouse's story", "The Dormouse's story"]

def is_the_only_string_within_a_tag(s):
    """Return True if this string is the only child of its parent tag."""
    return (s == s.parent.string)

soup.find_all(string=is_the_only_string_within_a_tag)
# ["The Dormouse's story", "The Dormouse's story", 'Elsie', 'Lacie', 'Tillie', '...']

Although string is for finding strings, you can combine it with
arguments that find tags: Beautiful Soup will find all tags whose
.string matches your value for string. This code finds the <a>
tags whose .string is "Elsie":

soup.find_all("a", string="Elsie")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

The string argument is new in Beautiful Soup 4.4.0. In earlier
versions it was called text:

soup.find_all("a", text="Elsie")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]
The limit argument¶
find_all() returns all the tags and strings that match your
filters. This can take a while if the document is large. If you don’t
need all the results, you can pass in a number for limit. This
works just like the LIMIT keyword in SQL. It tells Beautiful Soup to
stop gathering results after it’s found a certain number.
There are three links in the “three sisters” document, but this code
only finds the first two:
soup.find_all("a", limit=2)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]
The recursive argument¶
If you call mytag.find_all(), Beautiful Soup will examine all the
descendants of mytag: its children, its children's children, and
so on. If you only want Beautiful Soup to consider direct children,
you can pass in recursive=False.
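The document is cut off at this point, so here is a small sketch, assuming the "three sisters" soup from above, of what recursive=False changes:

soup.html.find_all("title")
# [<title>The Dormouse's story</title>]   (searches all descendants of <html>)

soup.html.find_all("title", recursive=False)
# []   (the <title> tag is a grandchild of <html>, not a direct child, so nothing matches)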
Extracting Data from HTML with BeautifulSoup – Pluralsight
Introduction

Nowadays everyone is talking about data and how it is helping to uncover hidden patterns and new insights. The right set of data can help a business improve its marketing strategy, which can increase overall sales. And let's not forget the popular example in which a politician can learn the public's opinion before an election. Data is powerful, but it does not come for free. Gathering the right data is always expensive; think of surveys or marketing campaigns, etc.

The internet is a pool of data and, with the right set of skills, one can use this data to gain a lot of new information. You can always copy-paste the data into your Excel or CSV file, but that is also time-consuming and expensive. Why not write a bit of code that gets the data into a readable format for you? Yes, it is possible to extract data from the web, and this is called web scraping.

According to Wikipedia: "Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites."

BeautifulSoup is one popular library provided by Python to scrape data from the web. To get the best out of it, one needs only a basic knowledge of HTML, which is covered in the guide.

Components of a Webpage

If you know basic HTML, you can skip this part. The basic syntax of any webpage is:

<html>
  <head>
  </head>
  <body>
    <h1>My first Web Scraping with Beautiful soup</h1>
    <p>Let's scrap the website using python.</p>
  </body>
</html>

Every tag in HTML can have attribute information (i.e., class, id, href, and other useful information) that helps in identifying the element uniquely. For more information about basic HTML tags, check out an HTML reference.

Steps for Scraping Any Website

To scrape a website using Python, you need to perform these four basic steps:

1. Send an HTTP GET request to the URL of the webpage that you want to scrape, which will respond with HTML content. We can do this using the requests library of Python.
2. Fetch and parse the data using BeautifulSoup and keep the data in some data structure such as a dict or a list.
3. Analyze the HTML tags and their attributes, such as class, id, and other HTML tag attributes, and identify the HTML tags where your content lives.
4. Output the data in any file format such as CSV, XLSX, JSON, etc.

Understanding and Inspecting the Data

Now that you know about basic HTML and its tags, you first need to inspect the page you want to scrape. Inspection is the most important job in web scraping; without knowing the structure of the webpage, it is very hard to get the needed information. To help with inspection, every browser like Google Chrome or Mozilla Firefox comes with a handy tool called developer tools.

In this guide, we will be working with Wikipedia to scrape some of its table data from the page List of countries by GDP (nominal). This page contains a Lists heading with three tables of countries sorted by their rank and their GDP value as per the "International Monetary Fund", "World Bank", and "United Nations". Note that these three tables are enclosed in an outer table.

To learn about any element that you wish to scrape, just right-click on that text and examine the tags and attributes of the element.

Jumping into the Code

In this guide, we will be learning how to do simple web scraping using Python and BeautifulSoup.

Install the Essential Python Libraries

pip3 install requests beautifulsoup4

Note: If you are using Windows, use pip instead of pip3.

Importing the Essential Libraries

Import the "requests" library to fetch the page content and bs4 (Beautiful Soup) for parsing the HTML page content.

from bs4 import BeautifulSoup
import requests

Collecting and Parsing a Webpage

In the next step, we will make a GET request to the URL and create a parse tree object (soup) with the help of BeautifulSoup and the "lxml" parser.

# importing the libraries
from bs4 import BeautifulSoup
import requests

url = "https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)"

# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text

# Parse the html content
soup = BeautifulSoup(html_content, "lxml")
print(soup.prettify())  # print the parsed data of html

With our BeautifulSoup object, i.e., soup, we can move on and collect the required table data. But before going to the actual code, let's first play with the soup object and print some basic information from it.

Example 1: Let's first print the title of the webpage.

print(soup.title)

This will print the page's <title> tag. Example 2: Now let's print every link on the page along with its attributes:
for link in soup.find_all("a"):
    print("Inner Text: {}".format(link.text))
    print("Title: {}".format(link.get("title")))
    print("href: {}".format(link.get("href")))

This will output all the available links along with their mentioned attributes from the page.

Now, let's get back on track and find our goal table. Analyzing the outer table, we can see that it has special attributes, including class as wikitable, and it has two tr tags inside it. If you uncollapse the tr tag, you will find that the first tr tag is for the headings of all three tables and the next tr tag is for the table data of all three inner tables.

Let's first get all three table headings. Note that we are removing the newlines and spaces from the left and right of the text by using the simple string methods available in Python.

gdp_table = soup.find("table", attrs={"class": "wikitable"})
gdp_table_data = gdp_table.tbody.find_all("tr")  # contains 2 rows

# Get all the headings of Lists
headings = []
for td in gdp_table_data[0].find_all("td"):
    # remove any newlines and extra spaces from left and right
    headings.append(td.text.replace('\n', ' ').strip())

print(headings)

This will give an output as:

['Per the International Monetary Fund (2018)', 'Per the World Bank (2017)', 'Per the United Nations (2017)']

Moving on to the second tr tag of the outer table, let's get the content of all three tables by iterating over each table and its rows.
data = {}
for table, heading in zip(gdp_table_data[1].find_all("table"), headings):
    # Get headers of table i.e., Rank, Country, GDP.
    t_headers = []
    for th in table.find_all("th"):
        # remove any newlines and extra spaces from left and right
        t_headers.append(th.text.replace('\n', ' ').strip())

    # Get all the rows of table
    table_data = []
    for tr in table.tbody.find_all("tr"):  # find all tr's from table's tbody
        t_row = {}
        # Each table row is stored in the form of
        # t_row = {'Rank': '', 'Country/Territory': '', 'GDP(US$million)': ''}

        # find all td's (3) in tr and zip it with t_headers
        for td, th in zip(tr.find_all("td"), t_headers):
            t_row[th] = td.text.replace('\n', '').strip()
        table_data.append(t_row)

    # Put the data for the table with its heading.
    data[heading] = table_data

print(data)

Writing Data to CSV

Now that we have created our data structure, we can export it to a CSV file by just iterating over it.

import csv
for topic, table in data.items():
    # Create csv file for each table
    with open(f"{topic}.csv", 'w') as out_file:
        # Each of the 3 tables has headers as follows
        headers = [
            "Country/Territory",
            "GDP(US$million)",
            "Rank"
        ]  # == t_headers
        writer = csv.DictWriter(out_file, headers)
        # write the header
        writer.writeheader()
        for row in table:
            if row:
                writer.writerow(row)

Putting It Together

Let's join all the above code snippets. Our complete code looks like this:

# importing the libraries
from bs4 import BeautifulSoup
import requests
import csv


# Step 1: Sending a HTTP request to a URL
url = "https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)"
# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text


# Step 2: Parse the html content
soup = BeautifulSoup(html_content, "lxml")
# print(soup.prettify()) # print the parsed data of html


# Step 3: Analyze the HTML tag, where your content lives
# Create a data dictionary to store the data.
data = {}
# Get the table having the class wikitable
gdp_table = soup.find("table", attrs={"class": "wikitable"})
gdp_table_data = gdp_table.tbody.find_all("tr")  # contains 2 rows

# Get all the headings of Lists
headings = []
for td in gdp_table_data[0].find_all("td"):
    # remove any newlines and extra spaces from left and right
    headings.append(td.text.replace('\n', ' ').strip())

# Get all the 3 tables contained in "gdp_table"
for table, heading in zip(gdp_table_data[1].find_all("table"), headings):
    # Get headers of table i.e., Rank, Country, GDP.
    t_headers = []
    for th in table.find_all("th"):
        # remove any newlines and extra spaces from left and right
        t_headers.append(th.text.replace('\n', ' ').strip())

    # Get all the rows of table
    table_data = []
    for tr in table.tbody.find_all("tr"):  # find all tr's from table's tbody
        t_row = {}
        # Each table row is stored in the form of
        # t_row = {'Rank': '', 'Country/Territory': '', 'GDP(US$million)': ''}

        # find all td's (3) in tr and zip it with t_headers
        for td, th in zip(tr.find_all("td"), t_headers):
            t_row[th] = td.text.replace('\n', '').strip()
        table_data.append(t_row)

    # Put the data for the table with its heading.
    data[heading] = table_data


# Step 4: Export the data to csv
"""
For this example let's create 3 seperate csv for
3 tables respectively
"""
for topic, table in data.items():
    # Create csv file for each table
    with open(f"{topic}.csv", 'w') as out_file:
        # Each of the 3 tables has headers as follows
        headers = [
            "Country/Territory",
            "GDP(US$million)",
            "Rank"
        ]  # == t_headers
        writer = csv.DictWriter(out_file, headers)
        # write the header
        writer.writeheader()
        for row in table:
            if row:
                writer.writerow(row)

BEWARE -> Scraping rules

Now that you have a basic idea about scraping with Python, it is important to know the legality of web scraping before you start scraping a website. Generally, if you are using scraped data for personal use and do not plan to republish that data, it may not cause any problems. Read the Terms of Use, Conditions of Use, and also the robots.txt before scraping the website. You must follow the rules before scraping; otherwise, the website owner has every right to take legal action against you.

Conclusion

The above guide went through the process of how to scrape a Wikipedia page using Python 3 and Beautiful Soup and finally exporting it to a CSV file. We have learned how to scrape a basic website and fetch all the useful data in just a couple of minutes.

You can further continue to expand the awesomeness of the art of scraping by jumping to new websites. Some good examples of data to scrape are customer reviews and product pages.

Beautiful Soup is simple for small-scale web scraping. If you want to scrape webpages on a large scale, you can consider more advanced techniques like Scrapy and Selenium.

Hope you like this guide. If you have any queries regarding this topic, feel free to contact the author.
Frequently Asked Questions about how to use BeautifulSoup

How do you scrape data using BeautifulSoup?
To scrape a website using Python, you need to perform four basic steps: send an HTTP GET request to the URL of the webpage that you want to scrape, which will respond with HTML content; fetch and parse the data using BeautifulSoup and keep it in a data structure such as a dict or list; analyze the HTML tags and their attributes to locate the content you need; and output the data to a file format such as CSV or JSON.

How do you scrape a website with Python and BeautifulSoup?
The steps involved in web scraping are: installing the required third-party libraries, accessing the HTML content from the webpage, parsing the HTML content, and searching and navigating through the parse tree.

Why is BeautifulSoup used in Python?
Beautiful Soup is a Python library that is used for web scraping purposes to pull data out of HTML and XML files. It creates a parse tree from the page source code that can be used to extract data in a hierarchical and more readable manner.