
Python Beautifulsoup Example

Tutorial: Web Scraping with Python Using Beautiful Soup


Published: March 30, 2021

Learn how to scrape the web with Python! The internet is an absolutely massive source of data — data that we can access using web scraping and Python! In fact, web scraping is often the only way we can access data. There is a lot of information out there that isn't available in convenient CSV exports or easy-to-connect APIs. And websites themselves are often valuable sources of data — consider, for example, the kinds of analysis you could do if you could download every post on a web forum. To access those sorts of on-page datasets, we'll have to use web scraping.

Don't worry if you're still a total beginner! In this tutorial we're going to cover how to do web scraping with Python from scratch, starting with some answers to frequently asked questions. Then we'll work through an actual web scraping project, focusing on weather data: we'll scrape weather data from the web to support a weather app. But before we start writing any Python, we've got to cover the basics! If you're already familiar with the concept of web scraping, feel free to scroll past these questions and jump right into the tutorial!

The Fundamentals of Web Scraping: What is Web Scraping in Python?

Some websites offer data sets that are downloadable in CSV format, or accessible via an Application Programming Interface (API). But many websites with useful data don't offer these convenient options. Consider, for example, the National Weather Service's website. It contains up-to-date weather forecasts for every location in the US, but that weather data isn't accessible as a CSV or via API. It has to be viewed on the NWS site.

If we wanted to analyze this data, or download it for use in some other app, we wouldn't want to painstakingly copy-paste everything. Web scraping is a technique that lets us use programming to do the heavy lifting. We'll write some code that looks at the NWS site, grabs just the data we want to work with, and outputs it in the format we need.

In this tutorial, we'll show you how to perform web scraping using Python 3 and the Beautiful Soup library. We'll be scraping weather forecasts from the National Weather Service, and then analyzing them using the Pandas library. Just to be clear, lots of programming languages can be used to scrape the web! We also teach web scraping in R, for example. For this tutorial, though, we'll be sticking with Python.

How Does Web Scraping Work?

When we scrape the web, we write code that sends a request to the server that's hosting the page we specified. The server will return the source code — HTML, mostly — for the page (or pages) we requested.

So far, we're essentially doing the same thing a web browser does — sending a server request with a specific URL and asking the server to return the code for that page. But unlike a web browser, our web scraping code won't interpret the page's source code and display the page visually. Instead, we'll write some custom code that filters through the page's source code looking for the specific elements we've specified, and extracting whatever content we've instructed it to extract.

For example, if we wanted to get all of the data from inside a table that was displayed on a web page, our code would be written to go through these steps in sequence:

1. Request the content (source code) of a specific URL from the server
2. Download the content that is returned
3. Identify the elements of the page that are part of the table we want
4. Extract and (if necessary) reformat those elements into a dataset we can analyze or use in whatever way we require

If that all sounds very complicated, don't worry!
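To make those four steps concrete before we dig into the details, here is a minimal sketch of the whole sequence using the requests and Beautiful Soup libraries. The URL is just a stand-in; any page containing an HTML table would work:

import requests
from bs4 import BeautifulSoup

# Steps 1 and 2: request the page and download the returned content
page = requests.get("https://example.com/some-page-with-a-table")  # placeholder URL
page.raise_for_status()  # stop early if the download failed

# Step 3: parse the HTML and identify the table element
soup = BeautifulSoup(page.content, "html.parser")
table = soup.find("table")

# Step 4: extract the cell text from each row into a list of lists
rows = [
    [cell.get_text(strip=True) for cell in row.find_all(["td", "th"])]
    for row in table.find_all("tr")
]
print(rows)

Each of those calls is unpacked step by step in the rest of the tutorial.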
Python and Beautiful Soup have built-in features designed to make this relatively straightforward. One thing that's important to note: from a server's perspective, requesting a page via web scraping is the same as loading it in a web browser. When we use code to submit these requests, we might be "loading" pages much faster than a regular user, and thus quickly eating up the website owner's server resources.

Why Use Python for Web Scraping?

As previously mentioned, it's possible to do web scraping with many programming languages. However, one of the most popular approaches is to use Python and the Beautiful Soup library, as we'll do in this tutorial. Learning to do this with Python will mean that there are lots of tutorials, how-to videos, and bits of example code out there to help you deepen your knowledge once you've mastered the basics.

Is Web Scraping Legal?

Unfortunately, there's not a cut-and-dry answer here. Some websites explicitly allow web scraping. Others explicitly forbid it. Many websites don't offer any clear guidance one way or the other.

Before scraping any website, we should look for a terms and conditions page to see if there are explicit rules about scraping. If there are, we should follow them. If there are not, then it becomes more of a judgement call.

Remember, though, that web scraping consumes server resources for the host website. If we're just scraping one page once, that isn't going to cause a problem. But if our code is scraping 1,000 pages once every ten minutes, that could quickly get expensive for the website owner. So, in addition to following any and all explicit rules about web scraping posted on the site, it's also a good idea to follow these best practices:

Web Scraping Best Practices:
Never scrape more frequently than you need to.
Consider caching the content you scrape so that it's only downloaded once.
Build pauses into your code using functions like time.sleep() to keep from overwhelming servers with too many requests too quickly.

In our case for this tutorial, the NWS's data is public domain and its terms do not forbid web scraping, so we're in the clear.

The Components of a Web Page

Before we start writing code, we need to understand a little bit about the structure of a web page. We'll use the site's structure to write code that gets us the data we want to scrape, so understanding that structure is an important first step for any web scraping project.

When we visit a web page, our web browser makes a request to a web server. This request is called a GET request, since we're getting files from the server. The server then sends back files that tell our browser how to render the page for us. These files will typically include:

HTML — the main content of the page.
CSS — used to add styling to make the page look nicer.
JS — JavaScript files add interactivity to web pages.
Images — image formats, such as JPG and PNG, allow web pages to show pictures.

After our browser receives all the files, it renders the page and displays it to us. There's a lot that happens behind the scenes to render a page nicely, but we don't need to worry about most of it when we're web scraping. When we perform web scraping, we're interested in the main content of the web page, so we look primarily at the HTML.

HTML

HyperText Markup Language (HTML) is the language that web pages are created in. HTML isn't a programming language, like Python, though. It's a markup language that tells a browser how to display content.
HTML has many functions that are similar to what you might find in a word processor like Microsoft Word — it can make text bold, create paragraphs, and so on. If you're already familiar with HTML, feel free to jump to the next section of this tutorial. Otherwise, let's take a quick tour through HTML so we know enough to scrape effectively.

HTML consists of elements called tags. The most basic tag is the <html> tag. This tag tells the web browser that everything inside of it is HTML. We can make a simple HTML document just using this tag:

<html>
</html>

We haven't added any content to our page yet, so if we viewed our HTML document in a web browser, we wouldn't see anything.

Right inside an html tag, we can put two other tags: the head tag, and the body tag. The main content of the web page goes into the body tag. The head tag contains data about the title of the page, and other information that generally isn't useful in web scraping:

<html>
<head>
</head>
<body>
</body>
</html>

We still haven't added any content to our page (that goes inside the body tag), so if we open this HTML file in a browser, we still won't see anything.

You may have noticed above that we put the head and body tags inside the html tag. In HTML, tags are nested, and can go inside other tags.

We'll now add our first content to the page, inside a p tag. The p tag defines a paragraph, and any text inside the tag is shown as a separate paragraph:

<html>
<head>
</head>
<body>
<p>Here's a paragraph of text!</p>
<p>Here's a second paragraph of text!</p>
</body>
</html>

Rendered in a browser, that HTML file will look like this:

Here's a paragraph of text!

Here's a second paragraph of text!

Tags have commonly used names that depend on their position in relation to other tags:

child — a child is a tag inside another tag. So the two p tags above are both children of the body tag.
parent — a parent is the tag another tag is inside. Above, the html tag is the parent of the body tag.
sibling — a sibling is a tag that is nested inside the same parent as another tag. For example, head and body are siblings, since they're both inside html. Both p tags are siblings, since they're both inside body.

We can also add properties to HTML tags that change their behavior. Below, we'll add some extra text and hyperlinks using the a tag.

<p>
Here's a paragraph of text!
<a href="https://www.dataquest.io">Learn Data Science Online</a>
</p>
<p>
Here's a second paragraph of text!
<a href="https://www.python.org">Python</a>
</p>

Here's how this will look:

Here's a paragraph of text! Learn Data Science Online

Here's a second paragraph of text! Python

In the above example, we added two a tags. a tags are links, and tell the browser to render a link to another web page. The href property of the tag determines where the link goes.

a and p are extremely common HTML tags. Here are a few others:

div — indicates a division, or area, of the page.
b — bolds any text inside.
i — italicizes any text inside.
table — creates a table.
form — creates an input form.

For a full list of tags, look here.

Before we move into actual web scraping, let's learn about the class and id properties. These special properties give HTML elements names, and make them easier to interact with when we're scraping.

One element can have multiple classes, and a class can be shared between elements. Each element can only have one id, and an id can only be used once on a page. Classes and ids are optional, and not all elements will have them.

We can add classes and ids to our example:

Here’s a paragraph of text! Learn Data Science Online

Here’s a second paragraph of text! Python

Here's how this will look:

As you can see, adding classes and ids doesn't change how the tags are rendered at all.

The requests library

Now that we understand the structure of a web page, it's time to get into the fun part: scraping the content we want! The first thing we'll need to do to scrape a web page is to download the page. We can download pages using the Python requests library.

The requests library will make a GET request to a web server, which will download the HTML contents of a given web page for us. There are several different types of requests we can make using requests, of which GET is just one. If you want to learn more, check out our API tutorial.

Let's try downloading a simple sample website. We'll need to first import the requests library, and then download the page using the requests.get method:

import requests

page = requests.get("…")  # the sample page's URL goes here
page
After running our request, we get a Response object. This object has a status_code property, which indicates if the page was downloaded successfully:

page.status_code
200

A status_code of 200 means that the page downloaded successfully. We won't fully dive into status codes here, but a status code starting with a 2 generally indicates success, and a code starting with a 4 or a 5 indicates an error.

We can print out the HTML content of the page using the content property:

page.content



<!DOCTYPE html>
<html>
  <head>
    <title>A simple example page</title>
  </head>
  <body>
    <p>Here is some simple content for this page.</p>
  </body>
</html>


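Before moving on to parsing, note that a scraper can check the status code and fail loudly when a download goes wrong. A minimal sketch (with a placeholder URL) might look like this:

import requests

page = requests.get("https://example.com")  # placeholder URL
if page.status_code != 200:
    raise RuntimeError(f"Download failed with status code {page.status_code}")

# requests can also raise the error for us:
page.raise_for_status()  # raises requests.exceptions.HTTPError for 4xx and 5xx responses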
Parsing a page with BeautifulSoup

As you can see above, we have now downloaded an HTML document. We can use the BeautifulSoup library to parse this document, and extract the text from the p tag. We first have to import the library, and create an instance of the BeautifulSoup class to parse our document:

from bs4 import BeautifulSoup
soup = BeautifulSoup(page.content, 'html.parser')

We can now print out the HTML content of the page, formatted nicely, using the prettify method on the BeautifulSoup object:

print(soup.prettify())

This step isn't strictly necessary, and we won't always bother with it, but it can be helpful to look at prettified HTML to make the structure of the document, and where tags are nested, easier to see.

As all the tags are nested, we can move through the structure one level at a time. We can first select all the elements at the top level of the page using the children property of soup. Note that children returns a list generator, so we need to call the list function on it:

list(soup.children)
['html', '\n', <html>
<head>
<title>A simple example page</title>
</head>
<body>
<p>Here is some simple content for this page.</p>
</body>
</html>]

The above tells us that there are two tags at the top level of the page — the initial <!DOCTYPE html> tag, and the <html> tag. There is a newline character (\n) in the list as well. Let's see what the type of each element in the list is:

[type(item) for item in list(soup.children)]
[bs4.element.Doctype, bs4.element.NavigableString, bs4.element.Tag]

As we can see, all of the items are BeautifulSoup objects:

The first is a Doctype object, which contains information about the type of the document.
The second is a NavigableString, which represents text found in the HTML document.
The final item is a Tag object, which contains other nested tags.

The most important object type, and the one we'll deal with most often, is the Tag object. The Tag object allows us to navigate through an HTML document, and extract other tags and text. You can learn more about the various BeautifulSoup objects here.

We can now select the html tag and its children by taking the third item in the list:

html = list(soup.children)[2]

Each item in the list returned by the children property is also a BeautifulSoup object, so we can also call the children method on html. Now, we can find the children inside the html tag:

list(html.children)
['\n', <head>
<title>A simple example page</title>
</head>, '\n', <body>
<p>Here is some simple content for this page.</p>
</body>, '\n']

As we can see above, there are two tags here, head, and body. We want to extract the text inside the p tag, so we'll dive into the body:

body = list(html.children)[3]

Now, we can get the p tag by finding the children of the body tag:

list(body.children)
['\n', <p>Here is some simple content for this page.</p>, '\n']

We can now isolate the p tag:

p = list(body.children)[1]

Once we've isolated the tag, we can use the get_text method to extract all of the text inside the tag:

p.get_text()
'Here is some simple content for this page.'

Finding all instances of a tag at once

What we did above was useful for figuring out how to navigate a page, but it took a lot of commands to do something fairly simple. If we want to extract a single tag, we can instead use the find_all method, which will find all the instances of a tag on a page:

soup = BeautifulSoup(page.content, 'html.parser')
soup.find_all('p')
[<p>Here is some simple content for this page.</p>]

Note that find_all returns a list, so we'll have to loop through it, or use list indexing, to extract text:

soup.find_all('p')[0].get_text()
'Here is some simple content for this page.'

If you instead only want to find the first instance of a tag, you can use the find method, which will return a single BeautifulSoup object:

soup.find('p')

<p>Here is some simple content for this page.</p>
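Because find_all returns a list, a common pattern is simply to loop over it. Here is a small illustrative sketch (soup is any BeautifulSoup object parsed as above) that collects the text of every p tag on a page:

paragraph_texts = []
for p_tag in soup.find_all('p'):
    # get_text() pulls out the visible text inside each tag
    paragraph_texts.append(p_tag.get_text())
print(paragraph_texts)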

Searching for tags by class and id

We introduced classes and ids earlier, but it probably wasn't clear why they were useful. Classes and ids are used by CSS to determine which HTML elements to apply certain styles to. But when we're scraping, we can also use them to specify the elements we want to scrape.

To illustrate this principle, we'll work with the following page:

<html>
<head>
<title>A simple example page</title>
</head>
<body>
<div>
<p id="first">
First paragraph.
</p>
<p>
Second paragraph.
</p>
</div>
<p class="outer-text">
First outer paragraph.
</p>
<p class="outer-text">
Second outer paragraph.
</p>
</body>
</html>
We can access the above document at its URL. Let's first download the page and create a BeautifulSoup object:

page = requests.get("…")  # URL of the classes-and-ids example page
soup = BeautifulSoup(page.content, 'html.parser')
soup

Printing soup echoes the HTML document shown above.
Now, we can use the find_all method to search for items by class or by id. In the below example, we'll search for any p tag that has the class outer-text:

soup.find_all('p', class_='outer-text')
[<p class="outer-text">
First outer paragraph.
</p>, <p class="outer-text">
Second outer paragraph.
</p>]

In the below example, we'll look for any tag that has the class outer-text:

soup.find_all(class_="outer-text")
[<p class="outer-text">
First outer paragraph.
</p>, <p class="outer-text">
Second outer paragraph.
</p>]

We can also search for elements by id:

soup.find_all(id="first")
[<p id="first">
First paragraph.
</p>]

Using CSS Selectors

We can also search for items using CSS selectors. These selectors are how the CSS language allows developers to specify HTML tags to style. Here are some examples:

p a — finds all a tags inside of a p tag.
body p a — finds all a tags inside of a p tag inside of a body tag.
html body — finds all body tags inside of an html tag.
p.outer-text — finds all p tags with a class of outer-text.
p#first — finds all p tags with an id of first.
body p.outer-text — finds any p tags with a class of outer-text inside of a body tag.

You can learn more about CSS selectors here.

BeautifulSoup objects support searching a page via CSS selectors using the select method. We can use CSS selectors to find all the p tags in our page that are inside of a div like this:

soup.select("div p")
[<p id="first">
First paragraph.
</p>, <p>
Second paragraph.
</p>]
Note that the select method above returns a list of BeautifulSoup objects, just like find and find_all.

Downloading weather data

We now know enough to proceed with extracting information about the local weather from the National Weather Service website! The first step is to find the page we want to scrape. We'll extract weather information about downtown San Francisco from this page. Specifically, let's extract data about the extended forecast.

As we can see from the image, the page has information about the extended forecast for the next week, including time of day, temperature, and a brief description of the conditions.

Exploring page structure with Chrome DevTools

The first thing we'll need to do is inspect the page using Chrome DevTools. If you're using another browser, Firefox and Safari have equivalents.

We can start the developer tools in Chrome by clicking View -> Developer -> Developer Tools. You should end up with a panel at the bottom of the browser like what you see below. Make sure the Elements panel is highlighted:

Chrome Developer Tools

The elements panel will show you all the HTML tags on the page, and let you navigate through them. It's a really handy feature! By right clicking on the page near where it says "Extended Forecast", then clicking "Inspect", we'll open up the tag that contains the text "Extended Forecast" in the elements panel:

The extended forecast text

We can then scroll up in the elements panel to find the "outermost" element that contains all of the text that corresponds to the extended forecasts. In this case, it's a div tag with the id seven-day-forecast:

The div that contains the extended forecast items

If we click around on the console, and explore the div, we'll discover that each forecast item (like "Tonight", "Thursday", and "Thursday Night") is contained in a div with the class tombstone-container.

Time to Start Scraping!

We now know enough to download the page and start parsing it. In the below code, we will:

Download the web page containing the forecast.
Create a BeautifulSoup class to parse the page.
Find the div with id seven-day-forecast, and assign it to seven_day.
Inside seven_day, find each individual forecast item.
Extract and print the first forecast item.

page = requests.get("…")  # URL of the NWS extended forecast page for downtown San Francisco
soup = BeautifulSoup(page.content, 'html.parser')
seven_day = soup.find(id="seven-day-forecast")
forecast_items = seven_day.find_all(class_="tombstone-container")
tonight = forecast_items[0]
print(tonight.prettify())

<div class="tombstone-container">
 <p class="period-name">
  Tonight
 </p>
 <p>
  <img class="forecast-icon" src="…" title="Tonight: Mostly clear, with a low around 49. West northwest wind 12 to 17 mph decreasing to 6 to 11 mph after midnight. Winds could gust as high as 23 mph."/>
 </p>
 <p class="short-desc">
  Mostly Clear
 </p>
 <p class="temp">
  Low: 49 °F
 </p>
</div>

Extracting information from the page

As we can see, inside the forecast item tonight is all the information we want. There are four pieces of information we can extract:

The name of the forecast item — in this case, Tonight.
The description of the conditions — this is stored in the title property of img.
A short description of the conditions — in this case, Mostly Clear.
The temperature low — in this case, 49 °F.

We'll extract the name of the forecast item, the short description, and the temperature first, since they're all similar:

period = tonight.find(class_="period-name").get_text()
short_desc = tonight.find(class_="short-desc").get_text()
temp = tonight.find(class_="temp").get_text()
print(period)
print(short_desc)
print(temp)
Tonight
Mostly Clear
Low: 49 °F

Now, we can extract the title attribute from the img tag. To do this, we just treat the BeautifulSoup object like a dictionary, and pass in the attribute we want as a key:

img = tonight.find("img")
desc = img['title']
print(desc)
Tonight: Mostly clear, with a low around 49.

Extracting all the information from the page

Now that we know how to extract each individual piece of information, we can combine our knowledge with CSS selectors and list comprehensions to extract everything at once.

In the below code, we will:

Select all items with the class period-name inside an item with the class tombstone-container in seven_day.
Use a list comprehension to call the get_text method on each BeautifulSoup object.

period_tags = seven_day.select(".tombstone-container .period-name")
periods = [pt.get_text() for pt in period_tags]
periods
[‘Tonight’,
‘Thursday’,
‘ThursdayNight’,
‘Friday’,
‘FridayNight’,
‘Saturday’,
‘SaturdayNight’,
‘Sunday’,
'SundayNight']

As we can see above, our technique gets us each of the period names, in order. We can apply the same technique to get the other three fields:

short_descs = [sd.get_text() for sd in seven_day.select(".tombstone-container .short-desc")]
temps = [t.get_text() for t in seven_day.select(".tombstone-container .temp")]
descs = [d["title"] for d in seven_day.select(".tombstone-container img")]

print(short_descs)
print(temps)
print(descs)
[‘Mostly Clear’, ‘Sunny’, ‘Mostly Clear’, ‘Sunny’, ‘Slight ChanceRain’, ‘Rain Likely’, ‘Rain Likely’, ‘Rain Likely’, ‘Chance Rain’]
[‘Low: 49 °F’, ‘High: 63 °F’, ‘Low: 50 °F’, ‘High: 67 °F’, ‘Low: 57 °F’, ‘High: 64 °F’, ‘Low: 57 °F’, ‘High: 64 °F’, ‘Low: 55 °F’]
['Tonight: Mostly clear, with a low around 49.', 'Thursday: Sunny, with a high near 63. North wind 3 to 5 mph.', 'Thursday Night: Mostly clear, with a low around 50. Light and variable wind becoming east southeast 5 to 8 mph after midnight.', 'Friday: Sunny, with a high near 67. Southeast wind around 9 mph.', 'Friday Night: A 20 percent chance of rain after 11pm. Partly cloudy, with a low around 57. South southeast wind 13 to 15 mph, with gusts as high as 20 mph. New precipitation amounts of less than a tenth of an inch possible.', 'Saturday: Rain likely. Cloudy, with a high near 64. Chance of precipitation is 70%. New precipitation amounts between a quarter and half of an inch possible.', 'Saturday Night: Rain likely. Cloudy, with a low around 57. Chance of precipitation is 60%.', 'Sunday: Rain likely.', 'Sunday Night: A chance of rain. Mostly cloudy, with a low around 55.']

Combining our data into a Pandas DataFrame

We can now combine the data into a Pandas DataFrame and analyze it. A DataFrame is an object that can store tabular data, making data analysis easy. If you want to learn more about Pandas, check out our free-to-start course.

In order to do this, we'll call the DataFrame class, and pass in each list of items that we have. We pass them in as part of a dictionary. Each dictionary key will become a column in the DataFrame, and each list will become the values in the column:

import pandas as pd
weather = pd.DataFrame({
    "period": periods,
    "short_desc": short_descs,
    "temp": temps,
    "desc": descs})
weather
                                                desc         period         short_desc         temp
0  Tonight: Mostly clear, with a low around 49. W...        Tonight       Mostly Clear   Low: 49 °F
1  Thursday: Sunny, with a high near 63. North wi...       Thursday              Sunny  High: 63 °F
2  Thursday Night: Mostly clear, with a low aroun...  ThursdayNight       Mostly Clear   Low: 50 °F
3  Friday: Sunny, with a high near 67. Southeast ...         Friday              Sunny  High: 67 °F
4  Friday Night: A 20 percent chance of rain afte...    FridayNight  Slight ChanceRain   Low: 57 °F
5  Saturday: Rain likely. Cloudy, with a high ne...        Saturday        Rain Likely  High: 64 °F
6  Saturday Night: Rain likely. Cloudy, with a l...   SaturdayNight        Rain Likely   Low: 57 °F
7  Sunday: Rain likely. Cloudy, with a high near...          Sunday        Rain Likely  High: 64 °F
8  Sunday Night: A chance of rain. Mostly cloudy...     SundayNight        Chance Rain   Low: 55 °F
We can now do some analysis on the data. For example, we can use a regular expression and the Series.str.extract method to pull out the numeric temperature values:

temp_nums = weather["temp"].str.extract(r"(?P<temp_num>\d+)", expand=False)
weather["temp_num"] = temp_nums.astype('int')
temp_nums
0 49
1 63
2 50
3 67
4 57
5 64
6 57
7 64
8 55
Name: temp_num, dtype: object

We could then find the mean of all the high and low temperatures:

weather["temp_num"].mean()
58.444444444444443

We could also only select the rows that happen at night:

is_night = weather["temp"].str.contains("Low")
weather["is_night"] = is_night
is_night
0 True
1 False
2 True
3 False
4 True
5 False
6 True
7 False
8 True
Name: temp, dtype: bool

weather[is_night]

                                                desc         period         short_desc        temp  temp_num  is_night
0  Tonight: Mostly clear, with a low around 49. W...        Tonight       Mostly Clear  Low: 49 °F        49      True
2  Thursday Night: Mostly clear, with a low aroun...  ThursdayNight       Mostly Clear  Low: 50 °F        50      True
4  Friday Night: A 20 percent chance of rain afte...    FridayNight  Slight ChanceRain  Low: 57 °F        57      True
6  Saturday Night: Rain likely. Cloudy, with a l...   SaturdayNight        Rain Likely  Low: 57 °F        57      True
8  Sunday Night: A chance of rain. Mostly cloudy...     SundayNight        Chance Rain  Low: 55 °F        55      True
Next Steps For This Web Scraping Project

If you've made it this far, congratulations! You should now have a good understanding of how to scrape web pages and extract data. Of course, there's still a lot more to learn!

If you want to go further, a good next step would be to pick a site and try some web scraping on your own. Some good examples of data to scrape are:

News articles
Sports scores
Weather forecasts
Stock prices
Online retailer prices

You may also want to keep scraping the National Weather Service, and see what other data you can extract from the page, or try scraping data about your own city.

Alternatively, if you want to take your web scraping skills to the next level, you can check out our interactive course, which covers both the basics of web scraping and using Python to connect to APIs. With those two skills under your belt, you'll be able to collect lots of unique and interesting datasets from sites all over the web!
Beautiful Soup: Build a Web Scraper With Python


Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Web Scraping With Beautiful Soup and Python
The incredible amount of data on the Internet is a rich resource for any field of research or personal interest. To effectively harvest that data, you’ll need to become skilled at web scraping. The Python libraries requests and Beautiful Soup are powerful tools for the job. If you like to learn with hands-on examples and have a basic understanding of Python and HTML, then this tutorial is for you.
In this tutorial, you’ll learn how to:
Inspect the HTML structure of your target site with your browser’s developer tools
Decipher data encoded in URLs
Use requests and Beautiful Soup for scraping and parsing data from the Web
Step through a web scraping pipeline from start to finish
Build a script that fetches job offers from the Web and displays relevant information in your console
Working through this project will give you the knowledge of the process and tools you need to scrape any static website out there on the World Wide Web. You can download the project source code by clicking on the link below:
Let’s get started!
What Is Web Scraping?
Web scraping is the process of gathering information from the Internet. Even copying and pasting the lyrics of your favorite song is a form of web scraping! However, the words “web scraping” usually refer to a process that involves automation. Some websites don’t like it when automatic scrapers gather their data, while others don’t mind.
If you’re scraping a page respectfully for educational purposes, then you’re unlikely to have any problems. Still, it’s a good idea to do some research on your own and make sure that you’re not violating any Terms of Service before you start a large-scale project.
Reasons for Web Scraping
Say you’re a surfer, both online and in real life, and you’re looking for employment. However, you’re not looking for just any job. With a surfer’s mindset, you’re waiting for the perfect opportunity to roll your way!
There’s a job site that offers precisely the kinds of jobs you want. Unfortunately, a new position only pops up once in a blue moon, and the site doesn’t provide an email notification service. You think about checking up on it every day, but that doesn’t sound like the most fun and productive way to spend your time.
Thankfully, the world offers other ways to apply that surfer’s mindset! Instead of looking at the job site every day, you can use Python to help automate your job search’s repetitive parts. Automated web scraping can be a solution to speed up the data collection process. You write your code once, and it will get the information you want many times and from many pages.
In contrast, when you try to get the information you want manually, you might spend a lot of time clicking, scrolling, and searching, especially if you need large amounts of data from websites that are regularly updated with new content. Manual web scraping can take a lot of time and repetition.
There’s so much information on the Web, and new information is constantly added. You’ll probably be interested in at least some of that data, and much of it is just out there for the taking. Whether you’re actually on the job hunt or you want to download all the lyrics of your favorite artist, automated web scraping can help you accomplish your goals.
Challenges of Web Scraping
The Web has grown organically out of many sources. It combines many different technologies, styles, and personalities, and it continues to grow to this day. In other words, the Web is a hot mess! Because of this, you’ll run into some challenges when scraping the Web:
Variety: Every website is different. While you’ll encounter general structures that repeat themselves, each website is unique and will need personal treatment if you want to extract the relevant information.
Durability: Websites constantly change. Say you’ve built a shiny new web scraper that automatically cherry-picks what you want from your resource of interest. The first time you run your script, it works flawlessly. But when you run the same script only a short while later, you run into a discouraging and lengthy stack of tracebacks!
Unstable scripts are a realistic scenario, as many websites are in active development. Once the site’s structure has changed, your scraper might not be able to navigate the sitemap correctly or find the relevant information. The good news is that many changes to websites are small and incremental, so you’ll likely be able to update your scraper with only minimal adjustments.
However, keep in mind that because the Internet is dynamic, the scrapers you’ll build will probably require constant maintenance. You can set up continuous integration to run scraping tests periodically to ensure that your main script doesn’t break without your knowledge.
An Alternative to Web Scraping: APIs
Some website providers offer application programming interfaces (APIs) that allow you to access their data in a predefined manner. With APIs, you can avoid parsing HTML. Instead, you can access the data directly using formats like JSON and XML. HTML is primarily a way to present content to users visually.
When you use an API, the process is generally more stable than gathering the data through web scraping. That’s because developers create APIs to be consumed by programs rather than by human eyes.
The front-end presentation of a site might change often, but such a change in the website’s design doesn’t affect its API structure. The structure of an API is usually more permanent, which means it’s a more reliable source of the site’s data.
However, APIs can change as well. The challenges of both variety and durability apply to APIs just as they do to websites. Additionally, it’s much harder to inspect the structure of an API by yourself if the provided documentation lacks quality.
The approach and tools you need to gather information using APIs are outside the scope of this tutorial. To learn more about it, check out API Integration in Python.
Scrape the Fake Python Job Site
In this tutorial, you’ll build a web scraper that fetches Python software developer job listings from the Fake Python Jobs site. It’s an example site with fake job postings that you can freely scrape to train your skills. Your web scraper will parse the HTML on the site to pick out the relevant information and filter that content for specific words.
You can scrape any site on the Internet that you can look at, but the difficulty of doing so depends on the site. This tutorial offers you an introduction to web scraping to help you understand the overall process. Then, you can apply this same process for every website you’ll want to scrape.
Throughout the tutorial, you’ll also encounter a few exercise blocks. You can click to expand them and challenge yourself by completing the tasks described there.
Step 1: Inspect Your Data Source
Before you write any Python code, you need to get to know the website that you want to scrape. That should be your first step for any web scraping project you want to tackle. You’ll need to understand the site structure to extract the information that’s relevant for you. Start by opening the site you want to scrape with your favorite browser.
Explore the Website
Click through the site and interact with it just like any typical job searcher would. For example, you can scroll through the main page of the website:
You can see many job postings in a card format, and each of them has two buttons. If you click Apply, then you’ll see a new page that contains more detailed descriptions of the selected job. You might also notice that the URL in your browser’s address bar changes when you interact with the website.
Decipher the Information in URLs
A programmer can encode a lot of information in a URL. Your web scraping journey will be much easier if you first become familiar with how URLs work and what they’re made of. For example, you might find yourself on a details page that has the following URL:
You can deconstruct the above URL into two main parts:
The base URL represents the path to the search functionality of the website. The specific site location that ends the URL is the path to the job description's unique resource.
Any job posted on this website will use the same base URL. However, the unique resources’ location will be different depending on what specific job posting you’re viewing.
URLs can hold more information than just the location of a file. Some websites use query parameters to encode values that you submit when performing a search. You can think of them as query strings that you send to the database to retrieve specific records.
You’ll find query parameters at the end of a URL. For example, if you go to Indeed and search for “software developer” in “Australia” through their search bar, you’ll see that the URL changes to include these values as query parameters:
The query parameters in this URL are ?q=software+developer&l=Australia. Query parameters consist of three parts:
Start: The beginning of the query parameters is denoted by a question mark (?).
Information: The pieces of information constituting one query parameter are encoded in key-value pairs, where related keys and values are joined together by an equals sign (key=value).
Separator: Every URL can have multiple query parameters, separated by an ampersand symbol (&).
Equipped with this information, you can pick apart the URL’s query parameters into two key-value pairs:
q=software+developer selects the type of job.
l=Australia selects the location of the job.
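To see how such a URL comes apart programmatically, here is a short sketch using Python's standard urllib.parse module; the Indeed-style URL is an assumed example, not one taken from this tutorial:

from urllib.parse import urlparse, parse_qs

url = "https://au.indeed.com/jobs?q=software+developer&l=Australia"  # assumed example URL
parts = urlparse(url)
print(parts.path)             # /jobs
print(parse_qs(parts.query))  # {'q': ['software developer'], 'l': ['Australia']}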
Try to change the search parameters and observe how that affects your URL. Go ahead and enter new values in the search bar up top:
Change these values to observe the changes in the URL.
Next, try to change the values directly in your URL. See what happens when you paste the following URL into your browser’s address bar:
If you change and submit the values in the website’s search box, then it’ll be directly reflected in the URL’s query parameters and vice versa. If you change either of them, then you’ll see different results on the website.
As you can see, exploring the URLs of a site can give you insight into how to retrieve data from the website’s server.
Head back to Fake Python Jobs and continue exploring it. This site is a purely static website that doesn’t operate on top of a database, which is why you won’t have to work with query parameters in this scraping tutorial.
Inspect the Site Using Developer Tools
Next, you’ll want to learn more about how the data is structured for display. You’ll need to understand the page structure to pick what you want from the HTML response that you’ll collect in one of the upcoming steps.
Developer tools can help you understand the structure of a website. All modern browsers come with developer tools installed. In this section, you’ll see how to work with the developer tools in Chrome. The process will be very similar to other modern browsers.
In Chrome on macOS, you can open up the developer tools through the menu by selecting View → Developer → Developer Tools. On Windows and Linux, you can access them by clicking the top-right menu button (⋮) and selecting More Tools → Developer Tools. You can also access your developer tools by right-clicking on the page and selecting the Inspect option or using a keyboard shortcut:
Mac: Cmd+Alt+I
Windows/Linux: Ctrl+Shift+I
Developer tools allow you to interactively explore the site’s document object model (DOM) to better understand your source. To dig into your page’s DOM, select the Elements tab in developer tools. You’ll see a structure with clickable HTML elements. You can expand, collapse, and even edit elements right in your browser:
The HTML on the right represents the structure of the page you can see on the left.
You can think of the text displayed in your browser as the HTML structure of that page. If you’re interested, then you can read more about the difference between the DOM and HTML on CSS-TRICKS.
When you right-click elements on the page, you can select Inspect to zoom to their location in the DOM. You can also hover over the HTML text on your right and see the corresponding elements light up on the page.
Click to expand the exercise block for a specific task to practice using your developer tools:
Find a single job posting. What HTML element is it wrapped in, and what other HTML elements does it contain?
Play around and explore! The more you get to know the page you’re working with, the easier it will be to scrape it. However, don’t get too overwhelmed with all that HTML text. You’ll use the power of programming to step through this maze and cherry-pick the information that’s relevant to you.
Step 2: Scrape HTML Content From a Page
Now that you have an idea of what you’re working with, it’s time to start using Python. First, you’ll want to get the site’s HTML code into your Python script so that you can interact with it. For this task, you’ll use Python’s requests library.
Create a virtual environment for your project before you install any external package. Activate your new virtual environment, then type the following command in your terminal to install the external requests library:
$ python -m pip install requests
Then open up a new file in your favorite text editor. All you need to retrieve the HTML are a few lines of code:
import requests
URL = "…"  # the URL of the Fake Python Jobs site
page = requests.get(URL)

print(page.text)
This code issues an HTTP GET request to the given URL. It retrieves the HTML data that the server sends back and stores that data in a Python object.
If you print the .text attribute of page, then you'll notice that it looks just like the HTML that you inspected earlier with your browser's developer tools. You successfully fetched the static site content from the Internet! You now have access to the site's HTML from within your Python script.
Static Websites
The website that you’re scraping in this tutorial serves static HTML content. In this scenario, the server that hosts the site sends back HTML documents that already contain all the data that you’ll get to see as a user.
When you inspected the page with developer tools earlier on, you discovered that a job posting consists of the following long and messy-looking HTML:

<div class="card-content">
  <h2 class="title is-5">Senior Python Developer</h2>
  <h3 class="subtitle is-6 company">Payne, Roberts and Davis</h3>
  <p class="location">Stewartbury, AA</p>
  <footer class="card-footer">
    <a href="…">Learn</a>
    <a href="…">Apply</a>
  </footer>
</div>
It can be challenging to wrap your head around a long block of HTML code. To make it easier to read, you can use an HTML formatter to clean it up automatically. Good readability helps you better understand the structure of any code block. While it may or may not help improve the HTML formatting, it’s always worth a try.
The HTML you’ll encounter will sometimes be confusing. Luckily, the HTML of this job board has descriptive class names on the elements that you’re interested in:
class=”title is-5″ contains the title of the job posting.
class=”subtitle is-6 company” contains the name of the company that offers the position.
class=”location” contains the location where you’d be working.
In case you ever get lost in a large pile of HTML, remember that you can always go back to your browser and use the developer tools to further explore the HTML structure interactively.
By now, you’ve successfully harnessed the power and user-friendly design of Python’s requests library. With only a few lines of code, you managed to scrape static HTML content from the Web and make it available for further processing.
However, there are more challenging situations that you might encounter when you’re scraping websites. Before you learn how to pick the relevant information from the HTML that you just scraped, you’ll take a quick look at two of these more challenging situations.
Hidden Websites
Some pages contain information that’s hidden behind a login. That means you’ll need an account to be able to scrape anything from the page. The process to make an HTTP request from your Python script is different from how you access a page from your browser. Just because you can log in to the page through your browser doesn’t mean you’ll be able to scrape it with your Python script.
However, the requests library comes with the built-in capacity to handle authentication. With these techniques, you can log in to websites when making the HTTP request from your Python script and then scrape information that’s hidden behind a login. You won’t need to log in to access the job board information, which is why this tutorial won’t cover authentication.
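Authentication is out of scope for this tutorial, but as a rough illustration, the simplest case (HTTP Basic Auth) looks something like the sketch below; the URL and credentials are placeholders, and many sites use entirely different login schemes:

import requests

response = requests.get(
    "https://example.com/protected-page",  # placeholder URL
    auth=("my_username", "my_password"),   # placeholder credentials
)
print(response.status_code)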
Dynamic Websites
In this tutorial, you’ll learn how to scrape a static website. Static sites are straightforward to work with because the server sends you an HTML page that already contains all the page information in the response. You can parse that HTML response and immediately begin to pick out the relevant data.
On the other hand, with a dynamic website, the server might not send back any HTML at all. Instead, you could receive JavaScript code as a response. This code will look completely different from what you saw when you inspected the page with your browser’s developer tools.
What happens in the browser is not the same as what happens in your script. Your browser will diligently execute the JavaScript code it receives from a server and create the DOM and HTML for you locally. However, if you request a dynamic website in your Python script, then you won’t get the HTML page content.
When you use requests, you only receive what the server sends back. In the case of a dynamic website, you’ll end up with some JavaScript code instead of HTML. The only way to go from the JavaScript code you received to the content that you’re interested in is to execute the code, just like your browser does. The requests library can’t do that for you, but there are other solutions that can.
For example, requests-html is a project created by the author of the requests library that allows you to render JavaScript using syntax that’s similar to the syntax in requests. It also includes capabilities for parsing the data by using Beautiful Soup under the hood.
You won’t go deeper into scraping dynamically-generated content in this tutorial. For now, it’s enough to remember to look into one of the options mentioned above if you need to scrape a dynamic website.
Step 3: Parse HTML Code With Beautiful Soup
You’ve successfully scraped some HTML from the Internet, but when you look at it, it just seems like a huge mess. There are tons of HTML elements here and there, thousands of attributes scattered around—and wasn’t there some JavaScript mixed in as well? It’s time to parse this lengthy code response with the help of Python to make it more accessible and pick out the data you want.
Beautiful Soup is a Python library for parsing structured data. It allows you to interact with HTML in a similar way to how you interact with a web page using developer tools. The library exposes a couple of intuitive functions you can use to explore the HTML you received. To get started, use your terminal to install Beautiful Soup:
$ python -m pip install beautifulsoup4
Then, import the library in your Python script and create a Beautiful Soup object:
from bs4 import BeautifulSoup
soup = BeautifulSoup(page.content, "html.parser")
When you add the two highlighted lines of code, you create a Beautiful Soup object that takes page.content, which is the HTML content you scraped earlier, as its input.
The second argument, "html.parser", makes sure that you use the appropriate parser for HTML content.
Find Elements by ID
In an HTML web page, every element can have an id attribute assigned. As the name already suggests, that id attribute makes the element uniquely identifiable on the page. You can begin to parse your page by selecting a specific element by its ID.
Switch back to developer tools and identify the HTML object that contains all the job postings. Explore by hovering over parts of the page and using right-click to Inspect.
The element you're looking for is a <div> with an id attribute that has the value "ResultsContainer". It has some other attributes as well, but below is the gist of what you're looking for:

<div id="ResultsContainer">
  <!-- all the job listings -->
</div>
Beautiful Soup allows you to find that specific HTML element by its ID:
results = soup.find(id="ResultsContainer")
For easier viewing, you can prettify any Beautiful Soup object when you print it out. If you call .prettify() on the results variable that you just assigned above, then you'll see all the HTML contained within the <div>:

print(results.prettify())
When you use the element’s ID, you can pick out one element from among the rest of the HTML. Now you can work with only this specific part of the page’s HTML. It looks like the soup just got a little thinner! However, it’s still quite dense.
Find Elements by HTML Class Name
You've seen that every job posting is wrapped in a <div> element with the class card-content. Now you can work with your new object called results and select only the job postings in it. These are, after all, the parts of the HTML that you're interested in! You can do this in one line of code:

job_elements = results.find_all("div", class_="card-content")
Here, you call .find_all() on a Beautiful Soup object, which returns an iterable containing all the HTML for all the job listings displayed on that page.

Take a look at all of them:

for job_element in job_elements:
    print(job_element, end="\n" * 2)
That's already pretty neat, but there's still a lot of HTML! You saw earlier that your page has descriptive class names on some elements. You can pick out those child elements from each job posting with .find():

for job_element in job_elements:
    title_element = job_element.find("h2", class_="title")
    company_element = job_element.find("h3", class_="company")
    location_element = job_element.find("p", class_="location")
    print(title_element)
    print(company_element)
    print(location_element)
    print()
Each job_element is another BeautifulSoup() object. Therefore, you can use the same methods on it as you did on its parent element, results.
With this code snippet, you’re getting closer and closer to the data that you’re actually interested in. Still, there’s a lot going on with all those HTML tags and attributes floating around:
Next, you’ll learn how to narrow down this output to access only the text content you’re interested in.
Find Elements by Class Name and Text Content
Not all of the job listings are developer jobs. Instead of printing out all the jobs listed on the website, you’ll first filter them using keywords.
You know that job titles in the page are kept within <h2> elements. To filter for only specific jobs, you can use the string argument:

python_jobs = results.find_all("h2", string="Python")

This code finds all <h2> elements where the contained string matches "Python" exactly. Note that you're directly calling the method on your first results variable. If you go ahead and print() the output of the above code snippet to your console, then you might be disappointed because it'll be empty:
>>> print(python_jobs)
[]
There was a Python job in the search results, so why is it not showing up?
When you use string= as you did above, your program looks for that string exactly. Any differences in the spelling, capitalization, or whitespace will prevent the element from matching. In the next section, you’ll find a way to make your search string more general.
Pass a Function to a Beautiful Soup Method
In addition to strings, you can sometimes pass functions as arguments to Beautiful Soup methods. You can change the previous line of code to use a function instead:
python_jobs = results.find_all(
    "h2", string=lambda text: "python" in text.lower()
)

Now you're passing an anonymous function to the string= argument. The lambda function looks at the text of each <h2> element, converts it to lowercase, and checks whether the substring "python" is found anywhere. You can check whether you managed to identify all the Python jobs with this approach:
>>> print(len(python_jobs))
10
Your program has found 10 matching job posts that include the word “python” in their job title!
Finding elements depending on their text content is a powerful way to filter your HTML response for specific information. Beautiful Soup allows you to use either exact strings or functions as arguments for filtering text in Beautiful Soup objects.
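As a related option, Beautiful Soup also accepts a compiled regular expression wherever it accepts a string filter, which sits somewhere between an exact string and a full function. Here is a brief sketch of the same case-insensitive search, assuming the results object from above:

import re

python_jobs = results.find_all("h2", string=re.compile("python", re.IGNORECASE))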
However, when you try to run your scraper to print out the information of the filtered Python jobs, you’ll run into an error:
AttributeError: ‘NoneType’ object has no attribute ‘text’
This message is a common error that you’ll run into a lot when you’re scraping information from the Internet. Inspect the HTML of an element in your python_jobs list. What does it look like? Where do you think the error is coming from?
Identify Error Conditions
When you look at a single element in python_jobs, you'll see that it consists of only the <h2> element that contains the job title:

<h2 class="title is-5">Senior Python Developer</h2>

When you revisit the code you used to select the items, you'll see that that's what you targeted. You filtered for only the <h2> title elements of the job postings that contain the word "python". As you can see, these elements don't include the rest of the information about the job.
The error message you received earlier was related to this:
You tried to find the job title, the company name, and the job’s location in each element in python_jobs, but each element contains only the job title text.
Your diligent parsing library still looks for the other ones, too, and returns None because it can't find them. Then, print() fails with the shown error message when you try to extract the .text attribute from one of these None objects.
The text you're looking for is nested in sibling elements of the <h2> elements your filter returned. Beautiful Soup can help you to select sibling, child, and parent elements of each Beautiful Soup object.
Access Parent Elements
One way to get access to all the information you need is to step up in the hierarchy of the DOM starting from the <h2> elements that you identified. Take another look at the HTML of a single job posting. Find the <h2> element that contains the job title as well as its closest parent element that contains all the information that you're interested in:

The <div> element with the card-content class contains all the information you want. It's a third-level parent of the <h2> title element that you found using your filter.
With this information in mind, you can now use the elements in python_jobs and fetch their great-grandparent elements instead to get access to all the information you want:
python_job_elements = [
    h2_element.parent.parent.parent for h2_element in python_jobs
]

You added a list comprehension that operates on each of the <h2> title elements in python_jobs that you got by filtering with the lambda expression. You're selecting the parent element of the parent element of the parent element of each <h2> title element. That's three generations up!
When you were looking at the HTML of a single job posting, you identified that this specific parent element with the class name card-content contains all the information you need.
Now you can adapt the code in your for loop to iterate over the parent elements instead:
for job_element in python_job_elements:
    # -- snip --

When you run your script another time, you'll see that your code once again has access to all the relevant information. That's because you're now looping over the <div class="card-content"> elements instead of just the <h2> title elements.
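The snipped body isn't shown at this point in the text, but under the assumptions already introduced (the title, company, and location class names and the card-content parent elements), the adapted loop might look roughly like this:

for job_element in python_job_elements:
    title_element = job_element.find("h2", class_="title")
    company_element = job_element.find("h3", class_="company")
    location_element = job_element.find("p", class_="location")
    # .text holds each tag's text content; .strip() trims surrounding whitespace
    print(title_element.text.strip())
    print(company_element.text.strip())
    print(location_element.text.strip())
    print()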
Using the .parent attribute that each Beautiful Soup object comes with gives you an intuitive way of stepping through your DOM structure and addressing the elements you need. You can also access child elements and sibling elements in a similar manner. Read up on navigating the tree for more information.
Keep Practicing
If you’ve written the code alongside this tutorial, then you can run your script as is, and you’ll see the fake job information pop up in your terminal. Your next step is to tackle a real-life job board! To keep practicing your new skills, revisit the web scraping process using any or all of the following sites:
PythonJobs
Remote(dot)co
Indeed
The linked websites return their search results as static HTML responses, similar to the Fake Python job board. Therefore, you can scrape them using only requests and Beautiful Soup.
Start going through this tutorial again from the top using one of these other sites. You’ll see that each website’s structure is different and that you’ll need to rebuild the code in a slightly different way to fetch the data you want. Tackling this challenge is a great way to practice the concepts that you just learned. While it might make you sweat every so often, your coding skills will be stronger for it!
During your second attempt, you can also explore additional features of Beautiful Soup. Use the documentation as your guidebook and inspiration. Extra practice will help you become more proficient at web scraping using Python, requests, and Beautiful Soup.
To wrap up your journey into web scraping, you could then give your code a final makeover and create a command-line interface (CLI) app that scrapes one of the job boards and filters the results by a keyword that you can input on each execution. Your CLI tool could allow you to search for specific types of jobs or jobs in particular locations.
If you’re interested in learning how to adapt your script as a command-line interface, then check out How to Build Command-Line Interfaces in Python With argparse.
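As a rough starting point for that idea, here is a hypothetical sketch built on argparse; scrape_jobs() is a stand-in for whatever requests-and-Beautiful-Soup logic you end up writing, and the keyword argument is the only interface it exposes:

import argparse

def scrape_jobs(keyword):
    # Placeholder: run your requests + Beautiful Soup pipeline here and
    # return the job postings whose titles contain the keyword.
    return [f"No scraper wired up yet; you searched for {keyword!r}"]

def main():
    parser = argparse.ArgumentParser(description="Scrape a job board and filter by keyword.")
    parser.add_argument("keyword", help="word to look for in job titles, e.g. python")
    args = parser.parse_args()
    for line in scrape_jobs(args.keyword):
        print(line)

if __name__ == "__main__":
    main()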
Conclusion
The requests library gives you a user-friendly way to fetch static HTML from the Internet using Python. You can then parse the HTML with another package called Beautiful Soup. Both packages are trusted and helpful companions for your web scraping adventures. You’ll find that Beautiful Soup will cater to most of your parsing needs, including navigation and advanced searching.
In this tutorial, you learned how to scrape data from the Web using Python, requests, and Beautiful Soup. You built a script that fetches job postings from the Internet and went through the complete web scraping process from start to finish.
You learned how to:
Decipher the data encoded in URLs
Download the page’s HTML content using Python’s requests library
Parse the downloaded HTML with Beautiful Soup to extract relevant information
With this broad pipeline in mind and two powerful libraries in your tool kit, you can go out and see what other websites you can scrape. Have fun, and always remember to be respectful and use your programming skills responsibly.
You can download the source code for the sample script that you built in this tutorial by clicking the link below:
Watch Now This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Web Scraping With Beautiful Soup and Python
Implementing Web Scraping in Python with BeautifulSoup


There are mainly two ways to extract data from a website:

Use the API of the website (if it exists). For example, Facebook has the Facebook Graph API which allows retrieval of data posted on Facebook.
Access the HTML of the webpage and extract useful information/data from it. This technique is called web scraping or web harvesting or web data extraction.

This article discusses the steps involved in web scraping using the implementation of a web scraping framework of Python called Beautiful Soup.

Steps involved in web scraping:

Send an HTTP request to the URL of the webpage you want to access. The server responds to the request by returning the HTML content of the webpage. For this task, we will use a third-party HTTP library for Python: requests.
Once we have accessed the HTML content, we are left with the task of parsing the data. Since most of the HTML data is nested, we cannot extract data simply through string processing. One needs a parser which can create a nested/tree structure of the HTML data. There are many HTML parser libraries available, but the most advanced one is html5lib.
Now, all we need to do is navigate and search the parse tree that we created, i.e. tree traversal. For this task, we will be using another third-party Python library, Beautiful Soup. It is a Python library for pulling data out of HTML and XML files.

Step 1: Installing the required third-party libraries

The easiest way to install external libraries in Python is to use pip. pip is a package management system used to install and manage software packages written in Python. All you need to do is:

pip install requests
pip install html5lib
pip install bs4

Another way is to download them manually from these links: requests, html5lib, beautifulsoup4.

Step 2: Accessing the HTML content from the webpage

Let us try to understand this piece of code:

import requests
URL = "…"  # URL of the page with the quotes we want to scrape
r = requests.get(URL)
print(r.content)

First of all, import the requests library and specify the URL of the webpage you want to access. Send an HTTP request to the specified URL and save the response from the server in a response object called r. Now, as we print r.content, we get the raw HTML content of the webpage. It is of 'string' type.

Step 3: Parsing the HTML content

A really nice thing about the BeautifulSoup library is that it is built on top of HTML parsing libraries like html5lib, lxml, html.parser, etc. So a BeautifulSoup object and the parser library can be created at the same time. In the example above:

soup = BeautifulSoup(r.content, 'html5lib')

We create a BeautifulSoup object by passing two arguments:

r.content: It is the raw HTML content.
html5lib: Specifying the HTML parser we want to use.

Now soup.prettify() is printed; it gives the visual representation of the parse tree created from the raw HTML content.

Step 4: Searching and navigating through the parse tree

Now, we would like to extract some useful data from the HTML content. The soup object contains all the data in the nested structure which could be programmatically extracted. In our example, we are scraping a webpage consisting of some quotes. So, we would like to create a program to save those quotes (and all relevant information about them):

import requests
from bs4 import BeautifulSoup
import csv

URL = "…"  # URL of the quotes page
r = requests.get(URL)

soup = BeautifulSoup(r.content, 'html5lib')

quotes = []  # a list to store the quotes

table = soup.find('div', attrs = {'id': 'all_quotes'})

for row in table.findAll('div', attrs = {'class': 'col-6 col-lg-3 text-center margin-30px-bottom sm-margin-30px-top'}):
    quote = {}
    quote['theme'] = row.h5.text
    quote['url'] = row.a['href']
    quote['img'] = row.img['src']
    quote['lines'] = row.img['alt'].split(" #")[0]
    quote['author'] = row.img['alt'].split(" #")[1]
    quotes.append(quote)

filename = '…'  # name of the output CSV file
with open(filename, 'w', newline='') as f:
    w = csv.DictWriter(f, ['theme', 'url', 'img', 'lines', 'author'])
    w.writeheader()
    for quote in quotes:
        w.writerow(quote)

Before moving on, we recommend you go through the HTML content of the webpage (which we printed using the soup.prettify() method) and try to find a pattern or a way to navigate to the quotes.

It is noticed that all the quotes are inside a div container whose id is 'all_quotes'. So, we find that div element (termed as table in the above code) using the find() method:

table = soup.find('div', attrs = {'id': 'all_quotes'})

The first argument is the HTML tag you want to search and the second argument is a dictionary-type element to specify the additional attributes associated with that tag. The find() method returns the first matching element. You can try to print table.prettify() to get a sense of what this piece of code does.

Now, in the table element, one can notice that each quote is inside a div container whose class is quote. So, we iterate through each div container whose class is quote.

Here, we use the findAll() method, which is similar to the find method in terms of arguments but returns a list of all matching elements. Each quote is now iterated using a variable called row.

Here is one sample row HTML content for better understanding:

Now consider this piece of code:

for row in table.findAll('div', attrs = {'class': 'col-6 col-lg-3 text-center margin-30px-bottom sm-margin-30px-top'}):
    quote = {}
    quote['theme'] = row.h5.text
    quote['url'] = row.a['href']
    quote['img'] = row.img['src']
    quote['lines'] = row.img['alt'].split(" #")[0]
    quote['author'] = row.img['alt'].split(" #")[1]
    quotes.append(quote)

We create a dictionary to save all information about a quote. The nested structure can be accessed using dot notation. To access the text inside an HTML element, we use:

quote['theme'] = row.h5.text

One can add, remove, modify and access a tag's attributes. This is done by treating the tag as a dictionary:

quote['url'] = row.a['href']

Lastly, all the quotes are appended to the list called quotes.

Finally, we would like to save all our data in some CSV file:

filename = '…'  # name of the output CSV file
with open(filename, 'w', newline='') as f:
    w = csv.DictWriter(f, ['theme', 'url', 'img', 'lines', 'author'])
    w.writeheader()
    for quote in quotes:
        w.writerow(quote)

Here we create a CSV file and save all the quotes in it for any further use.

So, this was a simple example of how to create a web scraper in Python. From here, you can try to scrape any other website of your choice. In case of any queries, post them below in the comments.

Note: Web scraping is considered illegal in many cases. It may also cause your IP to be blocked permanently by a website.

This blog is contributed by Nikhil Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article or mail your article to GeeksforGeeks, and see your article appearing on the GeeksforGeeks main page to help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.

Frequently Asked Questions about python beautifulsoup example

How do I use BeautifulSoup in Python?

First, we need to import all the libraries that we are going to use. Next, declare a variable for the URL of the page. Then, make use of the Python urllib2 library to get the HTML page of the URL declared. Finally, parse the page into BeautifulSoup format so we can use BeautifulSoup to work on it.

What is a BeautifulSoup in Python?

Beautiful Soup is a Python package for parsing HTML and XML documents (including having malformed markup, i.e. non-closed tags, so named after tag soup). It creates a parse tree for parsed pages that can be used to extract data from HTML, which is useful for web scraping.

How do I use BeautifulSoup to scrape?

Implementing Web Scraping in Python with BeautifulSoup. Steps involved in web scraping:
Step 1: Installing the required third-party libraries.
Step 2: Accessing the HTML content from the webpage.
Step 3: Parsing the HTML content.
Step 4: Searching and navigating through the parse tree.
