• December 22, 2024

Python Scrape

Beautiful Soup: Build a Web Scraper With Python

This tutorial has a related video course created by the Real Python team. Watch it together with the written tutorial to deepen your understanding: Web Scraping With Beautiful Soup and Python
The incredible amount of data on the Internet is a rich resource for any field of research or personal interest. To effectively harvest that data, you’ll need to become skilled at web scraping. The Python libraries requests and Beautiful Soup are powerful tools for the job. If you like to learn with hands-on examples and have a basic understanding of Python and HTML, then this tutorial is for you.
In this tutorial, you’ll learn how to:
Inspect the HTML structure of your target site with your browser’s developer tools
Decipher data encoded in URLs
Use requests and Beautiful Soup for scraping and parsing data from the Web
Step through a web scraping pipeline from start to finish
Build a script that fetches job offers from the Web and displays relevant information in your console
Working through this project will give you the knowledge of the process and tools you need to scrape any static website out there on the World Wide Web.
Let’s get started!
What Is Web Scraping?
Web scraping is the process of gathering information from the Internet. Even copying and pasting the lyrics of your favorite song is a form of web scraping! However, the words “web scraping” usually refer to a process that involves automation. Some websites don’t like it when automatic scrapers gather their data, while others don’t mind.
If you’re scraping a page respectfully for educational purposes, then you’re unlikely to have any problems. Still, it’s a good idea to do some research on your own and make sure that you’re not violating any Terms of Service before you start a large-scale project.
Reasons for Web Scraping
Say you’re a surfer, both online and in real life, and you’re looking for employment. However, you’re not looking for just any job. With a surfer’s mindset, you’re waiting for the perfect opportunity to roll your way!
There’s a job site that offers precisely the kinds of jobs you want. Unfortunately, a new position only pops up once in a blue moon, and the site doesn’t provide an email notification service. You think about checking up on it every day, but that doesn’t sound like the most fun and productive way to spend your time.
Thankfully, the world offers other ways to apply that surfer’s mindset! Instead of looking at the job site every day, you can use Python to help automate your job search’s repetitive parts. Automated web scraping can be a solution to speed up the data collection process. You write your code once, and it will get the information you want many times and from many pages.
In contrast, when you try to get the information you want manually, you might spend a lot of time clicking, scrolling, and searching, especially if you need large amounts of data from websites that are regularly updated with new content. Manual web scraping can take a lot of time and repetition.
There’s so much information on the Web, and new information is constantly added. You’ll probably be interested in at least some of that data, and much of it is just out there for the taking. Whether you’re actually on the job hunt or you want to download all the lyrics of your favorite artist, automated web scraping can help you accomplish your goals.
Challenges of Web Scraping
The Web has grown organically out of many sources. It combines many different technologies, styles, and personalities, and it continues to grow to this day. In other words, the Web is a hot mess! Because of this, you’ll run into some challenges when scraping the Web:
Variety: Every website is different. While you’ll encounter general structures that repeat themselves, each website is unique and will need personal treatment if you want to extract the relevant information.
Durability: Websites constantly change. Say you’ve built a shiny new web scraper that automatically cherry-picks what you want from your resource of interest. The first time you run your script, it works flawlessly. But when you run the same script only a short while later, you run into a discouraging and lengthy stack of tracebacks!
Unstable scripts are a realistic scenario, as many websites are in active development. Once the site’s structure has changed, your scraper might not be able to navigate the sitemap correctly or find the relevant information. The good news is that many changes to websites are small and incremental, so you’ll likely be able to update your scraper with only minimal adjustments.
However, keep in mind that because the Internet is dynamic, the scrapers you’ll build will probably require constant maintenance. You can set up continuous integration to run scraping tests periodically to ensure that your main script doesn’t break without your knowledge.
An Alternative to Web Scraping: APIs
Some website providers offer application programming interfaces (APIs) that allow you to access their data in a predefined manner. With APIs, you can avoid parsing HTML. Instead, you can access the data directly using formats like JSON and XML. HTML is primarily a way to present content to users visually.
When you use an API, the process is generally more stable than gathering the data through web scraping. That’s because developers create APIs to be consumed by programs rather than by human eyes.
The front-end presentation of a site might change often, but such a change in the website’s design doesn’t affect its API structure. The structure of an API is usually more permanent, which means it’s a more reliable source of the site’s data.
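As a rough illustration of the difference, fetching JSON from an API with the requests library (which you’ll install later in this tutorial) can be as simple as the minimal sketch below. The endpoint shown here is purely hypothetical:

import requests

# Hypothetical JSON endpoint, for illustration only
response = requests.get("https://api.example.com/jobs")
data = response.json()  # parse the JSON body into Python dictionaries and lists
print(data)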
However, APIs can change as well. The challenges of both variety and durability apply to APIs just as they do to websites. Additionally, it’s much harder to inspect the structure of an API by yourself if the provided documentation lacks quality.
The approach and tools you need to gather information using APIs are outside the scope of this tutorial. To learn more about it, check out API Integration in Python.
Scrape the Fake Python Job Site
In this tutorial, you’ll build a web scraper that fetches Python software developer job listings from the Fake Python Jobs site. It’s an example site with fake job postings that you can freely scrape to train your skills. Your web scraper will parse the HTML on the site to pick out the relevant information and filter that content for specific words.
You can scrape any site on the Internet that you can look at, but the difficulty of doing so depends on the site. This tutorial offers you an introduction to web scraping to help you understand the overall process. Then, you can apply this same process for every website you’ll want to scrape.
Throughout the tutorial, you’ll also encounter a few exercise blocks. You can click to expand them and challenge yourself by completing the tasks described there.
Step 1: Inspect Your Data Source
Before you write any Python code, you need to get to know the website that you want to scrape. That should be your first step for any web scraping project you want to tackle. You’ll need to understand the site structure to extract the information that’s relevant for you. Start by opening the site you want to scrape with your favorite browser.
Explore the Website
Click through the site and interact with it just like any typical job searcher would. For example, you can scroll through the main page of the website:
You can see many job postings in a card format, and each of them has two buttons. If you click Apply, then you’ll see a new page that contains more detailed descriptions of the selected job. You might also notice that the URL in your browser’s address bar changes when you interact with the website.
Decipher the Information in URLs
A programmer can encode a lot of information in a URL. Your web scraping journey will be much easier if you first become familiar with how URLs work and what they’re made of. For example, you might find yourself on a details page that has the following URL:

https://realpython.github.io/fake-jobs/jobs/senior-python-developer-0.html

You can deconstruct the above URL into two main parts:

The base URL represents the path to the search functionality of the website. In the example above, the base URL is https://realpython.github.io/fake-jobs/. The specific site location that ends with .html is the path to the job description’s unique resource.
Any job posted on this website will use the same base URL. However, the unique resources’ location will be different depending on what specific job posting you’re viewing.
URLs can hold more information than just the location of a file. Some websites use query parameters to encode values that you submit when performing a search. You can think of them as query strings that you send to the database to retrieve specific records.
You’ll find query parameters at the end of a URL. For example, if you go to Indeed and search for “software developer” in “Australia” through their search bar, you’ll see that the URL changes to include these values as query parameters:

https://au.indeed.com/jobs?q=software+developer&l=Australia

The query parameters in this URL are ?q=software+developer&l=Australia. Query parameters consist of three parts:
Start: The beginning of the query parameters is denoted by a question mark (?).
Information: The pieces of information constituting one query parameter are encoded in key-value pairs, where related keys and values are joined together by an equals sign (key=value).
Separator: Every URL can have multiple query parameters, separated by an ampersand symbol (&).
Equipped with this information, you can pick apart the URL’s query parameters into two key-value pairs:
q=software+developer selects the type of job.
l=Australia selects the location of the job.
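As a preview of the requests library that you’ll use later in this tutorial, the sketch below shows how query parameters like these can be encoded from Python. The Indeed host and path are only illustrative and may differ from the live site:

import requests

# Hypothetical search that mirrors the query parameters discussed above
response = requests.get(
    "https://au.indeed.com/jobs",
    params={"q": "software developer", "l": "Australia"},
)
print(response.url)  # the final URL, with ?q=...&l=... appended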
Try to change the search parameters and observe how that affects your URL. Go ahead and enter new values in the search bar up top:
Change these values to observe the changes in the URL.
Next, try to change the values directly in your URL. See what happens when you paste the following URL into your browser’s address bar:
If you change and submit the values in the website’s search box, then it’ll be directly reflected in the URL’s query parameters and vice versa. If you change either of them, then you’ll see different results on the website.
As you can see, exploring the URLs of a site can give you insight into how to retrieve data from the website’s server.
Head back to Fake Python Jobs and continue exploring it. This site is a purely static website that doesn’t operate on top of a database, which is why you won’t have to work with query parameters in this scraping tutorial.
Inspect the Site Using Developer Tools
Next, you’ll want to learn more about how the data is structured for display. You’ll need to understand the page structure to pick what you want from the HTML response that you’ll collect in one of the upcoming steps.
Developer tools can help you understand the structure of a website. All modern browsers come with developer tools installed. In this section, you’ll see how to work with the developer tools in Chrome. The process will be very similar to other modern browsers.
In Chrome on macOS, you can open up the developer tools through the menu by selecting View → Developer → Developer Tools. On Windows and Linux, you can access them by clicking the top-right menu button (⋮) and selecting More Tools → Developer Tools. You can also access your developer tools by right-clicking on the page and selecting the Inspect option or using a keyboard shortcut:
Mac: Cmd+Alt+I
Windows/Linux: Ctrl+Shift+I
Developer tools allow you to interactively explore the site’s document object model (DOM) to better understand your source. To dig into your page’s DOM, select the Elements tab in developer tools. You’ll see a structure with clickable HTML elements. You can expand, collapse, and even edit elements right in your browser:
The HTML on the right represents the structure of the page you can see on the left.
You can think of the text displayed in your browser as the HTML structure of that page. If you’re interested, then you can read more about the difference between the DOM and HTML on CSS-TRICKS.
When you right-click elements on the page, you can select Inspect to zoom to their location in the DOM. You can also hover over the HTML text on your right and see the corresponding elements light up on the page.
Click to expand the exercise block for a specific task to practice using your developer tools:
Find a single job posting. What HTML element is it wrapped in, and what other HTML elements does it contain?
Play around and explore! The more you get to know the page you’re working with, the easier it will be to scrape it. However, don’t get too overwhelmed with all that HTML text. You’ll use the power of programming to step through this maze and cherry-pick the information that’s relevant to you.
Step 2: Scrape HTML Content From a Page
Now that you have an idea of what you’re working with, it’s time to start using Python. First, you’ll want to get the site’s HTML code into your Python script so that you can interact with it. For this task, you’ll use Python’s requests library.
Create a virtual environment for your project before you install any external package. Activate your new virtual environment, then type the following command in your terminal to install the external requests library:
$ python -m pip install requests
Then open up a new file in your favorite text editor. All you need to retrieve the HTML are a few lines of code:
import requests
URL = "https://realpython.github.io/fake-jobs/"
page = requests.get(URL)
print(page.text)
This code issues an HTTP GET request to the given URL. It retrieves the HTML data that the server sends back and stores that data in a Python object.
If you print the .text attribute of page, then you’ll notice that it looks just like the HTML that you inspected earlier with your browser’s developer tools. You successfully fetched the static site content from the Internet! You now have access to the site’s HTML from within your Python script.
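The example site responds reliably, but when you scrape other pages it can help to confirm that the request actually succeeded before you parse anything. A minimal sketch using the same page object:

page = requests.get(URL)
page.raise_for_status()  # raises an exception for 4xx and 5xx responses
print(page.status_code)  # 200 means the request was successful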
Static Websites
The website that you’re scraping in this tutorial serves static HTML content. In this scenario, the server that hosts the site sends back HTML documents that already contain all the data that you’ll get to see as a user.
When you inspected the page with developer tools earlier on, you discovered that a job posting consists of the following long and messy-looking HTML, abridged here to the elements that matter for this tutorial:

<div class="card-content">
  <h2 class="title is-5">Senior Python Developer</h2>
  <h3 class="subtitle is-6 company">Payne, Roberts and Davis</h3>
  <p class="location">Stewartbury, AA</p>
  <footer class="card-footer">
    <a class="card-footer-item">Learn</a>
    <a class="card-footer-item">Apply</a>
  </footer>
</div>
It can be challenging to wrap your head around a long block of HTML code. To make it easier to read, you can use an HTML formatter to clean it up automatically. Good readability helps you better understand the structure of any code block. While it may or may not help improve the HTML formatting, it’s always worth a try.
The HTML you’ll encounter will sometimes be confusing. Luckily, the HTML of this job board has descriptive class names on the elements that you’re interested in:
class="title is-5" contains the title of the job posting.
class="subtitle is-6 company" contains the name of the company that offers the position.
class="location" contains the location where you’d be working.
In case you ever get lost in a large pile of HTML, remember that you can always go back to your browser and use the developer tools to further explore the HTML structure interactively.
By now, you’ve successfully harnessed the power and user-friendly design of Python’s requests library. With only a few lines of code, you managed to scrape static HTML content from the Web and make it available for further processing.
However, there are more challenging situations that you might encounter when you’re scraping websites. Before you learn how to pick the relevant information from the HTML that you just scraped, you’ll take a quick look at two of these more challenging situations.
Hidden Websites
Some pages contain information that’s hidden behind a login. That means you’ll need an account to be able to scrape anything from the page. The process to make an HTTP request from your Python script is different from how you access a page from your browser. Just because you can log in to the page through your browser doesn’t mean you’ll be able to scrape it with your Python script.
However, the requests library comes with the built-in capacity to handle authentication. With these techniques, you can log in to websites when making the HTTP request from your Python script and then scrape information that’s hidden behind a login. You won’t need to log in to access the job board information, which is why this tutorial won’t cover authentication.
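As a minimal sketch of that capability, a request with HTTP basic authentication could look like the snippet below. The URL and credentials are made up for illustration, and many sites use other login mechanisms such as session cookies or tokens:

import requests
from requests.auth import HTTPBasicAuth

# Hypothetical protected page and credentials, for illustration only
page = requests.get(
    "https://example.com/protected-jobs",
    auth=HTTPBasicAuth("your_username", "your_password"),
)
print(page.status_code)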
Dynamic Websites
In this tutorial, you’ll learn how to scrape a static website. Static sites are straightforward to work with because the server sends you an HTML page that already contains all the page information in the response. You can parse that HTML response and immediately begin to pick out the relevant data.
On the other hand, with a dynamic website, the server might not send back any HTML at all. Instead, you could receive JavaScript code as a response. This code will look completely different from what you saw when you inspected the page with your browser’s developer tools.
What happens in the browser is not the same as what happens in your script. Your browser will diligently execute the JavaScript code it receives from a server and create the DOM and HTML for you locally. However, if you request a dynamic website in your Python script, then you won’t get the HTML page content.
When you use requests, you only receive what the server sends back. In the case of a dynamic website, you’ll end up with some JavaScript code instead of HTML. The only way to go from the JavaScript code you received to the content that you’re interested in is to execute the code, just like your browser does. The requests library can’t do that for you, but there are other solutions that can.
For example, requests-html is a project created by the author of the requests library that allows you to render JavaScript using syntax that’s similar to the syntax in requests. It also includes capabilities for parsing the data by using Beautiful Soup under the hood.
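A minimal sketch of that workflow, assuming you’ve installed requests-html and using a placeholder URL, could look like this:

from requests_html import HTMLSession

session = HTMLSession()
response = session.get("https://example.com/dynamic-page")  # hypothetical URL
response.html.render()  # executes the page's JavaScript (downloads a headless browser on first use)
print(response.html.html)  # the HTML after rendering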
You won’t go deeper into scraping dynamically-generated content in this tutorial. For now, it’s enough to remember to look into one of the options mentioned above if you need to scrape a dynamic website.
Step 3: Parse HTML Code With Beautiful Soup
You’ve successfully scraped some HTML from the Internet, but when you look at it, it just seems like a huge mess. There are tons of HTML elements here and there, thousands of attributes scattered around—and wasn’t there some JavaScript mixed in as well? It’s time to parse this lengthy code response with the help of Python to make it more accessible and pick out the data you want.
Beautiful Soup is a Python library for parsing structured data. It allows you to interact with HTML in a similar way to how you interact with a web page using developer tools. The library exposes a couple of intuitive functions you can use to explore the HTML you received. To get started, use your terminal to install Beautiful Soup:
$ python -m pip install beautifulsoup4
Then, import the library in your Python script and create a Beautiful Soup object:
from bs4 import BeautifulSoup
soup = BeautifulSoup(page.content, "html.parser")
When you add the two highlighted lines of code, you create a Beautiful Soup object that takes page.content, which is the HTML content you scraped earlier, as its input.
The second argument, "html.parser", makes sure that you use the appropriate parser for HTML content.
Find Elements by ID
In an HTML web page, every element can have an id attribute assigned. As the name already suggests, that id attribute makes the element uniquely identifiable on the page. You can begin to parse your page by selecting a specific element by its ID.
Switch back to developer tools and identify the HTML object that contains all the job postings. Explore by hovering over parts of the page and using right-click to Inspect.
The element you’re looking for is a <div> with an id attribute that has the value "ResultsContainer". It has some other attributes as well, but below is the gist of what you’re looking for:

<div id="ResultsContainer">
  <!-- all the job listings -->
</div>
Beautiful Soup allows you to find that specific HTML element by its ID:
results = soup.find(id="ResultsContainer")
For easier viewing, you can prettify any Beautiful Soup object when you print it out. If you call .prettify() on the results variable that you just assigned above, then you’ll see all the HTML contained within the <div>:
print(results.prettify())
When you use the element’s ID, you can pick out one element from among the rest of the HTML. Now you can work with only this specific part of the page’s HTML. It looks like the soup just got a little thinner! However, it’s still quite dense.
Find Elements by HTML Class Name
You’ve seen that every job posting is wrapped in a <div> element with the class card-content. Now you can work with your new object called results and select only the job postings in it. These are, after all, the parts of the HTML that you’re interested in! You can do this in one line of code:
job_elements = results.find_all("div", class_="card-content")
Here, you call .find_all() on a Beautiful Soup object, which returns an iterable containing all the HTML for all the job listings displayed on that page.
Take a look at all of them:
for job_element in job_elements:
    print(job_element, end="\n" * 2)
That’s already pretty neat, but there’s still a lot of HTML! You saw earlier that your page has descriptive class names on some elements. You can pick out those child elements from each job posting with .find():
for job_element in job_elements:
    title_element = job_element.find("h2", class_="title")
    company_element = job_element.find("h3", class_="company")
    location_element = job_element.find("p", class_="location")
    print(title_element)
    print(company_element)
    print(location_element)
    print()
Each job_element is another BeautifulSoup() object. Therefore, you can use the same methods on it as you did on its parent element, results.
With this code snippet, you’re getting closer and closer to the data that you’re actually interested in. Still, there’s a lot going on with all those HTML tags and attributes floating around:
Next, you’ll learn how to narrow down this output to access only the text content you’re interested in.
Find Elements by Class Name and Text Content
Not all of the job listings are developer jobs. Instead of printing out all the jobs listed on the website, you’ll first filter them using keywords.
You know that job titles in the page are kept within <h2> elements. To filter for only specific jobs, you can use the string argument:
python_jobs = results.find_all("h2", string="Python")
This code finds all <h2> elements where the contained string matches "Python" exactly. Note that you’re directly calling the method on your first results variable. If you go ahead and print() the output of the above code snippet to your console, then you might be disappointed because it’ll be empty:
>>> print(python_jobs)
[]
There was a Python job in the search results, so why is it not showing up?
When you use string= as you did above, your program looks for that string exactly. Any differences in the spelling, capitalization, or whitespace will prevent the element from matching. In the next section, you’ll find a way to make your search string more general.
Pass a Function to a Beautiful Soup Method
In addition to strings, you can sometimes pass functions as arguments to Beautiful Soup methods. You can change the previous line of code to use a function instead:
python_jobs = results.find_all(
    "h2", string=lambda text: "python" in text.lower()
)
Now you’re passing an anonymous function to the string= argument. The lambda function looks at the text of each <h2> element, converts it to lowercase, and checks whether the substring "python" is found anywhere. You can check whether you managed to identify all the Python jobs with this approach:
>>> print(len(python_jobs))
10
Your program has found 10 matching job posts that include the word “python” in their job title!
Finding elements depending on their text content is a powerful way to filter your HTML response for specific information. Beautiful Soup allows you to use either exact strings or functions as arguments for filtering text in Beautiful Soup objects.
However, when you try to run your scraper to print out the information of the filtered Python jobs, you’ll run into an error:
AttributeError: 'NoneType' object has no attribute 'text'
This message is a common error that you’ll run into a lot when you’re scraping information from the Internet. Inspect the HTML of an element in your python_jobs list. What does it look like? Where do you think the error is coming from?
Identify Error Conditions
When you look at a single element in python_jobs, you’ll see that it consists of only the <h2> element that contains the job title:

<h2 class="title is-5">Senior Python Developer</h2>

When you revisit the code you used to select the items, you’ll see that that’s what you targeted. You filtered for only the <h2> title elements of the job postings that contain the word “python”. As you can see, these elements don’t include the rest of the information about the job.
The error message you received earlier was related to this:
You tried to find the job title, the company name, and the job’s location in each element in python_jobs, but each element contains only the job title text.
Your diligent parsing library still looks for the other ones, too, and returns None because it can’t find them. Then, print() fails with the shown error message when you try to extract the .text attribute from one of these None objects.
The text you’re looking for is nested in sibling elements of the <h2> elements your filter returned. Beautiful Soup can help you to select sibling, child, and parent elements of each Beautiful Soup object.
Access Parent Elements
One way to get access to all the information you need is to step up in the hierarchy of the DOM starting from the <h2> elements that you identified. Take another look at the HTML of a single job posting and find the <h2> element that contains the job title as well as its closest parent element that contains all the information that you’re interested in.
The <div> element with the card-content class contains all the information you want. It’s a third-level parent of the <h2> title element that you found using your filter.
With this information in mind, you can now use the elements in python_jobs and fetch their great-grandparent elements instead to get access to all the information you want:
python_job_elements = [
    h2_element.parent.parent.parent for h2_element in python_jobs
]
You added a list comprehension that operates on each of the <h2> title elements in python_jobs that you got by filtering with the lambda expression. You’re selecting the parent element of the parent element of the parent element of each <h2> title element. That’s three generations up!
When you were looking at the HTML of a single job posting, you identified that this specific parent element with the class name card-content contains all the information you need.
Now you can adapt the code in your for loop to iterate over the parent elements instead:
for job_element in python_job_elements:
    # -- snip --
When you run your script another time, you’ll see that your code once again has access to all the relevant information. That’s because you’re now looping over the <div class="card-content"> elements instead of just the <h2> title elements.
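For reference, the loop body that the snip stands for could look something like the sketch below. It reuses the .find() calls from earlier and prints each element’s .text, with .strip() trimming the surrounding whitespace:

for job_element in python_job_elements:
    title_element = job_element.find("h2", class_="title")
    company_element = job_element.find("h3", class_="company")
    location_element = job_element.find("p", class_="location")
    print(title_element.text.strip())
    print(company_element.text.strip())
    print(location_element.text.strip())
    print()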
Using the .parent attribute that each Beautiful Soup object comes with gives you an intuitive way of stepping through your DOM structure and addressing the elements you need. You can also access child elements and sibling elements in a similar manner. Read up on navigating the tree for more information.
Keep Practicing
If you’ve written the code alongside this tutorial, then you can run your script as is, and you’ll see the fake job information pop up in your terminal. Your next step is to tackle a real-life job board! To keep practicing your new skills, revisit the web scraping process using any or all of the following sites:
PythonJobs
Remote.co
Indeed
The linked websites return their search results as static HTML responses, similar to the Fake Python job board. Therefore, you can scrape them using only requests and Beautiful Soup.
Start going through this tutorial again from the top using one of these other sites. You’ll see that each website’s structure is different and that you’ll need to rebuild the code in a slightly different way to fetch the data you want. Tackling this challenge is a great way to practice the concepts that you just learned. While it might make you sweat every so often, your coding skills will be stronger for it!
During your second attempt, you can also explore additional features of Beautiful Soup. Use the documentation as your guidebook and inspiration. Extra practice will help you become more proficient at web scraping using Python, requests, and Beautiful Soup.
To wrap up your journey into web scraping, you could then give your code a final makeover and create a command-line interface (CLI) app that scrapes one of the job boards and filters the results by a keyword that you can input on each execution. Your CLI tool could allow you to search for specific types of jobs or jobs in particular locations.
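If you go that route, a minimal sketch of such a CLI with argparse could look like the snippet below; the keyword argument and the scrape-and-filter code it would call are hypothetical placeholders:

import argparse

def main():
    parser = argparse.ArgumentParser(
        description="Scrape a job board and filter postings by keyword."
    )
    parser.add_argument("keyword", help="word to look for in job titles, e.g. python")
    args = parser.parse_args()
    # ... call your scraping and filtering code here, passing args.keyword ...
    print(f"Searching job titles for: {args.keyword}")

if __name__ == "__main__":
    main()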
If you’re interested in learning how to adapt your script as a command-line interface, then check out How to Build Command-Line Interfaces in Python With argparse.
Conclusion
The requests library gives you a user-friendly way to fetch static HTML from the Internet using Python. You can then parse the HTML with another package called Beautiful Soup. Both packages are trusted and helpful companions for your web scraping adventures. You’ll find that Beautiful Soup will cater to most of your parsing needs, including navigation and advanced searching.
In this tutorial, you learned how to scrape data from the Web using Python, requests, and Beautiful Soup. You built a script that fetches job postings from the Internet and went through the complete web scraping process from start to finish.
You learned how to:
Decipher the data encoded in URLs
Download the page’s HTML content using Python’s requests library
Parse the downloaded HTML with Beautiful Soup to extract relevant information
With this broad pipeline in mind and two powerful libraries in your tool kit, you can go out and see what other websites you can scrape. Have fun, and always remember to be respectful and use your programming skills responsibly.
Is Web Scraping Illegal? Depends on What the Meaning of the Word Is
Depending on who you ask, web scraping can be loved or hated.
Web scraping has existed for a long time and, in its good form, it’s a key underpinning of the internet. “Good bots” enable, for example, search engines to index web content, price comparison services to save consumers money, and market researchers to gauge sentiment on social media.
“Bad bots,” however, fetch content from a website with the intent of using it for purposes outside the site owner’s control. Bad bots make up 20 percent of all web traffic and are used to conduct a variety of harmful activities, such as denial of service attacks, competitive data mining, online fraud, account hijacking, data theft, stealing of intellectual property, unauthorized vulnerability scans, spam, and digital ad fraud.
So, is it Illegal to Scrape a Website?
So is it legal or illegal? Web scraping and crawling aren’t illegal by themselves. After all, you could scrape or crawl your own website, without a hitch.
Startups love it because it’s a cheap and powerful way to gather data without the need for partnerships. Big companies use web scrapers for their own gain but also don’t want others to use bots against them.
The general opinion on the matter does not seem to matter anymore because in the past 12 months it has become very clear that the federal court system is cracking down more than ever.
Let’s take a look back. Web scraping started in a legal grey area where the use of bots to scrape a website was simply a nuisance. Not much could be done about the practice until, in 2000, eBay filed a preliminary injunction against Bidder’s Edge. In the injunction, eBay claimed that the use of bots on the site, against the will of the company, violated Trespass to Chattels law.
The court granted the injunction because users had to opt in and agree to the terms of service on the site and that a large number of bots could be disruptive to eBay’s computer systems. The lawsuit was settled out of court so it all never came to a head but the legal precedent was set.
In 2001 however, a travel agency sued a competitor who had “scraped” its prices from its Web site to help the rival set its own prices. The judge ruled that the fact that this scraping was not welcomed by the site’s owner was not sufficient to make it “unauthorized access” for the purpose of federal hacking laws.
Two years later, the legal standing of eBay v. Bidder’s Edge was implicitly overruled in Intel v. Hamidi, a case interpreting California’s common law trespass to chattels. It was the wild west once again. Over the next several years, the courts ruled time and time again that simply putting “do not scrape us” in your website terms of service was not enough to warrant a legally binding agreement. For you to enforce that term, a user must explicitly agree or consent to the terms. This left the field wide open for scrapers to do as they wish.
Fast forward a few years and you start seeing a shift in opinion. In 2009 Facebook won one of the first copyright suits against a web scraper. This laid the groundwork for numerous lawsuits that tie any web scraping with a direct copyright violation and very clear monetary damages. The most recent case was AP v. Meltwater, where the courts stripped what is referred to as fair use on the internet.
Previously, for academic, personal, or information aggregation people could rely on fair use and use web scrapers. The court now gutted the fair use clause that companies had used to defend web scraping. The court determined that even small percentages, sometimes as little as 4.5% of the content, are significant enough to not fall under fair use. The only caveat the court made was based on the simple fact that this data was available for purchase. Had it not been, it is unclear how they would have ruled. Then a few months back the gauntlet was dropped.
Andrew Auernheimer was convicted of hacking based on the act of web scraping. Although the data was unprotected and publicly available via AT&T’s website, the fact that he wrote web scrapers to harvest that data en masse amounted to a “brute force attack”. He did not have to consent to terms of service to deploy his bots and conduct the web scraping. The data was not available for purchase. It wasn’t behind a login. He did not even financially gain from the aggregation of the data. Most importantly, it was buggy programming by AT&T that exposed this information in the first place. Yet Andrew was at fault. This isn’t just a civil suit anymore. This charge is a felony violation that is on par with hacking or denial of service attacks and carries up to a 15-year sentence for each charge.
In 2016, Congress passed its first legislation specifically to target bad bots — the Better Online Ticket Sales (BOTS) Act, which bans the use of software that circumvents security measures on ticket seller websites. Automated ticket scalping bots use several techniques to do their dirty work including web scraping that incorporates advanced business logic to identify scalping opportunities, input purchase details into shopping carts, and even resell inventory on secondary markets.
To counteract this type of activity, the BOTS Act:
Prohibits the circumvention of a security measure used to enforce ticket purchasing limits for an event with an attendance capacity of greater than 200 persons.
Prohibits the sale of an event ticket obtained through such a circumvention violation if the seller participated in, had the ability to control, or should have known about it.
Treats violations as unfair or deceptive acts under the Federal Trade Commission Act. The bill provides authority to the FTC and states to enforce against such violations.
In other words, if you’re a venue, organization or ticketing software platform, it is still on you to defend against this fraudulent activity during your major onsales.
The UK seems to have followed the US with its Digital Economy Act 2017 which achieved Royal Assent in April. The Act seeks to protect consumers in a number of ways in an increasingly digital society, including by “cracking down on ticket touts by making it a criminal offence for those that misuse bot technology to sweep up tickets and sell them at inflated prices in the secondary market.”
In the summer of 2017, LinkedIn sued hiQ Labs, a San Francisco-based startup. hiQ was scraping publicly available LinkedIn profiles to offer clients, according to its website, “a crystal ball that helps you determine skills gaps or turnover risks months ahead of time.”
You might find it unsettling to think that your public LinkedIn profile could be used against you by your employer.
Yet a judge on Aug. 14, 2017, decided this is okay. Judge Edward Chen of the U.S. District Court in San Francisco agreed with hiQ’s claim in a lawsuit that Microsoft-owned LinkedIn violated antitrust laws when it blocked the startup from accessing such data. He ordered LinkedIn to remove the barriers within 24 hours. LinkedIn has filed to appeal.
The ruling contradicts previous decisions clamping down on web scraping. And it opens a Pandora’s box of questions about social media user privacy and the right of businesses to protect themselves from data hijacking.
There’s also the matter of fairness. LinkedIn spent years creating something of real value. Why should it have to hand it over to the likes of hiQ — paying for the servers and bandwidth to host all that bot traffic on top of their own human users, just so hiQ can ride LinkedIn’s coattails?
I am in the business of blocking bots. Chen’s ruling has sent a chill through those of us in the cybersecurity industry devoted to fighting web-scraping bots.
I think there is a legitimate need for some companies to be able to prevent unwanted web scrapers from accessing their site.
In October of 2017, and as reported by Bloomberg, Ticketmaster sued Prestige Entertainment, claiming it used computer programs to illegally buy as many as 40 percent of the available seats for performances of “Hamilton” in New York and the majority of the tickets Ticketmaster had available for the Mayweather v. Pacquiao fight in Las Vegas two years ago.
Prestige continued to use the illegal bots even after it paid $3.35 million to settle New York Attorney General Eric Schneiderman’s probe into the ticket resale industry.
Under that deal, Prestige promised to abstain from using bots, Ticketmaster said in the complaint. Ticketmaster asked for unspecified compensatory and punitive damages and a court order to stop Prestige from using bots.
Are the existing laws too antiquated to deal with the problem? Should new legislation be introduced to provide more clarity? Most sites don’t have any web scraping protections in place. Do the companies have some burden to prevent web scraping?
As the courts try to further decide the legality of scraping, companies are still having their data stolen and the business logic of their websites abused. Instead of looking to the law to eventually solve this technology problem, it’s time to start solving it with anti-bot and anti-scraping technology today.
Web Scraping using Python - DataCamp
Web scraping is a term used to describe the use of a program or algorithm to extract and process large amounts of data from the web. Whether you are a data scientist, engineer, or anybody who analyzes large amounts of datasets, the ability to scrape data from the web is a useful skill to have. Let’s say you find data from the web, and there is no direct way to download it, web scraping using Python is a skill you can use to extract the data into a useful form that can be imported.
In this tutorial, you will learn about the following:
• Data extraction from the web using Python’s Beautiful Soup module
• Data manipulation and cleaning using Python’s Pandas library
• Data visualization using Python’s Matplotlib library
The dataset used in this tutorial was taken from a 10K race that took place in Hillsboro, OR in June 2017. Specifically, you will analyze the performance of the 10K runners and answer questions such as:
• What was the average finish time for the runners?
• Did the runners’ finish times follow a normal distribution?
• Were there any performance differences between males and females of various age groups?
Using Jupyter Notebook, you should start by importing the necessary modules (pandas, numpy, matplotlib.pyplot, seaborn). If you don’t have Jupyter Notebook installed, I recommend installing it using the Anaconda Python distribution which is available on the internet. To easily display the plots, make sure to include the line %matplotlib inline as shown below.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

%matplotlib inline
To perform web scraping, you should also import the libraries shown below. The urllib.request module is used to open URLs. The Beautiful Soup package is used to extract data from html files. The Beautiful Soup package name is bs4, which stands for Beautiful Soup, version 4.
from urllib.request import urlopen
from bs4 import BeautifulSoup
After importing necessary modules, you should specify the URL containing the dataset and pass it to urlopen() to get the html of the page.
url = "http://www.hubertiming.com/results/2017GPTR10K"  # results page for the 2017 10K race analyzed in this tutorial
html = urlopen(url)
Getting the html of the page is just the first step. Next step is to create a Beautiful Soup object from the html. This is done by passing the html to the BeautifulSoup() function. The Beautiful Soup package is used to parse the html, that is, take the raw html text and break it into Python objects. The second argument ‘lxml’ is the html parser whose details you do not need to worry about at this point.
soup = BeautifulSoup(html, 'lxml')
type(soup)
bs4.BeautifulSoup
The soup object allows you to extract interesting information about the website you’re scraping such as getting the title of the page as shown below.
# Get the title
title = soup.title
print(title)
2017 Intel Great Place to Run 10K \ Urban Clash Games Race Results
You can also get the text of the webpage and quickly print it out to check if it is what you expect.
# Print out the text
text = soup.get_text()
#print(text)
You can view the html of the webpage by right-clicking anywhere on the webpage and selecting “Inspect.” This is what the result looks like.
You can use the find_all() method of soup to extract useful html tags within a webpage. Examples of useful tags include <a> for hyperlinks, <table> for tables, <tr> for table rows, <th> for table headers, and <td> for table cells. The code below shows how to extract all the hyperlinks within the webpage.
soup.find_all('a')
[5K,
Individual Results,
Team Results,
[email protected],
Results,
,
,
Huber Timing,
]
As you can see from the output above, html tags sometimes come with attributes such as class, src, etc. These attributes provide additional information about html elements. You can use a for loop and the get("href") method to extract and print out only hyperlinks.
all_links = soup.find_all("a")
for link in all_links:
    print(link.get("href"))
/results/2017GPTR
#individual
#team
mailto:[email protected]
#tabs-1
None
To print out table rows only, pass the 'tr' argument to soup.find_all().
# Print the first 10 rows for sanity check
rows = soup.find_all('tr')
print(rows[:10])
[Finishers: 577,
Male: 414,
Female: 163,
Place Bib Name Gender City State Chip Time Chip Pace Gender Place Age Group Age Group Place Time to Start Gun Time Team,
1 814 JARED WILSON M TIGARD OR 00:36:21 05:51 1 of 414 M 36-45 1 of 152 00:00:03 00:36:24 2 573 NATHAN A SUSTERSIC PORTLAND 00:36:42 05:55 2 of 414 M 26-35 1 of 154 00:36:45 INTEL TEAM F 3 687 FRANCISCO MAYA 00:37:44 06:05 3 of 414 M 46-55 1 of 64 00:00:04 00:37:48 4 623 PAUL MORROW BEAVERTON 00:38:34 06:13 4 of 414 2 of 152 00:38:37 5 569 DEREK G OSBORNE HILLSBORO 00:39:21 06:20 5 of 414 2 of 154 00:39:24 6 642 JONATHON TRAN 00:39:49 06:25 6 of 414 M 18-25 1 of 34 00:00:06 00:39:55]
The goal of this tutorial is to take a table from a webpage and convert it into a dataframe for easier manipulation using Python. To get there, you should get all table rows in list form first and then convert that list into a dataframe. Below is a for loop that iterates through table rows and prints out the cells of the rows.
for row in rows:
    row_td = row.find_all('td')
print(row_td)
type(row_td)
[<td>14TH</td>,
<td>INTEL TEAM M</td>,
<td>04:43:23</td>,
<td>00:58:59 – DANIELLE CASILLAS</td>,
<td>01:02:06 – RAMYA MERUVA</td>,
<td>01:17:06 – PALLAVI J SHINDE</td>,
<td>01:25:11 – NALINI MURARI</td>]

bs4.element.ResultSet
The output above shows that each row is printed with html tags embedded in each row. This is not what you want. You can remove the html tags using Beautiful Soup or regular expressions.
The easiest way to remove html tags is to use Beautiful Soup, and it takes just one line of code to do this. Pass the string of interest into BeautifulSoup() and use the get_text() method to extract the text without html tags.
str_cells = str(row_td)
cleantext = BeautifulSoup(str_cells, "lxml").get_text()
print(cleantext)
[14TH, INTEL TEAM M, 04:43:23, 00:58:59 – DANIELLE CASILLAS, 01:02:06 – RAMYA MERUVA, 01:17:06 – PALLAVI J SHINDE, 01:25:11 – NALINI MURARI]
Using regular expressions is highly discouraged since it requires several lines of code and one can easily make mistakes. It requires importing the re (for regular expressions) module. The code below shows how to build a regular expression that finds all the characters inside the <td> html tags and replaces them with an empty string for each table row.
First, you compile a regular expression by passing a string to match to re.compile(). The dot, star, and question mark (.*?) will match an opening angle bracket followed by anything and followed by a closing angle bracket. It matches text in a non-greedy fashion, that is, it matches the shortest possible string. If you omit the question mark, it will match all the text between the first opening angle bracket and the last closing angle bracket. After compiling a regular expression, you can use the re.sub() method to find all the substrings where the regular expression matches and replace them with an empty string. The full code below generates an empty list, extracts the text in between html tags for each row, and appends it to the assigned list.
import re

list_rows = []
for row in rows:
    cells = row.find_all('td')
    str_cells = str(cells)
    clean = re.compile('<.*?>')
    clean2 = (re.sub(clean, '', str_cells))
    list_rows.append(clean2)
print(clean2)
type(clean2)
str
The next step is to convert the list into a dataframe and get a quick view of the first 10 rows using Pandas.
df = pd.DataFrame(list_rows)
df.head(10)
                                                    0
0                                   [Finishers:, 577]
1                                        [Male:, 414]
2                                      [Female:, 163]
3                                                  []
4   [1, 814, JARED WILSON, M, TIGARD, OR, 00:36:21...
5   [2, 573, NATHAN A SUSTERSIC, M, PORTLAND, OR,...
6   [3, 687, FRANCISCO MAYA, M, PORTLAND, OR, 00:3...
7   [4, 623, PAUL MORROW, M, BEAVERTON, OR, 00:38:...
8   [5, 569, DEREK G OSBORNE, M, HILLSBORO, OR, 00...
9   [6, 642, JONATHON TRAN, M, PORTLAND, OR, 00:39...
The dataframe is not in the format we want. To clean it up, you should split the “0” column into multiple columns at the comma position. This is accomplished by using the str.split() method.
df1 = df[0].str.split(', ', expand=True)
This looks much better, but there is still work to do. The dataframe has unwanted square brackets surrounding each row. You can use the strip() method to remove the opening square bracket on column “0”.
df1[0] = df1[0].str.strip('[')
The table is missing table headers. You can use the find_all() method to get the table headers.
col_labels = soup.find_all('th')
Similar to table rows, you can use Beautiful Soup to extract text in between html tags for table headers.
all_header = []
col_str = str(col_labels)
cleantext2 = BeautifulSoup(col_str, "lxml").get_text()
all_header.append(cleantext2)
print(all_header)
print(all_header)
[‘[Place, Bib, Name, Gender, City, State, Chip Time, Chip Pace, Gender Place, Age Group, Age Group Place, Time to Start, Gun Time, Team]’]
You can then convert the list of headers into a pandas dataframe.
df2 = pd.DataFrame(all_header)
df2.head()

                                                   0
0  [Place, Bib, Name, Gender, City, State, Chip T...
Similarly, you can split column “0” into multiple columns at the comma position for all rows.
df3 = df2[0].str.split(', ', expand=True)
The two dataframes can be concatenated into one using the concat() method as illustrated below.
frames = [df3, df1]
df4 = pd.concat(frames)
Below shows how to assign the first row to be the table header.
df5 = df4.rename(columns=df4.iloc[0])
At this point, the table is almost properly formatted. For analysis, you can start by getting an overview of the data as shown below.

df5.info()
df5.shape

<class 'pandas.core.frame.DataFrame'>
Int64Index: 597 entries, 0 to 595
Data columns (total 14 columns):
[Place 597 non-null object
Bib 596 non-null object
Name 593 non-null object
Gender 593 non-null object
City 593 non-null object
State 593 non-null object
Chip Time 593 non-null object
Chip Pace 578 non-null object
Gender Place 578 non-null object
Age Group 578 non-null object
Age Group Place 578 non-null object
Time to Start 578 non-null object
Gun Time 578 non-null object
Team] 578 non-null object
dtypes: object(14)
memory usage: 70.0+ KB
(597, 14)
The table has 597 rows and 14 columns. You can drop all rows with any missing values.
df6 = df5.dropna(axis=0, how='any')
Also, notice how the table header is replicated as the first row in df5. It can be dropped using the following line of code.
df7 = df6.drop(df6.index[0])
You can perform more data cleaning by renaming the ‘[Place’ and ‘ Team]’ columns. Python is very picky about space. Make sure you include space after the quotation mark in ‘ Team]’.
df7.rename(columns={'[Place': 'Place'}, inplace=True)
df7.rename(columns={' Team]': 'Team'}, inplace=True)
The final data cleaning step involves removing the closing bracket for cells in the “Team” column.
df7['Team'] = df7['Team'].str.strip(']')
It took a while to get here, but at this point, the dataframe is in the desired format. Now you can move on to the exciting part and start plotting the data and computing interesting statistics.
The first question to answer is, what was the average finish time (in minutes) for the runners? You need to convert the column “Chip Time” into just minutes. One way to do this is to convert the column to a list first for manipulation.
time_list = df7[' Chip Time'].tolist()
# You can use a for loop to convert 'Chip Time' to minutes
time_mins = []
for i in time_list:
    h, m, s = i.split(':')
    math = (int(h) * 3600 + int(m) * 60 + int(s)) / 60
    time_mins.append(math)
#print(time_mins)
The next step is to convert the list back into a dataframe and make a new column (“Runner_mins”) for runner chip times expressed in just minutes.
df7[‘Runner_mins’] = time_mins
The code below shows how to calculate statistics for numeric columns only in the dataframe.
df7.describe(include=[np.number])
       Runner_mins
count   577.000000
mean     60.035933
std      11.970623
min      36.350000
25%      51.000000
50%      59.016667
75%      67.266667
max     101.300000
Interestingly, the average chip time for all runners was ~60 mins. The fastest 10K runner finished in 36.35 mins, and the slowest runner finished in 101.30 minutes.
A boxplot is another useful tool to visualize summary statistics (maximum, minimum, median, first quartile, third quartile, including outliers). Below are data summary statistics for the runners shown in a boxplot. For data visualization, it is convenient to first import parameters from the pylab module that comes with matplotlib and set the same figure size for all figures to avoid doing it for each figure.
from pylab import rcParams
rcParams['figure.figsize'] = 15, 5

df7.boxplot(column='Runner_mins')
plt.grid(True, axis='y')
plt.ylabel('Chip Time')
plt.xticks([1], ['Runners'])
The second question to answer is: Did the runners’ finish times follow a normal distribution?
Below is a distribution plot of runners’ chip times plotted using the seaborn library. The distribution looks almost normal.
x = df7['Runner_mins']
ax = sns.distplot(x, hist=True, kde=True, rug=False, color='m', bins=25, hist_kws={'edgecolor': 'black'})
The third question deals with whether there were any performance differences between males and females of various age groups. Below is a distribution plot of chip times for males and females.
f_fuko = df7.loc[df7[' Gender'] == ' F']['Runner_mins']
m_fuko = df7.loc[df7[' Gender'] == ' M']['Runner_mins']
sns.distplot(f_fuko, hist=True, kde=True, rug=False, hist_kws={'edgecolor': 'black'}, label='Female')
sns.distplot(m_fuko, hist=False, kde=True, rug=False, hist_kws={'edgecolor': 'black'}, label='Male')
The distribution indicates that females were slower than males on average. You can use the groupby() method to compute summary statistics for males and females separately as shown below.
g_stats = df7.groupby(" Gender", as_index=True).describe()
print(g_stats)
        Runner_mins                                                       \
              count       mean        std        min        25%        50%
Gender
F             163.0  66.119223  12.184440  43.766667  58.758333  64.616667
M             414.0  57.640821  11.011857  36.350000  49.395833  55.791667

                75%         max
Gender
F         72.058333  101.300000
M         64.804167   98.516667
The average chip time for all females and males was ~66 mins and ~58 mins, respectively. Below is a side-by-side boxplot comparison of male and female finish times.
df7.boxplot(column='Runner_mins', by=' Gender')
plt.suptitle("")
Text(0.5, 0.98, '')
In this tutorial, you performed web scraping using Python. You used the Beautiful Soup library to parse html data and convert it into a form that can be used for analysis. You performed cleaning of the data in Python and created useful plots (box plots, bar plots, and distribution plots) to reveal interesting trends using Python’s matplotlib and seaborn libraries. After this tutorial, you should be able to use Python to easily scrape data from the web, apply cleaning techniques and extract useful insights from the data.
If you would like to learn more about Python, take DataCamp’s free Intro to Python for Data Science course.

Frequently Asked Questions about python scrape

Is Python scraping legal?

Web scraping and crawling aren’t illegal by themselves. After all, you could scrape or crawl your own website, without a hitch. … Big companies use web scrapers for their own gain but also don’t want others to use bots against them.

What is scraping in Python?

Web scraping is a term used to describe the use of a program or algorithm to extract and process large amounts of data from the web. … Whether you are a data scientist, engineer, or anybody who analyzes large amounts of datasets, the ability to scrape data from the web is a useful skill to have.

Is it legal to scrape API?

It is perfectly legal if you scrape data from websites for public consumption and use it for analysis. However, it is not legal if you scrape confidential information for profit. For example, scraping private contact information without permission, and selling it to a 3rd party for profit, is illegal.
