Python Get Web Page
Get webpage contents with Python? – Stack Overflow
I’m using Python 3.1, if that helps.
Anyways, I’m trying to get the contents of this webpage. I Googled for a little bit and tried different things, but they didn’t work. I’m guessing this should be an easy task, but… I can’t get it. :/
Results of urllib, urllib2:
>>> import urllib2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    import urllib2
ImportError: No module named urllib2
>>> import urllib
>>> urllib.urlopen("...")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    urllib.urlopen("...")
AttributeError: 'module' object has no attribute 'urlopen'
>>>
Thank you, Jason. :D.
import urllib.request
page = urllib.request.urlopen("...")  # URL elided in the original post
print(page.read())
asked Dec 3 ’09 at 22:25
The best way to do this these days is to use the requests library:
import requests

response = requests.get("...")  # URL elided in the original post
print(response.status_code)
print(response.content)
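If you want the body as decoded text rather than raw bytes, the same Response object also has a .text attribute:

print(response.text)  # the body decoded using the encoding requests detects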
answered May 9 ’14 at 13:02 by Jonathan Hartley
Because you’re using Python 3.1, you need to use the new Python 3.1 APIs.
Try:
urllib.request.urlopen("...")  # URL elided in the original post
Alternately, it looks like you’re working from Python 2 examples. Write it in Python 2, then use the 2to3 tool to convert it. On Windows, 2to3.py is in \python31\tools\scripts. Can someone else point out where to find 2to3.py on other platforms?
Edit
These days, I write Python 2 and 3 compatible code by using six.
from six.moves import urllib
Assuming you have six installed, that runs on both Python 2 and Python 3.
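For example, a minimal sketch of fetching a page that way; the URL here is just a placeholder:

from six.moves import urllib

response = urllib.request.urlopen("http://example.com")  # placeholder URL
print(response.read())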
answered Dec 3 ’09 at 22:38 by Jason R. Coombs
If you ask me, try this one:

resp = urllib2.urlopen("...")  # URL elided in the original post

and read it the normal way, i.e.:

page = resp.read()
Good luck though
answered Nov 14 ’13 at 9:02 by Zuko
You can use urllib2 and parse the HTML yourself.
Or try Beautiful Soup to do some of the parsing for you.
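For instance, a minimal sketch combining the two; this assumes Python 3 and the beautifulsoup4 package, and the URL is a placeholder:

from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://example.com").read()  # placeholder URL
soup = BeautifulSoup(html, "html.parser")  # parse with Python's built-in parser
print(soup.title)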
answered Dec 3 ’09 at 22:29 by JasDev
You can also use the faster_than_requests package. It’s very fast and simple:
import faster_than_requests as r

content = r.get2str("...")  # URL elided in the original post
Look at this comparison:
answered Sep 21 ’19 at 19:54 by Chalist
A solution that works with both Python 2.x and Python 3.x:
try:
    # For Python 3.0 and later
    from urllib.request import urlopen
except ImportError:
    # Fall back to Python 2's urllib2
    from urllib2 import urlopen

url = "..."  # URL elided in the original post
response = urlopen(url)
data = str(response.read())
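Note that calling str() on raw bytes keeps the b'...' prefix in the result. If you want properly decoded text instead, a small variation (assuming the page is UTF-8) is:

data = response.read().decode("utf-8")  # bytes -> str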
answered Jul 18 ’16 at 3:38 by Martin Thoma
Suppose you want to GET a webpage’s content. The following code does it:
# -*- coding: utf-8 -*-
# python 2
# example of getting a web page
from urllib import urlopen
print urlopen("...").read()  # URL elided in the original post
answered Sep 10 ’18 at 18:18
A Practical Introduction to Web Scraping in Python
Web scraping is the process of collecting and parsing raw data from the Web, and the Python community has come up with some pretty powerful web scraping tools.
The Internet hosts perhaps the greatest source of information—and misinformation—on the planet. Many disciplines, such as data science, business intelligence, and investigative reporting, can benefit enormously from collecting and analyzing data from websites.
In this tutorial, you’ll learn how to:
Parse website data using string methods and regular expressions
Parse website data using an HTML parser
Interact with forms and other website components
Scrape and Parse Text From Websites
Collecting data from websites using an automated process is known as web scraping. Some websites explicitly forbid users from scraping their data with automated tools like the ones you’ll create in this tutorial. Websites do this for two possible reasons:
The site has a good reason to protect its data. For instance, Google Maps doesn’t let you request too many results too quickly.
Making many repeated requests to a website’s server may use up bandwidth, slowing down the website for other users and potentially overloading the server such that the website stops responding entirely.
Let’s start by grabbing all the HTML code from a single web page. You’ll use a page on Real Python that’s been set up for use with this tutorial.
Your First Web Scraper
One useful package for web scraping that you can find in Python’s standard library is urllib, which contains tools for working with URLs. In particular, the urllib.request module contains a function called urlopen() that can be used to open a URL within a program.
In IDLE’s interactive window, type the following to import urlopen():
>>> from urllib.request import urlopen
The web page that we’ll open is at the following URL:
>>> url = "http://olympus.realpython.org/profiles/aphrodite"
To open the web page, pass url to urlopen():
>>> page = urlopen(url)
urlopen() returns an HTTPResponse object:
>>> page
<http.client.HTTPResponse object at 0x105fef820>
To extract the HTML from the page, first use the HTTPResponse object’s .read() method, which returns a sequence of bytes. Then use .decode() to decode the bytes to a string using UTF-8:
>>> html_bytes = page.read()
>>> html = html_bytes.decode("utf-8")
Now you can print the HTML to see the contents of the web page:
>>> print(html)
<html>
<head>
<title>Profile: Aphrodite</title>
</head>
<body bgcolor="yellow">
<center>
<br><br>
<img src="/static/aphrodite.gif" />
<h2>Name: Aphrodite</h2>
<br><br>
Favorite animal: Dove
<br><br>
Favorite color: Red
<br><br>
Hometown: Mount Olympus
</center>
</body>
</html>
Once you have the HTML as text, you can extract information from it in a couple of different ways.
A Primer on Regular Expressions
Regular expressions—or regexes for short—are patterns that can be used to search for text within a string. Python supports regular expressions through the standard library’s re module.
To work with regular expressions, the first thing you need to do is import the re module:

>>> import re
Regular expressions use special characters called metacharacters to denote different patterns. For instance, the asterisk character (*) stands for zero or more of whatever comes just before the asterisk.
In the following example, you use findall() to find any text within a string that matches a given regular expression:
>>> re.findall("ab*c", "ac")
['ac']
The first argument of re.findall() is the regular expression that you want to match, and the second argument is the string to test. In the above example, you search for the pattern "ab*c" in the string "ac".
The regular expression "ab*c" matches any part of the string that begins with an "a", ends with a "c", and has zero or more instances of "b" between the two. re.findall() returns a list of all matches. The string "ac" matches this pattern, so it’s returned in the list.
Here’s the same pattern applied to different strings:
>>> re.findall("ab*c", "abcd")
['abc']
>>> re.findall("ab*c", "acc")
['ac']
>>> re.findall("ab*c", "abcac")
['abc', 'ac']
>>> re.findall("ab*c", "abdc")
[]
Notice that if no match is found, then findall() returns an empty list.
Pattern matching is case sensitive. If you want to match this pattern regardless of the case, then you can pass a third argument with the value re.IGNORECASE:
>>> re.findall("ab*c", "ABC")
[]
>>> re.findall("ab*c", "ABC", re.IGNORECASE)
['ABC']
You can use a period (.) to stand for any single character in a regular expression. For instance, you could find all the strings that contain the letters "a" and "c" separated by a single character as follows:
>>> re.findall("a.c", "abc")
['abc']
>>> re.findall("a.c", "abbc")
[]
>>> re.findall("a.c", "ac")
[]
>>> re.findall("a.c", "acc")
['acc']
The pattern .* inside a regular expression stands for any character repeated any number of times. For instance, "a.*c" can be used to find every substring that starts with "a" and ends with "c", regardless of which letter, or letters, are in between:
>>> re.findall("a.*c", "abc")
['abc']
>>> re.findall("a.*c", "abbc")
['abbc']
>>> re.findall("a.*c", "ac")
['ac']
>>> re.findall("a.*c", "acc")
['acc']
Often, you use re.search() to search for a particular pattern inside a string. This function is somewhat more complicated than re.findall() because it returns an object called a MatchObject that stores different groups of data. This is because there might be matches inside other matches, and re.search() returns every possible result.
The details of the MatchObject are irrelevant here. For now, just know that calling .group() on a MatchObject will return the first and most inclusive result, which in most cases is just what you want:
>>> match_results = re.search("ab*c", "ABC", re.IGNORECASE)
>>> match_results.group()
'ABC'
There’s one more function in the re module that’s useful for parsing out text. re.sub(), which is short for substitute, allows you to replace text in a string that matches a regular expression with new text. It behaves sort of like the .replace() string method.
The arguments passed to re.sub() are the regular expression, followed by the replacement text, followed by the string. Here’s an example:
>>> string = "Everything is <replaced> if it's in <tags>."
>>> string = re.sub("<.*>", "ELEPHANTS", string)
>>> string
'Everything is ELEPHANTS.'
Perhaps that wasn’t quite what you expected to happen.
re.sub() uses the regular expression "<.*>" to find and replace everything between the first < and the last >, which spans from the beginning of <replaced> to the end of <tags>.
Alternatively, you can use the non-greedy matching pattern *?, which works the same way as * except that it matches the shortest possible string of text:
>>> string = "Everything is <replaced> if it's in <tags>."
>>> string = re.sub("<.*?>", "ELEPHANTS", string)
>>> string
"Everything is ELEPHANTS if it's in ELEPHANTS."
This time, re.sub() finds two matches, <replaced> and <tags>, and substitutes the string ELEPHANTS for both matches.
Check Your Understanding
Expand the block below to check your understanding.
Write a program that grabs the full HTML from the following URL:

http://olympus.realpython.org/profiles/dionysus
Then use .find() to display the text following "Name:" and "Favorite Color:" (not including any leading spaces or trailing HTML tags that might appear on the same line).
You can expand the block below to see a solution.
First, import the urlopen function from the urllib.request module:

from urllib.request import urlopen
Then open the URL and use the .read() method of the HTTPResponse object returned by urlopen() to read the page’s HTML:
url = "http://olympus.realpython.org/profiles/dionysus"
html_page = urlopen(url)
html_text = html_page.read().decode("utf-8")
.read() returns a byte string, so you use .decode() to decode the bytes using the UTF-8 encoding.
Now that you have the HTML source of the web page as a string assigned to the html_text variable, you can extract Dionysus’s name and favorite color from his profile. The structure of the HTML for Dionysus’s profile is the same as Aphrodite’s profile that you saw earlier.
You can get the name by finding the string "Name:" in the text and extracting everything that comes after the first occurrence of the string and before the next HTML tag. That is, you need to extract everything after the colon (:) and before the first angle bracket (<). You can use the same technique to extract the favorite color. The following for loop extracts this text for both the name and favorite color:

for string in ["Name: ", "Favorite Color:"]:
    string_start_idx = html_text.find(string)
    text_start_idx = string_start_idx + len(string)
    next_html_tag_offset = html_text[text_start_idx:].find("<")
    text_end_idx = text_start_idx + next_html_tag_offset
    raw_text = html_text[text_start_idx:text_end_idx]
    clean_text = raw_text.strip(" \r\n\t")
    print(clean_text)

It looks like there’s a lot going on in this for loop, but it’s just a little bit of arithmetic to calculate the right indices for extracting the desired text. Let’s break it down:

You use html_text.find() to find the starting index of the string, either "Name:" or "Favorite Color:", and then assign the index to string_start_idx.

Since the text to extract starts just after the colon in "Name:" or "Favorite Color:", you get the index of the character immediately after the colon by adding the length of the string to string_start_idx and assign the result to text_start_idx.

You calculate the ending index of the text to extract by determining the index of the first angle bracket (<) relative to text_start_idx and assign this value to next_html_tag_offset. Then you add that value to text_start_idx and assign the result to text_end_idx.

You extract the text by slicing html_text from text_start_idx to text_end_idx and assign this string to raw_text.

You remove any whitespace from the beginning and end of raw_text using .strip(" \r\n\t") and assign the result to clean_text.

At the end of the loop, you use print() to display the extracted text. The final output looks like this:

Dionysus
Wine

This solution is one of many that solves this problem, so if you got the same output with a different solution, then you did great! When you’re ready, you can move on to the next section.

Use an HTML Parser for Web Scraping in Python

Although regular expressions are great for pattern matching in general, sometimes it’s easier to use an HTML parser that’s explicitly designed for parsing out HTML pages. There are many Python tools written for this purpose, but the Beautiful Soup library is a good one to start with.

Install Beautiful Soup

To install Beautiful Soup, you can run the following in your terminal:

$ python3 -m pip install beautifulsoup4

Run pip show to see the details of the package you just installed:

$ python3 -m pip show beautifulsoup4
Name: beautifulsoup4
Version: 4.9.1
Summary: Screen-scraping library
Home-page:
Author: Leonard Richardson
Author-email:
License: MIT
Location: c:\realpython\venv\lib\site-packages
Requires:
Required-by:

In particular, notice that the latest version at the time of writing was 4.9.1.

Create a BeautifulSoup Object

Type the following program into a new editor window:

from bs4 import BeautifulSoup
from urllib.request import urlopen

url = "http://olympus.realpython.org/profiles/dionysus"
page = urlopen(url)
html = page.read().decode("utf-8")
soup = BeautifulSoup(html, "html.parser")

This program does three things:

Opens the URL using urlopen() from the urllib.request module

Reads the HTML from the page as a string and assigns it to the html variable

Creates a BeautifulSoup object and assigns it to the soup variable

The BeautifulSoup object assigned to soup is created with two arguments. The first argument is the HTML to be parsed, and the second argument, the string "html.parser", tells the object which parser to use behind the scenes. "html.parser" represents Python’s built-in HTML parser.

Use a BeautifulSoup Object

Save and run the above program. When it’s finished running, you can use the soup variable in the interactive window to parse the content of html in various ways.

For example, BeautifulSoup objects have a .get_text() method that can be used to extract all the text from the document and automatically remove any HTML tags. Type the following code into IDLE’s interactive window:

>>> print(soup.get_text())
Profile: Dionysus
Name: Dionysus
Favorite animal: Leopard
Favorite Color: Wine
There are a lot of blank lines in this output. These are the result of newline characters in the HTML document’s text. You can remove them with the string .replace() method if you need to.
Often, you need to get only specific text from an HTML document. Using Beautiful Soup first to extract the text and then using the .find() string method is sometimes easier than working with regular expressions.
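As a rough sketch of that two-step approach; this reuses the soup object built from the Dionysus page above and assumes the name appears on its own line:

text = soup.get_text()
text = text.replace("\n\n", "\n")  # thin out the blank lines
start = text.find("Name:") + len("Name:")  # index just after the colon
end = text.find("\n", start)               # end of that line
print(text[start:end].strip())             # e.g. Dionysus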
However, sometimes the HTML tags themselves are the elements that point out the data you want to retrieve. For instance, perhaps you want to retrieve the URLs for all the images on the page. These links are contained in the src attribute of <img> HTML tags.
In this case, you can use find_all() to return a list of all instances of that particular tag:
>>> soup.find_all("img")
[<img src="/static/dionysus.jpg"/>, <img src="/static/grapes.png"/>]
This returns a list of all <img> tags in the HTML document. The objects in the list look like they might be strings representing the tags, but they’re actually instances of the Tag object provided by Beautiful Soup. Tag objects provide a simple interface for working with the information they contain.
Let’s explore this a little by first unpacking the Tag objects from the list:
>>> image1, image2 = soup.find_all("img")
Each Tag object has a .name property that returns a string containing the HTML tag type:

>>> image1.name
'img'
You can access the HTML attributes of the Tag object by putting their name between square brackets, just as if the attributes were keys in a dictionary.
For example, the <img src="/static/dionysus.jpg"/> tag has a single attribute, src, with the value "/static/dionysus.jpg". Likewise, an HTML link tag such as <a href="..." target="_blank"> has two attributes, href and target.
To get the source of the images in the Dionysus profile page, you access the src attribute using the dictionary notation mentioned above:
>>> image1["src"]
'/static/dionysus.jpg'
>>> image2["src"]
'/static/grapes.png'
Certain tags in HTML documents can be accessed by properties of the Tag object. For example, to get the <title> tag in a document, you can use the .title property:

>>> soup.title
<title>Profile: Dionysus</title>
If you look at the source of the Dionysus profile by navigating to the profile page, right-clicking on the page, and selecting View page source, then you’ll notice that the <title> tag as written in the document looks like this:

<title >Profile: Dionysus</title/>
Beautiful Soup automatically cleans up the tags for you by removing the extra space in the opening tag and the extraneous forward slash (/) in the closing tag.
You can also retrieve just the string between the title tags with the .string property of the Tag object:

>>> soup.title.string
'Profile: Dionysus'
One of the more useful features of Beautiful Soup is the ability to search for specific kinds of tags whose attributes match certain values. For example, if you want to find all the <img> tags that have a src attribute equal to the value /static/dionysus.jpg, then you can provide the following additional argument to .find_all():
>>> soup.find_all("img", src="/static/dionysus.jpg")
[<img src="/static/dionysus.jpg"/>]
This example is somewhat arbitrary, and the usefulness of this technique may not be apparent from the example. If you spend some time browsing various websites and viewing their page sources, then you’ll notice that many websites have extremely complicated HTML structures.
When scraping data from websites with Python, you’re often interested in particular parts of the page. By spending some time looking through the HTML document, you can identify tags with unique attributes that you can use to extract the data you need.
Then, instead of relying on complicated regular expressions or using .find() to search through the document, you can directly access the particular tag you’re interested in and extract the data you need.
In some cases, you may find that Beautiful Soup doesn’t offer the functionality you need. The lxml library is somewhat trickier to get started with but offers far more flexibility than Beautiful Soup for parsing HTML documents. You may want to check it out once you’re comfortable using Beautiful Soup.
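If you want a taste, here’s a minimal lxml sketch of the same kind of lookup; it assumes lxml is installed and reuses the tutorial’s Dionysus profile URL:

from urllib.request import urlopen
from lxml import html

# Download the page and parse the raw bytes into an element tree
page = urlopen("http://olympus.realpython.org/profiles/dionysus")
tree = html.fromstring(page.read())

# XPath query: the text content of the <title> element
print(tree.xpath("//title/text()"))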
BeautifulSoup is great for scraping data from a website’s HTML, but it doesn’t provide any way to work with HTML forms. For example, if you need to search a website for some query and then scrape the results, then BeautifulSoup alone won’t get you very far.
Write a program that grabs the full HTML from the page at the URL http://olympus.realpython.org/profiles. Using Beautiful Soup, print out a list of all the links on the page by looking for HTML tags with the name a and retrieving the value taken on by the href attribute of each tag.
The final output should look like this:

http://olympus.realpython.org/profiles/aphrodite
http://olympus.realpython.org/profiles/poseidon
http://olympus.realpython.org/profiles/dionysus
You can expand the block below to see a solution:
First, import the urlopen function from the urllib.request module and the BeautifulSoup class from the bs4 package:

from urllib.request import urlopen
from bs4 import BeautifulSoup
Each link URL on the /profiles page is a relative URL, so create a base_url variable with the base URL of the website:

base_url = "http://olympus.realpython.org"
You can build a full URL by concatenating base_url with a relative URL.
Now open the /profiles page with urlopen() and use .read() to get the HTML source:

html_page = urlopen(base_url + "/profiles")
html_text = html_page.read().decode("utf-8")
With the HTML source downloaded and decoded, you can create a new BeautifulSoup object to parse the HTML:
soup = BeautifulSoup(html_text, "html.parser")
soup.find_all("a") returns a list of all links in the HTML source. You can loop over this list to print out all the links on the webpage:

for link in soup.find_all("a"):
    link_url = base_url + link["href"]
    print(link_url)
The relative URL for each link can be accessed through the “href” subscript. Concatenate this value with base_url to create the full link_url.
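Putting the pieces together, the whole solution assembled from the steps above reads as one short script:

from urllib.request import urlopen
from bs4 import BeautifulSoup

base_url = "http://olympus.realpython.org"

# Download and decode the /profiles page
html_page = urlopen(base_url + "/profiles")
html_text = html_page.read().decode("utf-8")

# Parse the HTML and print the full URL of every link
soup = BeautifulSoup(html_text, "html.parser")
for link in soup.find_all("a"):
    link_url = base_url + link["href"]
    print(link_url)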
Interact With HTML Forms
The urllib module you’ve been working with so far in this tutorial is well suited for requesting the contents of a web page. Sometimes, though, you need to interact with a web page to obtain the content you need. For example, you might need to submit a form or click a button to display hidden content.
The Python standard library doesn’t provide a built-in means for working with web pages interactively, but many third-party packages are available from PyPI. Among these, MechanicalSoup is a popular and relatively straightforward package to use.
In essence, MechanicalSoup installs what’s known as a headless browser, which is a web browser with no graphical user interface. This browser is controlled programmatically via a Python program.
Install MechanicalSoup
You can install MechanicalSoup with pip in your terminal:
$ python3 -m pip install MechanicalSoup
You can now view some details about the package with pip show:
$ python3 -m pip show mechanicalsoup
Name: MechanicalSoup
Version: 0.12.0
Summary: A Python library for automating interaction with websites
Home-page:
Author: UNKNOWN
Author-email: UNKNOWN
Requires: requests, beautifulsoup4, six, lxml
In particular, notice that the latest version at the time of writing was 0.12.0. You’ll need to close and restart your IDLE session for MechanicalSoup to load and be recognized after it’s been installed.
Create a Browser Object
Type the following into IDLE’s interactive window:
>>> import mechanicalsoup
>>> browser = mechanicalsoup.Browser()
Browser objects represent the headless web browser. You can use them to request a page from the Internet by passing a URL to their .get() method:
>>> url = "http://olympus.realpython.org/login"
>>> page = browser.get(url)
page is a Response object that stores the response from requesting the URL from the browser:

>>> page
<Response [200]>
The number 200 represents the status code returned by the request. A status code of 200 means that the request was successful. An unsuccessful request might show a status code of 404 if the URL doesn’t exist or 500 if there’s a server error when making the request.
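Because the page returned by MechanicalSoup is built on the requests library, you can also read the code off the response directly:

>>> page.status_code
200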
MechanicalSoup uses Beautiful Soup to parse the HTML from the request. page has a .soup attribute that represents a BeautifulSoup object:
>>> type(page.soup)
<class 'bs4.BeautifulSoup'>
You can view the HTML by inspecting the .soup attribute:

>>> page.soup
Please log in to access Mount Olympus:
Notice that this page has a <form> on it with <input> elements for a username and a password.
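As a preview of where this is heading, here’s a hedged sketch of filling in and submitting that form with MechanicalSoup; the field order and the zeus/ThunderDude credentials follow the tutorial’s example site, so treat them as assumptions:

>>> login_html = page.soup
>>> form = login_html.select("form")[0]
>>> form.select("input")[0]["value"] = "zeus"         # username field (assumed order)
>>> form.select("input")[1]["value"] = "ThunderDude"  # password field (assumed order)
>>> profiles_page = browser.submit(form, page.url)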