
Beautiful Soup

Beautiful Soup 4.9.0 documentation – Crummy

Beautiful Soup is a
Python library for pulling data out of HTML and XML files. It works
with your favorite parser to provide idiomatic ways of navigating,
searching, and modifying the parse tree. It commonly saves programmers
hours or days of work.
These instructions illustrate all major features of Beautiful Soup 4,
with examples. I show you what the library is good for, how it works,
how to use it, how to make it do what you want, and what to do when it
violates your expectations.
This document covers Beautiful Soup version 4.9.3. The examples in
this documentation should work the same way in Python 2.7 and Python
3.8.
You might be looking for the documentation for Beautiful Soup 3.
If so, you should know that Beautiful Soup 3 is no longer being
developed and that support for it will be dropped on or after December
31, 2020. If you want to learn about the differences between Beautiful
Soup 3 and Beautiful Soup 4, see Porting code to BS4.
This documentation has been translated into other languages by
Beautiful Soup users:
This document is also available in a Chinese translation.
This page is also available in Japanese (external link).
This document is also available in a Korean translation.
This document is also available in Brazilian Portuguese.
This documentation is also available in Russian.
Getting help¶
If you have questions about Beautiful Soup, or run into problems,
send mail to the discussion group. If
your problem involves parsing an HTML document, be sure to mention
what the diagnose() function says about
that document.
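As a quick illustration (my own sketch, not part of the original docs), diagnose() takes the markup itself as a string, so you might run it like this; the file name is hypothetical:
from bs4.diagnose import diagnose

# "troublesome.html" is a hypothetical file name
with open("troublesome.html") as fp:
    diagnose(fp.read())
# prints a report showing how each installed parser handles the markup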
Here’s an HTML document I’ll be using as an example throughout this
document. It’s part of a story from Alice in Wonderland:
html_doc = """<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""
Running the “three sisters” document through Beautiful Soup gives us a
BeautifulSoup object, which represents the document as a nested
data structure:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')

print(soup.prettify())
# <html>
#  <head>
#   <title>
#    The Dormouse's story
#   </title>
#  </head>
#  <body>
#   <p class="title">
#    <b>
#     The Dormouse's story
#    </b>
#   </p>
#   <p class="story">
#    Once upon a time there were three little sisters; and their names were
#    <a class="sister" href="http://example.com/elsie" id="link1">
#     Elsie
#    </a>
#    ,
#    <a class="sister" href="http://example.com/lacie" id="link2">
#     Lacie
#    </a>
#    and
#    <a class="sister" href="http://example.com/tillie" id="link3">
#     Tillie
#    </a>
#    ;
#    and they lived at the bottom of a well.
#   </p>
#   <p class="story">
#    ...
#   </p>
#  </body>
# </html>
Here are some simple ways to navigate that data structure:
soup.title
# <title>The Dormouse's story</title>

soup.title.name
# u'title'

soup.title.string
# u'The Dormouse's story'

soup.title.parent.name
# u'head'

soup.p
# <p class="title"><b>The Dormouse's story</b></p>

soup.p['class']
# u'title'

soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
One common task is extracting all the URLs found within a page's <a> tags:
for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie
Another common task is extracting all the text from a page:
print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
#
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
#
# ...
Does this look like what you need? If so, read on.
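To give a feel for a complete round trip (a sketch of my own, not part of the original docs), here is the same navigation applied to a page fetched over the network; example.com stands in for whatever URL you care about:
from urllib.request import urlopen
from bs4 import BeautifulSoup

# fetch a live page; example.com is a stand-in URL
html = urlopen("http://example.com/").read()
soup = BeautifulSoup(html, "html.parser")

print(soup.title.string)         # the page title
for link in soup.find_all("a"):  # every hyperlink target on the page
    print(link.get("href"))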
If you’re using a recent version of Debian or Ubuntu Linux, you can
install Beautiful Soup with the system package manager:
$ apt-get install python-bs4 (for Python 2)
$ apt-get install python3-bs4 (for Python 3)
Beautiful Soup 4 is published through PyPI, so if you can’t install it
with the system packager, you can install it with easy_install or
pip. The package name is beautifulsoup4, and the same package
works on Python 2 and Python 3. Make sure you use the right version of
pip or easy_install for your Python version (these may be named
pip3 and easy_install3 respectively if you’re using Python 3).
$ easy_install beautifulsoup4
$ pip install beautifulsoup4
(The BeautifulSoup package is not what you want. That’s
the previous major release, Beautiful Soup 3. Lots of software uses
BS3, so it’s still available, but if you’re writing new code you
should install beautifulsoup4. )
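A quick way to confirm you got the right package (my own sketch): beautifulsoup4 installs under the import name bs4, and exposes its version:
import bs4
print(bs4.__version__)  # e.g. '4.9.3'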
If you don’t have easy_install or pip installed, you can
download the Beautiful Soup 4 source tarball and
install it with setup.py:
$ python setup.py install
If all else fails, the license for Beautiful Soup allows you to
package the entire library with your application. You can download the
tarball, copy its bs4 directory into your application’s codebase,
and use Beautiful Soup without installing it at all.
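A minimal sketch of that vendored setup, assuming you copied the bs4 directory into a vendor/ folder inside your project (the folder name is hypothetical):
import sys
sys.path.insert(0, "vendor")  # the directory that contains the copied bs4/ package

from bs4 import BeautifulSoup
print(BeautifulSoup("<p>hello</p>", "html.parser").p.string)  # hello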
I use Python 2.7 and Python 3.8 to develop Beautiful Soup, but it
should work with other recent versions.
Problems after installation¶
Beautiful Soup is packaged as Python 2 code. When you install it for
use with Python 3, it’s automatically converted to Python 3 code. If
you don’t install the package, the code won’t be converted. There have
also been reports on Windows machines of the wrong version being
installed.
If you get the ImportError “No module named HTMLParser”, your
problem is that you’re running the Python 2 version of the code under
Python 3.
If you get the ImportError “No module named html.parser”, your
problem is that you’re running the Python 3 version of the code under
Python 2.
In both cases, your best bet is to completely remove the Beautiful
Soup installation from your system (including any directory created
when you unzipped the tarball) and try the installation again.
If you get the SyntaxError “Invalid syntax” on the line
ROOT_TAG_NAME = u'[document]', you need to convert the Python 2
code to Python 3. You can do this either by installing the package:
$ python3 setup.py install
or by manually running Python’s 2to3 conversion script on the
bs4 directory:
$ 2to3-3.2 -w bs4
Installing a parser¶
Beautiful Soup supports the HTML parser included in Python’s standard
library, but it also supports a number of third-party Python parsers.
One is the lxml parser. Depending on your setup,
you might install lxml with one of these commands:
$ apt-get install python-lxml
$ easy_install lxml
$ pip install lxml
Another alternative is the pure-Python html5lib parser, which parses HTML the way a
web browser does. Depending on your setup, you might install html5lib
with one of these commands:
$ apt-get install python-html5lib
$ easy_install html5lib
$ pip install html5lib
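Before the summary table below, here is a small convenience sketch of my own (not from the docs): prefer lxml when it is installed, and fall back to the bundled html.parser otherwise.
from bs4 import BeautifulSoup

def make_soup(markup):
    try:
        import lxml  # only checking availability
        return BeautifulSoup(markup, "lxml")
    except ImportError:
        return BeautifulSoup(markup, "html.parser")

print(make_soup("<p>hello</p>").p.string)  # hello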
This table summarizes the advantages and disadvantages of each parser library:
Python's html.parser
Typical usage: BeautifulSoup(markup, "html.parser")
Advantages: batteries included; decent speed; lenient (as of Python 2.7.3 and 3.2)
Disadvantages: not as fast as lxml, less lenient than html5lib

lxml's HTML parser
Typical usage: BeautifulSoup(markup, "lxml")
Advantages: very fast; lenient
Disadvantages: external C dependency

lxml's XML parser
Typical usage: BeautifulSoup(markup, "lxml-xml") or BeautifulSoup(markup, "xml")
Advantages: very fast; the only currently supported XML parser
Disadvantages: external C dependency

html5lib
Typical usage: BeautifulSoup(markup, "html5lib")
Advantages: extremely lenient; parses pages the same way a web browser does; creates valid HTML5
Disadvantages: very slow; external Python dependency
If you can, I recommend you install and use lxml for speed. If you’re
using a version of Python 2 earlier than 2.7.3, or a version of
Python 3 earlier than 3.2.2, it’s essential that you install lxml or
html5lib. Python’s built-in HTML parser is just not very good in those
old versions.
Note that if a document is invalid, different parsers will generate
different Beautiful Soup trees for it. See Differences
between parsers for details.
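As a small illustration of those differences (the outputs shown are typical for recent parser versions, but may vary with your setup):
from bs4 import BeautifulSoup

broken = "<a></p>"
print(BeautifulSoup(broken, "html.parser"))
# <a></a>
# lxml would give <html><body><a></a></body></html> for the same input,
# and html5lib would give <html><head></head><body><a></a></body></html>.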
To parse a document, pass it into the BeautifulSoup
constructor. You can pass in a string or an open filehandle:
with open("index.html") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

soup = BeautifulSoup("<html>a web page</html>", 'html.parser')
First, the document is converted to Unicode, and HTML entities are
converted to Unicode characters:
print(BeautifulSoup("Sacr&eacute; bleu!", "html.parser"))
# Sacré bleu!
Beautiful Soup then parses the document using the best available
parser. It will use an HTML parser unless you specifically tell it to
use an XML parser. (See Parsing XML. )
Beautiful Soup transforms a complex HTML document into a complex tree
of Python objects. But you’ll only ever have to deal with about four
kinds of objects: Tag, NavigableString, BeautifulSoup,
and Comment.
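Before looking at each class, here is a tiny sketch of my own (not from the docs) that walks a parsed fragment and labels each element with its class:
from bs4 import BeautifulSoup

soup = BeautifulSoup("<b><!--a comment-->bold text</b>", "html.parser")
for element in soup.descendants:
    print(type(element).__name__, repr(element))
# Tag <b><!--a comment-->bold text</b>
# Comment 'a comment'
# NavigableString 'bold text'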
Tag¶
A Tag object corresponds to an XML or HTML tag in the original document:
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
tag = soup.b
type(tag)
# <class 'bs4.element.Tag'>
Tags have a lot of attributes and methods, and I’ll cover most of them
in Navigating the tree and Searching the tree. For now, the most
important features of a tag are its name and attributes.
Name¶
Every tag has a name, accessible as .name:
tag.name
# 'b'
If you change a tag’s name, the change will be reflected in any HTML
markup generated by Beautiful Soup:
tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>
Attributes¶
A tag may have any number of attributes. The tag <b id="boldest"> has an attribute “id” whose value is
“boldest”. You can access a tag’s attributes by treating the tag like
a dictionary:
tag = BeautifulSoup('<b id="boldest">bold</b>', 'html.parser').b
tag['id']
# 'boldest'
You can access that dictionary directly as .attrs:
tag.attrs
# {'id': 'boldest'}
You can add, remove, and modify a tag’s attributes. Again, this is
done by treating the tag as a dictionary:
tag['id'] = 'verybold'
tag['another-attribute'] = 1
tag
# <b another-attribute="1" id="verybold">bold</b>

del tag['id']
del tag['another-attribute']
tag
# <b>bold</b>

tag['id']
# KeyError: 'id'
tag.get('id')
# None
Multi-valued attributes¶
HTML 4 defines a few attributes that can have multiple values. HTML 5
removes a couple of them, but defines a few more. The most common
multi-valued attribute is class (that is, a tag can have more than
one CSS class). Others include rel, rev, accept-charset,
headers, and accesskey. Beautiful Soup presents the value(s)
of a multi-valued attribute as a list:
css_soup = BeautifulSoup('<p class="body"></p>', 'html.parser')
css_soup.p['class']
# ['body']

css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
css_soup.p['class']
# ['body', 'strikeout']
If an attribute looks like it has more than one value, but it’s not
a multi-valued attribute as defined by any version of the HTML
standard, Beautiful Soup will leave the attribute alone:
id_soup = BeautifulSoup('<p id="my id"></p>', 'html.parser')
id_soup.p['id']
# 'my id'
When you turn a tag back into a string, multiple attribute values are
consolidated:
rel_soup = BeautifulSoup('<p>Back to the <a rel="index">homepage</a></p>', 'html.parser')
rel_soup.a['rel']
# ['index']
rel_soup.a['rel'] = ['index', 'contents']
print(rel_soup.p)
# <p>Back to the <a rel="index contents">homepage</a></p>
You can disable this by passing multi_valued_attributes=None as a
keyword argument into the BeautifulSoup constructor:
no_list_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser', multi_valued_attributes=None)
no_list_soup.p['class']
# 'body strikeout'
You can use get_attribute_list to get a value that’s always a
list, whether or not it’s a multi-valued attribute:
id_soup.p.get_attribute_list('id')
# ["my id"]
If you parse a document as XML, there are no multi-valued attributes:
xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml')
xml_soup.p['class']
# 'body strikeout'
Again, you can configure this using the multi_valued_attributes argument:
class_is_multi = {'*': 'class'}
xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml', multi_valued_attributes=class_is_multi)
xml_soup.p['class']
# ['body', 'strikeout']
You probably won’t need to do this, but if you do, use the defaults as
a guide. They implement the rules described in the HTML specification:
from bs4.builder import builder_registry
builder_registry.lookup('html').DEFAULT_CDATA_LIST_ATTRIBUTES
NavigableString¶
A string corresponds to a bit of text within a tag. Beautiful Soup
uses the NavigableString class to contain these bits of text:
tag.string
# 'Extremely bold'
type(tag.string)
# <class 'bs4.element.NavigableString'>
A NavigableString is just like a Python Unicode string, except
that it also supports some of the features described in Navigating
the tree and Searching the tree. You can convert a
NavigableString to a Unicode string with unicode() (in
Python 2) or str (in Python 3):
unicode_string = str(tag.string)
unicode_string
# 'Extremely bold'
type(unicode_string)
# <class 'str'>
You can’t edit a string in place, but you can replace one string with
another, using replace_with():
tag.string.replace_with("No longer bold")
tag
# <blockquote>No longer bold</blockquote>
NavigableString supports most of the features described in
Navigating the tree and Searching the tree, but not all of
them. In particular, since a string can’t contain anything (the way a
tag may contain a string or another tag), strings don’t support the. contents or attributes, or the find() method.
If you want to use a NavigableString outside of Beautiful Soup,
you should call unicode() on it to turn it into a normal Python
Unicode string. If you don’t, your string will carry around a
reference to the entire Beautiful Soup parse tree, even when you’re
done using Beautiful Soup. This is a big waste of memory.
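A short sketch of that conversion in practice (my own addition): copy the text out as plain strings so the tree can be garbage-collected.
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>one <b>two</b></p>", "html.parser")
plain = [str(s) for s in soup.strings]  # plain str objects, no tree references
print(plain)
# ['one ', 'two']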
BeautifulSoup¶
The BeautifulSoup object represents the parsed document as a
whole. For most purposes, you can treat it as a Tag
object. This means it supports most of the methods described in
Navigating the tree and Searching the tree.
You can also pass a BeautifulSoup object into one of the methods
defined in Modifying the tree, just as you would a Tag. This
lets you do things like combine two parsed documents:
doc = BeautifulSoup("<document><content/>INSERT FOOTER HERE</document>", "xml")
footer = BeautifulSoup("<footer>Here's the footer</footer>", "xml")
doc.find(text="INSERT FOOTER HERE").replace_with(footer)
# 'INSERT FOOTER HERE'
print(doc)
# <?xml version="1.0" encoding="utf-8"?>
# <document><content/><footer>Here's the footer</footer></document>
Since the BeautifulSoup object doesn’t correspond to an actual
HTML or XML tag, it has no name and no attributes. But sometimes it’s
useful to look at its .name, so it’s been given the special .name
“[document]”:
soup.name
# '[document]'
Here’s the “Three sisters” HTML document again:
# html_doc is the same "three sisters" document defined at the top of this page
soup = BeautifulSoup(html_doc, 'html.parser')
I’ll use this as an example to show you how to move from one part of
a document to another.
Going down¶
Tags may contain strings and other tags. These elements are the tag’s
children. Beautiful Soup provides a lot of different attributes for
navigating and iterating over a tag’s children.
Note that Beautiful Soup strings don’t support any of these
attributes, because a string can’t have children.
Navigating using tag names¶
The simplest way to navigate the parse tree is to say the name of the
tag you want. If you want the <head> tag, just say soup.head:
soup.head
# <head><title>The Dormouse's story</title></head>

soup.title
# <title>The Dormouse's story</title>

You can use this trick again and again to zoom in on a certain part
of the parse tree. This code gets the first <b> tag beneath the <body> tag:
soup.body.b
# <b>The Dormouse's story</b>

Using a tag name as an attribute will give you only the first tag by that
name:
soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

If you need to get all the <a> tags, or anything more complicated
than the first tag with a certain name, you’ll need to use one of the
methods described in Searching the tree, such as find_all():
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

.contents and .children¶
A tag’s children are available in a list called .contents:
head_tag = soup.head
head_tag
# <head><title>The Dormouse's story</title></head>

head_tag.contents
# [<title>The Dormouse's story</title>]

title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# ['The Dormouse's story']

The BeautifulSoup object itself has children. In this case, the
<html> tag is the child of the BeautifulSoup object:
len(soup.contents)
# 1
soup.contents[0].name
# 'html'

A string does not have .contents, because it can’t contain
anything:
text = title_tag.contents[0]
text.contents
# AttributeError: 'NavigableString' object has no attribute 'contents'

Instead of getting them as a list, you can iterate over a tag’s
children using the .children generator:
for child in title_tag.children:
    print(child)
# The Dormouse's story

.descendants¶
The .contents and .children attributes only consider a tag’s
direct children. For instance, the <head> tag has a single direct
child, the <title> tag:
head_tag.contents
# [<title>The Dormouse's story</title>]

But the <title> tag itself has a child: the string “The Dormouse’s
story”. There’s a sense in which that string is also a child of the
<head> tag. The .descendants attribute lets you iterate over all
of a tag’s children, recursively: its direct children, the children of
its direct children, and so on:
for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story

The <head> tag has only one child, but it has two descendants: the
<title> tag and the <title> tag’s child. The BeautifulSoup object
only has one direct child (the <html> tag), but it has a whole lot of
descendants:
len(list(soup.children))
# 1
len(list(soup.descendants))
# 26

.string¶
If a tag has only one child, and that child is a NavigableString,
the child is made available as .string:
title_tag.string
# 'The Dormouse's story'

If a tag’s only child is another tag, and that tag has a .string,
then the parent tag is considered to have the same .string
as its child:
head_tag.contents
# [<title>The Dormouse's story</title>]
head_tag.string
# 'The Dormouse's story'

If a tag contains more than one thing, then it’s not clear what
.string should refer to, so .string is defined to be
None:
print(soup.html.string)
# None

.strings and .stripped_strings¶
If there’s more than one thing inside a tag, you can still look at
just the strings. Use the .strings generator:
for string in soup.strings:
    print(repr(string))
# '\n'
# "The Dormouse's story"
# '\n'
# 'Once upon a time there were three little sisters; and their names were\n'
# 'Elsie'
# ',\n'
# 'Lacie'
# ' and\n'
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
# '...'
# '\n'

These strings tend to have a lot of extra whitespace, which you can
remove by using the .stripped_strings generator instead:
for string in soup.stripped_strings:
    print(repr(string))
# "The Dormouse's story"
# "The Dormouse's story"
# 'Once upon a time there were three little sisters; and their names were'
# 'Elsie'
# ','
# 'Lacie'
# 'and'
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
# '...'

Here, strings consisting entirely of whitespace are ignored, and
whitespace at the beginning and end of strings is removed.

Going up¶
Continuing the “family tree” analogy, every tag and every string has a
parent: the tag that contains it.

.parent¶
You can access an element’s parent with the .parent attribute. In
the example “three sisters” document, the <head> tag is the parent
of the <title> tag:
title_tag = soup.title
title_tag
# <title>The Dormouse's story</title>
title_tag.parent
# <head><title>The Dormouse's story</title></head>

The title string itself has a parent: the <title> tag that contains
it:
title_tag.string.parent
# <title>The Dormouse's story</title>

The parent of a top-level tag like <html> is the BeautifulSoup object
itself:
html_tag = soup.html
type(html_tag.parent)
# <class 'bs4.BeautifulSoup'>

And the .parent of a BeautifulSoup object is defined as None:
print(soup.parent)
# None

.parents¶
You can iterate over all of an element’s parents with
.parents. This example uses .parents to travel from an <a> tag
buried deep within the document, to the very top of the document:
link = soup.a
link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
for parent in link.parents:
    print(parent.name)
# p
# body
# html
# [document]

Going sideways¶
Consider a simple document like this:
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></a>", 'html.parser')
print(sibling_soup.prettify())
# <a>
#  <b>
#   text1
#  </b>
#  <c>
#   text2
#  </c>
# </a>

The <b> tag and the <c> tag are at the same level: they’re both direct
children of the same tag. We call them siblings. When a document is
pretty-printed, siblings show up at the same indentation level. You
can also use this relationship in the code you write.

.next_sibling and .previous_sibling¶
You can use .next_sibling and .previous_sibling to navigate
between page elements that are on the same level of the parse tree:
sibling_soup.b.next_sibling
# <c>text2</c>
sibling_soup.c.previous_sibling
# <b>text1</b>

The <b> tag has a .next_sibling, but no .previous_sibling,
because there’s nothing before the <b> tag on the same level of the
tree. For the same reason, the <c> tag has a .previous_sibling
but no .next_sibling:
print(sibling_soup.b.previous_sibling)
# None
print(sibling_soup.c.next_sibling)
# None

The strings “text1” and “text2” are not siblings, because they don’t
have the same parent:
sibling_soup.b.string
# 'text1'
print(sibling_soup.b.string.next_sibling)
# None

In real documents, the .next_sibling or .previous_sibling of a
tag will usually be a string containing whitespace. Going back to the
“three sisters” document:
# <a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>
# <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
# <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>

You might think that the .next_sibling of the first <a> tag would
be the second <a> tag. But actually, it’s a string: the comma and
newline that separate the first <a> tag from the second:
link = soup.a
link
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
link.next_sibling
# ',\n'

The second <a> tag is actually the .next_sibling of the comma:
link.next_sibling.next_sibling
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>

.next_siblings and .previous_siblings¶
You can iterate over a tag’s siblings with .next_siblings or
.previous_siblings:
for sibling in soup.a.next_siblings:
    print(repr(sibling))
# ',\n'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ' and\n'
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
# '; and they lived at the bottom of a well.'

for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
# ' and\n'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ',\n'
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
# 'Once upon a time there were three little sisters; and their names were\n'

Going back and forth¶
Take a look at the beginning of the “three sisters” document:
# <html><head><title>The Dormouse's story</title></head>
# <p class="title"><b>The Dormouse's story</b></p>
An HTML parser takes this string of characters and turns it into a
series of events: “open an <html> tag”, “open a <head> tag”, “open a
<title> tag”, “add a string”, “close the <title> tag”, “open a <p>
tag”, and so on. Beautiful Soup offers tools for reconstructing the
initial parse of the document.

.next_element and .previous_element¶
The .next_element attribute of a string or tag points to whatever
was parsed immediately afterwards. It might be the same as
.next_sibling, but it’s usually drastically different.

Here’s the final <a> tag in the “three sisters” document. Its
.next_sibling is a string: the conclusion of the sentence that was
interrupted by the start of the <a> tag:
last_a_tag = soup.find("a", id="link3")
last_a_tag
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
last_a_tag.next_sibling
# ';\nand they lived at the bottom of a well.'

But the .next_element of that <a> tag, the thing that was parsed
immediately after the <a> tag, is not the rest of that sentence:
it’s the word “Tillie”:
last_a_tag.next_element
# 'Tillie'

That’s because in the original markup, the word “Tillie” appeared
before that semicolon. The parser encountered an <a> tag, then the
word “Tillie”, then the closing </a> tag, then the semicolon and rest of
the sentence. The semicolon is on the same level as the <a> tag, but the
word “Tillie” was encountered first.

The .previous_element attribute is the exact opposite of
.next_element. It points to whatever element was parsed
immediately before this one:
last_a_tag.previous_element
# ' and\n'
last_a_tag.previous_element.next_element
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

.next_elements and .previous_elements¶
You should get the idea by now. You can use these iterators to move
forward or backward in the document as it was parsed:
for element in last_a_tag.next_elements:
    print(repr(element))
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
# '\n'
# <p class="story">...</p>
# '...'
# '\n'

Searching the tree¶
Beautiful Soup defines a lot of methods for searching the parse tree,
but they’re all very similar. I’m going to spend a lot of time explaining
the two most popular methods: find() and find_all(). The other
methods take almost exactly the same arguments, so I’ll just cover
them briefly.

Once again, I’ll be using the “three sisters” document as an example.
By passing in a filter to an argument like find_all(), you can
zoom in on the parts of the document you’re interested in.

Kinds of filters¶
Before talking in detail about find_all() and similar methods, I
want to show examples of different filters you can pass into these
methods. These filters show up again and again, throughout the
search API. You can use them to filter based on a tag’s name,
on its attributes, on the text of a string, or on some combination of
these.

A string¶
The simplest filter is a string. Pass a string to a search method and
Beautiful Soup will perform a match against that exact string. This
code finds all the <b> tags in the document:
soup.find_all('b')
# [<b>The Dormouse's story</b>]

If you pass in a byte string, Beautiful Soup will assume the string is
encoded as UTF-8. You can avoid this by passing in a Unicode string instead.

A regular expression¶
If you pass in a regular expression object, Beautiful Soup will filter
against that regular expression using its search() method.
This code
finds all the tags whose names start with the letter “b”; in this
case, the <body> tag and the <b> tag:
import re
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b

This code finds all the tags whose names contain the letter ‘t’:
for tag in soup.find_all(re.compile("t")):
    print(tag.name)
# html
# title

A list¶
If you pass in a list, Beautiful Soup will allow a string match
against any item in that list. This code finds all the <a> tags
and all the <b> tags:
soup.find_all(["a", "b"])
# [<b>The Dormouse's story</b>,
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

True¶
The value True matches everything it can. This code finds all
the tags in the document, but none of the text strings:
for tag in soup.find_all(True):
    print(tag.name)
# html
# head
# title
# body
# p
# b
# p
# a
# a
# a
# p

A function¶
If none of the other matches work for you, define a function that
takes an element as its only argument. The function should return
True if the argument matches, and False otherwise.

Here’s a function that returns True if a tag defines the “class”
attribute but doesn’t define the “id” attribute:
def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')

Pass this function into find_all() and you’ll pick up all the <p> tags:
soup.find_all(has_class_but_no_id)
# [<p class="title"><b>The Dormouse's story</b></p>,
#  <p class="story">Once upon a time there were...bottom of a well.</p>,
#  <p class="story">...</p>]

This function only picks up the <p> tags. It doesn’t pick up the <a>
tags, because those tags define both “class” and “id”. It doesn’t pick
up tags like <html> and <title>, because those tags don’t define
“class”.

If you pass in a function to filter on a specific attribute like
href, the argument passed into the function will be the attribute
value, not the whole tag. Here’s a function that finds all <a> tags
whose href attribute does not match a regular expression:
def not_lacie(href):
    return href and not re.compile("lacie").search(href)

soup.find_all(href=not_lacie)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

The function can be as complicated as you need it to be. Here’s a
function that returns True if a tag is surrounded by string
objects:
from bs4 import NavigableString
def surrounded_by_strings(tag):
    return (isinstance(tag.next_element, NavigableString)
            and isinstance(tag.previous_element, NavigableString))

for tag in soup.find_all(surrounded_by_strings):
    print(tag.name)
# body
# p
# a
# a
# a
# p

Now we’re ready to look at the search methods in detail.

find_all()¶
Method signature: find_all(name, attrs, recursive, string, limit, **kwargs)

The find_all() method looks through a tag’s descendants and
retrieves all descendants that match your filters. I gave several
examples in Kinds of filters, but here are a few more:
soup.find_all("title")
# [<title>The Dormouse's story</title>]

soup.find_all("p", "title")
# [<p class="title"><b>The Dormouse's story</b></p>]

soup.find_all("a")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.find_all(id="link2")
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

soup.find(string=re.compile("sisters"))
# 'Once upon a time there were three little sisters; and their names were\n'

Some of these should look familiar, but others are new. What does it
mean to pass in a value for string, or id?
Why does
find_all("p", "title") find a <p> tag with the CSS class “title”?
Let’s look at the arguments to find_all().

The name argument¶
Pass in a value for name and you’ll tell Beautiful Soup to only
consider tags with certain names. Text strings will be ignored, as
will tags whose names don’t match.

This is the simplest usage:
soup.find_all("title")
# [<title>The Dormouse's story</title>]

Recall from Kinds of filters that the value to name can be a
string, a regular expression, a list, a function, or the value
True.

The keyword arguments¶
Any argument that’s not recognized will be turned into a filter on one
of a tag’s attributes. If you pass in a value for an argument called id,
Beautiful Soup will filter against each tag’s ‘id’ attribute:
soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

If you pass in a value for href, Beautiful Soup will filter
against each tag’s ‘href’ attribute:
soup.find_all(href=re.compile("elsie"))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

You can filter an attribute based on a string, a regular
expression, a list, a function, or the value True.

This code finds all tags whose id attribute has a value,
regardless of what the value is:
soup.find_all(id=True)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

You can filter multiple attributes at once by passing in more than one
keyword argument:
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

Some attributes, like the data-* attributes in HTML 5, have names that
can’t be used as the names of keyword arguments:
data_soup = BeautifulSoup('<div data-foo="value">foo!</div>', 'html.parser')
data_soup.find_all(data-foo="value")
# SyntaxError: keyword can't be an expression

You can use these attributes in searches by putting them into a
dictionary and passing the dictionary into find_all() as the
attrs argument:
data_soup.find_all(attrs={"data-foo": "value"})
# [<div data-foo="value">foo!</div>]

You can’t use a keyword argument to search for HTML’s ‘name’ element,
because Beautiful Soup uses the name argument to contain the name
of the tag itself. Instead, you can give a value to ‘name’ in the
attrs argument:
name_soup = BeautifulSoup('<input name="email"/>', 'html.parser')
name_soup.find_all(name="email")
# []
name_soup.find_all(attrs={"name": "email"})
# [<input name="email"/>]

Searching by CSS class¶
It’s very useful to search for a tag that has a certain CSS class, but
the name of the CSS attribute, “class”, is a reserved word in
Python. Using class as a keyword argument will give you a syntax
error. As of Beautiful Soup 4.1.2, you can search by CSS class using
the keyword argument class_:
soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

As with any keyword argument, you can pass class_ a string, a regular
expression, a function, or True:
soup.find_all(class_=re.compile("itl"))
# [<p class="title"><b>The Dormouse's story</b></p>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

soup.find_all(class_=has_six_characters)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

Remember that a single tag can have multiple
values for its “class” attribute.
When you search for a tag that
matches a certain CSS class, you’re matching against any of its CSS
classes:
css_soup = BeautifulSoup('<p class="body strikeout"></p>', 'html.parser')
css_soup.find_all("p", class_="strikeout")
# [<p class="body strikeout"></p>]
css_soup.find_all("p", class_="body")
# [<p class="body strikeout"></p>]

You can also search for the exact string value of the class attribute:
css_soup.find_all("p", class_="body strikeout")
# [<p class="body strikeout"></p>]

But searching for variants of the string value won’t work:
css_soup.find_all("p", class_="strikeout body")
# []

If you want to search for tags that match two or more CSS classes, you
should use a CSS selector:
css_soup.select("p.strikeout.body")
# [<p class="body strikeout"></p>]

In older versions of Beautiful Soup, which don’t have the class_
shortcut, you can use the attrs trick mentioned above. Create a
dictionary whose value for “class” is the string (or regular
expression, or whatever) you want to search for:
soup.find_all("a", attrs={"class": "sister"})
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

The string argument¶
With string you can search for strings instead of tags. As with
name and the keyword arguments, you can pass in a string, a
regular expression, a list, a function, or the value True.
Here are some examples:
soup.find_all(string="Elsie")
# ['Elsie']

soup.find_all(string=["Tillie", "Elsie", "Lacie"])
# ['Elsie', 'Lacie', 'Tillie']

soup.find_all(string=re.compile("Dormouse"))
# ["The Dormouse's story", "The Dormouse's story"]

def is_the_only_string_within_a_tag(s):
    """Return True if this string is the only child of its parent tag."""
    return (s == s.parent.string)

soup.find_all(string=is_the_only_string_within_a_tag)
# ["The Dormouse's story", "The Dormouse's story", 'Elsie', 'Lacie', 'Tillie', '...']

Although string is for finding strings, you can combine it with
arguments that find tags: Beautiful Soup will find all tags whose
.string matches your value for string. This code finds the <a>
tags whose .string is “Elsie”:
soup.find_all("a", string="Elsie")
# [<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>]

The string argument is new in Beautiful Soup 4.4.0. In earlier
versions it was called text:
soup.find_all("a", text="Elsie")
# [<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>]

The limit argument¶
find_all() returns all the tags and strings that match your
filters. This can take a while if the document is large. If you don’t
need all the results, you can pass in a number for limit. This
works just like the LIMIT keyword in SQL. It tells Beautiful Soup to
stop gathering results after it’s found a certain number.

There are three links in the “three sisters” document, but this code
only finds the first two:
soup.find_all("a", limit=2)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

The recursive argument¶
If you call mytag.find_all(), Beautiful Soup will examine all the
descendants of mytag: its children, its children’s children, and
so on. If you only want Beautiful Soup to consider direct children,
you can pass in recursive=False.

Scrape Beautifully With Beautiful Soup In Python – Analytics India Magazine

Web Scraping is the process of collecting data from the internet by using various tools and frameworks.
Sometimes it is used for online price change monitoring, price comparison, and seeing how well the competitors are doing by extracting data from their websites.
Web scraping is as old as the internet. In 1989 the World Wide Web was launched, and four years later the World Wide Web Wanderer, the first web robot, was created at MIT by Matthew Gray; the purpose of this crawler was to measure the size of the World Wide Web.
Beautiful Soup is a Python library that is used for web scraping purposes to pull the data out of HTML and XML files. It creates a parse tree from page source code that can be used to extract data in a hierarchical and more readable manner.
It was first introduced by Leonard Richardson, who is still contributing to this project, and the project is additionally supported by Tidelift (a paid subscription tool for open-source maintenance).
Beautiful Soup 3 was officially released in May 2006. The latest version released by Beautiful Soup is 4.9.2, and it supports Python 3 and Python 2.4 as well.
Advantages
Very fast
Extremely lenient
Parses pages the same way a browser does
Prettifies the source code
Installation
For installing Beautiful Soup we need Python, and the package itself can be installed with the pip command below:
pip install beautifulsoup4
Other frameworks we will need later, to work with different parsers and to fetch pages:
pip install selenium
pip install requests
pip install lxml
pip install html5lib
Quickstart
A small piece of code to see how quickly Beautiful Soup gets going: we extract the source code from demoblaze (the exact URL was lost in this copy; the one below is reconstructed from the site named in the text):
from bs4 import BeautifulSoup
import requests

URL = "https://www.demoblaze.com/"
r = requests.get(URL)
soup = BeautifulSoup(r.content, 'html5lib')
print(soup.prettify())

Now prettify() is a built-in function provided by the Beautiful Soup module; it gives a visual representation of the parsed URL's source code, i.e. it arranges all the tags in a parse-tree manner with better readability.
How to locate the data from the source code?
To exclude unwanted data and scrape reliable information only, we have to inspect the webpage.
We can open the Inspect tab by doing any of the following in a web browser:
Right-click on the webpage and select Inspect
Or in Chrome, go to the upper right side of the browser screen and click Menu bar -> More tools -> Developer tools, or press Ctrl + Shift + I
Now, after opening the Inspect tab, you can search for the element you wish to extract from the webpage.
By just hovering over the webpage, we can select elements, and the corresponding code will be shown in the inspector.
The title for all the articles is inside class="post-title", and inside that, we have our article title in between "span" tags.
With this method, we can look into a web page's backend and explore all the data with just the hover-and-watch functionality provided by the Chrome browser's Inspect tools.
In this example, we are going to use Selenium for browser automation and source-code extraction.
A full tutorial about Selenium is available here.
Our purpose is to scrape all the titles of articles from the Analytics India Magazine homepage.

# importing modules
from selenium import webdriver
from bs4 import BeautifulSoup

options = webdriver.ChromeOptions()
options.add_argument('--ignore-certificate-errors')
options.add_argument('--incognito')
options.add_argument('--headless')
driver = webdriver.Chrome(chrome_options=options)

# the homepage URL was lost in this copy; reconstructed from the text above
driver.get('https://analyticsindiamag.com/')
source_code = driver.page_source

soup = BeautifulSoup(source_code, 'lxml')
article_block = soup.find_all('div', class_='post-title')
for titles in article_block:
    title = titles.find('span').get_text()
    print(title)

Let's break down the above code line by line to understand how it detects those article titles:
First, two lines import BeautifulSoup and Selenium.
Then we start the Chrome browser in incognito and headless mode, meaning no Chrome popup while surfing web URLs; instead, it boots up the URL in the background.
Then, with the help of the Selenium driver, we load the given URL's source code into the "source_code" variable.
Note: We can extract a given URL's source code in many ways, but as we already know about Selenium, it's easy to move forward with the same tool, and it has other functionality too, like scrolling through hyperlinks and clicking elements.
We pass the "source_code" variable into BeautifulSoup, specifying the "lxml" parser we are going to use for data processing. Then we use the Beautiful Soup function find_all to find the 'div' tags having class 'post-title', as discussed above, because the article titles are inside this div container.
Now, with a simple for loop, we iterate through each article element, and again with the help of find we extract the "span" tag containing the title text.
get_text() is used to trim the pre/post span tags we get with each iteration of finding titles.
After this, you can feed the data into data-science work: you can use it to create a word cloud, or maybe do text analysis.
Conclusion
Beautiful Soup is a great tool for extracting very specific information from large unstructured raw data, and it is also very fast and handy to use.
Its documentation is also very helpful if you want to continue your research.
You learned how to:
Install and set up the scraping environment
Inspect the website to get element names
Parse the source code in Beautiful Soup to get trimmed results
Work through a live example of getting all the published article names from a website
Mohit Maithani
Mohit is a data and technology enthusiast with good exposure to solving real-world problems in various avenues of IT and the deep learning domain. He believes in solving humans' daily problems with the help of technology.

Python Web Scraping using Beautiful Soup | Codementor

Background
Let's assume that we have two competitors selling similar pairs of shoes in the same area. Typically, if a competitor wants to know of another competitor's pricing, competitor A would enquire from someone close to competitor B.
These days, it is quite different. If we want to purchase a bouquet of roses, we just check the seller's platform for the price. This simply defines web scraping: the art of extracting data from a website. We can automate the above examples in Python with the Beautiful Soup module.
Dos and don'ts of web scraping
Web scraping is legal in one context and illegal in another context. For example, it is legal when the data extracted is composed of directories and telephone listings for personal use. However, if the extracted data is for commercial use, without the consent of the owner, it would be illegal. Thus, we should be careful when extracting data from a website and always be mindful of the law.
Getting started
There are three standard methods we can use to scrape data from a web page on a website. We can use a regular expression, Beautiful Soup, or CSS selectors. If you know of any other approach to scrape data from a web page, kindly make it available in the comments section.
Before we dive straight into scraping data from a stock exchange site, let's understand a number of basic terms in web scraping.
Web Crawling: Web crawling simply refers to the downloading of HTML pages on a website via user agents known as crawlers/user-agents: Google bots, Baiduspider, Bingbot, and others.
robots.txt: robots.txt is a file which contains a set of suggestions/instructions purposely for crawlers. These sets of instructions/suggestions specify whether a crawler has the right to access a particular web page on a website or not.
Sitemap Files: Sitemap files are provided by websites to make crawling a bit easier for crawlers/user-agents. They simply help crawlers to locate the updated content of pages on websites.
Instead of crawling the web pages of a website, crawlers check the updated content of a website via the sitemap files. For further details, the sitemap standard is defined at www.sitemaps.org.
Beautiful Soup: Beautiful Soup is a popular module in Python that parses (or examines) a web page and provides a convenient interface for navigating content. I prefer Beautiful Soup to a regular expression and CSS selectors when scraping data from a web page. It is also one of the recommended Python libraries by the #1 Stack Overflow answerer, Martijn Pieters.
Apart from Beautiful Soup, which we will use to scrape data from a web page, there are modules in Python to help us know the technical aspects of our web target. We can use the builtwith module to learn more of our target's technical details. You can install the builtwith module by doing the following:
pip install builtwith
The builtwith module exposes arrays of technologies a website was built upon. Web intermediaries (i.e. WAFs or proxies) may block other technical aspects for security reasons. For instance, let's try to examine Bloomberg's website:
import builtwith
builtwith.parse("http://www.bloomberg.com")
Below is a screenshot of the output.
Before we scrape the name and price of the index on Bloomberg, we need to check the robots.txt file of our target before we take any further steps. To remind us again of its purpose, I initially explained that robots.txt is a file composed of suggestions for crawlers (or web robots).
For this project, our target is Bloomberg. Let's check out Bloomberg's restrictions for web crawlers.
Just type the following in the URL bar:
http://www.bloomberg.com/robots.txt
This simply sends a request to the web server to retrieve the robots.txt file. Now let's check the web robots rules of Bloomberg.
Crawling our target
With the help of the robots.txt file, we know where we can allow our crawler to download HTML pages and where we should not allow our crawler to tread. As good web citizens, it is advisable to obey bots rules. However, it is not impossible for us to allow our crawler to venture into restricted areas. Bloomberg may ban our IP address for an hour or a longer period.
For this project, it is not necessary to download/crawl a specific web page. We can use the Firebug extension to check or inspect the page where we want to scrape our data from.
Now let's use Firebug to find the HTML related to the index's name and price of the day. Similarly, we can use the browser's native inspector, too. I prefer to use both.
Just hover or move your cursor to the index name and click the related HTML tags. We can see the name of the index, which should look similar to the one below.
Let's examine the sitemap file of our target
Sitemap files simply provide links to the updated content of a website. Therefore, they allow crawlers to efficiently crawl web pages of interest. Below are a number of Bloomberg's sitemap files.
Let's scrape data from our target
Now it is time to scrape a particular piece of data from our target site. There are diverse ways to scrape data from a web page: we can use CSS selectors, regular expressions, and the popular BeautifulSoup module. Among these three approaches, we are going to use BeautifulSoup. The name we use to install the package via pip is quite different from the name we use when we import it.
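The naming split the author describes, as a minimal sketch:
# install with:  pip install beautifulsoup4
# but import as:
from bs4 import BeautifulSoup   # there is no "import beautifulsoup4"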
When choosing between text editors, you can choose to use Sublime, Atom, or Notepad++. Others are available, too.
Now let's assume we don't have BeautifulSoup. Let's install BeautifulSoup via pip:
pip install beautifulsoup4
Next, we import urllib2 and Beautiful Soup:

# import libraries
import urllib2                  # urllib2 is used to fetch url(s) via urlopen()
from bs4 import BeautifulSoup   # when importing 'Beautiful Soup' don't add 4
from datetime import datetime   # functions and classes for working with dates and times

Now, let's define and declare the variable for the URL (the exact URL in the original article was lost in this copy; the one below is a representative Bloomberg index quote page):
quote_page = 'https://www.bloomberg.com/quote/SPX:IND'
Here, we include the datetime module as:
t1 = datetime.now()
Now let's use Python's urllib2 to get the HTML page of the URL stored in the quote_page variable and return it to the variable page:
page = urllib2.urlopen(quote_page)
Afterward, let's parse the HTML page with the BeautifulSoup module:
soup = BeautifulSoup(page, 'html.parser')
Since we know where the name and price of the index are in the HTML tags via the screenshot, it is not difficult to query the specific class name:
name_store = soup.find('h1', attrs={'class': 'name'})
Now let's get the name of the index by getting its text via the dot notation and thereafter store it in the variable data_name:
data_name = name_store.text.strip()
Same as the index name, let's do the same for the index price:
price_store = soup.find('div', attrs={'class': 'price'})
price = price_store.text
We print the data_name of our index as:
print data_name
Also, print the price:
print price
Finally, we calculate the total running time of the program as follows:
t2 = datetime.now()
total = t2 - t1
print 'scraping completed in ', total

Below is the full source code:

import urllib2
from bs4 import BeautifulSoup
from datetime import datetime

quote_page = 'https://www.bloomberg.com/quote/SPX:IND'
t1 = datetime.now()
page = urllib2.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
name_store = soup.find('h1', attrs={'class': 'name'})
data_name = name_store.text.strip()
print data_name
price_store = soup.find('div', attrs={'class': 'price'})
price = price_store.text
print price
t2 = datetime.now()
total = t2 - t1
print 'scraping completed in ', total

This should be the output.

Wrapping up
And with that, we just learned how to scrape data with Beautiful Soup, which, in my opinion, is quite easy in comparison with regular expressions and CSS selectors. And just so you are aware, this is just one of the ways of scraping data with Python.
And just to reiterate this important point: web scraping is legal in one context, and illegal in another. Before you scrape data from a webpage, it is strictly advisable to check the bot rules of a website by appending /robots.txt at the end of the URL, like this: www.bloomberg.com/robots.txt. Your IP address may be restricted till further notice if you fail to do so. Hope you'll use the skill you just learned appropriately, cheers!

Author's Bio
Michael is a budding cybersecurity engineer and a technical writer based in Ghana, Africa. He works with AmericanEyes Security as a part-time WordPress security consultant. He is interested in Ruby on Rails and PHP security.

Frequently Asked Questions about Beautiful Soup

What is BeautifulSoup used for?
Beautiful Soup is a Python library that is used for web scraping purposes to pull the data out of HTML and XML files.
It creates a parse tree from page source code that can be used to extract data in a hierarchical and more readable manner. (Dec 4, 2020)

Is using BeautifulSoup illegal?
For example, it is legal when the data extracted is composed of directories and telephone listings for personal use. However, if the extracted data is for commercial use, without the consent of the owner, this would be illegal.

Is BeautifulSoup faster than selenium?
One of the ways to compare Selenium vs BeautifulSoup is the performance of both. … This is a con of BeautifulSoup because the programmer needs to know multithreading properly. Scrapy is faster than both as it makes use of asynchronous system calls. So it's faster and performs better than other libraries. (Feb 10, 2021)