BeautifulSoup Parser – lxml
BeautifulSoup is a Python package for working with real-world and broken HTML,
just like lxml.html. As of version 4.x, it can use
different HTML parsers,
each of which has its advantages and disadvantages (see the link).

lxml can make use of BeautifulSoup as a parser backend, just like BeautifulSoup
can employ lxml as a parser. When using BeautifulSoup from lxml, however, the
default is to use Python's integrated HTML parser in the html.parser
module.
In order to make use of the HTML5 parser of
html5lib instead, it is better
to go directly through the html5parser module in lxml.html.
A very nice feature of BeautifulSoup is its excellent support for encoding
detection which can provide better results for real-world HTML pages that
do not (correctly) declare their encoding.
lxml interfaces with BeautifulSoup through the lxml.html.soupparser
module. It provides three main functions: fromstring() and parse()
to parse a string or file using BeautifulSoup into an lxml.html
document, and convert_tree() to convert an existing BeautifulSoup
tree into a list of top-level Elements.
Contents
Parsing with the soupparser
Entity handling
Using soupparser as a fallback
Using only the encoding detection
The functions fromstring() and parse() behave as known from
lxml. The first returns a root Element, the latter returns an
ElementTree.
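convert_tree() is not shown in the examples below; here is a minimal sketch of
its use, assuming BeautifulSoup 4 is installed as bs4:

>>> from bs4 import BeautifulSoup
>>> from lxml.html.soupparser import convert_tree

>>> soup = BeautifulSoup('<html><body><p>Hi all</p></body></html>', 'html.parser')
>>> elements = convert_tree(soup)   # list of top-level Elements
>>> [el.tag for el in elements]
['html']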
There is also a legacy module called lxml.html.ElementSoup, which
mimics the interface provided by Fredrik Lundh's ElementSoup
module. Note that the soupparser module was added in lxml 2.0.3.
Previous versions of lxml 2.x only have the ElementSoup module.
Here is a document full of tag soup, similar to, but not quite like, HTML:
>>> tag_soup = '''
... <meta/><head><title>Hello</head><body onload=crash()>Hi all<p>'''
All you need to do is pass it to the fromstring() function:
>>> from lxml.html.soupparser import fromstring
>>> root = fromstring(tag_soup)
To see what we have here, you can serialise it:
>>> from lxml.etree import tostring
>>> print(tostring(root, pretty_print=True).strip())
<html>
  <meta/>
  <head>
    <title>Hello</title>
  </head>
  <body onload="crash()">Hi all<p/></body>
</html>
Not quite what you’d expect from an HTML page, but, well, it was broken
already, right? The parser did its best, and so now it’s a tree.
To control how Element objects are created during the conversion
of the tree, you can pass a makeelement factory function to
parse() and fromstring(). By default, this is based on the
HTML parser defined in lxml.html.
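For instance, the makeelement() method of a parser object can serve as such a
factory. This is a minimal sketch; the HTMLParser instance here is used purely
as a source of the factory function:

>>> from lxml.html import HTMLParser
>>> html_parser = HTMLParser()
>>> root = fromstring(tag_soup, makeelement=html_parser.makeelement)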
For a quick comparison, libxml2 2.9.1 parses the same tag soup as
follows. The only difference is that libxml2 tries harder to adhere
to the structure of an HTML document and moves misplaced tags where
they (likely) belong. Note, however, that the result can vary between
parser versions.
By default, the BeautifulSoup parser also replaces the entities it
finds by their character equivalent.
>>> tag_soup = '&copy;&euro;&#45;&#245;&#445;'

>>> body = fromstring(tag_soup).find('.//body')
>>> body.text
u'\xa9\u20ac-\xf5\u01bd'
If you want them back on the way out, you can just serialise with the
default encoding, which is ‘US-ASCII’.
>>> tostring(body)
'<body>&#169;&#8364;-&#245;&#445;</body>'

>>> tostring(body, method="html")
'<body>&#169;&#8364;-&#245;&#445;</body>'

Any other encoding will output the respective byte sequences.

>>> tostring(body, encoding="utf-8")
'<body>\xc2\xa9\xe2\x82\xac-\xc3\xb5\xc6\xbd</body>'

>>> tostring(body, method="html", encoding="utf-8")
'<body>\xc2\xa9\xe2\x82\xac-\xc3\xb5\xc6\xbd</body>'

>>> tostring(body, encoding='unicode')
u'<body>\xa9\u20ac-\xf5\u01bd</body>'

>>> tostring(body, method="html", encoding='unicode')
u'<body>\xa9\u20ac-\xf5\u01bd</body>'
The downside of using this parser is that it is much slower than
the C-implemented HTML parser of libxml2 that lxml uses. So if
performance matters, you might want to consider using soupparser
only as a fallback for certain cases.
One common problem of lxml's parser is that it might not get the
encoding right in cases where the document contains a <meta> tag
at the wrong place. In this case, you can exploit the fact that lxml
serialises much faster than most other HTML libraries for Python.
Just serialise the document to unicode and if that gives you an
exception, re-parse it with BeautifulSoup to see if that works
better.
>>> tag_soup = '''\
... <meta http-equiv="Content-Type"
...       content="text/html;charset=utf-8" />
... <html>
...   <head>
...     <title>Hello W\xc3\xb6rld!</title>
...   </head>
...   <body>Hi all</body>
... </html>'''

>>> import lxml.html
>>> import lxml.html.soupparser

>>> root = lxml.html.fromstring(tag_soup)
>>> try:
...     ignore = tostring(root, encoding='unicode')
... except UnicodeDecodeError:
...     root = lxml.html.soupparser.fromstring(tag_soup)
Even if you prefer lxml’s fast HTML parser, you can still benefit
from BeautifulSoup’s support for encoding detection in the
UnicodeDammit class. Once it succeeds in decoding the data,
you can simply pass the resulting Unicode string into lxml’s parser.
>>> try:
...     from bs4 import UnicodeDammit             # BeautifulSoup 4
...
...     def decode_html(html_string):
...         converted = UnicodeDammit(html_string)
...         if not converted.unicode_markup:
...             raise UnicodeDecodeError(
...                 "Failed to detect encoding, tried [%s]",
...                 ', '.join(converted.tried_encodings))
...         # print converted.original_encoding
...         return converted.unicode_markup
...
... except ImportError:
...     from BeautifulSoup import UnicodeDammit   # BeautifulSoup 3
...
...     def decode_html(html_string):
...         converted = UnicodeDammit(html_string, isHTML=True)
...         if not converted.unicode:
...             raise UnicodeDecodeError(
...                 "Failed to detect encoding, tried [%s]",
...                 ', '.join(converted.triedEncodings))
...         # print converted.originalEncoding
...         return converted.unicode

>>> root = lxml.html.fromstring(decode_html(tag_soup))
Parsing XML and HTML with lxml
lxml provides a very simple and powerful API for parsing XML and HTML. It
supports one-step parsing as well as step-by-step parsing using an
event-driven API (currently only for XML).
Contents
Parsers
Parser options
Error log
Parsing HTML
Doctype information
The target parser interface
The feed parser interface
Incremental event parsing
Event types
Modifying the tree
Selective tag events
Comments and PIs
Events with custom targets
iterparse and iterwalk
iterwalk
Python unicode strings
Serialising to Unicode strings
The usual setup procedure:
>>> from lxml import etree
The following examples also use StringIO or BytesIO to show how to parse
from files and file-like objects. Both are available in the io module:
from io import StringIO, BytesIO
Parsers are represented by parser objects. There is support for parsing both
XML and (broken) HTML. Note that XHTML is best parsed as XML, parsing it with
the HTML parser can lead to unexpected results. Here is a simple example for
parsing XML from an in-memory string:
>>> xml = '<a xmlns="test"><b xmlns="test"/></a>'
>>> root = etree.fromstring(xml)
>>> etree.tostring(root)
b'<a xmlns="test"><b xmlns="test"/></a>'
To read from a file or file-like object, you can use the parse() function,
which returns an ElementTree object:
>>> tree = etree.parse(StringIO(xml))
>>> etree.tostring(tree.getroot())
b'<a xmlns="test"><b xmlns="test"/></a>'
Note how the parse() function reads from a file-like object here. If
parsing is done from a real file, it is more common (and also somewhat more
efficient) to pass a filename:
>>> tree = etree.parse("doc/test.xml")
lxml can parse from a local file, an HTTP URL or an FTP URL. It also
auto-detects and reads gzip-compressed XML files (.gz).
If you want to parse from memory and still provide a base URL for the document
(e.g. to support relative paths in an XInclude), you can pass the base_url
keyword argument:
>>> root = etree.fromstring(xml, base_url="http://where.it/is/from.xml")
The parsers accept a number of setup options as keyword arguments. The above
example is easily extended to clean up namespaces during parsing:
>>> parser = etree.XMLParser(ns_clean=True)
>>> tree = etree.parse(StringIO(xml), parser)
>>> etree.tostring(tree.getroot())
b'<a xmlns="test"><b/></a>'
The keyword arguments in the constructor are mainly based on the libxml2
parser configuration. A DTD will also be loaded if validation or attribute
default values are requested.
Available boolean keyword arguments:
attribute_defaults – read the DTD (if referenced by the document) and add
the default attributes from it
dtd_validation – validate while parsing (if a DTD was referenced)
load_dtd – load and parse the DTD while parsing (no validation is performed)
no_network – prevent network access when looking up external
documents (on by default)
ns_clean – try to clean up redundant namespace declarations
recover – try hard to parse through broken XML
remove_blank_text – discard blank text nodes between tags, also known as
ignorable whitespace. This is best used together with a DTD or schema
(which tells data and noise apart), otherwise a heuristic will be applied.
remove_comments – discard comments
remove_pis – discard processing instructions
strip_cdata – replace CDATA sections by normal text content (on by
default)
resolve_entities – replace entities by their text value (on by
default)
huge_tree – disable security restrictions and support very deep trees
and very long text content (only affects libxml2 2.7+)
compact – use compact storage for short text content (on by default)
collect_ids – collect XML IDs in a hash table while parsing (on by default).
Disabling this can substantially speed up parsing of documents with many
different IDs if the hash lookup is not used afterwards.
Other keyword arguments:
encoding – override the document encoding
target – a parser target object that will receive the parse events
(see The target parser interface)
schema – an XMLSchema to validate against (see validation)
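Several of these options can be combined in a single parser. A small sketch
(the exact result of blank-text removal depends on the heuristic described
for remove_blank_text above):

>>> parser = etree.XMLParser(remove_blank_text=True, remove_comments=True)
>>> root = etree.XML("<root>  <!-- noise -->  <a>text</a>  </root>", parser)
>>> etree.tostring(root)
b'<root><a>text</a></root>'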
Parsers have an error_log property that lists the errors and
warnings of the last parser run:
>>> parser = etree.XMLParser()
>>> print(len(parser.error_log))
0

>>> tree = etree.XML("<root>\n</b>", parser)
Traceback (most recent call last):
  ...
lxml.etree.XMLSyntaxError: Opening and ending tag mismatch: root line 1 and b, line 2, column 5...

>>> print(len(parser.error_log))
1

>>> error = parser.error_log[0]
>>> print(error.message)
Opening and ending tag mismatch: root line 1 and b
>>> print(error.line)
2
>>> print(error.column)
5
Each entry in the log has the following properties:
message: the message text
domain: the domain ID (see the lxml.etree.ErrorDomains class)
type: the message type ID (see the lxml.etree.ErrorTypes class)
level: the log level ID (see the lxml.etree.ErrorLevels class)
line: the line at which the message originated (if applicable)
column: the character column at which the message originated (if applicable)
filename: the name of the file in which the message originated (if applicable)
For convenience, there are also three properties that provide readable
names for the ID values:
domain_name
type_name
level_name
To filter for a specific kind of message, use the different
filter_*() methods on the error log (see the
lxml.etree._ListErrorLog class).
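For example, filter_from_errors() keeps only entries of error level or worse.
A minimal sketch, reusing the broken input from above (the exact level name
shown in the output is an assumption):

>>> parser = etree.XMLParser()
>>> try:
...     etree.XML("<root>\n</b>", parser)
... except etree.XMLSyntaxError:
...     pass
>>> for entry in parser.error_log.filter_from_errors():
...     print("%s: %s" % (entry.level_name, entry.message))
FATAL: Opening and ending tag mismatch: root line 1 and b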
HTML parsing is similarly simple. The parsers have a recover
keyword argument that the HTMLParser sets by default. It lets libxml2
try its best to return a valid HTML tree with all content it can
manage to parse. It will not raise an exception on parser errors.
You should use libxml2 version 2.6.21 or newer to take advantage of
this feature.
>>> broken_html = "<html><head><title>test<body><h1>page title</h3>"

>>> parser = etree.HTMLParser()
>>> tree = etree.parse(StringIO(broken_html), parser)

>>> result = etree.tostring(tree.getroot(),
...                         pretty_print=True, method="html")
>>> print(result)
<html>
  <head>
    <title>test</title>
  </head>
  <body>
    <h1>page title</h1>
  </body>
</html>
lxml also has an HTML() function, similar to the XML() shortcut known from
ElementTree:
>>> html = etree.HTML(broken_html)
>>> result = etree.tostring(html, pretty_print=True, method="html")
The support for parsing broken HTML depends entirely on libxml2’s recovery
algorithm. It is not the fault of lxml if you find documents that are so
heavily broken that the parser cannot handle them. There is also no guarantee
that the resulting tree will contain all data from the original document. The
parser may have to drop seriously broken parts when struggling to keep
parsing. Especially misplaced meta tags can suffer from this, which may lead
to encoding problems.
Note that the result is a valid HTML tree, but it may not be a
well-formed XML tree. For example, XML forbids double hyphens in
comments, which the HTML parser will happily accept in recovery mode.
Therefore, if your goal is to serialise an HTML document as an
XML/XHTML document after parsing, you may have to apply some manual
preprocessing first.
Also note that the HTML parser is meant to parse HTML documents. For
XHTML documents, use the XML parser, which is namespace aware.
The use of the libxml2 parsers makes some additional information available at
the API level. Currently, ElementTree objects can access the DOCTYPE
information provided by a parsed document, as well as the XML version and the
original encoding. Since lxml 3.5, the doctype references are mutable.
>>> pub_id = "-//W3C//DTD XHTML 1.0 Transitional//EN"
>>> sys_url = "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"
>>> doctype_string = '<!DOCTYPE html PUBLIC "%s" "%s">' % (pub_id, sys_url)
>>> xml_header = '<?xml version="1.0" encoding="ascii"?>'
>>> xhtml = xml_header + doctype_string + '<html><body></body></html>'

>>> tree = etree.parse(StringIO(xhtml))
>>> docinfo = tree.docinfo
>>> print(docinfo.public_id)
-//W3C//DTD XHTML 1.0 Transitional//EN
>>> print(docinfo.system_url)
http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd
>>> docinfo.doctype == doctype_string
True

>>> print(docinfo.xml_version)
1.0
>>> print(docinfo.encoding)
ascii

>>> docinfo.system_url = None
>>> docinfo.public_id = None
>>> print(etree.tostring(tree))
b'<!DOCTYPE html>\n<html><body/></html>'
As in ElementTree, and similar to a SAX event handler, you can pass
a target object to the parser:
>>> class EchoTarget(object):
...     def start(self, tag, attrib):
...         print("start %s %r" % (tag, dict(attrib)))
...     def end(self, tag):
...         print("end %s" % tag)
...     def data(self, data):
...         print("data %r" % data)
...     def comment(self, text):
...         print("comment %s" % text)
...     def close(self):
...         print("close")
...         return "closed!"

>>> parser = etree.XMLParser(target = EchoTarget())

>>> result = etree.XML("<element>some<!--comment-->text</element>",
...                    parser)
start element {}
data u'some'
comment comment
data u'text'
end element
close

>>> print(result)
closed!
It is important for the close() method to reset the parser target
to a usable state, so that you can reuse the parser as often as you
like:
Starting with lxml 2.3, the close() method will also be called in
the error case. This diverges from the behaviour of ElementTree, but
allows target objects to clean up their state in all situations, so
that the parser can reuse them afterwards.
>>> class CollectorTarget(object):
...     def __init__(self):
...         self.events = []
...     def start(self, tag, attrib):
...         self.events.append("start %s %r" % (tag, dict(attrib)))
...     def end(self, tag):
...         self.events.append("end %s" % tag)
...     def data(self, data):
...         self.events.append("data %r" % data)
...     def comment(self, text):
...         self.events.append("comment %s" % text)
...     def close(self):
...         self.events.append("close")
...         return "closed!"

>>> collector = CollectorTarget()
>>> parser = etree.XMLParser(target = collector)

>>> result = etree.XML("<root>some<a>text</a>",
...                    parser)
Traceback (most recent call last):
  ...
lxml.etree.XMLSyntaxError: Opening and ending tag mismatch...

>>> for event in collector.events:
...     print(event)
start root {}
data u'some'
start a {}
data u'text'
end a
close
Note that the parser does not build a tree when using a parser
target. The result of the parser run is whatever the target object
returns from its close() method. If you want to return an XML
tree here, you have to create it programmatically in the target
object. An example for a parser target that builds a tree is the
TreeBuilder:
>>> parser = etree.XMLParser(target = etree.TreeBuilder())

>>> result = etree.XML("<element>some<!--comment-->text</element>",
...                    parser)

>>> print(result.tag)
element
>>> print(result[0].text)
comment
Since lxml 2.0, the parsers have a feed parser interface that is
compatible to the ElementTree parsers. You can use it to feed data
into the parser in a controlled step-by-step way.
In lxml.etree, you can use both interfaces to a parser at the same
time: the parse() or XML() functions, and the feed parser
interface. Both are independent and will not conflict (except if used
in conjunction with a parser target object as described above).
To start parsing with a feed parser, just call its feed() method
to feed it some data.
>>> parser = etree.XMLParser()

>>> for data in ('<?xml versio', 'n="1.0"?', '><roo', 't><a', '/></root>'):
...     parser.feed(data)
When you are done parsing, you must call the close() method to
retrieve the root Element of the parse result document, and to unlock the
parser:
>>> root = parser.close()

>>> print(root.tag)
root
>>> print(root[0].tag)
a
If you do not call close(), the parser will stay locked and
subsequent feeds will keep appending data, usually resulting in a non
well-formed document and an unexpected parser error. So make sure you
always close the parser after use, also in the exception case.
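A simple way to ensure this is a try/finally block. This is a minimal sketch;
the way the incomplete-document error is swallowed here is an assumption about
how you may want to handle it:

>>> parser = etree.XMLParser()
>>> try:
...     parser.feed("<root><a/>")       # parsing may fail midway
... finally:
...     try:
...         root = parser.close()       # always unlock the parser
...     except etree.XMLSyntaxError:
...         pass                        # document was incomplete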
Another way of achieving the same step-by-step parsing is by writing your own
file-like object that returns a chunk of data on each read() call. Where
the feed parser interface allows you to actively pass data chunks into the
parser, a file-like object passively responds to read() requests of the
parser itself. Depending on the data source, either way may be more natural.
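Such a file-like object only needs a read() method. A minimal sketch with
canned byte chunks:

>>> class DataSource:
...     data = [b"<roo", b"t><", b"a/", b"><", b"/root>"]
...     def read(self, requested_size):
...         try:
...             return self.data.pop(0)   # serve the next chunk
...         except IndexError:
...             return b''                # empty bytes signal EOF

>>> tree = etree.parse(DataSource())
>>> etree.tostring(tree)
b'<root><a/></root>'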
Note that the feed parser has its own error log called
feed_error_log. Errors in the feed parser do not show up in the
normal error_log and vice versa.
You can also combine the feed parser interface with the target parser:
>>> parser = etree.XMLParser(target = EchoTarget())

>>> parser.feed("<eleme")
>>> parser.feed("nt>some text</elem")
start element {}
data u'some text'
>>> parser.feed("ent>")
end element

>>> result = parser.close()
close
>>> print(result)
closed!
Again, this prevents the automatic creation of an XML tree and leaves
all the event handling to the target object. The close() method
of the parser forwards the return value of the target’s close()
method.
In Python 3.4, the xml.etree.ElementTree package gained an extension
to the feed parser interface that is implemented by the XMLPullParser
class. It additionally allows processing parse events after each
incremental parsing step, by calling the .read_events() method and
iterating over the result. This is most useful for non-blocking execution
environments where data chunks arrive one after the other and should be
processed as far as possible in each step.
The same feature is available in lxml 3.3. The basic usage is as follows:
>>> parser = etree.XMLPullParser(events=('start', 'end'))

>>> def print_events(parser):
...     for action, element in parser.read_events():
...         print('%s: %s' % (action, element.tag))

>>> parser.feed('<root>some text')
>>> print_events(parser)
start: root
>>> print_events(parser)    # well, no more events, as before

>>> parser.feed('<child><a />')
>>> print_events(parser)
start: child
start: a
end: a

>>> parser.feed('</child></root>')
>>> print_events(parser)
end: child
end: root
Just like the normal feed parser, the XMLPullParser builds a tree in
memory (and you should always call the .close() method when done with
parsing):

>>> root = parser.close()
>>> etree.tostring(root)
b'<root>some text<child><a/></child></root>'
However, since the parser provides incremental access to that tree,
you can explicitly delete content that you no longer need once you
have processed it. Read the section on Modifying the tree below
to see what you can do here and what kind of modifications you should
avoid.
In lxml, it is enough to call the .read_events() method once, as
the iterator it returns can be reused when new events are available.
Also, as known from other iterators in lxml, you can pass a tag
argument that selects which parse events are returned by the
.read_events() iterator.
The parse events are tuples (event-type, object). The event types
supported by ElementTree and lxml.etree are the strings 'start', 'end',
‘start-ns’ and ‘end-ns’. The ‘start’ and ‘end’ events represent opening
and closing elements. They are accompanied by the respective Element
instance. By default, only ‘end’ events are generated, whereas the
example above requested the generation of both ‘start’ and ‘end’ events.
The ‘start-ns’ and ‘end-ns’ events notify about namespace declarations.
They do not come with Elements. Instead, the value of the ‘start-ns’
event is a tuple (prefix, namespaceURI) that designates the beginning
of a prefix-namespace mapping. The corresponding end-ns event does
not have a value (None). It is common practice to use a list as namespace
stack and pop the last entry on the ‘end-ns’ event.
>>> def print_events(events):
...     for action, obj in events:
...         if action in ('start', 'end'):
...             print("%s: %s" % (action, obj.tag))
...         elif action == 'start-ns':
...             print("%s: %s" % (action, obj))
...         else:
...             print(action)

>>> event_types = ("start", "end", "start-ns", "end-ns")
>>> parser = etree.XMLPullParser(event_types)
>>> events = parser.read_events()

>>> parser.feed('<root><element>')
>>> print_events(events)
start: root
start: element
>>> parser.feed('text</element><element>')
>>> print_events(events)
end: element
start: element
>>> parser.feed('<empty-element xmlns="http://testns/" />')
>>> print_events(events)
start-ns: ('', 'http://testns/')
start: {http://testns/}empty-element
end: {http://testns/}empty-element
end-ns
>>> parser.feed('</element></root>')
>>> print_events(events)
end: element
end: root
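As a sketch of the namespace-stack practice mentioned above (the helper list
is hypothetical, not part of the lxml API):

>>> ns_stack = []
>>> parser = etree.XMLPullParser(("start-ns", "end-ns"))
>>> parser.feed('<root xmlns="http://testns/"><a /></root>')
>>> for action, obj in parser.read_events():
...     if action == 'start-ns':
...         ns_stack.append(obj)   # push the (prefix, URI) pair
...     else:
...         ns_stack.pop()         # 'end-ns': drop the latest mapping
>>> root = parser.close()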
You can modify the element and its descendants when handling the
‘end’ event. To save memory, for example, you can remove subtrees
that are no longer needed:
>>> parser = etree.XMLPullParser()
>>> events = parser.read_events()

>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element>')
>>> for action, elem in events:
...     print('%s: %d' % (elem.tag, len(elem)))  # processing
...     elem.clear(keep_tail=True)               # delete children
element: 0
child: 0
element: 1
>>> parser.feed('<empty-element xmlns="http://testns/" /></root>')
>>> for action, elem in events:
...     print('%s: %d' % (elem.tag, len(elem)))  # processing
...     elem.clear(keep_tail=True)               # delete children
{http://testns/}empty-element: 0
root: 3

>>> root = parser.close()
>>> etree.tostring(root)
b'<root/>'
WARNING: During the ‘start’ event, any content of the element,
such as the descendants, following siblings or text, is not yet
available and should not be accessed. Only attributes are guaranteed
to be set. During the ‘end’ event, the element and its descendants
can be freely modified, but its following siblings should not be
accessed. During either of the two events, you must not modify or
move the ancestors (parents) of the current element. You should also
avoid moving or discarding the element itself. The golden rule is: do
not touch anything that will have to be touched again by the parser
later on.
If you have elements with a long list of children in your XML file and want
to save more memory during parsing, you can clean up the preceding siblings
of the current element:
>>> for event, element in parser.read_events():
...     # ... do something with the element
...     element.clear(keep_tail=True)   # clean up children
...     while element.getprevious() is not None:
...         del element.getparent()[0]  # clean up preceding siblings
The while loop deletes multiple siblings in a row. This is only necessary
if you skipped over some of them using the tag keyword argument.
Otherwise, a simple if should do. The more selective your tag is,
however, the more thought you will have to put into finding the right way to
clean up the elements that were skipped. Therefore, it is sometimes easier to
traverse all elements and do the tag selection by hand in the event handler
code.
As an extension over ElementTree, lxml.etree's XMLPullParser accepts a tag
keyword argument just like iterparse(tag). This restricts events to a
specific tag or namespace:

>>> parser = etree.XMLPullParser(tag="element")

>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element>')
>>> parser.feed('<empty-element xmlns="http://testns/" /></root>')

>>> for action, elem in parser.read_events():
...     print("%s: %s" % (action, elem.tag))
end: element
end: element

>>> event_types = ("start", "end")
>>> parser = etree.XMLPullParser(event_types, tag="{http://testns/}*")

>>> parser.feed('<root><element key="value">text</element>')
>>> parser.feed('<element><child /></element>')
>>> parser.feed('<empty-element xmlns="http://testns/" /></root>')

>>> for action, elem in parser.read_events():
...     print("%s: %s" % (action, elem.tag))
start: {http://testns/}empty-element
end: {http://testns/}empty-element
You can combine the pull parser with a parser target. In that case,
it is the target’s responsibility to generate event values. Whatever
it returns from its start() and end() methods will be returned
by the pull parser as the second item of the parse events tuple.
>>> class Target(object):
...     def start(self, tag, attrib):
...         print('-> start(%s)' % tag)
...         return '>>START: %s<<' % tag
...     def end(self, tag):
...         print('-> end(%s)' % tag)
...         return '>>END: %s<<' % tag
...     def close(self):
...         print('-> close()')
...         return "CLOSED!"

>>> event_types = ('start', 'end')
>>> parser = etree.XMLPullParser(event_types, target=Target())

>>> parser.feed('<root><child1 /><child2 /></root>')
-> start(root)
-> start(child1)
-> end(child1)
-> start(child2)
-> end(child2)
-> end(root)

>>> for action, value in parser.read_events():
...     print('%s: %s' % (action, value))
start: >>START: root<<
start: >>START: child1<<
end: >>END: child1<<
start: >>START: child2<<
end: >>END: child2<<
end: >>END: root<<

>>> print(parser.close())
-> close()
CLOSED!
As you can see, the event values do not even have to be Element objects.
The target is generally free to decide how it wants to create an XML tree
or whatever else it wants to make of the parser callbacks. In many cases,
however, you will want to make your custom target inherit from the
TreeBuilder class in order to have it build a tree that you can process
normally. The start() and end() methods of TreeBuilder return
the Element object that was created, so you can override them and modify
the input or output according to your needs. Here is an example that
filters attributes before they are being added to the tree:
>>> class AttributeFilter(etree.TreeBuilder):
...     def start(self, tag, attrib):
...         attrib = dict(attrib)
...         if 'evil' in attrib:
...             del attrib['evil']
...         return super(AttributeFilter, self).start(tag, attrib)

>>> parser = etree.XMLPullParser(target=AttributeFilter())
>>> parser.feed('<root><child1 test="123" /><child2 evil="YES" /></root>')

>>> for action, element in parser.read_events():
...     print('%s: %s(%r)' % (action, element.tag, dict(element.attrib)))
end: child1({'test': '123'})
end: child2({})
end: root({})
As known from ElementTree, the iterparse() utility function
returns an iterator that generates parser events for an XML file (or
file-like object), while building the tree. You can think of it as
a blocking wrapper around the XMLPullParser that automatically and
incrementally reads data from the input file for you and provides a
single iterator for them:
>>> xml = '''
... <root>
...   <element key='value'>text</element>
...   <element>text</element>tail
...   <empty-element xmlns="http://testns/" />
... </root>
... '''

>>> context = etree.iterparse(StringIO(xml))
>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
end: element
end: element
end: {http://testns/}empty-element
end: root
After parsing, the resulting tree is available through the root property
of the iterator:
>>> context.root.tag
'root'
The other event types can be activated with the events keyword argument:
>>> events = (“start”, “end”)
>>> context = etree.iterparse(StringIO(xml), events=events)
iterparse() also supports the tag argument for selective event
iteration and several other parameters that control the parser setup.
The tag argument can be a single tag or a sequence of tags.
You can also use it to parse HTML input by passing html=True.
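A small sketch of both, reusing the xml string from above and a trivial HTML
snippet (the exact events shown follow from those inputs):

>>> for action, elem in etree.iterparse(StringIO(xml), tag=("element", "root")):
...     print("%s: %s" % (action, elem.tag))
end: element
end: element
end: root

>>> html_file = BytesIO(b"<html><body><p>some text</p></body></html>")
>>> for action, elem in etree.iterparse(html_file, html=True):
...     print("%s: %s" % (action, elem.tag))
end: p
end: body
end: html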
For convenience, lxml also provides an iterwalk() function.
It behaves exactly like iterparse(), but works on Elements and
ElementTrees. Here is an example for a tree parsed by iterparse():
>>> f = StringIO(xml)
>>> context = etree.iterparse(
...     f, events=("start", "end"), tag="element")

>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: element
end: element
start: element
end: element

>>> root = context.root
And now we can take the resulting in-memory tree and iterate over it
using iterwalk() to get the exact same events without parsing the
input again:
>>> context = etree.iterwalk(
...     root, events=("start", "end"), tag="element")

>>> for action, elem in context:
...     print("%s: %s" % (action, elem.tag))
start: element
end: element
start: element
end: element
In order to avoid wasting time on uninteresting parts of the tree, the iterwalk
iterator can be instructed to skip over an entire subtree with its
.skip_subtree() method.
>>> root = etree.XML('''
... <root>
...   <a> <b /> </a>
...   <c />
... </root>
... ''')

>>> context = etree.iterwalk(root, events=("start", "end"))

>>> for action, elem in context:
...     if action == 'start' and elem.tag == 'a':
...         context.skip_subtree()  # ignore <b>
...     else:
...         print("%s: %s" % (action, elem.tag))
start: root
start: c
end: c
end: root
Note that .skip_subtree() only has an effect when handling start or
start-ns events.
lxml.etree has broader support for Python unicode strings than the ElementTree
library. First of all, where ElementTree would raise an exception, the
parsers in lxml.etree can handle unicode strings straight away. This is most
helpful for XML snippets embedded in source code using the XML()
function:
>>> root = etree.XML( u'<test> \uf8d1 + \uf8d2 </test>' )
This requires, however, that unicode strings do not specify a conflicting
encoding themselves and thus lie about their real encoding:
>>> etree.XML( u'<?xml version="1.0" encoding="ASCII"?>\n' +
...            u'<test> \uf8d1 + \uf8d2 </test>' )
Traceback (most recent call last):
  ...
ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.
Similarly, you will get errors when you try the same with HTML data in a
unicode string that specifies a charset in a meta tag of the header. You
should generally avoid converting XML/HTML data to unicode before passing it
into the parsers. It is both slower and error prone.
To serialize the result, you would normally use the tostring()
module function, which serializes to plain ASCII by default or a
number of other byte encodings if asked for:
>>> etree.tostring(root)
b'<test> &#63697; + &#63698; </test>'

>>> etree.tostring(root, encoding='UTF-8', xml_declaration=False)
b'<test> \xef\xa3\x91 + \xef\xa3\x92 </test>'
As an extension, lxml.etree recognises the name 'unicode' as an argument
to the encoding parameter to build a Python unicode representation of a tree:
>>> etree.tostring(root, encoding='unicode')
u'<test> \uf8d1 + \uf8d2 </test>'

>>> el = etree.Element("test")
>>> etree.tostring(el, encoding='unicode')
u'<test/>'

>>> subel = etree.SubElement(el, "subtest")
>>> etree.tostring(el, encoding='unicode')
u'<test><subtest/></test>'

>>> tree = etree.ElementTree(el)
>>> etree.tostring(tree, encoding='unicode')
u'<test><subtest/></test>'
The result of tostring(encoding=’unicode’) can be treated like any
other Python unicode string and then passed back into the parsers.
However, if you want to save the result to a file or pass it over the
network, you should use write() or tostring() with a byte
encoding (typically UTF-8) to serialize the XML. The main reason is
that unicode strings returned by tostring(encoding=’unicode’) are
not byte streams and they never have an XML declaration to specify
their encoding. These strings are most likely not parsable by other
XML libraries.
For normal byte encodings, the tostring() function automatically
adds a declaration as needed that reflects the encoding of the
returned string. This makes it possible for other parsers to
correctly parse the XML byte stream. Note that using tostring()
with UTF-8 is also considerably faster in most cases.
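If you need such a byte stream, a minimal sketch of writing a tree with an
explicit byte encoding and declaration might look like this (the single-quote
style of the declaration is lxml's own output convention):

>>> tree = etree.ElementTree(etree.XML("<root><a/></root>"))
>>> out = BytesIO()
>>> tree.write(out, encoding="utf-8", xml_declaration=True)
>>> out.getvalue()
b"<?xml version='1.0' encoding='utf-8'?>\n<root><a/></root>"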