Node.js Headless Browsers
puppeteer/puppeteer: Headless Chrome Node.js API – GitHub
API | FAQ | Contributing | Troubleshooting
Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.
What can I do?
Most things that you can do manually in the browser can be done using Puppeteer! Here are a few examples to get you started:
Generate screenshots and PDFs of pages.
Crawl a SPA (Single-Page Application) and generate pre-rendered content (i.e. “SSR” (Server-Side Rendering)).
Automate form submission, UI testing, keyboard input, etc.
Create an up-to-date, automated testing environment. Run your tests directly in the latest version of Chrome using the latest JavaScript and browser features.
Capture a timeline trace of your site to help diagnose performance issues.
Test Chrome Extensions.
Give it a spin:
Getting Started
Installation
To use Puppeteer in your project, run:
npm i puppeteer
# or "yarn add puppeteer"
Note: When you install Puppeteer, it downloads a recent version of Chromium (~170MB Mac, ~282MB Linux, ~280MB Win) that is guaranteed to work with the API. To skip the download, download into another path, or download a different browser, see Environment variables.
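For instance, the documented PUPPETEER_SKIP_CHROMIUM_DOWNLOAD environment variable skips the download entirely (a sketch; you would then point Puppeteer at an existing browser install):

```shell
# Skip the bundled Chromium download during install
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true npm i puppeteer
```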
puppeteer-core
Since version 1.7.0 we publish the puppeteer-core package,
a version of Puppeteer that doesn’t download any browser by default.
npm i puppeteer-core
# or "yarn add puppeteer-core"
puppeteer-core is intended to be a lightweight version of Puppeteer for launching an existing browser installation or for connecting to a remote one. Be sure that the version of puppeteer-core you install is compatible with the
browser you intend to connect to.
See puppeteer vs puppeteer-core.
Usage
Puppeteer follows the latest maintenance LTS version of Node.
Note: Prior to v1.18.1, Puppeteer required at least Node v6.4.0. Versions from v1.18.1 to v2.1.0 rely on
Node 8.9.0+. Starting from v3.0 Puppeteer starts to rely on Node 10.18.1+. All examples below use async/await which is only supported in Node v7.6.0 or greater.
Puppeteer will be familiar to people using other browser testing frameworks. You create an instance
of Browser, open pages, and then manipulate them with Puppeteer’s API.
Example – navigating to https://example.com and saving a screenshot as example.png.
Save file as example.js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'example.png' });

  await browser.close();
})();
Execute script on the command line:
node example.js
Puppeteer sets an initial page size to 800×600px, which defines the screenshot size. The page size can be customized with Page.setViewport().
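For instance, a larger viewport for full-HD screenshots might look like this (a sketch using the Page.setViewport API; the dimensions are arbitrary):

```javascript
// Assumes an existing `page` from browser.newPage()
await page.setViewport({
  width: 1920,
  height: 1080,
  deviceScaleFactor: 1, // 2 would emulate a HiDPI/Retina display
});
```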
Example – create a PDF.
await page.goto('https://news.ycombinator.com', {
  waitUntil: 'networkidle2',
});
await page.pdf({ path: 'hn.pdf', format: 'a4' });
See Page.pdf() for more information about creating PDFs.
Example – evaluate script in the context of the page
// Get the "viewport" of the page, as reported by the page.
const dimensions = await page.evaluate(() => {
  return {
    width: document.documentElement.clientWidth,
    height: document.documentElement.clientHeight,
    deviceScaleFactor: window.devicePixelRatio,
  };
});

console.log('Dimensions:', dimensions);
See Page.evaluate() for more information on evaluate and related methods like evaluateOnNewDocument and exposeFunction.
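As a quick sketch of exposeFunction (the function name add is illustrative): it registers a Node.js function that page code can call via window:

```javascript
// Expose a Node.js function to the page as window.add (illustrative name)
await page.exposeFunction('add', (a, b) => a + b);

// Call it from page context; the call is proxied back to Node.js
const sum = await page.evaluate(async () => {
  return await window.add(2, 3);
});
console.log(sum); // 5
```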
Default runtime settings
1. Uses Headless mode
Puppeteer launches Chromium in headless mode. To launch a full version of Chromium, set the headless option when launching a browser:
const browser = await puppeteer.launch({ headless: false }); // default is true
2. Runs a bundled version of Chromium
By default, Puppeteer downloads and uses a specific version of Chromium so its API
is guaranteed to work out of the box. To use Puppeteer with a different version of Chrome or Chromium,
pass in the executable’s path when creating a Browser instance:
const browser = await puppeteer.launch({ executablePath: '/path/to/Chrome' });
You can also use Puppeteer with Firefox Nightly (experimental support). See Puppeteer.launch() for more information.
See this article for a description of the differences between Chromium and Chrome. This article describes some differences for Linux users.
3. Creates a fresh user profile
Puppeteer creates its own browser user profile which it cleans up on every run.
Resources
API Documentation
Examples
Community list of Puppeteer resources
Debugging tips
Turn off headless mode – sometimes it’s useful to see what the browser is
displaying. Instead of launching in headless mode, launch a full version of
the browser using headless: false:
const browser = await puppeteer.launch({ headless: false });
Slow it down – the slowMo option slows down Puppeteer operations by the
specified amount of milliseconds. It’s another way to help see what’s going on.
const browser = await puppeteer.launch({
  headless: false,
  slowMo: 250, // slow down by 250ms
});
Capture console output – You can listen for the console event.
This is also handy when debugging code in page.evaluate():
page.on('console', (msg) => console.log('PAGE LOG:', msg.text()));
await page.evaluate(() => console.log(`url is ${location.href}`));
Use debugger in application code browser
There are two execution contexts: Node.js, which is running the test code, and the browser,
which is running the application code being tested. This lets you debug code in the
application code browser; i.e. code inside evaluate().
Use {devtools: true} when launching Puppeteer:
const browser = await puppeteer.launch({ devtools: true });
Change default test timeout:
jest: jest.setTimeout(100000);
jasmine: jasmine.DEFAULT_TIMEOUT_INTERVAL = 100000;
mocha: this.timeout(100000); (don't forget to change the test to use function and not '=>')
Add an evaluate statement with debugger inside / add debugger to an existing evaluate statement:
await page.evaluate(() => {
  debugger;
});
The test will now stop executing in the above evaluate statement, and Chromium will stop in debug mode.
Use debugger in Node.js
This will let you debug test code. For example, you can step over await page.click() in the script and see the click happen in the application code browser.
Note that you won't be able to run await page.click() in the
DevTools console due to this Chromium bug. So if
you want to try something out, you have to add it to your test file.
Add debugger; to your test, eg:
debugger;
await page.click('a[target=_blank]');
Set headless to false
Run node --inspect-brk, eg node --inspect-brk node_modules/.bin/jest tests
In Chrome open chrome://inspect/#devices and click inspect
In the newly opened test browser, type F8 to resume test execution
Now your debugger will be hit and you can debug in the test browser
Enable verbose logging – internal DevTools protocol traffic
will be logged via the debug module under the puppeteer namespace.
# Basic verbose logging
env DEBUG="puppeteer:*" node script.js

# Protocol traffic can be rather noisy. This example filters out all Network domain messages
env DEBUG="puppeteer:*" env DEBUG_COLORS=true node script.js 2>&1 | grep -v '"Network'
Debug your Puppeteer (node) code easily, using ndb
npm install -g ndb (or even better, use npx!)
add a debugger to your Puppeteer (node) code
add ndb (or npx ndb) before your test command. For example:
ndb jest or ndb mocha (or npx ndb jest / npx ndb mocha)
debug your test inside chromium like a boss!
Usage with TypeScript
We have recently completed a migration to move the Puppeteer source code from JavaScript to TypeScript, and as of version 7.1 we ship our own built-in type definitions.
If you are on a version older than 7, we recommend installing the Puppeteer type definitions from the DefinitelyTyped repository:
npm install --save-dev @types/puppeteer
The types that you’ll see appearing in the Puppeteer source code are based off the great work of those who have contributed to the @types/puppeteer package. We really appreciate the hard work those people put in to providing high quality TypeScript definitions for Puppeteer’s users.
Contributing to Puppeteer
Check out contributing guide to get an overview of Puppeteer development.
Q: Who maintains Puppeteer?
The Chrome DevTools team maintains the library, but we’d love your help and expertise on the project!
See Contributing.
Q: What is the status of cross-browser support?
Official Firefox support is currently experimental. The ongoing collaboration with Mozilla aims to support common end-to-end testing use cases, for which developers expect cross-browser coverage. The Puppeteer team needs input from users to stabilize Firefox support and to bring missing APIs to our attention.
From Puppeteer v2.0 onwards you can specify puppeteer.launch({product: 'firefox'}) to run your Puppeteer scripts in Firefox Nightly, without any additional custom patches. While an older experiment required a patched version of Firefox, the current approach works with "stock" Firefox.
We will continue to collaborate with other browser vendors to bring Puppeteer support to browsers such as Safari.
This effort includes exploration of a standard for executing cross-browser commands (instead of relying on the non-standard DevTools Protocol used by Chrome).
Q: What are Puppeteer’s goals and principles?
The goals of the project are:
Provide a slim, canonical library that highlights the capabilities of the DevTools Protocol.
Provide a reference implementation for similar testing libraries. Eventually, these other frameworks could adopt Puppeteer as their foundational layer.
Grow the adoption of headless/automated browser testing.
Help dogfood new DevTools Protocol features and catch bugs!
Learn more about the pain points of automated browser testing and help fill those gaps.
We adapt Chromium principles to help us drive product decisions:
Speed: Puppeteer has almost zero performance overhead over an automated page.
Security: Puppeteer operates off-process with respect to Chromium, making it safe to automate potentially malicious pages.
Stability: Puppeteer should not be flaky and should not leak memory.
Simplicity: Puppeteer provides a high-level API that’s easy to use, understand, and debug.
Q: Is Puppeteer replacing Selenium/WebDriver?
No. Both projects are valuable for very different reasons:
Selenium/WebDriver focuses on cross-browser automation; its value proposition is a single standard API that works across all major browsers.
Puppeteer focuses on Chromium; its value proposition is richer functionality and higher reliability.
That said, you can use Puppeteer to run tests against Chromium, e.g. using the community-driven jest-puppeteer. While this probably shouldn't be your only testing solution, it does have a few good points compared to WebDriver:
Puppeteer requires zero setup and comes bundled with the Chromium version it works best with, making it very easy to start with. At the end of the day, it’s better to have a few tests running chromium-only, than no tests at all.
Puppeteer has event-driven architecture, which removes a lot of potential flakiness. There’s no need for evil “sleep(1000)” calls in puppeteer scripts.
Puppeteer runs headless by default, which makes it fast to run. Puppeteer v1.5.0 also exposes browser contexts, making it possible to efficiently parallelize test execution.
Puppeteer shines when it comes to debugging: flip the “headless” bit to false, add “slowMo”, and you’ll see what the browser is doing. You can even open Chrome DevTools to inspect the test environment.
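The browser contexts mentioned above can be sketched like this; createIncognitoBrowserContext is the Puppeteer API for isolated, incognito-like sessions:

```javascript
const browser = await puppeteer.launch();

// Each context has its own cookies/cache, like a fresh incognito window
const contextA = await browser.createIncognitoBrowserContext();
const contextB = await browser.createIncognitoBrowserContext();

// Pages in different contexts can run tests in parallel without
// interfering with each other's session state
const pageA = await contextA.newPage();
const pageB = await contextB.newPage();
```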
Q: Why doesn’t Puppeteer work with an arbitrary version of Chromium?
We see Puppeteer as an indivisible entity with Chromium. Each version of Puppeteer bundles a specific version of Chromium – the only version it is guaranteed to work with.
This is not an artificial constraint: A lot of work on Puppeteer is actually taking place in the Chromium repository. Here’s a typical story:
A Puppeteer bug is reported. It turns out to be an issue with the DevTools protocol, so we fix it in Chromium. Once the upstream fix lands, we roll the updated Chromium into Puppeteer.
However, oftentimes it is desirable to use Puppeteer with the official Google Chrome rather than Chromium. For this to work, you should install a puppeteer-core version that corresponds to the Chrome version.
For example, in order to drive Chrome 71 with puppeteer-core, use the chrome-71 npm tag:
npm install puppeteer-core@chrome-71
Q: Which Chromium version does Puppeteer use?
Look for the chromium revision pinned by your installed Puppeteer version. To find the corresponding Chromium commit and version number, search for the revision prefixed by an r in OmahaProxy’s “Find Releases” section.
Q: Which Firefox version does Puppeteer use?
Since Firefox support is experimental, Puppeteer downloads the latest Firefox Nightly when the PUPPETEER_PRODUCT environment variable is set to firefox. That’s also why the pinned value for firefox is simply latest: Puppeteer isn’t tied to a particular Firefox version.
To fetch Firefox Nightly as part of Puppeteer installation:
PUPPETEER_PRODUCT=firefox npm i puppeteer
Q: What’s considered a “Navigation”?
From Puppeteer’s standpoint, “navigation” is anything that changes a page’s URL.
Aside from regular navigation where the browser hits the network to fetch a new document from the web server, this includes anchor navigations and History API usage.
With this definition of “navigation,” Puppeteer works seamlessly with single-page applications.
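For instance, waiting for a client-side (History API) route change can be sketched like this (the selector is hypothetical; waitForNavigation and click are Puppeteer Page APIs):

```javascript
// Click a link that triggers a pushState route change and wait for
// Puppeteer to register it as a navigation
await Promise.all([
  page.waitForNavigation(),       // resolves for SPA route changes too
  page.click('a.spa-route-link'), // hypothetical selector
]);
console.log(page.url());          // reflects the new route
```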
Q: What’s the difference between a “trusted” and “untrusted” input event?
In browsers, input events could be divided into two big groups: trusted vs. untrusted.
Trusted events: events generated by users interacting with the page, e.g. using a mouse or keyboard.
Untrusted events: events generated by Web APIs, e.g. the document.createEvent or element.click() methods.
Websites can distinguish between these two groups:
using the Event.isTrusted flag
sniffing for accompanying events. For example, every trusted ‘click’ event is preceded by ‘mousedown’ and ‘mouseup’ events.
For automation purposes it’s important to generate trusted events. All input events generated with Puppeteer are trusted and fire proper accompanying events. If, for some reason, one needs an untrusted event, it’s always possible to hop into a page context with page.evaluate and generate a fake event:
await page.evaluate(() => {
  document.querySelector('button[type=submit]').click();
});
Q: What features does Puppeteer not support?
You may find that Puppeteer does not behave as expected when controlling pages that incorporate audio and video. (For example, video playback/screenshots is likely to fail.) There are two reasons for this:
Puppeteer is bundled with Chromium — not Chrome — and so by default, it inherits all of Chromium’s media-related limitations. This means that Puppeteer does not support licensed formats such as AAC or H.264. (However, it is possible to force Puppeteer to use a separately-installed version of Chrome instead of Chromium via the executablePath option to puppeteer.launch(). You should only use this configuration if you need an official release of Chrome that supports these media formats.)
Since Puppeteer (in all configurations) controls a desktop version of Chromium/Chrome, features that are only supported by the mobile version of Chrome are not supported. This means that Puppeteer does not support HTTP Live Streaming (HLS).
Q: I am having trouble installing / running Puppeteer in my test environment. Where should I look for help?
We have a troubleshooting guide for various operating systems that lists the required dependencies.
Q: Chromium gets downloaded on every npm ci run. How can I cache the download?
The default download path is node_modules/puppeteer/.local-chromium. However, you can change that path with the PUPPETEER_DOWNLOAD_PATH environment variable.
Puppeteer uses that variable to resolve the Chromium executable location during launch, so you don’t need to specify PUPPETEER_EXECUTABLE_PATH as well.
For example, if you wish to keep the Chromium download in a custom directory (here ~/chromium, an example path):
export PUPPETEER_DOWNLOAD_PATH=~/chromium
npm ci
# by default the Chromium executable path is inferred
# from the download path
npm test
# a new run of npm ci will check for the existence of
# Chromium in ~/chromium
Q: How do I try/test a prerelease version of Puppeteer?
You can check out this repo or install the latest prerelease from npm:
npm i --save puppeteer@next
Please note that prereleases may be unstable and contain bugs.
Q: I have more questions! Where do I ask?
There are many ways to get help on Puppeteer:
bugtracker
Stack Overflow
Make sure to search these channels before posting your question.
Web Scraping with a Headless Browser: A Puppeteer Tutorial
In this article, we’ll see how easy it is to perform web scraping (web automation) with the somewhat non-traditional method of using a headless browser.
What Is a Headless Browser and Why Is It Needed?
The last few years have seen the web evolve from simplistic websites built with bare HTML and CSS. Now there are much more interactive web apps with beautiful UIs, which are often built with frameworks such as Angular or React. In other words, nowadays JavaScript rules the web, including almost everything you interact with on websites.
For our purposes, JavaScript is a client-side language. The server returns JavaScript files or scripts injected into an HTML response, and the browser processes them. Now, this is a problem if we are doing some kind of web scraping or web automation, because more often than not, the content that we’d like to see or scrape is actually rendered by JavaScript code and is not accessible from the raw HTML response that the server delivers.
As we mentioned above, browsers do know how to process the JavaScript and render beautiful web pages. Now, what if we could leverage this functionality for our scraping needs and had a way to control browsers programmatically? That’s exactly where headless browser automation steps in!
Headless? Excuse me? Yes, this just means there’s no graphical user interface (GUI). Instead of interacting with visual elements the way you normally would—for example with a mouse or touch device—you automate use cases with a command-line interface (CLI).
Headless Chrome and Puppeteer
There are many web scraping tools that can be used for headless browsing, like PhantomJS, or headless Firefox using Selenium. But today we’ll be exploring headless Chrome via Puppeteer, as it’s a relatively newer player, released at the start of 2018. Editor’s note: It’s worth mentioning Intoli’s Remote Browser, another new player, but that will have to be a subject for another article.
What exactly is Puppeteer? It’s a library which provides a high-level API to control headless Chrome or Chromium or to interact with the DevTools protocol. It’s maintained by the Chrome DevTools team and an awesome open-source community.
Enough talking—let’s jump into the code and explore the world of how to automate web scraping using Puppeteer’s headless browsing!
Preparing the Environment
First of all, you’ll need to have Node.js 8+ installed on your machine. You can install it here, or if you are a CLI lover like me and like to work on Ubuntu, follow these commands:
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get install -y nodejs
You’ll also need some packages that may or may not be available on your system. Just to be safe, try to install those:
sudo apt-get install -yq --no-install-recommends libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 libnss3
Setup Headless Chrome and Puppeteer
I’d recommend installing Puppeteer with npm, as it’ll also include the stable up-to-date Chromium version that is guaranteed to work with the library.
Run this command in your project root directory:
npm i puppeteer --save
Note: This might take a while as Puppeteer will need to download and install Chromium in the background.
Okay, now that we are all set and configured, let the fun begin!
Using Puppeteer API for Automated Web Scraping
Let’s start our Puppeteer tutorial with a basic example. We’ll write a script that will cause our headless browser to take a screenshot of a website of our choice.
Create a new file in your project directory named screenshot.js and open it in your favorite code editor.
First, let’s import the Puppeteer library in your script:
const puppeteer = require('puppeteer');
Next up, let’s take the URL from command-line arguments:
const url = process.argv[2];

if (!url) {
    throw "Please provide a URL as the first argument";
}
Now, we need to keep in mind that Puppeteer is a promise-based library: It performs asynchronous calls to the headless Chrome instance under the hood. Let’s keep the code clean by using async/await. For that, we need to define an async function first and put all the Puppeteer code in there:
async function run () {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url);
    await page.screenshot({path: 'screenshot.png'});

    browser.close();
}

run();
Altogether, the final code looks like this:
const puppeteer = require('puppeteer');

const url = process.argv[2];

if (!url) {
    throw "Please provide a URL as the first argument";
}

async function run () {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url);
    await page.screenshot({path: 'screenshot.png'});

    browser.close();
}

run();
You can run it by executing the following command in the root directory of your project:
node screenshot.js https://github.com
Wait a second, and boom! Our headless browser just created a file named screenshot.png, and you can see the GitHub homepage rendered in it. Great, we have a working Chrome web scraper!
Let’s stop for a minute and explore what happens in our run() function above.
First, we launch a new headless browser instance, then we open a new page (tab) and navigate to the URL provided in the command-line argument. Lastly, we use Puppeteer’s built-in method for taking a screenshot, and we only need to provide the path where it should be saved. We also need to make sure to close the headless browser after we are done with our automation.
Now that we’ve covered the basics, let’s move on to something a bit more complex.
A Second Puppeteer Scraping Example
For the next part of our Puppeteer tutorial, let’s say we want to scrape down the newest articles from Hacker News.
Create a new file named ycombinator-scraper.js and paste in the following code snippet:
const puppeteer = require('puppeteer');

function run () {
    return new Promise(async (resolve, reject) => {
        try {
            const browser = await puppeteer.launch();
            const page = await browser.newPage();
            await page.goto("https://news.ycombinator.com/");
            let urls = await page.evaluate(() => {
                let results = [];
                let items = document.querySelectorAll('a.storylink');
                items.forEach((item) => {
                    results.push({
                        url: item.getAttribute('href'),
                        text: item.innerText,
                    });
                });
                return results;
            })

            browser.close();
            return resolve(urls);
        } catch (e) {
            return reject(e);
        }
    })
}

run().then(console.log).catch(console.error);
Okay, there’s a bit more going on here compared with the previous example.
The first thing you might notice is that the run() function now returns a promise so the async prefix has moved to the promise function’s definition.
We’ve also wrapped all of our code in a try-catch block so that we can handle any errors that cause our promise to be rejected.
And finally, we’re using Puppeteer’s built-in method called evaluate(). This method lets us run custom JavaScript code as if we were executing it in the DevTools console. Anything returned from that function gets resolved by the promise. This method is very handy when it comes to scraping information or performing custom actions.
The code passed to the evaluate() method is pretty basic JavaScript that builds an array of objects, each having url and text fields that represent the story URLs we see on Hacker News.
The output of the script looks something like this (but with 30 entries, originally):
[ { url: '...',
    text: 'Bias detectives: the researchers striving to make algorithms fair' },
  { url: '...',
    text: 'Mino Games Is Hiring Programmers in Montreal' },
  { url: '...',
    text: 'A Beginner\'s Guide to Firewalling with pf' },
  // ...
  { url: '...',
    text: 'ChaCha20 and Poly1305 for IETF Protocols' } ]
Pretty neat, I’d say!
Okay, let’s move forward. We only had 30 items returned, while there are many more available—they are just on other pages. We need to click on the “More” button to load the next page of results.
Let’s modify our script a bit to add a support for pagination:
function run (pagesToScrape) {
    return new Promise(async (resolve, reject) => {
        try {
            if (!pagesToScrape) {
                pagesToScrape = 1;
            }
            // ...browser/page setup and page.goto as before...
            let currentPage = 1;
            let urls = [];
            while (currentPage <= pagesToScrape) {
                let newUrls = await page.evaluate(() => {
                    // ...same scraping logic as in the previous example...
                    return results;
                });
                urls = urls.concat(newUrls);
                if (currentPage < pagesToScrape) {
                    await Promise.all([
                        await page.click('a.morelink'),
                        await page.waitForSelector('a.storylink')
                    ])
                }
                currentPage++;
            }
            // ...browser.close() and resolve(urls) as before...
        } catch (e) {
            return reject(e);
        }
    })
}

run(5).then(console.log).catch(console.error);
Let’s review what we did here:
We added a single argument called pagesToScrape to our main run() function. We’ll use this to limit how many pages our script will scrape.
There is one more new variable named currentPage, which represents the number of the results page we are currently looking at. It’s set to 1 initially. We also wrapped our evaluate() function in a while loop, so that it keeps running as long as currentPage is less than or equal to pagesToScrape.
We added the block for moving to a new page and waiting for the page to load before restarting the while loop.
You’ll notice that we used the page.click() method to have the headless browser click on the “More” button. We also used the waitForSelector() method to make sure our logic is paused until the page contents are loaded.
Both of those are high-level Puppeteer API methods ready to use out-of-the-box.
One of the problems you’ll probably encounter during scraping with Puppeteer is waiting for a page to load. Hacker News has a relatively simple structure and it was fairly easy to wait for its page load completion. For more complex use cases, Puppeteer offers a wide range of built-in functionality, which you can explore in the API documentation on GitHub.
This is all pretty cool, but our Puppeteer tutorial hasn’t covered optimization yet. Let’s see how can we make Puppeteer run faster.
Optimizing Our Puppeteer Script
The general idea is to not let the headless browser do any extra work. This might include loading images, applying CSS rules, firing XHR requests, etc.
As with other tools, optimization of Puppeteer depends on the exact use case, so keep in mind that some of these ideas might not be suitable for your project. For instance, if we had avoided loading images in our first example, our screenshot might not have looked how we wanted.
Anyway, these optimizations can be accomplished either by caching the assets on the first request, or canceling the HTTP requests outright as they are initiated by the website.
Let’s see how caching works first.
You should be aware that when you launch a new headless browser instance, Puppeteer creates a temporary directory for its profile. It is removed when the browser is closed and is not available for use when you fire up a new instance—thus all the images, CSS, cookies, and other objects stored will not be accessible anymore.
We can force Puppeteer to use a custom path for storing data like cookies and cache, which will be reused every time we run it again—until they expire or are manually deleted.
const browser = await puppeteer.launch({
    userDataDir: './data',
});
This should give us a nice bump in performance, as lots of CSS and images will be cached in the data directory upon the first request, and Chrome won’t need to download them again and again.
However, those assets will still be used when rendering the page. In our scraping needs of Y Combinator news articles, we don’t really need to worry about any visuals, including the images. We only care about bare HTML output, so let’s try to block every request.
Luckily, Puppeteer is pretty cool to work with, in this case, because it comes with support for custom hooks. We can provide an interceptor on every request and cancel the ones we don’t really need.
The interceptor can be defined in the following way:
await page.setRequestInterception(true);

page.on('request', (request) => {
    if (request.resourceType() === 'document') {
        request.continue();
    } else {
        request.abort();
    }
});
As you can see, we have full control over the requests that get initiated. We can write custom logic to allow or abort specific requests based on their resourceType. We also have access to lots of other data, like request.url, so we can block only specific URLs if we want.
In the above example, we only allow requests with the resource type of “document” to get through our filter, meaning that we will block all images, CSS, and everything else besides the original HTML response.
Here’s our final code:
await page.waitForSelector('a.storylink');
await page.waitForSelector('a.morelink'),
Stay Safe with Rate Limits
Headless browsers are very powerful tools. They’re able to perform almost any kind of web automation task, and Puppeteer makes this even easier. Despite all the possibilities, we must comply with a website’s terms of service to make sure we don’t abuse the system.
Since this aspect is more architecture-related, I won’t cover this in depth in this Puppeteer tutorial. That said, the most basic way to slow down a Puppeteer script is to add a sleep command to it:
await page.waitFor(5000);
This statement will force your script to sleep for five seconds (5000 ms). You can put this anywhere before browser.close().
Just like limiting your use of third-party services, there are lots of other more robust ways to control your usage of Puppeteer. One example would be building a queue system with a limited number of workers. Every time you want to use Puppeteer, you’d push a new task into the queue, but there would only be a limited number of workers able to work on the tasks in it. This is a fairly common practice when dealing with third-party API rate limits and can be applied to Puppeteer web data scraping as well.
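A minimal version of such a queue can be sketched in plain Node.js (createQueue and the concurrency limit are our own illustrative names, not a Puppeteer API); each pushed task would wrap one Puppeteer job:

```javascript
// A minimal concurrency-limited task queue in plain Node.js.
function createQueue(concurrency) {
  let active = 0;
  const waiting = [];

  function next() {
    if (active >= concurrency || waiting.length === 0) return;
    active++;
    const { task, resolve, reject } = waiting.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next();
      });
  }

  return {
    // Push a task (a function returning a promise); it runs once a slot frees up
    push(task) {
      return new Promise((resolve, reject) => {
        waiting.push({ task, resolve, reject });
        next();
      });
    },
  };
}

// Usage sketch: allow at most two concurrent scraping jobs;
// each task would launch (or reuse) a Puppeteer page, e.g.
// queue.push(() => scrapePage('https://example.com'));
const queue = createQueue(2);
```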
Puppeteer’s Place in the Fast-moving Web
In this Puppeteer tutorial, I’ve demonstrated its basic functionality as a web-scraping tool. However, it has much wider use cases, including headless browser testing, PDF generation, and performance monitoring, among many others.
Web technologies are moving forward fast. Some websites are so dependent on JavaScript rendering that it’s become nearly impossible to execute simple HTTP requests to scrape them or perform some sort of automation. Luckily, headless browsers are becoming more and more accessible to handle all of our automation needs, thanks to projects like Puppeteer and the awesome teams behind them!
Zombie.js | Zombie
Insanely fast, full-stack, headless browser testing using Node.js
Insanely fast, headless full-stack testing using Node.js
Zombie 6.x is tested to work with Node 8 or later.
If you need to use Node 6, consider using Zombie 5.x.
The Bite
If you’re going to write an insanely fast, headless browser, how can you not
call it Zombie? Zombie it is.
Zombie.js is a lightweight framework for testing client-side JavaScript code in
a simulated environment. No browser required.
Let’s try to sign up to a page and see what happens:
const Browser = require('zombie');

// We're going to make requests to http://example.com/signup
// Which will be routed to our test server localhost:3000
Browser.localhost('example.com', 3000);

describe('User visits signup page', function() {
  const browser = new Browser();

  before(function(done) {
    browser.visit('/signup', done);
  });

  describe('submits form', function() {
    before(function(done) {
      browser
        .fill('email', 'zombie@underworld.dead')
        .fill('password', 'eat-the-living')
        .pressButton('Sign Me Up!', done);
    });

    it('should be successful', function() {
      browser.assert.success();
    });

    it('should see welcome page', function() {
      browser.assert.text('title', 'Welcome To Brains Depot');
    });
  });
});
This example uses the Mocha testing
framework, but Zombie will work with other testing frameworks. Since Mocha
supports promises, we can also write the test like this:
before(function() {
  return browser.visit('/signup');
});

before(function() {
  browser.fill('password', 'eat-the-living');
  return browser.pressButton('Sign Me Up!');
});
Well, that was easy.
Table of Contents
Installing
Browser
Assertions
Cookies
Tabs
Debugging
Events
Resources
Pipeline
To install Zombie.js you will need Node.js:
$ npm install zombie --save-dev
browser.assert
Methods for making assertions against the browser, such as
browser.assert.success().
See Assertions for detailed discussion.
browser.referer
You can use this to set the HTTP Referer header.
browser.resources
Access to history of retrieved resources. See Resources for
detailed discussion.
browser.pipeline
Access to the pipeline for making requests and processing responses. Use this
to add new request/response handlers to the pipeline for a single browser instance,
or use Pipeline.addHandler to modify all instances. See
Pipeline.
browser.tabs
Array of all open tabs (windows). Allows you to operate on more than one open
window at a time.
See Tabs for detailed discussion.
browser.proxy
The proxy option takes a URL so you can tell Zombie what protocol, host and
port to use. It also supports Basic authentication, e.g.:
browser.proxy = 'http://me:secret@myproxy:8080'
browser.errors
Collection of errors accumulated by the browser while loading pages and executing
scripts.
browser.source
Returns a string of the source HTML from the last response.
browser.html(element)
Returns a string of HTML for a selected HTML element. If the argument element is undefined, the function returns a string of the source HTML from the last response.
Example uses:
browser.html('div');
browser.html('div#contain');
browser.html('.selector');
browser.html();
Browser. localhost(host, port)
Allows you to make requests against a named domain and HTTP/S port, and will
route it to the test server running on localhost and unprivileged port.
For example, if you want to call your application “example.com”, and redirect
traffic from port 80 to the test server that’s listening on port 3000, you can
do this:
Browser.localhost('example.com', 3000)
browser.visit('/path', function() {
  // browser.location.href => 'http://example.com/path'
});
The first time you call Browser.localhost, if you didn’t specify Browser.site, it will set it to the hostname (in the above example,
“example.com”). Whenever you call browser.visit with a relative URL, it
appends it to Browser.site, so you don’t need to repeat the full URL in every
test case.
You can use wildcards to map domains and all hosts within these domains, and you
can specify the source port to map protocols other than HTTP. For example:
// HTTP requests for www.example.com will be answered by localhost
// server running on port 3000
Browser.localhost('*.example.com', 3000);

// HTTPS requests will be answered by localhost server running on port 3001
Browser.localhost('*.example.com:443', 3001);
The underlying implementation hacks net.Socket.connect, so it will route any
TCP connection made by the Node application, whether Zombie or any other
library. It does not affect other processes running on your machine.
Browser.extend
You can use this to customize new browser instances for your specific needs.
The extension function is called for every new browser instance, and can change
properties, bind methods, register event listeners, etc.
Browser.extend(function(browser) {
  browser.on('console', function(level, message) {
    logger.log(message);
  });
  browser.on('log', function(level, message) {
    logger.log(message);
  });
});
Browser.evaluate
You can use this to evaluate JavaScript in the context of the browser window:
Browser.evaluate('document.querySelector("a")')
To make life easier, Zombie introduces a set of convenience assertions that you
can access directly from the browser object. For example, to check that a page
loaded successfully:
browser.assert.text('title', 'My Awesome Site');
browser.assert.element('#main');
These assertions are available from the browser object since they operate on a
particular browser instance – generally dependent on the currently open window,
or document loaded in that window.
Many assertions require an element/elements as the first argument, for example,
to compare the text content (browser.assert.text), or attribute value
(browser.assert.attribute). You can pass one of the following values:
An HTML element or an array of HTML elements
A CSS selector string (e.g. “h2”, “#first-name”)
Many assertions take an expected value and compare it against the actual value.
For example, browser.assert.text compares the expected value against the text contents
of one or more elements. The expected value can be one of:
A JavaScript primitive value (string, number)
undefined or null are used to assert the lack of value
A regular expression
A function that is called with the actual value and returns true if the
assertion is true
Any other object will be matched using assert.deepEqual
Note that in some cases the DOM specification indicates that lack of value is an
empty string, not null/undefined.
All assertions take an optional last argument that is the message to show if the
assertion fails. Better yet, use a testing framework like
Mocha that has good diff support and
don’t worry about these messages.
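To illustrate the accepted expected-value forms with Zombie’s assertions (the selectors and values here are hypothetical):

```javascript
// Each call shows one accepted "expected" form from the list above
browser.assert.text('title', 'My Awesome Site');    // primitive string
browser.assert.text('h1', /welcome/i);              // regular expression
browser.assert.text('.count', (t) => +t > 0);       // predicate function
browser.assert.attribute('.promo', 'hidden', null); // asserts lack of value
```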
Available Assertions
The following assertions are available:
browser.assert.attribute(selection, name, expected, message)
Asserts the named attribute of the selected element(s) has the expected value.
Fails if no element found.
browser.assert.attribute('form', 'method', 'post');
browser.assert.attribute('form', 'action', '/customer/new');
// Disabled with no attribute value, i.e.