• April 13, 2024

Jsdom Web Scraping

Web Scraping and Parsing HTML in Node.js with jsdom – Twilio

The internet has a wide variety of information for human consumption. But this data is often difficult to access programmatically if it doesn’t come in the form of a dedicated REST API. With tools like jsdom, you can scrape and parse this data directly from web pages to use for your projects and applications.
Let’s use the example of needing MIDI data to train a neural network that can generate classic Nintendo-sounding music. In order to do this, we’ll need a set of MIDI music from old Nintendo games. Using jsdom we can scrape this data from the Video Game Music Archive.
Getting started and setting up dependencies
Before moving on, you will need to make sure you have an up-to-date version of Node.js and npm installed.
Navigate to the directory where you want this code to live and run the following command in your terminal to create a package for this project:

npm init --yes

The --yes argument runs through all of the prompts that you would otherwise have to fill out or skip. Now we have a package.json for our app.
For making HTTP requests to get data from the web page we will use the Got library, and for parsing through the HTML we'll use jsdom.
Run the following command in your terminal to install these libraries:
npm install got@10.4.0 jsdom@16.2.2
jsdom is a pure-JavaScript implementation of many web standards, making it a familiar tool to use for lots of JavaScript developers. Let’s dive into how to use it.
Using Got to retrieve data to use with jsdom
First let’s write some code to grab the HTML from the web page, and look at how we can start parsing through it. The following code will send a GET request to the web page we want, and will create a jsdom object with the HTML from that page, which we’ll name dom:
const fs = require('fs'); // used later to write the downloaded files to disk
const got = require('got');
const jsdom = require("jsdom");
const { JSDOM } = jsdom;

const vgmUrl = 'https://www.vgmusic.com/music/console/nintendo/nes';

got(vgmUrl).then(response => {
  const dom = new JSDOM(response.body);
  console.log(dom.window.document.querySelector('title').textContent);
}).catch(err => {
  console.log(err);
});
When you pass the JSDOM constructor a string, you will get back a JSDOM object, from which you can access a number of usable properties such as window. As seen in this code, you can navigate through the HTML and retrieve DOM elements for the data you want using a query selector.
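To see this in isolation, here is a minimal self-contained sketch: the HTML string is made up, but the constructor and query work exactly as in the snippet above.

const { JSDOM } = require("jsdom");

// Parse an in-memory HTML snippet and query it like a browser DOM.
const dom = new JSDOM('<!DOCTYPE html><p class="greeting">Hello, jsdom!</p>');
console.log(dom.window.document.querySelector('.greeting').textContent); // "Hello, jsdom!"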
For example, querySelector('title').textContent will get you the text inside of the <title> tag on the page. If you save this code to a file named index.js and run it with the command node index.js, it will log the title of the web page to the console.

Using CSS Selectors with jsdom

If you want to get more specific in your query, there are a variety of selectors you can use to parse through the HTML. Two of the most common ones are to search for elements by class or ID. If you wanted to get a div with the ID of "menu" you would use querySelectorAll('#menu'), and if you wanted all of the header columns in the table of VGM MIDIs, you'd use querySelectorAll('th').

What we want on this page are the hyperlinks to all of the MIDI files we need to download. We can start by getting every link on the page using querySelectorAll('a'). Add the following to your code in index.js:

got(vgmUrl).then(response => {
  const dom = new JSDOM(response.body);
  dom.window.document.querySelectorAll('a').forEach(link => {
    console.log(link.href);
  });
}).catch(err => {
  console.log(err);
});

This code logs the URL of every link on the page. We're able to look through all elements from a given selector using the forEach function. Iterating through every link on the page is great, but we're going to need to get a little more specific than that if we want to download all of the MIDI files.

Filtering through HTML elements

Before writing more code to parse the content that we want, let's first take a look at the HTML that's rendered by the browser. Every web page is different, and sometimes getting the right data out of them requires a bit of creativity, pattern recognition, and experimentation.

Our goal is to download a bunch of MIDI files, but there are a lot of duplicate tracks on this webpage, as well as remixes of songs. We only want one of each song, and because our ultimate goal is to use this data to train a neural network to generate accurate Nintendo music, we won't want to train it on user-created remixes.

When you're writing code to parse through a web page, it's usually helpful to use the developer tools available to you in most modern browsers. If you right-click on the element you're interested in, you can inspect the HTML behind that element to get more insight.

You can write filter functions to fine-tune which data you want from your selectors. These are functions which loop through all elements for a given selector and return true or false based on whether they should be included in the set or not.

If you looked through the data that was logged in the previous step, you might have noticed that there are quite a few links on the page that have no href attribute, and therefore lead nowhere. We can be sure those are not the MIDIs we are looking for, so let's write a short function to filter those out and keep only the elements whose href leads to a MIDI file:

const isMidi = (link) => {
  // Return false if there is no href attribute.
  if (typeof link.href === 'undefined') { return false; }
  return link.href.includes('.mid');
};

Now we have the problem of not wanting to download duplicates or user generated remixes. For this we can use regular expressions to make sure we are only getting links whose text has no parentheses, as only the duplicates and remixes contain parentheses:

const noParens = (link) => {
  // Regular expression to determine if the text has parentheses.
  const parensRegex = /^((?!\().)*$/;
  return parensRegex.test(link.textContent);
};
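If the negative lookahead in that regular expression looks opaque, here is a quick sanity check you can run on its own; the song titles are invented for illustration:

const parensRegex = /^((?!\().)*$/;

console.log(parensRegex.test('Overworld Theme'));          // true: no "(" anywhere, so keep it
console.log(parensRegex.test('Overworld Theme (Remix)'));  // false: contains "(", so filter it out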
Try adding these to your code in index.js by creating an array out of the collection of HTML Element Nodes that are returned from querySelectorAll and applying our filter functions to it:

// Create an Array out of the HTML Elements for filtering using spread syntax.
const nodeList = [...dom.window.document.querySelectorAll('a')];

nodeList.filter(isMidi).filter(noParens).forEach(link => {
  console.log(link.href);
});

Run this code again and it should only be printing .mid files, without duplicates of any particular song.

Downloading the MIDI files we want from the webpage

Now that we have working code to iterate through every MIDI file that we want, we have to write code to download all of them.

In the callback function for looping through all of the MIDI links, add this code to stream the MIDI download into a local file, complete with error checking:

const fileName = link.href;

got.stream(`${vgmUrl}/${fileName}`)
  .on('error', err => { console.log(err); console.log(`Error on ${vgmUrl}/${fileName}`); })
  .pipe(fs.createWriteStream(`MIDIs/${fileName}`))
  .on('finish', () => console.log(`Downloaded: ${fileName}`));
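One practical note, not from the original article: fs.createWriteStream will fail with ENOENT if the MIDIs directory doesn't already exist, so create it by hand or with a small guard like this before the downloads start:

const fs = require('fs');

// Ensure the output directory exists before streaming downloads into it.
fs.mkdirSync('MIDIs', { recursive: true });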
Run this code from a directory where you want to save all of the MIDI files, and watch your terminal screen display all 2230 MIDI files that you downloaded (at the time of writing this). With that, we should be finished scraping all of the MIDI files we need.

Go through and listen to them and enjoy some Nintendo music!

The vast expanse of the World Wide Web

Now that you can programmatically grab things from web pages, you have access to a huge source of data for whatever your projects need. One thing to keep in mind is that changes to a web page's HTML might break your code, so make sure to keep everything up to date if you're building applications on top of this. You might also want to try comparing the functionality of the jsdom library with other solutions by following tutorials for web scraping using Cheerio and headless browser scripting using Puppeteer or a similar library called Playwright.

If you're looking for something to do with the data you just grabbed from the Video Game Music Archive, you can try using Python libraries like Magenta to train a neural network with it.

I'm looking forward to seeing what you build. Feel free to reach out and share your experiences or ask any questions.

Twitter: @Sagnewshreds
Github: Sagnew
Twitch (streaming live code): Sagnewshreds

Web scraping for web developers: a concise summary

Knowing one approach to web scraping may solve your problem in the short term, but all methods have their own strengths and weaknesses. Being aware of this can save you time and help you to solve a task more efficiently. Numerous resources exist which will show you a single technique for extracting data from a web page. The reality is that multiple solutions and tools can be used for that.

What are your options to programmatically extract data from a web page? What are the pros and cons of each approach? How can you use cloud services to increase the degree of automation? This guide is meant to answer these questions.

I assume you have a basic understanding of browsers in general, HTTP requests, the DOM (Document Object Model), HTML, CSS selectors, and Async JavaScript. If these phrases sound unfamiliar, I suggest checking out those topics before you continue reading. Examples are implemented in Node.js, but hopefully you can transfer the theory into other languages if needed.

Static content

HTML source

Let's start with the simplest approach. If you are planning to scrape a web page, this is the first method to try. It requires a negligible amount of computing power and the least time to implement. However, it only works if the HTML source code contains the data you are targeting. To check that in Chrome, right-click the page and choose View page source. Now you should see the HTML source code.

It's important to note here that you won't see the same code by using Chrome's inspect tool, because it shows the HTML structure related to the current state of the page, which is not necessarily the same as the source HTML document that you can get from the server.

Once you find the data here, write a CSS selector belonging to the wrapping element, to have a reference later on.

To implement this, you can send an HTTP GET request to the URL of the page and you will get back the HTML source. In Node, you can use a tool called CheerioJS to parse this raw HTML and extract the data using a selector. The code looks something like this:

const fetch = require('node-fetch');
const cheerio = require('cheerio');

const url = 'https://example.com'; // placeholder: the original article's URL was omitted in this copy
const selector = '.example';

fetch(url)
  .then(res => res.text())
  .then(html => {
    const $ = cheerio.load(html);
    const data = $(selector);
    console.log(data.text());
  });

Dynamic content

In many cases, you can't access the information from the raw HTML code, because the DOM was manipulated by some JavaScript, executed in the background. A typical example of that is a SPA (Single Page Application), where the HTML document contains a minimal amount of information, and the JavaScript populates it at runtime.

In this situation, a solution is to build the DOM and execute the scripts located in the HTML source code, just like a browser does. After that, the data can be extracted from this object with selectors.

Headless browsers

This can be achieved by using a headless browser. A headless browser is almost the same thing as the normal one you are probably using every day but without a user interface. It's running in the background and you can programmatically control it instead of clicking with your mouse and typing with a keyboard.

A popular choice for a headless browser is Puppeteer. It is an easy to use Node library which provides a high-level API to control Chrome in headless mode. It can be configured to run non-headless, which comes in handy during development.
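As a quick aside, here is a minimal sketch of that development convenience; headless and slowMo are standard Puppeteer launch options, and the URL is just a stand-in:

const puppeteer = require('puppeteer');

(async () => {
  // Launch a visible browser window, slowing each operation down by 250 ms
  // so you can watch what the script is doing while you debug it.
  const browser = await puppeteer.launch({ headless: false, slowMo: 250 });
  const page = await browser.newPage();
  await page.goto('https://example.com'); // any page you are debugging against
  await browser.close();
})();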
The following code does the same thing as before, but it will work with dynamic pages as well:

const puppeteer = require('puppeteer');

async function getData(url, selector) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const data = await page.evaluate(selector => {
    return document.querySelector(selector).innerText;
  }, selector);
  await browser.close();
  return data;
}

getData(url, selector)
  .then(result => console.log(result));

Of course, you can do more interesting things with Puppeteer, so it is worth checking out the documentation. Here is a code snippet which navigates to a URL, takes a screenshot and saves it:

const puppeteer = require('puppeteer');

async function takeScreenshot(url, path) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);
  await page.screenshot({ path: path });
  await browser.close();
}

const url = 'https://example.com';  // placeholder: the original URL was omitted in this copy
const path = 'screenshot.png';      // placeholder file name
takeScreenshot(url, path);

As you can imagine, running a browser requires much more computing power than sending a simple GET request and parsing the response. Therefore execution is relatively costly and slow. Not only that, but including a browser as a dependency makes the deployment package massive.

On the upside, this method is highly flexible. You can use it for navigating around pages, simulating clicks, mouse moves and keyboard events, filling out forms, taking screenshots or generating PDFs of pages, executing commands in the console, and selecting elements to extract their text content. Basically, everything can be done that is possible manually in a browser.

Building just the DOM

You may think it's a little bit of overkill to simulate a whole browser just for building a DOM. Actually, it is, at least under certain circumstances.

There is a Node library, called Jsdom, which will parse the HTML you pass it, just like a browser does. However, it isn't a browser, but a tool for building a DOM from a given HTML source code, while also executing the JavaScript code within that HTML.

Thanks to this abstraction, Jsdom is able to run faster than a headless browser. If it's faster, why don't we use it instead of headless browsers all the time? Quote from the documentation:

People often have trouble with asynchronous script loading when using jsdom. Many pages load scripts asynchronously, but there is no way to tell when they're done doing so, and thus when it's a good time to run your code and inspect the resulting DOM structure. This is a fundamental limitation. … This can be worked around by polling for the presence of a specific element.

This solution is shown in the example below. It checks every 100 ms whether the element appeared or the wait timed out (after 2 seconds).

Jsdom also often throws nasty error messages when some browser feature in the page is not implemented by Jsdom, such as: "Error: Not implemented: …" or "Error: Not implemented: window.scrollTo…". This issue can also be solved with some workarounds (virtual consoles).

Generally, it's a lower level API than Puppeteer, so you need to implement certain things yourself. These things make it a little messier to use, as you will see in the example. Puppeteer solves all these things for you behind the scenes and makes it extremely easy to use. Jsdom, in exchange for this extra work, will offer a fast and lean solution.

Let's see the same example as previously, but with Jsdom:

const jsdom = require("jsdom");
const { JSDOM } = jsdom;

async function getData(url, selector, timeout) {
  const virtualConsole = new jsdom.VirtualConsole();
  virtualConsole.sendTo(console, { omitJSDOMErrors: true });
  const dom = await JSDOM.fromURL(url, {
    runScripts: "dangerously",
    resources: "usable",
    virtualConsole
  });
  const data = await new Promise((res, rej) => {
    const started = Date.now();
    const timer = setInterval(() => {
      const element = dom.window.document.querySelector(selector);
      if (element) {
        res(element.textContent);
        clearInterval(timer);
      } else if (Date.now() - started > timeout) {
        rej("Timed out");
        clearInterval(timer);
      }
    }, 100);
  });
  dom.window.close();
  return data;
}

const url = 'https://example.com'; // placeholder: the original URL was omitted in this copy
const selector = '.example';
getData(url, selector, 2000).then(result => console.log(result));
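A word of caution on the options used above: per the jsdom documentation, runScripts: "dangerously" executes any script the fetched page contains, so it should only be pointed at pages whose content you trust.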
Reverse engineering

Jsdom is a fast and lightweight solution, but it's possible to simplify things even further. Do we even need to simulate the DOM? Generally speaking, the webpage that you want to scrape consists of the same HTML and the same JavaScript, built on technologies you already know. So, if you find the piece of code from which the targeted data was derived, you can repeat the same operation in order to get the same result.

If we oversimplify things, the data you're looking for can be:

- part of the HTML source code (as we saw in the first paragraph),
- part of a static file, referenced in the HTML document (for example a string in a javascript file),
- a response for a network request (for example some JavaScript code sent an AJAX request to a server, which responded with a JSON string).

All of these data sources can be accessed with network requests. From our perspective, it doesn't matter if the webpage uses HTTP, WebSockets or any other communication protocol, because all of them are reproducible in theory.

Once you locate the resource housing the data, you can send a similar network request to the same server as the original page does. As a result, you get the response containing the targeted data, which can be easily extracted with regular expressions, string methods, etc.

In simple words, you can just take the resource where the data is located, instead of processing and loading the whole stuff. This way the problem shown in the previous examples can be solved with a single HTTP request instead of controlling a browser or running a complex JavaScript solution.

This solution seems easy in theory, but most of the time it can be really time-consuming to carry out, and it requires some experience of working with web pages and servers. A possible place to start researching is to observe network traffic. A great tool for that is the Network tab in Chrome DevTools. You will see all outgoing requests with the responses (including static files, AJAX requests, etc.), so you can iterate through them and look for the data.

The process can be even more sluggish if the response is modified by some code before being rendered on the screen. In that case, you have to find that piece of code and understand what's going on.

As you see, this solution may require way more work than the methods featured so far. On the other hand, once it's implemented, it provides the best performance. A chart in the original article compared the required execution time and package size of this approach with Jsdom and Puppeteer; those results aren't based on precise measurements and can vary in every situation, but they show well the approximate difference between these techniques.
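As a toy illustration of the idea, suppose the Network tab showed the page populating itself from a JSON endpoint; the endpoint and field names here are hypothetical, standing in for whatever request you actually find:

const fetch = require('node-fetch');

// Request the hypothetical data endpoint directly, skipping the browser
// and the DOM entirely, and read the structured response.
fetch('https://example.com/api/products?page=1')
  .then(res => res.json())
  .then(products => {
    products.forEach(p => console.log(p.name, p.price));
  })
  .catch(err => console.error(err));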
Cloud service integration

Let's say you implemented one of the solutions listed so far. One way to execute your script is to power on your computer, open a terminal and run it manually. But that can become annoying and inefficient very quickly, so it would be better if we could just upload the script to a server and it would execute the code on a regular basis, depending on how it's configured.

This can be done by running an actual server and configuring some rules on when to execute the script. Servers shine when you keep observing an element in a page. In other cases, a cloud function is probably a simpler way to go.

Cloud functions are basically containers intended to execute the uploaded code when a triggering event occurs. This means you don't have to manage servers; it's done automatically by the cloud provider of your choice. A possible trigger can be a schedule, a network request, and numerous other events. You can save the collected data in a database, write it in a Google sheet or send it in an email. It all depends on your creativity.

Popular cloud providers are Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, and all of them have a function service:

- AWS Lambda
- GCP Cloud Functions
- Azure Functions

They offer some amount of free usage every month, which your single script probably won't exceed except in extreme cases, but please check the pricing first.

If you are using Puppeteer, Google's Cloud Functions is the simplest solution. Headless Chrome's zipped package size (~130MB) exceeds AWS Lambda's limit of maximum zipped size (50MB). There are some techniques to make it work with Lambda, but GCP functions support headless Chrome by default; you just need to include Puppeteer as a dependency in package.json.

If you want to learn more about cloud functions in general, do some research on serverless architectures. Many great guides have already been written on this topic and most providers have an easy to follow tutorial.

Summary

I know that every topic was a bit compressed. You probably can't implement every solution just with this knowledge, but with the documentation and some custom research, it shouldn't be a problem. Hopefully, now you have a high-level overview of techniques used for collecting data from the web, so you can dive deeper into each topic accordingly.

Learn to code for free. freeCodeCamp's open source curriculum has helped more than 40,000 people get jobs as developers. Get started.

Simple Site Scraping With NodeJS And JSDom – Shane Reustle

I've been playing with Node on and off over the past couple of weeks and it's really starting to grow on me. I initially looked into it because I'm intrigued by the thought of using one language for both client and server side coding. Turns out, as people have pointed out, it's fast too. Really fast. I spent some time messing around with the hello world examples, built some simple APIs, and even gave a talk at BarCamp Boston about the basics of Node, but I want to do something that takes advantage of the JS nature of Node. Let's start off with a simple site scraping example where we pull the current temperature. As the examples get more complex, we'll be able to leverage libraries like jQuery to do more complex scraping in an already familiar syntax.

For this first example, you're going to need the Node packages Request and JSDOM. You can get both of these using npm (npm install request jsdom). This is a pretty short example (9 lines), so I'll skip right to the code.

var request = require('request');
var jsdom = require('jsdom');

var req_url = ''; // the weather page URL was omitted in this copy

request({uri: req_url}, function(error, response, body){
  if(!error && response.statusCode == 200){
    var window = jsdom.jsdom(body).createWindow();
    var temp = window.document.getElementsByClassName('u-eng')[0].innerHTML;
    console.log(temp);
  }
});

We started off by requiring Request and JSDOM. We then made the request to the site we're going to scrape and set a callback function to handle the response. Inside that callback, we make sure the request was successful by checking the HTTP status code. If the request was successful, we pipe the response into JSDOM to render a duplicate version of the DOM locally so that we can interact with it. Now that we have a local copy of the webpage, we can do whatever we want with it. We only need one line of code to extract the current temperature from the page, which we send back to the console for the user to see.
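Note that this snippet targets an old jsdom release; jsdom.jsdom(...).createWindow() no longer exists in current versions. With the modern API shown earlier on this page, the same extraction would look roughly like this, keeping the article's 'u-eng' class:

const { JSDOM } = require('jsdom');

// `body` is the HTML string received in the request callback.
function extractTemp(body) {
  const dom = new JSDOM(body);
  return dom.window.document.getElementsByClassName('u-eng')[0].innerHTML;
}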
After playing around with this script for a while, you may have noticed this method works well but often throws strange JS errors depending on what site you try to scrape. Let's take a step back and think about what we're doing. We make a request to a page and parse a copy of its response HTML locally. The problem with this method is that there are usually resources requested using relative links (/static/...) rather than absolute links. Since these resources do not exist locally, they cannot be loaded, which ends up causing errors later on in the page. We can manually go in and patch up these broken links by either modifying the links in the response before parsing it, or including the scripts in your new DOM before parsing the response. Keep in mind, there may be AJAX requests that use relative links as well, so keep an eye on the network traffic.

This should give you a good head start on scraping sites with NodeJS and JSDOM. JSDOM does a great job with this task, but doesn't seem to be built for this type of work. If you need to scrape some JS generated content, you may need to do some extra work. If you have a large site scraping project, you may want to check out NodeIO and PhantomJS. NodeIO is a screen scraping framework built on top of Node, and PhantomJS is a headless implementation of WebKit with a JS API. I would use PhantomJS if I needed to do any large scraping projects because it lets you interact with a real browser which renders all of the JS content. Keep an eye out for a review of PhantomJS in the future.

Frequently Asked Questions about jsdom web scraping

Is web scraping legal?

It is perfectly legal if you scrape data from websites for public consumption and use it for analysis. However, it is not legal if you scrape confidential information for profit. For example, scraping private contact information without permission and selling it to a third party for profit is illegal. (Aug 16, 2021)

What is Jsdom?

JSDOM is a library which parses and interacts with assembled HTML just like a browser. The benefit is that it isn't actually a browser. Instead, it implements web standards like browsers do. You can feed it some HTML, and it will parse that HTML. (Aug 13, 2021)
Can you use JavaScript to web scrape?

Thanks to Node.js, JavaScript is a great language to use for a web scraper: not only is Node fast, but you'll likely end up using a lot of the same methods you're used to from querying the DOM with front-end JavaScript.
class="comment-form-url"><label for="url">Website</label> <input id="url" name="url" type="url" value="" size="30" maxlength="200" autocomplete="url" /></p> <p class="comment-form-cookies-consent"><input id="wp-comment-cookies-consent" name="wp-comment-cookies-consent" type="checkbox" value="yes" /> <label for="wp-comment-cookies-consent">Save my name, email, and website in this browser for the next time I comment.</label></p> <p class="form-submit"><input name="submit" type="submit" id="submit" class="submit" value="Post Comment" /> <input type='hidden' name='comment_post_ID' value='17161' id='comment_post_ID' /> <input type='hidden' name='comment_parent' id='comment_parent' value='0' /> </p><p style="display: none;"><input type="hidden" id="akismet_comment_nonce" name="akismet_comment_nonce" value="048e07eddc" /></p><p style="display: none !important;" class="akismet-fields-container" data-prefix="ak_"><label>Δ<textarea name="ak_hp_textarea" cols="45" rows="8" maxlength="100"></textarea></label><input type="hidden" id="ak_js_1" name="ak_js" value="160"/><script>document.getElementById( "ak_js_1" ).setAttribute( "value", ( new Date() ).getTime() );</script></p></form> </div><!-- #respond --> </div><!-- #comments --> </div> <div class="col-lg-4"> <aside id="secondary" class="widget-area"> <div id="search-2" class="widget sidebar-post sidebar widget_search"><form role="search" method="get" class="search-form" action="https://proxyboys.net/"> <label> <span class="screen-reader-text">Search for:</span> <input type="search" class="search-field" placeholder="Search …" value="" name="s" /> </label> <input type="submit" class="search-submit" value="Search" /> </form></div> <div id="recent-posts-2" class="widget sidebar-post sidebar widget_recent_entries"> <div class="sidebar-title"><h3 class="title mb-20">Recent Posts</h3></div> <ul> <li> <a href="https://proxyboys.net/how-to-know-if-my-ip-address-is-being-tracked/">How To Know If My Ip Address Is Being Tracked</a> </li> <li> <a href="https://proxyboys.net/how-can-you-change-your-ip-address/">How Can You Change Your Ip Address</a> </li> <li> <a href="https://proxyboys.net/is-a-public-ip-address-safe/">Is A Public Ip Address Safe</a> </li> <li> <a href="https://proxyboys.net/anonymous-firefox-android/">Anonymous Firefox Android</a> </li> <li> <a href="https://proxyboys.net/hong-kong-proxy-server/">Hong Kong Proxy Server</a> </li> <li> <a href="https://proxyboys.net/youtube-proxy-france/">Youtube Proxy France</a> </li> <li> <a href="https://proxyboys.net/how-to-scrape-linkedin/">How To Scrape Linkedin</a> </li> <li> <a href="https://proxyboys.net/post-ad-gumtree/">Post Ad Gumtree</a> </li> <li> <a href="https://proxyboys.net/4g-proxy-usa/">4G Proxy Usa</a> </li> <li> <a href="https://proxyboys.net/proxy-8082/">Proxy 8082</a> </li> </ul> </div><div id="tag_cloud-2" class="widget sidebar-post sidebar widget_tag_cloud"><div class="sidebar-title"><h3 class="title mb-20">Tags</h3></div><div class="tagcloud"><a href="https://proxyboys.net/tag/best-free-proxy/" class="tag-cloud-link tag-link-349 tag-link-position-1" style="font-size: 20pt;" aria-label="best free proxy (148 items)">best free proxy</a> <a href="https://proxyboys.net/tag/best-free-proxy-server-list/" class="tag-cloud-link tag-link-219 tag-link-position-2" style="font-size: 16pt;" aria-label="best free proxy server list (93 items)">best free proxy server list</a> <a href="https://proxyboys.net/tag/best-proxy-server/" class="tag-cloud-link tag-link-348 tag-link-position-3" style="font-size: 
12.6pt;" aria-label="best proxy server (62 items)">best proxy server</a> <a href="https://proxyboys.net/tag/best-proxy-sites/" class="tag-cloud-link tag-link-948 tag-link-position-4" style="font-size: 10.2pt;" aria-label="best proxy sites (47 items)">best proxy sites</a> <a href="https://proxyboys.net/tag/best-vpn-to-hide-ip-address/" class="tag-cloud-link tag-link-964 tag-link-position-5" style="font-size: 8.2pt;" aria-label="best vpn to hide ip address (37 items)">best vpn to hide ip address</a> <a href="https://proxyboys.net/tag/craigslist-account-for-sale/" class="tag-cloud-link tag-link-2942 tag-link-position-6" style="font-size: 9.2pt;" aria-label="craigslist account for sale (42 items)">craigslist account for sale</a> <a href="https://proxyboys.net/tag/craigslist-homepage/" class="tag-cloud-link tag-link-306 tag-link-position-7" style="font-size: 12.2pt;" aria-label="craigslist homepage (59 items)">craigslist homepage</a> <a href="https://proxyboys.net/tag/craigslist-my-account-new-posting/" class="tag-cloud-link tag-link-166 tag-link-position-8" style="font-size: 9pt;" aria-label="craigslist my account new posting (41 items)">craigslist my account new posting</a> <a href="https://proxyboys.net/tag/free-proxy/" class="tag-cloud-link tag-link-1110 tag-link-position-9" style="font-size: 13pt;" aria-label="free proxy (65 items)">free proxy</a> <a href="https://proxyboys.net/tag/free-proxy-list/" class="tag-cloud-link tag-link-469 tag-link-position-10" style="font-size: 20.8pt;" aria-label="free proxy list (163 items)">free proxy list</a> <a href="https://proxyboys.net/tag/free-proxy-list-download/" class="tag-cloud-link tag-link-220 tag-link-position-11" style="font-size: 11pt;" aria-label="free proxy list download (52 items)">free proxy list download</a> <a href="https://proxyboys.net/tag/free-proxy-list-india/" class="tag-cloud-link tag-link-472 tag-link-position-12" style="font-size: 9.2pt;" aria-label="free proxy list india (42 items)">free proxy list india</a> <a href="https://proxyboys.net/tag/free-proxy-list-txt/" class="tag-cloud-link tag-link-148 tag-link-position-13" style="font-size: 13.8pt;" aria-label="free proxy list txt (72 items)">free proxy list txt</a> <a href="https://proxyboys.net/tag/free-proxy-list-usa/" class="tag-cloud-link tag-link-1759 tag-link-position-14" style="font-size: 9.2pt;" aria-label="free proxy list usa (42 items)">free proxy list usa</a> <a href="https://proxyboys.net/tag/free-proxy-server/" class="tag-cloud-link tag-link-577 tag-link-position-15" style="font-size: 11.6pt;" aria-label="free proxy server (55 items)">free proxy server</a> <a href="https://proxyboys.net/tag/free-proxy-server-list/" class="tag-cloud-link tag-link-142 tag-link-position-16" style="font-size: 17.2pt;" aria-label="free proxy server list (107 items)">free proxy server list</a> <a href="https://proxyboys.net/tag/free-socks-list-daily/" class="tag-cloud-link tag-link-931 tag-link-position-17" style="font-size: 13pt;" aria-label="free socks list daily (65 items)">free socks list daily</a> <a href="https://proxyboys.net/tag/free-vpn-to-hide-ip-address/" class="tag-cloud-link tag-link-960 tag-link-position-18" style="font-size: 15.8pt;" aria-label="free vpn to hide ip address (91 items)">free vpn to hide ip address</a> <a href="https://proxyboys.net/tag/free-web-proxy/" class="tag-cloud-link tag-link-626 tag-link-position-19" style="font-size: 10.2pt;" aria-label="free web proxy (47 items)">free web proxy</a> <a href="https://proxyboys.net/tag/hide-my-ip-address-free/" 
class="tag-cloud-link tag-link-815 tag-link-position-20" style="font-size: 13.8pt;" aria-label="hide my ip address free (71 items)">hide my ip address free</a> <a href="https://proxyboys.net/tag/hide-my-ip-address-free-online/" class="tag-cloud-link tag-link-4832 tag-link-position-21" style="font-size: 12.4pt;" aria-label="hide my ip address free online (61 items)">hide my ip address free online</a> <a href="https://proxyboys.net/tag/hide-my-ip-online/" class="tag-cloud-link tag-link-814 tag-link-position-22" style="font-size: 11.8pt;" aria-label="hide my ip online (57 items)">hide my ip online</a> <a href="https://proxyboys.net/tag/how-to-hide-my-ip-address-in-gmail/" class="tag-cloud-link tag-link-968 tag-link-position-23" style="font-size: 8.4pt;" aria-label="how to hide my ip address in gmail (38 items)">how to hide my ip address in gmail</a> <a href="https://proxyboys.net/tag/how-to-hide-my-ip-address-without-vpn/" class="tag-cloud-link tag-link-962 tag-link-position-24" style="font-size: 13pt;" aria-label="how to hide my ip address without vpn (65 items)">how to hide my ip address without vpn</a> <a href="https://proxyboys.net/tag/ip-address/" class="tag-cloud-link tag-link-961 tag-link-position-25" style="font-size: 9.2pt;" aria-label="ip address (42 items)">ip address</a> <a href="https://proxyboys.net/tag/ip-address-tracker/" class="tag-cloud-link tag-link-477 tag-link-position-26" style="font-size: 16pt;" aria-label="ip address tracker (92 items)">ip address tracker</a> <a href="https://proxyboys.net/tag/my-ip-country/" class="tag-cloud-link tag-link-965 tag-link-position-27" style="font-size: 11.8pt;" aria-label="my ip country (57 items)">my ip country</a> <a href="https://proxyboys.net/tag/proxy-browser/" class="tag-cloud-link tag-link-629 tag-link-position-28" style="font-size: 11.6pt;" aria-label="proxy browser (55 items)">proxy browser</a> <a href="https://proxyboys.net/tag/proxy-server/" class="tag-cloud-link tag-link-470 tag-link-position-29" style="font-size: 17.4pt;" aria-label="proxy server (109 items)">proxy server</a> <a href="https://proxyboys.net/tag/proxy-server-address/" class="tag-cloud-link tag-link-1611 tag-link-position-30" style="font-size: 14.2pt;" aria-label="proxy server address (74 items)">proxy server address</a> <a href="https://proxyboys.net/tag/proxy-server-address-ps4/" class="tag-cloud-link tag-link-365 tag-link-position-31" style="font-size: 8.4pt;" aria-label="proxy server address ps4 (38 items)">proxy server address ps4</a> <a href="https://proxyboys.net/tag/proxy-server-example/" class="tag-cloud-link tag-link-350 tag-link-position-32" style="font-size: 9.2pt;" aria-label="proxy server example (42 items)">proxy server example</a> <a href="https://proxyboys.net/tag/proxy-site/" class="tag-cloud-link tag-link-351 tag-link-position-33" style="font-size: 10.8pt;" aria-label="proxy site (50 items)">proxy site</a> <a href="https://proxyboys.net/tag/proxy-url-list/" class="tag-cloud-link tag-link-2011 tag-link-position-34" style="font-size: 10.8pt;" aria-label="proxy url list (50 items)">proxy url list</a> <a href="https://proxyboys.net/tag/proxy-websites/" class="tag-cloud-link tag-link-627 tag-link-position-35" style="font-size: 15.2pt;" aria-label="proxy websites (85 items)">proxy websites</a> <a href="https://proxyboys.net/tag/socks5-proxy-list/" class="tag-cloud-link tag-link-131 tag-link-position-36" style="font-size: 8pt;" aria-label="socks5 proxy list (36 items)">socks5 proxy list</a> <a href="https://proxyboys.net/tag/socks5-proxy-list-txt/" 
class="tag-cloud-link tag-link-518 tag-link-position-37" style="font-size: 11pt;" aria-label="socks5 proxy list txt (52 items)">socks5 proxy list txt</a> <a href="https://proxyboys.net/tag/thepiratebay3-list/" class="tag-cloud-link tag-link-16 tag-link-position-38" style="font-size: 8.2pt;" aria-label="thepiratebay3 list (37 items)">thepiratebay3 list</a> <a href="https://proxyboys.net/tag/unblock-proxy-free/" class="tag-cloud-link tag-link-316 tag-link-position-39" style="font-size: 22pt;" aria-label="unblock proxy free (185 items)">unblock proxy free</a> <a href="https://proxyboys.net/tag/unblock-proxy-sites-list/" class="tag-cloud-link tag-link-4596 tag-link-position-40" style="font-size: 8.2pt;" aria-label="unblock proxy sites list (37 items)">unblock proxy sites list</a> <a href="https://proxyboys.net/tag/utorrent-download/" class="tag-cloud-link tag-link-1167 tag-link-position-41" style="font-size: 11pt;" aria-label="utorrent download (51 items)">utorrent download</a> <a href="https://proxyboys.net/tag/vpn-proxy/" class="tag-cloud-link tag-link-585 tag-link-position-42" style="font-size: 9pt;" aria-label="vpn proxy (41 items)">vpn proxy</a> <a href="https://proxyboys.net/tag/what-is-a-proxy-server/" class="tag-cloud-link tag-link-74 tag-link-position-43" style="font-size: 9.2pt;" aria-label="what is a proxy server (42 items)">what is a proxy server</a> <a href="https://proxyboys.net/tag/what-is-my-ip/" class="tag-cloud-link tag-link-816 tag-link-position-44" style="font-size: 13.2pt;" aria-label="what is my ip (67 items)">what is my ip</a> <a href="https://proxyboys.net/tag/what-is-my-private-ip/" class="tag-cloud-link tag-link-959 tag-link-position-45" style="font-size: 11pt;" aria-label="what is my private ip (51 items)">what is my private ip</a></div> </div></aside> </div> </div> </div> </section> </div><!-- #content --> <footer class="footer-section-child"> <div class="container"> <div class="footer-top"> <div class="row clearfix"> <div class="widget_text widget_custom_html footer-widget col-md-3 col-sm-6 col-xs-12"><div class="textwidget custom-html-widget"><!-- Google tag (gtag.js) --> <script async src="https://www.googletagmanager.com/gtag/js?id=G-9TFKENNJT0"></script> <script> window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-9TFKENNJT0'); </script></div></div> </div> </div> </div> <div class="copyright-footer-child"> <div class="container"> <div class="row justify-content-center"> <div class="col-md-6 text-md-center align-self-center"> <p>Copyright 2021 ProxyBoys</p> </div> </div> </div> </div> </footer> </div><!-- #page --> <button onclick="blogwavesTopFunction()" id="myBtn" title="Go to top"> <i class="fa fa-angle-up"></i> </button> <script src="https://proxyboys.net/wp-content/plugins/accordion-slider-gallery/assets/js/accordion-slider-js.js?ver=2.7" id="jquery-accordion-slider-js-js"></script> <script src="https://proxyboys.net/wp-content/plugins/blog-manager-wp/assets/js/designer.js?ver=6.5.2" id="wp-pbsm-script-js"></script> <script src="https://proxyboys.net/wp-content/plugins/photo-gallery-builder/assets/js/lightbox.min.js?ver=3.0" id="photo_gallery_lightbox2_script-js"></script> <script src="https://proxyboys.net/wp-content/plugins/photo-gallery-builder/assets/js/packery.min.js?ver=3.0" id="photo_gallery_packery-js"></script> <script src="https://proxyboys.net/wp-content/plugins/photo-gallery-builder/assets/js/isotope.pkgd.js?ver=3.0" id="photo_gallery_isotope-js"></script> <script 
src="https://proxyboys.net/wp-content/plugins/photo-gallery-builder/assets/js/imagesloaded.pkgd.min.js?ver=3.0" id="photo_gallery_imagesloaded-js"></script> <script src="https://proxyboys.net/wp-includes/js/imagesloaded.min.js?ver=5.0.0" id="imagesloaded-js"></script> <script src="https://proxyboys.net/wp-includes/js/masonry.min.js?ver=4.2.2" id="masonry-js"></script> <script src="https://proxyboys.net/wp-content/themes/blogwaves/assets/js/navigation.js?ver=1.0.0" id="blogwaves-navigation-js"></script> <script src="https://proxyboys.net/wp-content/themes/blogwaves/assets/js/popper.js?ver=1.0.0" id="popper-js-js"></script> <script src="https://proxyboys.net/wp-content/themes/blogwaves/assets/js/bootstrap.js?ver=1.0.0" id="bootstrap-js-js"></script> <script src="https://proxyboys.net/wp-content/themes/blogwaves/assets/js/main.js?ver=1.0.0" id="blogwaves-main-js-js"></script> <script src="https://proxyboys.net/wp-content/themes/blogwaves/assets/js/skip-link-focus-fix.js?ver=1.0.0" id="skip-link-focus-fix-js-js"></script> <script src="https://proxyboys.net/wp-content/themes/blogwaves/assets/js/global.js?ver=1.0.0" id="blogwaves-global-js-js"></script> <script src="https://proxyboys.net/wp-includes/js/comment-reply.min.js?ver=6.5.2" id="comment-reply-js" async data-wp-strategy="async"></script> <script defer src="https://proxyboys.net/wp-content/plugins/akismet/_inc/akismet-frontend.js?ver=1711008241" id="akismet-frontend-js"></script> <!--noptimize--><script>!function(){window.advanced_ads_ready_queue=window.advanced_ads_ready_queue||[],advanced_ads_ready_queue.push=window.advanced_ads_ready;for(var d=0,a=advanced_ads_ready_queue.length;d<a;d++)advanced_ads_ready(advanced_ads_ready_queue[d])}();</script><!--/noptimize--> </body> </html> <!-- This website is like a Rocket, isn't it? Performance optimized by WP Rocket. Learn more: https://wp-rocket.me - Debug: cached@1713031207 -->