April 26, 2024

Scrapebox Manual

Manual Blog Commenter – ScrapeBox

Many people are aware that ScrapeBox can perform mass blog commenting which can build backlinks like a machine gun on thousands of websites an hour… But not many people are aware it can also work with the finesse of a ballerina to build high quality, highly relevant backlinks blog owners will actually enjoy receiving.
After you have harvested URLs, ScrapeBox can filter them by quality in numerous ways: by Page Authority, MozRank, or Domain Authority, by how many other outbound links are on each page, or even based on social signals. In fact there are countless ways to filter URL lists in ScrapeBox, like:
Domain PageRank
URL PageRank
Alexa Rank
MozRank
Domain Authority
Page Authority
Facebook Likes
Google +1 count
Twitter Mentions
Pinterest Pins
LinkedIn Shares
Malware Filter
By country of server IP
Page Dead or Alive
Outbound Link Count
Internal Link Count
Google last cache date
If URL contains specific keywords
If URL does not contain specific keywords
If page contains bad words
So if you only want to leave comments on pages with a PageRank higher than PR5, that have an outbound link count of less than 20, which contain no bad words, plus more than 1,000 Facebook likes and an overall Alexa rank lower than 10,000, with ScrapeBox you can scrape a raw URL list based on keywords and filter it down to only the sites and pages matching that criteria. It’s hard to get much higher quality than this!
Then you can use the Manual Commenter to create backlinks on these sites. It opens a browser window and lets you see the blog posts and the other comments on each page, which allows you to leave highly relevant comments and actually interact with the blog owner and other commenters. ScrapeBox will automatically fill out the Name, Email, Website and Comment fields so all you need to do is click Submit. If you are promoting multiple sites at once, you also have the option of clicking the “Change Name/Website” button, which will automatically fill those fields with another of your websites that is more relevant to the blog.
After leaving your comment and backlink, ScrapeBox will progress to the next site in the list, making it far quicker and easier than commenting in your browser. We even have SEO companies using this feature as a reputation management tool: they monitor a specific list of URLs talking about their brand or products, and use the Manual Blog Commenter to quickly skip through every site and page checking for new comments so they can respond.
So as you can see, ScrapeBox can obtain some of the highest quality links possible and leave relevant comments blog owners will want to receive. You can of course also comment using the Slow or Fast Commenters, even the Learning Mode but if you have gone to the trouble of filtering high quality lists we recommend leaving high quality relevant comments too! Check out our video tutorial below for more info on quality links.
The Ultimate Guide to White Hat SEO using Scrapebox – Onely Blog

More than a year ago, on my G+ profile, I posted about something that I found funny: using Scrapebox for white hat SEO. A lot has changed during that year, and now we know we need to focus more and more on the quality of backlinks instead of quantity. This means that we have to rethink which tools we should use and how they can help us maximize our SEO.
Personally, like Bartosz mentioned in his blog post on LRT, I find Scrapebox very useful for every single SEO task I do connected with link analysis or link building.
Scrapebox – a forbidden word in SEO
I bet everybody knows Scrapebox, more or less. In short – it’s a tool used for mass scraping, harvesting, pinging and posting tasks in order to maximize the number of links you can gain for your website to help it rank better in Google. A lot of webmasters and blog owners treat Scrapebox like a spam machine, but in fact it is only a tool, and what it’s actually used for depends on the “driver”.
Now, due to all the Penguin updates, a lot of SEO agencies have changed their minds about linkbuilding and have started to use Scrapebox as support for their link audits or outreach.
Scrapebox – general overview
You can skip this section if you know Scrapebox already. If not – here is some basic information about the most important functions you can use.
Scrapebox is cheap. Even without the discount code, it costs $97. You can order ScrapeBox here.
In this field, you can put the footprint you want to use for harvesting blogs/domains/other resources. You can choose between the Custom option and predefined platforms. Personally, I love to use the “Custom footprint” option because it allows you to get more out of each harvest task.
Here, you can post keywords related to your harvest. For example, if you want to get WordPress blogs about flowers and gardening, you can post “flowers” and “gardening” along with the custom footprint “Powered by WordPress”. It will give you a list of blogs containing these keywords and this footprint.
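To make this concrete: the harvester pairs the footprint with each keyword when it builds its queries, so the setup above effectively sends searches like these (hypothetical examples):

    "Powered by WordPress" flowers
    "Powered by WordPress" gardening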
The URLs Harvested box shows the total number of websites harvested. Using option number 6, you can get even more from each results list.
Select Engines & Proxies allows you to choose which search engines you want to get results from, and how many results to harvest. For link detox needs or competition analysis, I recommend making use of Bing and Yahoo as well (different search engines give different results, which means more information harvested). You can also post the list of proxies you want to use and manage them by checking whether they are alive, not blocked by Google and so on. After that, you can filter your results and download them as a file for further usage.
Comment Poster allows you to post comments to a blog list you have harvested, but in our white hat tasks we do not use it for that. Instead, we can use it to ping our links to get them indexed faster.
Scrapebox – Addons
By default, Scrapebox allows you to use a lot of different addons to get more and more from your links. You can find them by clicking “Addons” in the top menu in the main interface. Here is our list of addons:
To get more addons, you can click on “Show available addons”. Also, remember about premium plugins, which can boost your SEO a lot.
Keyword Scraper – the very beginning of your link building
One of the most powerful things in Scrapebox that I use all the time is the integrated Google suggested keywords scraper. It works very simply and very quickly gives you a list of keywords you should definitely use while optimizing your website content or preparing a new blog post. To use it, just click on the “Scrape” button in the “Harvester” box and select “Keyword Scraper”. You will see a Keyword Scraper window like this one:
The fun starts right now. On the left side, simply put a list of keywords related to your business or blog and select Keyword Scraper Sources. Later, select the search engine you want to have research done on and hit the “Scrape” button.
As you can see on the screenshot above, you can also select the “level” for the keyword scraper. For most keyword research tasks it’s okay to have it on 2, but for niches where you need more depth you can adjust it up to 4 (for example, for cooking blogs level 4 will surface more keywords related to specific recipes or kitchen tips and tricks). Remember that the higher the level you choose, the longer it will take to see results.
After that, do a quick overview of the results you’ve got – if you see some superfluous keywords you don’t want in your list, use “Remove” from the drop down list to remove keywords containing/not containing a specified string, or entries from a specified source.
If the list is ready – you can send it to ScrapeBox for further usage or just copy and save to your notepad for later.
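If you are curious what this kind of scraper does behind the scenes, here is a minimal Python sketch of level-based suggestion scraping. It is an illustration only, assuming Google’s unofficial public suggest endpoint (which may change or rate-limit at any time) – ScrapeBox’s own implementation is not published:

    import requests

    def suggestions(term):
        # Unofficial Google Suggest endpoint; returns [query, [suggestions]]
        r = requests.get(
            "https://suggestqueries.google.com/complete/search",
            params={"client": "firefox", "q": term},
            timeout=10,
        )
        return r.json()[1]

    def scrape_keywords(seeds, level=2):
        # Each "level" feeds the previous round's results back in as seeds
        found = set(seeds)
        frontier = list(seeds)
        for _ in range(level):
            new = []
            for term in frontier:
                for s in suggestions(term):
                    if s not in found:
                        found.add(s)
                        new.append(s)
            frontier = new
        return sorted(found)

    print(scrape_keywords(["gardening", "flowers"], level=2))

The higher the level, the more requests get sent, which is exactly why deeper scrapes take longer to finish.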
Now: let’s start our Outreach – scrape URLs with Scrapebox
So: we have our keyword research done (after checking the total amount of traffic those keywords can bring to your domain) – now let’s see if we can get some interesting links from specified niche websites.
After sending our URL list to ScrapeBox we can now start searching for specified domains we would like to get links from.
Footprints – what they are and how to build them
Footprints are (in a nutshell) pieces of code or sentences that appear in a website’s code or text. For example, when somebody creates a WordPress blog, he has “Powered by WordPress” in his footer by default. Each CMS can have its very own footprints connected with its content or its URL structure. To learn more about footprints, you should test top content management systems or forum boards to check which repeatable pieces of code they leave behind.
How to build footprints for ScrapeBox
Firstly, learn more about Google Search Operators. For your basic link building tasks you should know and understand these three search operators:
inurl: – shows URLs containing a specified string in their address
intitle: – shows pages whose title contains a specified text string
site: – lists domains/URLs/links from a specified domain, ccTLD etc.
So if you already know this, do a test search answering questions related to your business right now:
Do I need dofollow links from blogs and bloggers related to my niche?
Do I need backlinks from link directories to boost my SEO for one specific money keyword?
Should these links be dofollow only?
On which platforms can I easily share my product/services, and why?
Got it? Nice! Now let’s move to the next step – creating our footprint:
So let’s say that you are the owner of a marketing blog related to CPC campaigns and conversion rate optimization. The best idea to get new customers for your services is:
Manual commenting on specified blogs
Creating and posting guest posts on other marketing blogs related to your business
Being in top business link directories which allow you to post a lot of information about your business
Let’s state that we need the top 100 links where we can post a comment/get in touch with bloggers and contact them about guest posting.
From our experience, and after doing keyword research with the Keyword Scraper in ScrapeBox, we’ve noticed that the top platform for blogging about marketing is WordPress – both self-hosted and on the free platform.
To get the top 100 blogs related to our needs you can simply use:
“Powered by WordPress” + AdWords AND site:.pl
This means that we want to search for WordPress blogs on Polish TLD domains with “AdWords” somewhere on the page. However, the results may not be so well-targeted if you fail to use advanced operators – with operators like inurl: and intitle: you can specify exactly where the string must be found.
Use footprints in ScrapeBox
Now, after you’ve learned the basics of footprints, you can use them to get specific platforms which will allow you to post a link to your website (or find new customers if you would like to guest blog sometimes).
To do that, simply put them in the harvester’s footprint field:
You can combine footprints with advanced search engine operators like site:, inurl: or intitle: to get only the URLs you want.
Advanced search operators and footprints have to be combined with the keywords we want to target, so as to find more and better pages to link from.
For example, you can search only for domains (site:) containing a specified keyword in the URL (inurl:) and title (intitle:). The URL list will be shorter, but it will contain only related pages matching our needs.
Expert’s Tip:
For your product or service outreach, you can harvest a lot of interesting blogs hosted on free blog network sites or on your language’s local blogging platforms. Links from these pages will have different IP addresses, so they can be really valuable for your rankings.
Find Guest Blogging opportunities using ScrapeBox
By using simple footprints like:
site:
allintitle:
“guest blogger” or “guest post” (to search only for pages where somebody has already published a guest post – you can also use the allinurl: search operator, because a lot of blogs have a “guest posts” category which shows up in their URL structure)
Later, combine them with your target keywords and get ready to mail and post fresh guest posts to share your knowledge and services with others!
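For instance, with a hypothetical target keyword “content marketing”, the combined queries could look like:

    "content marketing" "write for us"
    "content marketing" intitle:"guest post"
    "content marketing" inurl:guest-post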
Check the value of the harvested links using ScrapeBox
Now, when your keyword research is done and you have harvested your very first link list, you can start checking some basic information about the links. Aside from ScrapeBox, you will also need a Moz API key.
Start with trimming to domain
In general, our outreach is supposed to help us build relationships and find customers. This means that you shouldn’t be looking only at a specific article, but rather at the whole domain in general. To do that, select the “Trim to root” option from the Manage Lists box:
Later, remove duplicates by clicking the Remove/Filter button and select “Remove duplicate URLs”.
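Both steps amount to simple URL normalization. Here is a minimal Python sketch of trim-to-root plus duplicate removal, assuming a plain text file with one harvested URL per line (the file names are hypothetical):

    from urllib.parse import urlparse

    with open("harvested.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    # Trim each URL to its root, then deduplicate while preserving order
    roots, seen = [], set()
    for url in urls:
        p = urlparse(url)
        root = f"{p.scheme}://{p.netloc}/"
        if root not in seen:
            seen.add(root)
            roots.append(root)

    with open("domains.txt", "w") as f:
        f.write("\n".join(roots))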
Check Page Rank in ScrapeBox
Start by checking PageRank – even if it’s not the top ranking factor right now, it still provides basic information about the domain. If the domain has a PageRank higher than 1 or 2, it usually means that it’s trusted and has links from other related/high-PR sources.
To check PageRank in ScrapeBox, simply click on the “Check Page Rank” button and select “Get domain Page Rank”:
To be 100% sure that each domain has legit PR, use the “ScrapeBox Fake Page Rank Checker”. You can find it in the Addons section in your ScrapeBox main window.
I tend to say that it’s not a good idea to put full faith in any 3rd party tool’s results about link trust (because it’s hard to measure whether a link is trusted or not), although it’s another good sign if every single result for a link is “green”.
To check Domain Authority in ScrapeBox you can use the Page Authority addon. You can find it in your Addons list in ScrapeBox. To get it to work you will have to get your very own Moz API information (the window will appear after you select the addon).
This provides a quick overview of your link list. You can get information about the Page/Domain Authority, MozRank and the number of external links pointing to the domain/page. With that, you can see whether a URL is worthy of your link building tactics and all the work you plan to put in.
Remember: Do not rely on MozRank or Page/Domain authority only.
To find good links, don’t hunt only for top metrics – a lot of backlinks with medium MozRank/Page/Domain Authority can be just as valuable.
Email scraping from a URL list using ScrapeBox
After you’ve harvested your first link list, you will probably want to get in touch with bloggers to start your outreach campaign. To do this effectively, use the Scrapebox Email Scraper feature. Simply click on the Grab/Check button and select to grab emails from harvested URLs or from a local list:
The results may not be perfect, but they can give you a lot of useful information. You can export the data to a text file and sort it by email address to find connections between domains.
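Under the hood, email grabbing is essentially pattern matching over each page’s HTML. Here is a rough Python sketch (urls.txt is a hypothetical file holding the harvested list):

    import re
    import requests

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    emails = {}
    for url in urls:
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip dead or slow hosts
        for email in set(EMAIL_RE.findall(html)):
            emails.setdefault(email, url)  # remember where we first saw it

    for email, source in sorted(emails.items()):
        print(f"{email}\t{source}")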
Merge and remove duplicates using ScrapeBox
If you are running a link detox campaign, it’s strongly recommended to use more than one backlink source to get all of the data needed to lift a penalty, for example. If you have more than 40 thousand links in each file, you will probably want to merge them into one file and dig into it later.
To do this quickly, install the DupeRemove addon from the available addon list. After running it, this window will pop up:
Now simply choose “Select source files to merge” and go directly to the folder with the different text files with URL addresses. Later press “Merge files” to have them all in one text file.
To remove duplicate URLs or domains, click “Select source file” and choose where to export the non-duplicated URLs/domains. Voila! You have one file containing every single backlink you need to analyze.
For those who like to do things in smaller parts – you have the option of splitting a large file into smaller ones. Select your text file with backlinks and choose how many lines per file it should contain. From my point of view, it’s very effective to split your link file into groups of 1,000 links per file. It’s very comfortable and gives you the chance to manage your link analysis tasks.
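The same merge-deduplicate-split workflow is only a few lines outside ScrapeBox, too. A sketch assuming plain text files with one URL per line (file names hypothetical):

    import glob

    # Merge all URL files, dropping duplicate lines
    urls, seen = [], set()
    for path in glob.glob("backlinks_*.txt"):
        with open(path) as f:
            for line in f:
                url = line.strip()
                if url and url not in seen:
                    seen.add(url)
                    urls.append(url)

    # Split into chunks of 1,000 links per file
    for i in range(0, len(urls), 1000):
        with open(f"chunk_{i // 1000 + 1}.txt", "w") as f:
            f.write("\n".join(urls[i:i + 1000]))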
ScrapeBox Meta Scraper
ScrapeBox allows you to scrape titles and descriptions from your harvested list. To do that, choose the Grab/Check option then, from the drop down menu, “Grab meta info from harvested URLs”:
Here, you can take a look at some example results:
You can export this data to a CSV file and use it to check how many pages use an exact match keyword in the title, or optimize it some other way (i.e., do the keywords look natural to Google and not made for SEO?).
Check if links are dead or alive with ScrapeBox
If you want to be pretty sure that every single internal/external link is alive, you can use the “ScrapeBox Alive Checker” addon. First – if you haven’t done so yet – install the Alive Checker addon.
Later, to use it, head to the Addons list and select ScrapeBox Alive Check.
If you were previously harvesting URLs, simply load them from the Harvester. If not, you can load them from a text file.
Now, let’s begin with Options:
Also, remember to have the checkbox for “Follow relocation” checked.
The results can be seen here:
If a link returns an HTTP status code other than 200 or 301, ScrapeBox marks it as “Dead”.
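Here is a minimal Python sketch of the same check, taking 200 and 301 as the “alive” codes described above (real servers sometimes reject HEAD requests, so treat this as an approximation):

    import requests

    ALIVE_CODES = {200, 301}

    def is_alive(url):
        try:
            # Don't follow redirects, so a 301 is reported as-is
            r = requests.head(url, timeout=10, allow_redirects=False)
            return r.status_code in ALIVE_CODES
        except requests.RequestException:
            return False  # connection errors count as dead

    for url in ["https://example.com/", "https://example.com/missing"]:
        print(url, "Alive" if is_alive(url) else "Dead")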
Check which internal links are not indexed yet
So if you are working on some big onsite changes connected with the total number of internal pages, you will probably want to be sure that Google re-indexes everything. To verify that everything is as it should be, you can use Screaming Frog SEO Spider and ScrapeBox.
So start crawling your page in Screaming Frog, using the very basic setup in the crawler setting menu:
If you are crawling a huge domain, you can use a tool like Deep Crawl instead of the Screaming Frog SEO Spider.
Later, when your crawl is done, save the results to a file, open it and copy the URLs to the clipboard, or import the file into ScrapeBox with one click:
When your import is done, simply hit the Check Indexed button and select the Google Indexed option.
Remember to set up the Random Delay option for index checking, and base the total number of connections on your internet connection. Mostly, I use 25 connections and a random delay between each query sent by ScrapeBox, to be sure that my IP/proxy addresses won’t be blocked by Google.
After that, you will get a pop up with information about how many links are indexed or not, and there will be an extra column added to your URLs harvested box with information about whether they are Indexed or not:
You can export unindexed URLs for further investigation.
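Under the hood, the index check is roughly a per-URL Google query, for example (hypothetical page):

    site:example.com/blog/my-post/

If the query returns no result, the page is most likely not indexed yet.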
Get more backlinks straight from Google using ScrapeBox
Sometimes it’s not enough to download backlink data from Google Webmaster Tools or other software made for that (although Bartosz found a really nice “glitch” in Webmaster Tools to get more links).
In this case – especially when you are fighting a manual penalty for your site and Google has refused to lift it – dig deep into these links and find a pattern that is the same for every single one.
For example – if you are using automatic link building services with spun content, sometimes you can find a sentence or string that is not spun. You can use it as a footprint, harvest results from Google, and check if your previous disavow file contained those links or not.
And another example – some people create free templates for WordPress and share them with others to both help people have nicely designed blogs and obtain free dofollow links from a lot of different TLDs. Here is an example:
“Responsive Theme powered by WordPress”
This returns every single domain using that kind of theme from CyberChimps. If you combine it with the keywords you were linking to your site (or the keywords you want to target), you will probably get a very big, highly relevant WordPress blog list with far more accurate results.
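For example, combined with a hypothetical target keyword, the query could look like:

    "Responsive Theme powered by WordPress" "link building"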
Check external links on your link lists
After you have done your first scrape for a custom made footprint, it’s good to know the quality of the links you have found. And once again – ScrapeBox and its amazing list of addons will help you!
“Outbound Link Checker” is an addon which will check links line by line and list both internal and external links. Because the addon supports multithreading, you can check thousands of links at the same time.
To use “Outbound Link Checker”, go to your Addons list and select Outbound Link Checker:
Next, choose to load a URL list from ScrapeBox or from an external file.
After that, you will see something like this:
The magic starts now – simply press the “Start” button.
Voila!
Results?
Now you can filter the results down to pages containing more than X outgoing links. Later, you can also check the authority of those links and how valuable they are.
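For the curious, counting outbound links is straightforward to reproduce. Here is a rough Python sketch using only the standard library (the URL is hypothetical):

    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    class LinkCounter(HTMLParser):
        def __init__(self, host):
            super().__init__()
            self.host, self.internal, self.external = host, 0, 0

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            href = dict(attrs).get("href") or ""
            netloc = urlparse(href).netloc
            # Relative links and same-host links count as internal
            if netloc and netloc != self.host:
                self.external += 1
            else:
                self.internal += 1

    url = "https://example.com/"
    html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    counter = LinkCounter(urlparse(url).netloc)
    counter.feed(html)
    print(f"internal={counter.internal} external={counter.external}")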
Short Summary
As you can see – ScrapeBox in the Penguin era is still a powerful tool which will speed up your daily SEO tasks if used properly. Even if you do not want to post comments or links manually, it can still help you find links where you can get both traffic and customers.
Working across the technical spectrum of SEO, Onely provides strong commercial value to clients through cutting-edge solutions.
ScrapeBox Guide – White Hat SEO | Powered by Search

Updated: September 10, 2019
The Forbidden S-Word Of SEO: ScrapeBox
So you and your company (yes – even you one-[wo]man-armies out there) are brand new to the world of search engine optimization. Maybe your offline business has been suffering due to the economy or a myriad of other factors, however you are a warrior – a trooper – and you will not take no for an answer. You know that the internet is a gold mine, but you have trouble tapping into this unending fountain of visitors. You manage to convince yourself that an online game plan is required. Over the course of a few days, you feverishly search the internet for tools and software that can help your endeavors.
After much frustration, panic, and tears, you finally come across a forum that is discussing SEO tactics, and your eyes are drawn to the banter back and forth over a particular piece of software called ScrapeBox. In your desire to make your business an online sensation, you register for an account on this forum and introduce yourself as a newcomer to the search engine marketing world. People respond to you and welcome you aboard and all is going well… but then you make a very serious mistake… an oh-so-terrible mistake… You proceed to ask your very first question. You want input from the “SEO” community on whether ScrapeBox will help your website rank in the search engines. The feedback you receive from the majority of self-proclaimed ethical webmasters and search engine marketers alike is most accurately described in the following photo:
While this is amusing to read about, it also carries a lot of truth. Time and time again we see newcomers to the SEO community being flamed off the discussion boards for questions deemed misguided, misinformed, black hat, grey hat, upside down hat, you name it. If you haven’t seen this behavior, pay a visit to some of the more well known forums surrounding search engine optimization tips and techniques. One question begs to be asked though…
Is ScrapeBox A White Hat SEO Tool?
I will pose an answer to this question with another question. Is the sky blue? At this point you either think I am losing it, or you clearly understand where this is headed – and kudos to you if the latter is true! The sky is in fact not blue or red or even orange for that matter. Its color depends upon the time of day, whether we are color blind, if we have sunglasses on, or if we can even see at all! Likewise with ScrapeBox or any other tool, software can be abused and thrown around like a plague on the human race, which sadly is the case more often than not. On the flip-side of that coin however is an overwhelming amount of power that can help speed up daily tasks and production for even the purest of the pure, hardcore white-hat junkies. ScrapeBox is a piece of software that costs $97. This software collects (or scrapes obviously) information off of the internet. Some of its features include:
Harvesting of proxies
Being able to create a sitemap of a website (did you know this?)
Ability to make an RSS feed (did you know this?)
Collecting keyword ideas
Collecting websites based on a footprint (handy)
Blog commenting en-masse (not a wise idea)
Pinging of URLs
RSS submission
and more…
The Real ScrapeBox
While ScrapeBox has gotten a horrible reputation due to the unceasing amount of spam it has enabled (alongside Xrumer), it has a fair amount of legitimate uses that can greatly speed up your day to day workflow. Let’s take a look.
1: How to Find Long Tail Keywords
2: Blogger Outreach Guidelines
3: Backlink Checker
4: WHOIS Checker
5: TDNAM Addon – GoDaddy Auctions
6: Sitemap Scraper
7: Outbound Link Checker
8: Bulk URL Shortener
9: Malware & Phishing Finder
10: Rapid Indexer
11: Page Scanner / Categorizer
12: Link Extractor
13: Competition Finder
14: Cache Extractor
15: Fake Page Rank Checker
16: Duplicate Remover
17: Domain Name Checker
18: Meta Scraper
19: Domain Resolver
1: How To Find Long Tail Keywords
If you need to generate industry related ideas for your marketing plan and content strategy, you can easily accomplish this with ScrapeBox in a matter of minutes (compared to hours of work the normal manual way). Here’s how it’s done:
For this example, let’s use the SEO industry. In the picture below, I started with 3 key-phrases.
I entered these few key-phrases into the text area on the left and then clicked the “scrape” button along the bottom.
It came back with a few hundred results.
I then copy/pasted these new additions back into the main text area on the left and re-ran the “scrape” a second time.
The final result brought back a ton of key-phrases that are possible content ideas and niche markets to go after.
During this time I stretched for the win.
Harvesting material is a good way to get an overview of the market, however you won’t get much accomplished unless you actually do something with that data. The best way to launch your website to the top of the SERPs in 2013 is by forming relationships with those in the industry, by cultivating useful information, and by having an authority status in your select market. Outreach to influential people is one of the best ways to get this done and you can accomplish that goal with ScrapeBox. Let’s take a look. Some people still search the following set of phrases manually in order to find link prospects:
Keyword guest blogger wanted
Keyword guest writer
Keyword guest blog post writer
Keyword “write for us” OR “write for me”
Keyword “Submit a blog post”
Keyword “Become a contributor”
Keyword “guest blogger”
Keyword “Add blog post”
Keyword “guest post”
Keyword “Write for us”
Keyword submit blog post
Keyword “guest column”
Keyword “contributing author”
Keyword “Submit post”
Keyword “submit one guest post”
Keyword “write for us”
Keyword “Suggest a guest post”
Keyword “Send a guest post”
Keyword “contributing writer”
Keyword “Submit blog post”
Keyword inurl:contributors
Keyword “guest article OR post”
Keyword add blog post
Keyword “submit a guest post”
Keyword “Become an author”
Keyword submit post
Keyword “submit your own guest post”
Keyword “Contribute to our site”
Keyword magazines
Keyword “Submit an article”
Keyword “Add a blog post”
Keyword “Submit a guest post”
Keyword “Guest bloggers wanted”
Keyword “submit your guest post”
Keyword “guest article”
Keyword inurl:guest*posts
Keyword Become guest writer
Keyword inurl:guest*blogger
Keyword “become a contributor” OR “contribute to this site”
Now I don’t know about you, but having to search each one of these manually and then record the results into an Excel spreadsheet would push my sanity over the edge. I prefer to have a solid foundation and workflow and then attempt to automate the tasks that can be automated. With ScrapeBox, you can too – let’s take a look.
The very first task on your list should be to decide on your market’s topic. If you don’t know the main focus of your website, you should probably re-visit that BEFORE you start this step. Once you have your market focus down, it is time to enter that focus into the software. This is how it’s done: There are a variety of tools out there that allow you to find potential link prospects. I won’t name the obvious ones because they are… obviously obvious! The purpose of this post is to show the white-hat side of this software, because you must always try to find the good in everything going on, right?
The only issue with using ScrapeBox as a means of finding people to form a relationship with is the fact that proxies will be needed. Like other software which tracks your rankings in the SERPs or goes out around the web to find potential prospects, SB can be used with or without proxies. As you may be aware, automating requests to the search engines is not the best course of action, and the last thing you want is your IP banned due to the large number of requests being sent out. You can specify settings to reduce the amount of “stuff” being done at any given time though. For this reason, a reliable proxy service will be the order of the day if you decide to go this route.
I like to do some manual work and search out really high-value prospects on my own, however some people prefer the use of proxies for gathering large scale data. It really depends on the type of project you are doing, the amount of work involved and your standpoint on the issue (yes, some will argue that using proxy services is not white hat, while others will dismiss the idea of proxies being black hat as complete nonsense… which type are you?).
Everyone loves backlinks. It is what the web is made up of. Being involved in the search engine marketing industry will turn you into a link junkie/lover in no time at all. Love it or hate it, links have always played, and will for the foreseeable future play, a major role in the ranking power of any website. Why is this true? The reason is that the internet is made up entirely of links – without links, there would not be any internet as we know it today. You get from site A to site B via a hyperlink and until that changes, backlinks will be here for some time.
Backlinks can help and they can hurt. When they do hurt, it is up to you to find them and remove them or get them disavowed. Perhaps the photo below reminds you of those late nights you spent trying to fix your backlink profile. Due to this fact, backlink checkers have risen left and right over the years. Some are free, some are paid, some are better than others (as with anything in life) and others downright suck.
As you may be guessing, I am now going to let you know how you can utilize ScrapeBox as your own free backlink checker. There are no monthly fees involved, the data is simple, and you have a limit of up to 1,000 links returned in the report – however it is free. First, enter your URL inside the left text area. Copy it and then paste it into the right hand side, like so: Next, move your mouse over the “addons” tab along the top of the screen and select it. You will get something similar to the screenshot below:
If your addons tab looks different, it is probably because you have not installed any addons yet. All you have to do is click “Show available addons” and then install them one by one. It really is that easy! Once you click the backlink checker option underneath the addons tab, you will be presented with a screen like the one below:
All that is required for you to do now is select the “Load from ScrapeBox Harvester” option and then click START. When the results are done, you will have the option of downloading a file with up to 1,000 backlinks. Not too shabby for being free…
I WHOIS, you WHOIS, we all WHOIS for the “biz”! Seriously though, if you have a bunch of websites you want to check the WHOIS out on – ScrapeBox is a nice tool to get the job done. Of course you have your browser extensions and plugins, as well as manually looking up one domain at a time. The thing about this tool is that it is simple, but did you know that you can also check bulk URLs at the same time? Check out the photo below:
When you select the WHOIS option, you will be presented with a screen like the one below. All you have to do at this point is click the “Load” button and then “Load from ScrapeBox harvester”. Once your URLs are loaded, it’s as simple as pressing “Start”.
A word to note here: it is best if the proxies you are using for this addon module are of the SOCKS type. A lot of the proxies you may use online are not SOCKS and you may run into some errors, so keep that in mind.
Once your WHOIS information has completed, you will get a screen like the one below:
As you may have noticed the information brought back is rather simple, however it can be really useful if you have a bunch of URLs to check at the same time. Have any of you ever had to check the WHOIS for multiple sites at one time and if so, why?
Do we have any domain name junkies in the house? The TDNAM addon allows you to scrape GoDaddy Auctions for domains that are ending within 24 hours, and lets you search through them. As per usual, begin by clicking the addons tab and then installing the TDNAM addon if it is not already installed. The way it works is quite simple:
You enter your keyword in order to start the search (try to start broad and work your way down the narrower path).
You select the type of TLD you want to find (top level domain).
You click… Start.
Done.
Take a look at the photo below to see what I mean:
From here, you can right click on any of the listings and then proceed to GoDaddy’s website for more information. It’s great to get an overview of what is going on in the domain name market and you can get through a lot of data rather quickly.
As you can see, the items displayed to you include:
Traffic
Price
End Time
Domain Age
Export Options
How many of you were aware that ScrapeBox can be used hand-in-hand with GoDaddy?
The sitemap scraper is a useful tool if you want to pull the URLs out of your own website or your competitors’. As always, please install the addon from the available list of addons. What this addon does is load a valid sitemap from a domain and then scrape all the URLs out of that sitemap.
The first thing you should do is enter in your valid sitemap file and then copy/paste it over to the harvester section (just like we did in the previous examples). Take a look at the picture below for an idea of what this will look like:
It should also be noted that there are options for “Deep Crawl”. This allows the tool to go out to each link found and then also pull in more internal links from those originally found. Simple isn’t it?
Just as you may have imagined, the outbound link checker is a useful addition to the software in that it allows you to quickly glance at the number of links leaving a particular website. It also shows internal links. As always, please make sure that this module has been installed from the list of available addons (found along the top bar of the program’s interface). Once you have this addon installed, it’s time to get to work.
For this example, I chose 3 random URLs and entered them into the text area on the left hand side.
I then proceeded to copy those URLs over to the text harvester area on the right hand side.
After this, I navigated up to the addon tab and selected the Outbound Link Checker option.
Take a look at the screenshot below:
After I loaded in the URLs and clicked Start, I was presented with the following screen:
Another nice feature is the ability to filter out results based upon your own needs. In addition to that, you have the option of removing any error entries. This would be useful if you needed to come back with a list of websites that had more than “X” number of external links. See the photo below:
As far as outbound links go, that’s about it for this addon!
8: Bulk URL Shortener
If you have used Twitter or Bitly in the past – then you are definitely familiar with the process of shortening a URL in order to make it fit within a specified number of characters. The problem with many services is that you can only shorten one URL at a time. What if you had to shorten 95 of them? It would get a bit tedious wouldn’t it? Of course it would. So that’s where the Bulk URL Shortener comes into play. This addon allows you to:
Type in a list of URLs
Use URL shortening services to get new, short links back
Like we have been doing (if you haven’t gotten the pattern by now), you must make sure that the addon is installed via the available addons selection, under the Addons tab.
I had some trouble getting the URL shortener to work when entering a single URL, but the tool works fine when uploading a text file list of URLs, as in the photo below:
Either way, that is how you go about getting bulk URLs in tiny form – in no time at all!
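If you ever need the same thing in a script, most shortening services expose a simple HTTP endpoint. Here is a sketch assuming TinyURL’s public create endpoint (unofficial and unauthenticated, so it may be throttled; urls.txt is hypothetical):

    import requests

    def shorten(url):
        # Returns the shortened URL as plain text
        r = requests.get("https://tinyurl.com/api-create.php",
                         params={"url": url}, timeout=10)
        return r.text.strip()

    with open("urls.txt") as f:
        for line in f:
            if line.strip():
                print(shorten(line.strip()))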
I hate, you hate, we all hate… phishing bait! Who knew that malware could be your friend? With ScrapeBox, we can turn the most evil of evils into an inbound link opportunity by playing the Good Samaritan. Not all webmasters are the savvy type, and many of them do not even use Google Webmaster Tools; for that matter, some don’t even check their website more than once every 3 months. As is the case with internet vulnerabilities, malware and other exploits make their way around the net like an out of control pest. Why not help out others who are less fortunate and inform them? You may just get a link out of the process because they will be so grateful.
This addon connects to a Google database and checks the sites for any Malware currently or from days gone by. As the process is running, you are able to glance very quickly and see which ones are the offenders. Note that sometimes errors will occur for various reasons. For this example, I grabbed a list of pinging URLs. The screenshot below shows the system in action and what you can expect to see:
Pretty simple isn’t it? Now as far as link opportunities are concerned, this takes a bit of skill, but it could be worth the effort depending on the website you have found that is infected. Here are a few steps to take:
Run a list of URLs through the Malware Checker addon
Export the URLs and check them in OpenSiteExplorer for domain authority
Sort the URL list by descending domain authority
Use a browser WHOIS plugin or the built in WHOIS scraper of this tool in order to gather the contact information for each URL/webmaster
Here’s the hard part – you need to reach out to the webmaster and let them know that their site is hosting malware or some other exploit (do not visit the website as it may infect your computer).
Do not ask for a link at this point – wait for them to get back to you and for the issue to be resolved.
Once you have a dialogue with the owner, feel free to form a partnership somehow.
There is no set guideline on how to use this for a backlink opportunity. You have to be creative here as it will be different for every industry you are in.
When you want your information to get shared and indexed quickly, Google+ is a great way to get the job done. If for SOME UNKNOWN REASON you cannot use G+ for this venture, you can always resort to using an indexer service. With ScrapeBox, you have the option of utilizing a pre made list of indexing websites that are sure to get your pages noticed. Here is how you do it:
As you can see in the picture, there is a nice list pre-built for you. This is easy to find. All you have to do is:
Navigate to the addons tab on the main screen of the software
Select the “Show Available Addons” option
Browse to the Rapid Indexer and highlight it.
Download the list from the description section.
Once you have this accomplished (within a minute), you will now want to load up the actual addon itself. As always, make sure it is installed first!
On the addon screen, you have the option of loading up a bunch of URLs you own, alongside the list of indexer services. Note that the limit is roughly around 1,000,000 – yes, that is 1 million total. So if you have 100,000 indexer sites and 10 URLs that you own… well, you do the math. Personally though, the average for any normal white-hat webmaster is just a small select few URLs that they own, mostly one or two – along with a few hundred indexer sites – still though, a G+ post is an awesome way to get the job done as well. There is also the option to export the list. This isn’t really needed for your own personal use, unless you were planning to do some reporting on the matter.
This is a neat feature that many may not be aware of. As with everything else, the power of ScrapeBox is in the addons. Like usual, install it and once that is ready – launch it!
Now, what the page scanner does is let you analyze the HTML source code of a particular URL and then categorize that URL based on your own custom footprints… very cool. Think of the possibilities here. Let’s dig deeper. Below you see a screenshot of the addon window.
The very first thing you want to do is import your list of URLs to scan. For this example, I will use a well known WordPress blog (added through the “Load urls from” button above).
Next, you want to edit your own custom footprint (the edit footprints button above). That will look something like the following screen:
Once you have your footprints done and your URLs ready to go, you will want to begin the process of actually scanning the pages. This is how that will look:
With that all setup, you are now ready to begin and start categorizing your websites. Think really hard about how you could use this to your advantage…..
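As a sanity check on the concept, page scanning is essentially substring matching against each page’s source. Here is a tiny Python sketch with made-up categories and footprints (the markers are common platform fingerprints, but pick your own):

    import requests

    # Hypothetical categories mapped to HTML fragments that identify them
    FOOTPRINTS = {
        "WordPress": "wp-content",
        "Joomla": "com_content",
        "Drupal": "sites/default/files",
    }

    def categorize(url):
        html = requests.get(url, timeout=10).text.lower()
        return [name for name, marker in FOOTPRINTS.items() if marker in html]

    print(categorize("https://example.com/"))  # hypothetical URL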
For the “xxxx-th” time: if you don’t have this addon installed, hover over the addons tab along the top bar of the software, select “show available addons”, and install the Link Extractor module.
When the module appears on the screen, you see some options available to you along the bottom of the addon window, such as seen below:
Now what these options do are quite simple, but quite powerful. Here is a quick synopsis:
Internal: links that stay within the same domain
External: links that point to other domains
Both: Internal & external together
So, once you load up your URL list your screen will now appear like so:
Once your reporting is complete, you can then export the results as you see fit. Duplicates are removed automatically which is really nice. What you do with this list is where the real power is.
Everyone wants to find out what their competition is doing… Of course I’m right. You have the ability to do some competitor research with ScrapeBox. It’s by no means a be-all-end-all kind of research, but it IS there so you might as well take a look at it.
What this does is pull the number of results that come up in Google for a particular keyword. Everyone knows this number is not accurate (especially when you browse to the end pages of the SERPs and find out that the real number is actually a lot smaller), however for a general overview of the landscape it’s a great way to become familiar with the industry you are tackling. To start, you want to:
Enter the keywords you want to search for (this can be done with a text list)
Click Start!
See the picture below:
With the results finished, you will have a list of the results returned for every keyword. Do other tools do this job? Yes they do. Now you have another option should one of your tools no longer work.
Everyone likes to know when their pages were last cached in Google’s database, right? All repeat after me… YEEESSSS. Ok, great! So how would you like it if you knew when all of your pages were last cached? You could export the results, save them into an Excel spreadsheet (or OpenOffice), and sort the data to see which pages on your site have had issues being cached lately.
You’ll need the addon installed obviously, so once you install it from within Scrapebox – open it up and you will see a window like so:
I loaded up a text file with a couple of websites, and this is what appears after the Cache Extractor completes its work:
With the ability to export the results as:
CSV
Excel
TXT
…. your reporting options are great!
With this data available, you are now able to focus on the parts of your website that seem to be slow in getting regularly cached.
When you consider that your outreach campaigns take a lot of time to manage – and that the value/message of your contact with random website owners has to be right on target – the last thing you need is to get burned by the fact that the PageRank of said site is fake. Granted, I couldn’t care less about PR in today’s SEO market (compared with 2003) – however it is still a general rule of thumb for a website’s standing in the industry. I personally use other metrics for judging a site’s worth, but PR still has to be considered for completeness’ sake.
Anyways, here is how you look at page rank issues with ScrapeBox:
As most addons go with this tool – you can load up your information from the included options – I always use a text file myself, but each to their own. With your URLs loaded up, your screen will now appear like so:
All that is required now is to click “Start”. Of course the next screen will now look like:
The good news is that both Powered By Search and The Weather Network are who they say they are – isn’t this beautiful? So the next time you need to check if the Pagerank is being faked, spoofed, goofed, or what not – you can fire up bulk checking abilities through ScrapeBox.
I don’t think there is a single ethical person out there in our world who likes duplicate content – emphasis on honest/ethical. Have no fear, ScrapeBox is here! …. and here you thought that this tool was meant for spamming duplicate garbage – no-no my white-hat friend, this tool is just the opposite of that. Want to know how? Read on!
The first plan of action is of course to sit down and think about how you would use this addon. What do you normally undertake throughout your working day that involves removing or stripping duplicate “stuff”, so that you are left with only original material? Ask yourself this question and think about it. Tools do absolutely nothing unless they are used properly and in the right context.
For the sake of this example, I will use two example text files. They will be:
Colors
Numbers
So let’s look at how you would introduce these files into the program. Take a peek below:
First, you will want to select the sources of files to merge. In order to make this work as it should, the tool requires that the data all be in one location to begin with. In our example case, I am using two text files.
Once you select both text files from the windows explorer window that pops up, you then want to click the source button here:
This button is the source location and is where the merged files will appear when joined into a new file. When you click the source button, you will get a window:
Asking you where you want to save the file
Asking for the name of the merged file export
In my case, I called this file “combo”. Now we select the final source location for the output of our file, AFTER we remove either duplicate URLs or duplicate domains. Take a look at the screenshot below:
When you click the button for the source location (for the final file) – it will ask you what you want to call it & where to save it.
Here’s a handy tip to note – you are not limited to just URLs, you can prune email addresses, etc… oh the possibilities!
The domain name lookup feature is very useful and it is much easier than typing one name after another into GoDaddy’s URL finder (for 40 minutes) – because we all know that every domain we ever think of has already been registered. Reminds me of the time I literally typed a bunch of random letters into an email registration form on GMAIL and it told me that it had already been taken….
So what is the point of this tool? Well, just like the name sounds – it allows you to search for available domains, domains that can be registered – you name it – it’s quick, it’s easy – and it works. Let’s see what the process involves:
Unlike other sections in this guide – the domain checker is INSIDE the keyword scraper and is not accessed via the addons tab. Of course there had to be one trap in all of this!
Once you are inside the keyword scraper, you will want to enter in your phrases like so:
You enter in your phrases and click “Scrape” along the bottom right side there. When the results are returned, you should get something like so:
Really simple isn’t it? Next, select the domain button and you will be presented with the following:
As you can see, it really is straightforward. Of course, exact match domains are not worth your time, nor is domain squatting, but this is a great way to check a list of ideas very quickly without spending a heap of time typing in one thought after another. Time saved is time earned, is it not?
Are you a data junkie? How much do you love looking at page titles, descriptions, and even keywords? If this sounds like something that makes you excited – keep your pants on because ScrapeBox can handle that as well. The way this works is very straightforward.
The first thing you do is plug in a keyword to harvest your URLs from.
After your URLs are harvested, you then hover your mouse over the “Grab” tab on the right hand side – and select the “meta info from harvested URL list”.
Your screen should now look like the following:
Not very intimidating is it? Can you guess what the next step may be? If you guessed pressing the start button – you would be a genius! For completeness’ sake, here is what your screen should look like once things are rolling.
Last but certainly not least is the ability to check domains for their IP and country of origin, otherwise known as domain resolving or IP resolving. While this probably would not be used daily, it is still a handy feature to have available. The first step in this process is to fire up the proper addon by heading to the addons tab at the top of the tool – then selecting Domain Resolver. If the addon is not showing up, you need to install it from the list of available addons.
As you can see from the screenshot below, you have the ability to either load a pre-saved list of URLs or you can manually enter in domains by clicking “Add Entries”.
Once you have that finished, all you have to do is either tick the “try to resolve location” option or simply click Resolve to begin the process. When all is said and done, your results should look similar to the following:
That is how you resolve IPs and that is how you use ScrapeBox! While there are more uses for Scrapebox, this list is a pretty good summary of all the good you can do with the tool. As with anything in life, it can be used for both good and bad. So the next time someone tells you that ScrapeBox is nothing but a black hat tool – you can refer them to this post for the win.
I’d love to hear your feedback on this. Do you see yourself using this tool for any of your daily SEO tasks or have you used it in the past for any of the techniques mentioned here? If so – what was the reason you chose to use ScrapeBox over other sets of tools?
Need more help with Scrapebox? Still not sure how to grow your business with SEO? Learn more by booking a free 25 minute marketing assessment with us.
