• April 18, 2024

How To Create A Web Crawler

How to build a web crawler? – Scraping-bot.io

In the era of big data, web scraping is a lifesaver. To save even more time, you can couple ScrapingBot with a web crawling bot.
What is a web crawler?
A crawler, or spider, is an internet bot that indexes and visits every URL it encounters. Its goal is to visit a website from end to end, know what is on every webpage, and be able to find the location of any piece of information. The best-known web crawlers are those of the search engines, GoogleBot for example. When a website goes online, these crawlers visit it and read its content so it can be displayed on the relevant search result pages.
How does a web crawler work?
Starting from a root URL or a set of entry points, the crawler fetches the webpages and looks for other URLs to visit, called seeds, in each page. All the seeds found on a page are added to its list of URLs to be visited. This list is called the horizon. The crawler organises the links in two lists: ones to visit, and ones already visited. It keeps visiting links until the horizon is empty.
Because the list of seeds can be very long, the crawler has to organise it according to several criteria, and prioritise which URLs to visit first and which to revisit. To know which pages are more important to crawl, the bot will consider how many links point to a URL and how often it is visited by regular users.
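To make these mechanics concrete, here is a minimal breadth-first crawler sketch in Python. The requests and beautifulsoup4 libraries, the page limit and the error handling are illustrative choices of this sketch, not something the article prescribes:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def crawl(root_url, max_pages=100):
    horizon = [root_url]                 # URLs still to be visited
    visited = set()                      # URLs already visited
    while horizon and len(visited) < max_pages:
        url = horizon.pop(0)
        if url in visited:
            continue
        try:
            page = requests.get(url, timeout=10)
        except requests.RequestException:
            continue                     # skip unreachable pages
        visited.add(url)
        soup = BeautifulSoup(page.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if link not in visited:      # add unseen seeds to the horizon
                horizon.append(link)
    return visited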
What is the difference between a web scraper and a web crawler?
Crawling, by definition, always implies the web. A crawler's purpose is to follow links in order to reach numerous pages and analyse their metadata and content.
Scraping is possible outside of the web. For example, you can retrieve information from a database. Scraping is pulling data from the web or from a database.
Why do you need a web crawler?
With web scraping, you save a huge amount of time by automatically retrieving the information you need instead of looking for it and copying it manually. However, you still need to scrape page after page. Web crawling allows you to collect, organise and visit all the pages linked from a root page, with the possibility to exclude some links. The root page can be a search result or a category page.
For example, you can pick a product category or a search result page from Amazon as an entry point, crawl it to scrape all the product details, and limit the crawl to the first 10 pages of suggested products as well.
How to build a web crawler?
The first thing you need to do is set up two lists:
Visited URLs
URLs to be visited (the queue)
To avoid crawling the same page over and over, a URL needs to move automatically to the visited-URLs list once you've finished crawling it. On each webpage, you will find new URLs. Most of them will be added to the queue, but some of them might not add any value for your purpose. That is why you also need to set rules for URLs you're not interested in.
Deduplication is a critical part of web crawling. On some websites, and particularly on e-commerce ones, a single webpage can have multiple URLs. As you want to scrape this page only once, the best way to do so is to look for the canonical tag in the code. All the pages with the same content will have this common canonical URL, and this is the only link you will have to crawl and scrape.
Here's an example of a canonical tag in HTML:

<link rel="canonical" href="https://example.com/product-page" />

And here is an example, in JavaScript, of the queue-handling functions of a crawler:

const linksQueue = [];   // URLs to be visited (the horizon)
const seenLinks = {};    // URLs already visited
let previousDepth = 0;
// Returns the next link to crawl, logging each time the crawl goes one level deeper.
function getNextLink() {
  const nextLink = linksQueue.shift();
  if (nextLink.depth > previousDepth) {
    previousDepth = nextLink.depth;
    console.log(`------- CRAWLING ON DEPTH LEVEL ${previousDepth} --------`);
  }
  return nextLink;
}
function peekInQueue() {
  return linksQueue[0];
}
// Adds links we've visited to the seen list.
function addToSeen(linkObj) {
  seenLinks[linkObj.url] = linkObj;
}
// Returns whether the link has been seen.
function linkInSeenListExists(linkObj) {
  return seenLinks[linkObj.url] == null ? false : true;
}
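As a complement, here is a minimal Python sketch of the canonical-URL lookup described above, assuming the requests and beautifulsoup4 libraries (the article itself does not prescribe any):

import requests
from bs4 import BeautifulSoup

def canonical_url(url):
    """Return the page's canonical URL, or the URL itself if no canonical tag is declared."""
    html = requests.get(url, timeout=10).text
    tag = BeautifulSoup(html, "html.parser").find("link", rel="canonical")
    return tag["href"] if tag and tag.has_attr("href") else url

# Crawl and scrape a page only when canonical_url(page) has not been seen before.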
How to build a simple web crawler | by Low Wei Hong

A step-by-step guide to scraping the Best Global University Ranking.

Three years ago, I was working as a student assistant in the Institutional Statistics Unit at NTU Singapore. I was required to obtain the Best Global University Ranking by manually copying from the website and pasting it into an Excel sheet. I was frustrated, as my eyes were tired from looking at the computer screen continuously for long hours. Hence, I was thinking about whether there was a better way to do it. At that time, I googled for automation and found the answer: web scraping. Since then, I have managed to create 100+ web crawlers, and here is my first-ever web scraper that I would like to share.

Previously, what I did was use requests plus BeautifulSoup to finish the task. However, three years later, when I looked back at the same website, I found out that there is a way to get the JSON data instead, which works way faster.

If you are thinking of automating your boring and repetitive tasks, please promise me you'll read till the end. You will learn how to create a web crawler so that you can focus on more value-added tasks.

In this article, I would like to share how I built a simple crawler to scrape universities' rankings from the website.

The first thing to do when you want to scrape a website is to inspect the web elements. Why do we need to do that? This is actually to find out whether there exists a more robust way to get the data, or a way to obtain cleaner data. For the former, I did not dig deep enough to find the API this time. However, I did find a way to extract cleaner data so that I could reduce the data cleansing effort.

If you do not know how to inspect web elements, you just need to navigate to any position of the webpage, right-click, click on Inspect, then click on the Network tab. After that, refresh your page and you should see a list of network activities appear one by one. Let us look at the specific activity that I am selecting with my cursor in the screenshot above (i.e. the "search?region=africa&…" one).

After that, please refer to the purple box in the screenshot above, highlighting the URL that the browser sends the request to in order to get the data to be presented to you.

So, we can imitate the browser behaviour by sending the request to that URL and getting the data we need, right? But before that, why do I choose to call the request URL instead of the original website URL? Let's click on the Preview tab: you will notice that all the information we need, including university ranking, address, country, etc., is in the results field, which is highlighted in the blue box. This is the reason why we scrape through this URL. The data returned by the URL is in a very nice format: JSON.

The above screenshot shows a comparison between the code from today and from three years before. Three years before, when I was a newbie in web scraping, I just used requests, BeautifulSoup, tons of XPath expressions and heavy data cleaning processes to get the data I needed. However, if you look at the code that I have written today, I just need to use httpx to get the data, and no data cleaning is needed.

For your information, httpx is built on top of requests, but it supports additional functions: for instance, it provides async APIs, and with httpx you can send HTTP/2 requests. For a more complete comparison, you may refer to this article.

Request URL:

Now, let's pay attention to the link we are going to use, as shown above. You will notice that you can change the values for region and subject, as they are parameters of the URL. (For more information on URL parameters, here is a good read.)
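As a rough illustration of this approach, here is a minimal Python sketch using httpx. The article's actual request URL is not reproduced in the text, so the base URL below is a placeholder; the results field and the last_page and page parameters are taken from the article's description of the JSON, discussed just below:

import httpx

# Placeholder: the article's real request URL is not reproduced in the text.
BASE_URL = "https://www.example.com/search"

def fetch_rankings(region="africa", subjects="agricultural-sciences"):
    """Fetch every page of ranking results for one region/subject combination."""
    params = {"region": region, "subjects": subjects}
    first = httpx.get(BASE_URL, params=params).json()
    rows = first["results"]                  # the 'results' field holds the ranking data
    last_page = first["last_page"]           # total page count, as explained below
    for page in range(2, last_page + 1):     # equivalent to appending &page={page_number}
        data = httpx.get(BASE_URL, params=dict(params, page=page)).json()
        rows.extend(data["results"])
    return rows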
However, do note that the values for them are limited to the regions and subjects provided by the website. For instance, you can change region=africa to region=asia, or subjects=agricultural-sciences to subjects=chemistry. If you are interested to know what the supported regions and subjects are, you can visit my repo to check.

After knowing how to query this URL to obtain the data you need, the leftover part is how many pages you need to query for a particular combination of region and subject.

Now, let's take this URL as an example: copy and paste the URL into your browser and press Enter, then use Command+F to search for the keyword "last_page", and you will find a similar screenshot as below. *Do note that I have installed a Chrome extension that helps me prettify the plain data into JSON format. This is why you can see the data shown in my browser nicely formatted.

Congratulations, you managed to find the last_page variable as indicated above. Now, the only remaining step is how to go to the next page and get the data if last_page is larger than 1.

Here is how I figured out the way to navigate to page 2. Take this link as an example: click on page number 2, and then look at the right panel. Pay attention to the purple box: you will notice there is an addition of page=2 in the Request URL. This means that you just need to append &page={page_number} to the original request URL in order to navigate through the different pages.

Now you have the whole idea of how to create a web scraper to obtain the data from the website. If you would like to have a look at the full Python code, feel free to visit the source.

Thank you so much for reading until the end. Here is what I want you to take away from this article:

Know that there are many different ways to scrape data from a website, for instance finding the link that serves the data as JSON. Spend some time inspecting the website: if you manage to find the API used to retrieve the data, it can save you a lot of time.

The reason I am comparing my code from three years before with the code I have written today is to give you an idea of how you can improve your web crawling and coding skills through continuous hard work. Keep at it, and the result will definitely come. - Low Wei Hong

If you have any questions or ideas to ask or add on, feel free to comment below!

Low Wei Hong is a Data Scientist at Shopee. His experience involves mostly crawling websites, creating data pipelines and implementing machine learning models to solve business problems. He provides crawling services that can deliver the accurate and cleaned data you need. You can visit this website to view his portfolio and also to contact him for crawling services. You can connect with him on LinkedIn and Medium.
How to Build a Web Crawler – A Guide for Beginners

As a newbie, I built a web crawler and successfully extracted 20k data points from the Amazon Career website. How can you set up a crawler and create a database that eventually turns into your asset, at no cost? Let's dive right in.
What is a web crawler?
A web crawler is an internet bot that indexes the content of websites. It then extracts the target information and data automatically and exports it into a structured format (list/table/database).
Why do you need a Web Crawler, especially for Enterprises?
Imagine Google Search didn't exist. How long would it take you to find a recipe for chicken nuggets without typing in the keyword? There are 2.5 quintillion bytes of data created each day. Without Google Search, it would be practically impossible to find that information.
From Hackernoon by Ethan Jarrell
Google Search is a unique web crawler that indexes websites and finds pages for us. Besides the search engine, you can build a web crawler to help you achieve:
1. Content aggregation: it compiles information on niche subjects from various sources into one single platform. As such, it is necessary to crawl popular websites in a timely manner to fuel your platform.
2. Sentiment analysis: also called opinion mining, it is, as the name indicates, the process of analysing public attitudes towards a product or service. Accurate evaluation requires a consistent set of data, and a web crawler can extract tweets, reviews, and comments for analysis.
3. Lead generation: every business needs sales leads; that's how businesses survive and prosper. Let's say you plan a marketing campaign targeting a specific industry. You can scrape email addresses, phone numbers, and public profiles from an exhibitor or attendee list of trade fairs, like the attendees of the 2018 Legal Recruiting Summit.
How to build a web crawler as a beginner?
A. Scraping with a programming language
Writing scripts in a programming language is the approach predominantly used by programmers. It can be as powerful as you make it. Here is an example of a snippet of bot code.
From Kashif Aziz
Web scraping using Python involves three main steps:
1. Send an HTTP request to the URL of the webpage. The server responds to the request by returning the content of the webpage.
2. Parse the webpage. A parser creates a tree structure of the HTML, as webpages are intertwined and nested together. The tree structure helps the bot follow the paths we define and navigate through them to get the information.
3. Use a Python library to search the parse tree and extract the data (see the sketch after this list).
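Here is a minimal sketch of those three steps using requests and BeautifulSoup; the URL and the selector are placeholders, not details from the original article:

import requests
from bs4 import BeautifulSoup

# Step 1: send an HTTP request; the server responds with the page content.
response = requests.get("https://www.example.com/jobs")   # placeholder URL

# Step 2: parse the webpage into a tree structure.
soup = BeautifulSoup(response.text, "html.parser")

# Step 3: search the parse tree and extract the data.
for title in soup.select("h2.job-title"):                 # placeholder selector
    print(title.get_text(strip=True))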
Among the programming languages used for web crawlers, Python is easy to implement compared with PHP and Java. It still has a learning curve steep enough to prevent many non-technical professionals from using it. Even though writing your own crawler is an economical solution, it's still not sustainable given the extended learning cycle within a limited time frame.
However, there is a catch! What if there is a method that can get you the same results without writing a single line of code?
B. A web scraping tool comes in handy as a great alternative.
There are many options, but I use Octoparse. Let’s go back to the Amazon Career webpage as an example:
Goal: build a crawler to extract administrative job opportunities including Job title, Job ID, description, basic qualification, preferred qualification, and page URL.
URL:
1. Open Octoparse and select “Advanced Mode”. Enter the above URL to set up a new task.
2. As one can expect, the job listings include detail pages that spread over multiple pages. As such, we need to set up pagination so that the crawler can navigate through them. To do this, click the "Next Page" button and choose "Loop click single button" from the Action Tip panel.
3. As we want to click through each listing, we need to create a loop item. To do this, click one job listing. Octoparse will work its magic and identify all other job listings from the page. Choose the “Select All” command from the Action Tip Panel, then choose “Loop Click Each Element” command.
4. Now, we are on the detail page, and we need to tell the crawler to get the data. In this case, click "Job Title" and select the "Extract the text of the selected element" command from the Action Tip panel. Then repeat this step to get "Job ID", "Description", "Basic Qualification", "Preferred Qualification" and "Page URL".
5. Once you finish setting up the extraction fields, click “Start Extraction” to execute.
However, that's not all!
SaaS software requires new users to take a considerable amount of training before they can thoroughly enjoy its benefits. To eliminate the difficulty of setting up and using it, Octoparse adds "Task Templates" covering over 30 websites, so that starters can grow comfortable with the software. The templates let users capture data without any task configuration.
As you gain confidence, you can use Wizard Mode to build your crawler. It has step-by-step guides to help you develop your task. For experienced experts, "Advanced Mode" should be able to extract enterprise volumes of data. Octoparse also provides rich training materials for you and your employees to get the most out of the software.
Final thoughts
Writing scripts can be painful, as it has high initial and maintenance costs. No two web pages are identical, so we need to write a script for every single site. That is not sustainable if you need to crawl many websites. Besides, websites are likely to change their layout and structure over time, in which case we have to debug and adjust the crawler accordingly. A web scraping tool is more practical for enterprise-level data extraction, with less effort and cost.
In case you have difficulties finding a web scraping tool, I have compiled a list of the most popular scraping tools. This video can walk you through picking the tool that fits your needs! Feel free to take advantage of it.
Author: Ashley Ng
Ashley is a data enthusiast and passionate blogger with hands-on experience in web scraping. She focuses on capturing web data and analysing it in a way that empowers companies and businesses with actionable insights. Read her blog here to discover practical tips and applications of web data extraction.
Japanese article: How to Build a Web Crawler from Scratch. Articles about web scraping can also be read on the official website. Spanish article: Cómo Construir Un Web Rastreador (Crawler) Desde Cero: Una Guía para Principiantes. You can also read web scraping articles on the official website.
