March 27, 2024

Is Web Scraping Legal in the UK?

Data scraping: “everybody else was doing it, so I thought it …

By Angus McLean, Partner, Simmons & Simmons LLP
Published: 30 September 2015
I learnt to my cost as a schoolboy that while there can be considerable merit in taking a risk-based approach to compliance decisions, the “everybody else was doing it” defence tends not to hold much water if you are the unlucky one who gets caught. In no area of my practice have I been reminded about this salutary lesson more frequently in recent years than on the issue of data scraping.
A fast-growing trend
Call it what you will – data mining, web scraping or any of the other commonly used euphemisms – the practice of systematically extracting data from third party websites (without the permission of the website owner) is on the rise in the hedge fund industry. This can be done manually or, as is more often the case, by specially developed computer programs. The same legal issues arise in both cases, although it is arguable that manual extraction is marginally less risky because it tends to be harder for a website owner to detect than software-enabled scraping.
The mere fact that data scraping is becoming so ubiquitous seems to be the main cause of the commonly held assumption that it carries no legal risk. However, as the 13 or so European flight price comparison websites that have been the target of Ryanair’s wrath over the last 3-4 years can vouch, my childhood excuse does not provide much insurance against costly litigation.
Is data scraping illegal?
As things currently stand, many acts of data scraping are potentially illegal under UK law. The exact nature of the illegal activity depends on a variety of factors. Unfortunately, therefore, every situation needs to be analysed on its own facts. However, the two most common claims that can be brought against data scrapers are (a) breach of contract and (b) IP infringement (specifically, database right infringement). Depending on the precise circumstances, it is possible that a data scraper could also infringe copyright or trade mark rights, breach data protection legislation and/or contravene the Computer Misuse Act 1990.
To have a justified breach of contract claim, the owner of the website in question has to show that its terms and conditions of use (Ts&Cs) are enforceable and have been breached. The second requirement is obviously down to the wording of the Ts&Cs in question. However, it is becoming increasingly common for website Ts&Cs to expressly prohibit data scraping (or equivalent activities). The other issue is whether the data scraper is technically bound by the Ts&Cs in question.
At present there is no clear English case law on this issue. However, it is reasonably safe to assume that any Ts&Cs that a user has had to “click to accept” will be binding. If the Ts&Cs are binding and rule out data scraping, then in the vast majority of cases the website owner will have a valid breach of contract claim.
Determining whether there is also a database right infringement claim is a highly fact-specific exercise. The analysis will depend on:
the type and volume of data that is being extracted;
the frequency with which the data is being extracted; and
the level of investment that was required to develop the database from which the data is being extracted.
If the database required a substantial investment to put together and data is being taken on a systematic basis, database right infringement may also be an issue.
What are the risks in practice?
To date, relatively few European website owners seem to have been sufficiently exercised about third parties extracting data from their sites to pursue full-blown litigation. That said, as the Ryanair cases show, past performance is no guarantee of future results. It is, therefore, important to understand what the consequences of a data scraping complaint might be to provide the proper context for any risk-based analysis of whether those risks are outweighed by the benefits the scraping activities are expected to generate.
Depending on the type of claim that is available to the website owner in question, the key risks faced by a data scraper under UK law are likely to be:
injunction (including pre-trial injunctions);
financial liability (in the form of damages or, in certain circumstances, an account of profits);
disclosure obligations; and
reputational damage.
Although the final two risks are not really formal legal remedies, in my experience they have just as much of a deterrent effect as the more traditional legal remedies (e.g. injunctions and damages or an account of profits). This is because the prospect of having to disclose the type of investment activities for which the data in question is being used is often seen as the most commercially damaging consequence of a data scraping dispute. Of course, as with the other risks identified above, it may be possible to avoid having to disclose information about the ends to which the data is being applied by settling a potential claim before it escalates into full-blown litigation. However, assuming that will be possible in every case clearly involves a degree of risk in itself.
The calculation method that will be used to determine any financial liability a fund might incur also plays a big part in the risk analysis. The precise calculation method that applies will depend on the type of claims that are available to the website owner (in particular, whether it has a valid claim for database right infringement as well as breach of contract). If it is limited to a contractual claim, a website owner will generally only be able to recover the loss it has incurred. If it does not license out the data in question, its loss may well be negligible. In such circumstances the website owner might be able to claim damages based on a notional reasonable royalty set by the court by reference to the licence fees that are charged for similar datasets.
If a website owner also has a valid claim for database right infringement, it is entitled to opt for an account of the profits the fund has made from its infringing activities. Clearly, such an award could be substantial if the fund generates significant profits directly from the use of the data in question. However, it is often the case that the data in question forms just one data point in a model that includes a variety of other factors. In that case, the fund’s liability should be limited to the proportion of any profits that are attributable to the use of the data in question only.
This means that it may ultimately be difficult for a website owner to identify any significant profits that are directly attributable to the use of the data in question. Unfortunately, that will not necessarily prevent a sufficiently motivated website owner from trying.
First published in the AIMA Journal – Q3 2015
Is Screen Scraping and Web Crawling Legal in the UK?

Web Crawling and Screen Scraping
As the importance and value of big data continues to rise, so does the number of companies using web crawling services (or “spiders”) to obtain such data. Companies use spiders for screen scraping websites for information and data which is copied or extracted by the spider for the company to then analyse or publish on its own website.
This practice is understandably divisive, since the website owners who fall victim to scraping do not want their content to be taken and used without their consent, whilst the companies undertaking the scraping argue that they should be free to make use of information which is already in the public domain.
The best examples of screen scraping are price comparison sites, such as airline flight comparison sites. The comparison site uses a spider to scan the websites of the different airlines. The data scraped from those websites is then compiled on the comparison site, providing consumers with a very handy tool.
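To make the mechanics concrete, a spider of this kind does little more than fetch a results page, parse the HTML and pull out the fares. The sketch below is a minimal illustration only: the URL, the CSS selectors and the field names are invented, and nothing in it addresses the legal questions discussed in the rest of this article.

```python
# Minimal scraping sketch for illustration only. The URL, CSS selectors and
# field names are hypothetical; a real airline site will differ, and its terms
# of use and robots.txt should be checked before scraping anything.
import requests
from bs4 import BeautifulSoup

AIRLINE_URL = "https://www.example-airline.com/flights?from=LON&to=DUB"  # hypothetical

def scrape_fares(url: str) -> list:
    """Fetch a results page and extract (route, price) pairs from it."""
    response = requests.get(url, headers={"User-Agent": "price-comparison-bot/0.1"}, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    fares = []
    for row in soup.select(".fare-row"):  # ".fare-row", ".route" and ".price" are placeholder selectors
        route = row.select_one(".route")
        price = row.select_one(".price")
        if route and price:
            fares.append({"route": route.get_text(strip=True),
                          "price": price.get_text(strip=True)})
    return fares

if __name__ == "__main__":
    for fare in scrape_fares(AIRLINE_URL):
        print(fare)
```

A comparison service would run a job like this on a schedule across many airline sites, and it is exactly that systematic, repeated extraction which engages the contractual and intellectual property issues discussed below.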
Legal Status of Screen Scraping
The legal status of scraping is not a simple area of the law. Whilst there is no specific law prohibiting scraping, in recent years two prominent cases have delivered differing verdicts on the matter. In both cases the decisions (as to whether the technique of screen scraping was against the law) hinged on: (i) whether intellectual property rights subsisted in the data which was mined; (ii) whether the scraping infringed those rights; and (iii) whether it was possible for the website owner to limit the re-use of the data through the use of T&Cs.
In Ryanair Ltd v PR Aviation BV [2015] the Court of Justice of the European Union (“CJEU”) held that no intellectual property rights subsisted in the scraped data (Ryanair’s database of flight times and prices) and therefore the company scraping the data had not infringed Ryanair’s IP. This was because the database was not the result of the requisite creative input necessary to be afforded copyright protection.
The CJEU made it clear, however, that it is possible for a website owner to restrict the re-use of the mined data through its terms and conditions. This is something companies should bear in mind: if they access a website and consent to terms of use which contain a restriction on the re-use of the website data, and then go on to re-use that data, they may be liable for breach of contract.
The English courts took a different approach to the CJEU in NLA v Meltwater, where it was held that Meltwater’s use of news headlines which it had scraped from news websites, as links to the relevant news articles, was enough to amount to copyright infringement, because, unlike the database of flight times, the news headlines did require a certain amount of creative input. Meltwater subsequently went on to obtain the express consent of the NLA to mitigate its losses.
As explained above, there is no specific law against scraping or against using publicly available information which has been obtained through the use of scraping techniques. However, the owner of the website may have a claim against the user if the scraping and subsequent use of the information infringes the website owner’s intellectual property rights, or if the user is in breach of any terms and conditions of website use.
Most Common IP Rights
The most common IP rights which may be held to subsist in such information are copyright and database rights. Copyright protection is afforded to original works and is intended to prevent copying. As demonstrated in the two cases above, whether the content which is scraped is protected by copyright will depend on the facts of the case – to what extent is the data the result of creative input and therefore protected by copyright, and how much is being copied?
For example, if significant portions (“a substantial copy”) of text from a blog (i.e. creative material) are being scraped, this may well amount to copyright infringement.
Furthermore, it is possible for website owners to prohibit companies from scraping information from their sites through the use of contractual restrictions. If the user agrees to a website’s terms of use which include a restriction on scraping and using the publicly available information on the website, but decides to go ahead and scrape and use that information anyway, the website owner may be able to claim against the user for breach of contract.
Another consideration when screen scraping is data protection. If the information being gathered contains personal data (which may be less obvious than a name and address – a username or email address also qualifies), the user will need to ensure that they comply with data protection legislation.
Currently the relevant law in the UK in relation to data protection is the Data Protection Act 1998 (“DPA”). In May 2018 the new European General Data Protection Regulation (“GDPR”) comes into force and, despite Brexit, it will become law in the UK. The GDPR is far more extensive than the DPA and the penalties for non-compliance are far greater.
Under the GDPR, the individual to whom the personal data relates must give their consent to the processing of their data, and that consent must be freely given, specific, informed and unambiguous. It is hard to see how scraping data which includes personal data, without the individual’s consent, could fall within the law.
Another consideration is the management of the mined data. Managing big data is becoming an increasingly popular discussion point given the significant increase in its use and value. The Information Commissioner’s Office has published some guidance for organisations that handle big data:
Anonymisation – although it may not be possible to fully anonymise data, companies should try to mitigate the risk of re-identification to the point where the chance is extremely remote (a minimal pseudonymisation sketch follows this list).
Privacy impact assessments – companies should carry out such assessments before processing data to assess how its use of the data is likely to impact on the individuals whose data is being analysed.
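To illustrate the anonymisation point above, the sketch below replaces direct identifiers in scraped records with a keyed hash before analysis. The field names, the key handling and the example record are assumptions for illustration, and keyed hashing is pseudonymisation rather than full anonymisation: anyone holding the key could still re-identify the records, so this reduces rather than removes the risk.

```python
# Minimal pseudonymisation sketch (assumed field names and key handling).
# Keyed hashing is pseudonymisation, not full anonymisation: with the key,
# records can still be re-identified, so this only mitigates the risk the
# ICO guidance refers to.
import hashlib
import hmac
import os

# In practice the key would come from a secrets manager; the env var name is hypothetical.
SECRET_KEY = os.environ.get("PSEUDONYMISATION_KEY", "change-me").encode()

def pseudonymise(record: dict) -> dict:
    """Return a copy of the record with the 'email' field replaced by a keyed hash."""
    cleaned = dict(record)
    if "email" in cleaned:
        digest = hmac.new(SECRET_KEY, cleaned["email"].lower().encode(), hashlib.sha256)
        cleaned["email"] = digest.hexdigest()
    return cleaned

# Example with an invented record: the email is no longer stored in the clear,
# but records from the same person still share a stable identifier for analysis.
scraped = [{"username": "jsmith", "email": "j.smith@example.com", "comment": "Great flight"}]
print([pseudonymise(r) for r in scraped])
```

One design point worth noting: hashing with a key keeps a stable identifier so that records about the same individual can still be linked for analysis, whereas simply deleting the field would not.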
Web Crawler Summary
In summary, there is no specific piece of legislation which restricts the use of a web crawler to gather information. Website owners may, however, have legal rights against the scraping company under intellectual property law and contract law.
Each case will turn on its own facts, though, and much depends on what information is scraped from the websites. Companies should be wary of contractual provisions which they have agreed to in respect of a website’s terms of use – these may prohibit the user from taking and using the data from the site.
If the data being scraped includes personal data, then compliance with data protection law must also be borne in mind.
The only way to be truly certain that the rights of a website owner have not been infringed is to obtain their express consent to the screen scraping and subsequent use of the information.
Is Web Scraping Illegal? Depends on What the Meaning of the Word Is

Depending on who you ask, web scraping can be loved or hated.
Web scraping has existed for a long time and, in its good form, it’s a key underpinning of the internet. “Good bots” enable, for example, search engines to index web content, price comparison services to save consumers money, and market researchers to gauge sentiment on social media.
“Bad bots,” however, fetch content from a website with the intent of using it for purposes outside the site owner’s control. Bad bots make up 20 percent of all web traffic and are used to conduct a variety of harmful activities, such as denial of service attacks, competitive data mining, online fraud, account hijacking, data theft, stealing of intellectual property, unauthorized vulnerability scans, spam and digital ad fraud.
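Part of what separates the two is behaviour: a “good bot” typically identifies itself and honours a site’s robots.txt directives before fetching anything. The sketch below shows that convention in a few lines; the crawler name and URLs are hypothetical.

```python
# Minimal "good bot" sketch: consult robots.txt before fetching a page.
# The user-agent string and URLs are hypothetical examples.
from urllib import robotparser

import requests

USER_AGENT = "example-research-bot/0.1"  # hypothetical crawler name

def polite_fetch(url: str, robots_url: str):
    """Fetch url only if the site's robots.txt allows this user agent; otherwise return None."""
    rp = robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # download and parse robots.txt
    if not rp.can_fetch(USER_AGENT, url):
        return None  # the site has asked crawlers not to fetch this path
    return requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10).text

# Example (hypothetical site):
# html = polite_fetch("https://www.example.com/articles", "https://www.example.com/robots.txt")
```

robots.txt is a convention rather than a legal control, which is exactly why the question below still matters.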
So, is it Illegal to Scrape a Website?
So is it legal or illegal? Web scraping and crawling aren’t illegal by themselves. After all, you could scrape or crawl your own website, without a hitch.
Startups love it because it’s a cheap and powerful way to gather data without the need for partnerships. Big companies use web scrapers for their own gain but also don’t want others to use bots against them.
General opinion on the matter no longer seems to matter, because over the past 12 months it has become very clear that the US federal court system is cracking down more than ever.
Let’s take a look back. Web scraping started in a legal grey area where the use of bots to scrape a website was simply a nuisance. Not much could be done about the practice until, in 2000, eBay sought a preliminary injunction against Bidder’s Edge, claiming that the use of bots on its site against the company’s wishes amounted to a violation of trespass to chattels law.
The court granted the injunction because users had to opt in and agree to the terms of service on the site, and because a large number of bots could be disruptive to eBay’s computer systems. The lawsuit was settled out of court, so the issue never came to a head at trial, but the legal precedent was set.
In 2001, however, a travel agency sued a competitor which had “scraped” its prices from its website to help the rival set its own prices. The judge ruled that the fact that this scraping was not welcomed by the site’s owner was not sufficient to make it “unauthorized access” for the purpose of federal hacking laws.
Two years later, the legal standing of eBay v Bidder’s Edge was implicitly overruled in Intel v. Hamidi, a case interpreting California’s common law trespass to chattels. It was the wild west once again. Over the next several years the courts ruled time and time again that simply putting “do not scrape us” in your website terms of service was not enough to create a legally binding agreement; to enforce such a term, a user must explicitly agree or consent to it. This left the field wide open for scrapers to do as they wished.
Fast forward a few years and you start to see a shift in opinion. In 2009 Facebook won one of the first copyright suits against a web scraper. This laid the groundwork for numerous lawsuits tying web scraping to direct copyright violation and very clear monetary damages. The most recent case was AP v Meltwater, in which the courts stripped away what is referred to as fair use on the internet.
Previously, people could rely on fair use and deploy web scrapers for academic, personal, or information-aggregation purposes. The court gutted the fair use defence that companies had used to justify web scraping, determining that even small percentages of the content, sometimes as little as 4.5%, are significant enough not to fall under fair use. The only caveat the court made turned on the simple fact that this data was available for purchase; had it not been, it is unclear how it would have ruled. Then, a few months back, the gauntlet was thrown down.
Andrew Auernheimer was convicted of hacking based on the act of web scraping. Although the data was unprotected and publicly available via AT&T’s website, the fact that he wrote web scrapers to harvest that data en masse amounted to a “brute force attack”. He did not have to consent to any terms of service to deploy his bots and conduct the web scraping. The data was not available for purchase. It wasn’t behind a login. He did not even gain financially from aggregating the data. Most importantly, it was buggy programming by AT&T that exposed the information in the first place. Yet Andrew was at fault. This isn’t just a civil matter anymore; the charge is a felony on a par with hacking or denial of service attacks, and carries up to a 15-year sentence for each count.
In 2016, Congress passed its first legislation specifically targeting bad bots, the Better Online Ticket Sales (BOTS) Act, which bans the use of software that circumvents security measures on ticket seller websites. Automated ticket scalping bots use several techniques to do their dirty work, including web scraping that incorporates advanced business logic to identify scalping opportunities, input purchase details into shopping carts, and even resell inventory on secondary markets.
To counteract this type of activity, the BOTS Act:
Prohibits the circumvention of a security measure used to enforce ticket purchasing limits for an event with an attendance capacity of greater than 200 persons.
Prohibits the sale of an event ticket obtained through such a circumvention violation if the seller participated in, had the ability to control, or should have known about it.
Treats violations as unfair or deceptive acts under the Federal Trade Commission Act. The bill provides authority to the FTC and states to enforce against such violations.
In other words, if you’re a venue, organization or ticketing software platform, it is still on you to defend against this fraudulent activity during your major onsales.
The UK seems to have followed the US with its Digital Economy Act 2017, which received Royal Assent in April. The Act seeks to protect consumers in a number of ways in an increasingly digital society, including by “cracking down on ticket touts by making it a criminal offence for those that misuse bot technology to sweep up tickets and sell them at inflated prices in the secondary market.”
In the summer of 2017, LinkedIn sued hiQ Labs, a San Francisco-based startup. hiQ was scraping publicly available LinkedIn profiles to offer clients, according to its website, “a crystal ball that helps you determine skills gaps or turnover risks months ahead of time.”
You might find it unsettling to think that your public LinkedIn profile could be used against you by your employer.
Yet a judge on Aug. 14, 2017 decided this is okay. Judge Edward Chen of the U.S. District Court in San Francisco agreed with hiQ’s claim in a lawsuit that Microsoft-owned LinkedIn violated antitrust laws when it blocked the startup from accessing such data. He ordered LinkedIn to remove the barriers within 24 hours. LinkedIn has filed an appeal.
The ruling contradicts previous decisions clamping down on web scraping. And it opens a Pandora’s box of questions about social media user privacy and the right of businesses to protect themselves from data hijacking.
There’s also the matter of fairness. LinkedIn spent years creating something of real value. Why should it have to hand it over to the likes of hiQ — paying for the servers and bandwidth to host all that bot traffic on top of their own human users, just so hiQ can ride LinkedIn’s coattails?
I am in the business of blocking bots. Chen’s ruling has sent a chill through those of us in the cybersecurity industry devoted to fighting web-scraping bots.
I think there is a legitimate need for some companies to be able to prevent unwanted web scrapers from accessing their site.
In October of 2017, and as reported by Bloomberg, Ticketmaster sued Prestige Entertainment, claiming it used computer programs to illegally buy as many as 40 percent of the available seats for performances of “Hamilton” in New York and the majority of the tickets Ticketmaster had available for the Mayweather v. Pacquiao fight in Las Vegas two years ago.
Prestige continued to use the illegal bots even after it paid $3.35 million to settle New York Attorney General Eric Schneiderman’s probe into the ticket resale industry.
Under that deal, Prestige promised to abstain from using bots, Ticketmaster said in the complaint. Ticketmaster asked for unspecified compensatory and punitive damages and a court order to stop Prestige from using bots.
Are the existing laws too antiquated to deal with the problem? Should new legislation be introduced to provide more clarity? Most sites don’t have any web scraping protections in place. Do the companies have some burden to prevent web scraping?
As the courts try to further decide the legality of scraping, companies are still having their data stolen and the business logic of their websites abused. Instead of looking to the law to eventually solve this technology problem, it’s time to start solving it with anti-bot and anti-scraping technology today.
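As a concrete, if deliberately simplified, illustration of what anti-scraping technology means at its most basic, the sketch below combines a per-client rate limit with a crude User-Agent check. The thresholds and bot signatures are assumptions for illustration only; commercial anti-bot products rely on far richer behavioural and fingerprinting signals than this.

```python
# Minimal anti-scraping sketch: sliding-window rate limiting per client IP
# plus a crude User-Agent check. Thresholds and signatures are illustrative
# assumptions, not a description of any particular vendor's product.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 120                                # assumed threshold
SUSPICIOUS_AGENTS = ("scrapy", "python-requests", "curl")    # assumed signatures

_request_log = defaultdict(deque)   # client IP -> timestamps of recent requests

def allow_request(client_ip: str, user_agent: str, now=None) -> bool:
    """Return True if the request should be served, False if it should be blocked."""
    now = time.time() if now is None else now

    # Block clients that identify themselves as common scraping tools.
    if any(sig in user_agent.lower() for sig in SUSPICIOUS_AGENTS):
        return False

    # Sliding-window rate limit: drop timestamps older than the window, then count.
    window = _request_log[client_ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True

# Example: a burst of 200 requests in the same second from one IP; the last 80 are refused.
results = [allow_request("203.0.113.7", "Mozilla/5.0", now=0.0) for _ in range(200)]
print(results.count(False))                                   # -> 80 with the assumed threshold
print(allow_request("198.51.100.9", "python-requests/2.31"))  # -> False (blocked by UA check)
```

In a real deployment a check like this would sit in front of the application, in a reverse proxy or WAF, and determined scrapers spoof user agents and rotate IPs, which is precisely why dedicated anti-bot and anti-scraping tooling goes much further than this.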

Frequently Asked Questions: Is Web Scraping Legal in the UK?

Is it legal to scrape websites in the UK?

As explained above, there is no specific law against scraping or using publicly available information which has been obtained through the use of scraping techniques. However, the owner of the website may have a claim against the user if the scraping and subsequent use of the information infringes the website owner’s … (Feb 6, 2017)

Can I get in trouble for web scraping?

Web scraping and crawling aren’t illegal by themselves. … Web scraping started in a legal grey area where the use of bots to scrape a website was simply a nuisance. Not much could be done about the practice until, in 2000, eBay sought a preliminary injunction against Bidder’s Edge.
