September 23, 2022
7 min.

How to Scrape the Web Without Being Blocked or Blacklisted: Some Tips

Web scraping can be tough, especially since most websites actively try to stop developers from scraping them using tactics such as IP address identification, HTTP request header checking, CAPTCHAs, JavaScript checks, and more. Developers can, however, build web scrapers that are very hard to detect by using a number of related techniques to get around these restrictions. Here are a few quick tips for scraping websites without being blocked:

  1. Set Real Request Headers

Real web browsers send a wide variety of headers, and cautious websites can check any of them to block your web scraper. Your scraper's activity should therefore resemble that of a typical user visiting the target website as closely as possible.

*You can check the request headers sent by your current browser at httpbin (https://httpbin.org/headers).

User-Agent is one of the most important headers for web scraping. That string tells the server which browser, operating system, vendor, and version the request is coming from. Set these headers using the HTTP library of your choice so that the target website thinks your web scraper is a normal web browser.
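For example, here is a minimal sketch using Python's requests library. The header values and the example.com URL are placeholders; mirror whatever your own browser actually sends.

```python
import requests

# Copy realistic headers from a real browser session so the request looks
# like normal traffic. Check https://httpbin.org/headers in your own browser
# and use the values you see there -- these are just examples.
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/105.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

response = requests.get("https://example.com", headers=headers)
print(response.status_code)
```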

  2. Proxies

Websites may block your IP address if your scraper sends too many requests from it. In that case you can use a proxy server with a different IP address; it acts as a bridge between your web scraper and the website's server.

Proxies come in a variety of forms. Here is an article about them. 

To make your activity appear to come from many different users rather than looking suspicious, you should ideally rotate IP addresses. Having another IP available also helps when one gets blocked or blacklisted.

Yet another crucial difference between proxies is that some use residential IPs while others use data center IPs. Keep in mind that data center IPs are reliable but easy to spot and block. Residential IP proxies are harder to detect because they belong to an ISP (Internet Service Provider) that may assign them to real users.
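Below is a rough sketch of rotating proxies with Python's requests library. The proxy addresses and credentials are placeholders for whatever your proxy provider gives you.

```python
import random
import requests

# Placeholder proxy endpoints -- substitute the ones from your provider.
PROXIES = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]

def fetch(url):
    # Pick a different proxy for each request so traffic is spread
    # across several IP addresses.
    proxy = random.choice(PROXIES)
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

resp = fetch("https://example.com")
print(resp.status_code)
```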

  3. Set a Referrer-URL

The Referer HTTP request header tells the website which site you are coming from. Since Google Search is typically the default search engine, it is generally a good idea to set this header so that it appears as though you are arriving from Google.

Using a tool like SimilarWeb, you can also find out which sites most frequently refer visitors to any given site; often this will be a social media or video platform such as YouTube.
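A minimal sketch with Python's requests library might look like this (note that the header is actually spelled Referer, with a single "r" in the middle; the target URL is a placeholder):

```python
import requests

# Make the visit appear to come from a Google search result.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Referer": "https://www.google.com/",
}

resp = requests.get("https://example.com", headers=headers)
print(resp.status_code)
```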

  4. Set random request intervals

It will be quite obvious that your web scraper is a robot if it sends one request every second for hours on end. No human continuously refreshes a website at precise one-second intervals. To avoid being detected and blocked, schedule your requests at random intervals so that they appear to come from actual people. Delays of 2-10 seconds are usually fine.

Be polite and don't overload a page with requests. A website's robots.txt file often contains a "Crawl-delay" line that specifies how long a web scraper should wait between requests. If it's there, follow it.
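A simple sketch of randomized delays in Python (the URLs are placeholders):

```python
import random
import time
import requests

# Wait a random 2-10 seconds between requests so the access pattern
# doesn't look machine-generated.
urls = ["https://example.com/page/1", "https://example.com/page/2"]

for url in urls:
    resp = requests.get(url)
    print(url, resp.status_code)
    time.sleep(random.uniform(2, 10))  # also respect any Crawl-delay in robots.txt
```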

  5. Use headless browsers

To avoid being blocked when web scraping, your interactions with the target website need to look like those of a typical user accessing the URLs. Using a headless browser is one of the best ways to achieve this. Headless browsers function like real web browsers but don't require a graphical user interface.

Most widely used web browsers, including Firefox and Google Chrome, offer a headless mode. Even when you use an official browser in headless mode, you still need to make its behavior look realistic. A usual practice is to set a few specific request headers, such as a User-Agent.
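For illustration, here is a short sketch using Selenium with headless Chrome. It assumes Selenium and a matching ChromeDriver are installed, and the User-Agent string and URL are just example values.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Run Chrome without a visible window and present a realistic User-Agent.
options = Options()
options.add_argument("--headless")
options.add_argument(
    "--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"
)

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
html = driver.page_source  # fully rendered page, including JavaScript output
driver.quit()
```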

  6. Look out for Honeypot traps

Some websites make links invisible to normal users precisely so they can detect when a bot follows them. When that happens, you'll be quickly identified and blocked. To avoid this, check the page's CSS for properties like "display: none" or "visibility: hidden" before following a link.

Honeypot traps can also be set up by making links the same color as the page background. Since that background is frequently white, it's worth checking the CSS for the corresponding hex value, e.g. "color: #fff;" or "color: #ffffff;".
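As a rough first-pass filter, you can skip links whose inline style hides them, as in this BeautifulSoup sketch; links hidden via external stylesheets or background-colored text would need extra checks.

```python
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com")
soup = BeautifulSoup(resp.text, "html.parser")

safe_links = []
for link in soup.find_all("a", href=True):
    style = (link.get("style") or "").replace(" ", "").lower()
    if "display:none" in style or "visibility:hidden" in style:
        continue  # likely a honeypot link -- don't follow it
    safe_links.append(link["href"])
```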

  7. Use a CAPTCHA-solving service

One of the most challenging tasks when scraping a URL is getting past CAPTCHAs, the tests designed to distinguish people from bots.

It is possible to get around these restrictions thanks to fully integrated services like Scrape.Fun, as well as dedicated CAPTCHA-solving services such as 2Captcha or AntiCAPTCHA that can be easily plugged into your scraper.
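As an illustration only, here is a rough sketch of solving a reCAPTCHA through 2Captcha's classic in.php/res.php HTTP endpoints. The API key, site key, and page URL are placeholders, and you should verify the exact parameters against the service's current documentation.

```python
import time
import requests

# Placeholder values -- replace with your own key and the target site's data.
API_KEY = "your-2captcha-api-key"
SITE_KEY = "target-site-recaptcha-site-key"
PAGE_URL = "https://example.com/login"

# 1. Submit the CAPTCHA and get a task id.
submit = requests.get(
    "http://2captcha.com/in.php",
    params={"key": API_KEY, "method": "userrecaptcha",
            "googlekey": SITE_KEY, "pageurl": PAGE_URL, "json": 1},
).json()
task_id = submit["request"]

# 2. Poll until the token is ready, then pass it to the target site's form.
while True:
    time.sleep(5)
    result = requests.get(
        "http://2captcha.com/res.php",
        params={"key": API_KEY, "action": "get", "id": task_id, "json": 1},
    ).json()
    if result["request"] != "CAPCHA_NOT_READY":  # status string used by the API
        token = result["request"]
        break
```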

  8. Track Website Changes

Many websites change their layouts for various reasons, and these changes frequently break scrapers. Account for this when building your scraper and set up ongoing monitoring so you can verify that your crawler is still working (normally, simply counting the number of successful requests per crawl is enough).
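A minimal sketch of such a health check in Python, with an assumed success-rate threshold, might look like this:

```python
SUCCESS_THRESHOLD = 0.9  # assumed acceptable success rate for a crawl

def check_crawl_health(results):
    """results: list of booleans, one per scraped page (True = success)."""
    success_rate = sum(results) / len(results) if results else 0.0
    if success_rate < SUCCESS_THRESHOLD:
        # The site layout (or its blocking rules) may have changed.
        print(f"Warning: only {success_rate:.0%} of requests succeeded")
    return success_rate
```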

  9. Scrape data out of the Google Cache

You can use the Google cache to scrape data from websites whose content doesn't change often. Google keeps cached copies of certain websites, and you can request the cached copy rather than sending a request to the actual site. To access the cached version of any page, append that page's URL to the following prefix:

http://webcache.googleusercontent.com/search?q=cache:URL (where URL is the address of the page to scrape).
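For example, a quick sketch in Python (the target URL is a placeholder, and not every page will have a cached copy):

```python
import requests

# Fetch Google's cached copy of a page instead of the live site.
# Cached copies can be stale, so check the snapshot date if freshness matters.
target = "https://example.com/some-page"
cache_url = "http://webcache.googleusercontent.com/search?q=cache:" + target

resp = requests.get(cache_url, headers={"User-Agent": "Mozilla/5.0"})
print(resp.status_code)
```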

As this article has shown, websites use a variety of techniques to prevent you from scraping their content. A single method on its own may not be enough to avoid blocking, so combine these tips and tricks for successful web scraping.
