
Are there unscrapable websites?

Web scraping is a well-known technique for extracting data from various websites. The presumption is that you can scrape any data if it is publicly available. So are there any unscrapable websites?


I have to share the good news with you – technically, all of them are scrapable if you know how to do it. The thing is that some are harder to crack than others. Certain webmasters can be very anxious and overly protective of their content, and they try to guard it with various anti-scraping techniques and tools. It’s understandable, but it makes things quite tricky for scrapers.


You see, the internet has its own set of rules, and if you’re able to access data publicly, you can easily collect it too. Of course, you should abide by the laws concerning copyright protection if you publish your findings.

Mariam Nakani

May 07, 2021

4 min read


The Signs That You’re Failing to Extract Data From the Website

First things first, let’s establish when you should start panicking that something went wrong in your attempt to scrape a particular website:

  • You’re getting HTTP 4xx response codes. This indicates that the server you’re trying to access won’t process your request. It might mean that your scraper setup is incorrect, that your proxies are not authenticating correctly, etc.
  • You’re getting the requested content in parts or not at all. Usually, the page download size starts from around 1MB, so if you’re getting less than 50KB, something went wrong. This might also mean that you’ve encountered a Captcha.
  • You’re getting the wrong information. Well, this means that you’re dipping your fingers into the so-called honeypots. In other words, the webmaster has set up traps for scrapers in the form of links in the HTML. They are not visible to the regular user, but any web scraper will request these links. In turn, you can get blocked.
  • Your request is timing out. This may indicate that you are sending too many requests to the website, forcing a drop in response speed. (A basic check for these warning signs is sketched below.)
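To make this concrete, here’s a minimal sketch in Python using the requests library that checks each response for these warning signs. The URL and the size threshold are placeholders – tune them to the pages you’re actually scraping.

```python
import requests

MIN_EXPECTED_BYTES = 50_000  # suspiciously small responses often mean a block page or a Captcha

def fetch_and_check(url):
    try:
        response = requests.get(url, timeout=10)
    except requests.Timeout:
        print("Request timed out - consider slowing down your request rate")
        return None

    if 400 <= response.status_code < 500:
        print(f"Got HTTP {response.status_code} - check your scraper setup and proxy authentication")
        return None

    if len(response.content) < MIN_EXPECTED_BYTES:
        print("Response is unusually small - possibly a Captcha or partial content")
        return None

    return response.text

# Example usage with a placeholder URL
html = fetch_and_check("https://example.com/products")
```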

Main Web Scraping Challenges

Now, let’s dive in and dissect some of the most common challenges you can experience when web scraping.

Captchas

Captchas’ primary purpose is to filter human traffic from various bots, including web scrapers. They present multiple challenges to people who visit certain websites. These challenges are easily solvable for human beings but will complicate the flow for bots.

Captchas can easily be triggered by unusual traffic or many connections from a single IP address. Besides, a suspicious scraper fingerprint can set them off, too.
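Both of these triggers are partly under your control. Here’s a small illustrative sketch of request pacing and browser-like headers in Python that makes traffic look less bot-like – the header values and delay range are assumptions, not tuned recommendations:

```python
import random
import time

import requests

# Browser-like headers make requests look less suspicious (values are illustrative)
HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept-Language": "en-US,en;q=0.9",
}

def polite_get(url):
    # Randomized pauses avoid the machine-gun request pattern that trips Captchas
    time.sleep(random.uniform(2.0, 6.0))
    return requests.get(url, headers=HEADERS, timeout=10)
```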

How to Avoid Captchas?

If you’ve already encountered a Captcha, try rotating your IP address. Of course, this works great if you have a high-quality proxy network. So the best thing is to always prepare for web scraping by getting high-quality, “clean” residential proxy IP addresses. This way, there’s less chance of encountering Captchas.
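Here’s a rough sketch of what IP rotation can look like in Python with the requests library. The proxy endpoints and credentials below are placeholders – substitute the gateway details of the proxy provider you actually use.

```python
import itertools

import requests

# Placeholder proxy endpoints - replace with your provider's gateway addresses
PROXIES = itertools.cycle([
    "http://user:pass@proxy-1.example.com:10000",
    "http://user:pass@proxy-2.example.com:10001",
    "http://user:pass@proxy-3.example.com:10002",
])

def get_with_rotation(url):
    proxy = next(PROXIES)  # each request goes out through a different IP
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=15)
```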

Otherwise, you can use a Captcha solving service. These services use real people to solve the challenges for you! The price is pocket-friendly too – it costs around 1-3 dollars per 1,000 challenges.
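Most solving services follow the same general pattern: you submit the challenge over HTTP, poll until a solution token comes back, and attach that token to your next request. The endpoint and field names below are hypothetical – check your provider’s documentation for the real API.

```python
import time

import requests

SOLVER_API = "https://captcha-solver.example.com"  # placeholder, not a real service

def solve_captcha(site_key, page_url, api_key):
    # Submit the challenge to the solving service (hypothetical endpoint)
    task = requests.post(f"{SOLVER_API}/submit", data={
        "key": api_key,
        "sitekey": site_key,
        "pageurl": page_url,
    }).json()

    # Poll until a solution token is ready - solving usually takes seconds to a minute
    while True:
        time.sleep(5)
        result = requests.get(f"{SOLVER_API}/result", params={
            "key": api_key,
            "id": task["task_id"],
        }).json()
        if result.get("status") == "ready":
            return result["token"]
```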

Find more tips & tricks on how to avoid Captchas.

Are You Dipping Your Fingers Into the Honeypots?

Honeypots are a popular method webmasters use to detect unwanted visitors, like web scrapers, on their websites. Essentially, a honeypot is bait for web scrapers in the form of links in the HTML that are not visible to regular site visitors.

These traps can redirect your scraper to endless blank pages. Then this anti-scraping tool fingerprints the properties of your requests and blocks you.

How to Avoid Honeypots?

First, when you’re developing a scraper, make sure that it will only follow visible links to avoid any honeypot traps. If you think that your scraper has already bitten this bait, look for “display: none” or “visibility: hidden” CSS properties in a link. If you detect one, it’s time to do an about-face.
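Here’s a sketch of honeypot-aware link extraction with BeautifulSoup: it only collects links a real visitor could see, skipping anything hidden through inline CSS. (Links hidden via external stylesheets would need a headless browser to detect.)

```python
from bs4 import BeautifulSoup

HIDDEN_MARKERS = ("display: none", "display:none", "visibility: hidden", "visibility:hidden")

def visible_links(html):
    soup = BeautifulSoup(html, "html.parser")
    links = []
    for a in soup.find_all("a", href=True):
        style = (a.get("style") or "").lower()
        if any(marker in style for marker in HIDDEN_MARKERS):
            continue  # likely a honeypot - a human would never see or click this link
        if a.has_attr("hidden"):
            continue  # the HTML "hidden" attribute also hides elements from visitors
        links.append(a["href"])
    return links
```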

A quick heads up – webmasters tend to change their honeypots’ URLs and texts as they know that web scrapers learn to avoid them. So keep your web scrapers up to date!

Besides, keep in mind that webmasters may also try to protect their content by continually changing the site’s markup, attributes, structure, CSS, etc. If you haven’t prepared your scraper for these changes, it can abruptly stop when entering this unfamiliar environment.

Every website has a different architecture. So, based on how the website is built and how often it is updated, you need to test it before scraping and detect all the changes. Then, update your scraper so that it won’t be shocked by the new environment. After all, you have to take its feelings into account too.
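One way to catch those changes early is to check that the selectors your scraper relies on still match before you parse anything. The selectors below are placeholders for whatever your own scraper actually depends on:

```python
from bs4 import BeautifulSoup

# Placeholder selectors - list the ones your own scraper actually depends on
EXPECTED_SELECTORS = ["div.product-card", "span.price", "nav.pagination"]

def markup_still_matches(html):
    soup = BeautifulSoup(html, "html.parser")
    missing = [sel for sel in EXPECTED_SELECTORS if soup.select_one(sel) is None]
    if missing:
        print(f"Site structure may have changed - selectors not found: {missing}")
        return False
    return True
```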

Wrapping It Up

So, these are the main challenges that you can come across when you scrape the web. You can see that it is more than possible to overcome all these challenges if you know how to do so.

One last word before you go. Keep in mind that if the website you’re trying to scrape sees that requests are coming from a single IP address, you’ll be quickly blocked. Use good quality proxy servers with a large pool of IP addresses for your web scraping projects. And don’t forget to use automatic IP rotation!

About the author

Mariam Nakani

Say hello to Mariam! She is very tech savvy - and wants you to be too. She has a lot of intel on residential proxy providers, and uses this knowledge to help you have a clear view of what is really worth your attention.


