
Screen scraping: getting around "HTTP Error 403: request disallowed by robots.txt"

Full error message
Is there a way to get around the following?

httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt

Is the only way around this to contact the site owner (barnesandnoble.com)? I'm building a site that would bring them more sales; I'm not sure why they would deny access at a certain depth.

I'm using mechanize and BeautifulSoup on Python2.6.

I'm hoping for a workaround.

You can try lying about your user agent (e.g., pretending to be a human being rather than a robot) if you want to risk possible legal trouble with Barnes & Noble. Why not instead get in touch with their business development department and convince them to authorize you specifically? They're no doubt just trying to avoid having their site scraped by certain classes of robots, such as price comparison engines. If you can convince them that you're not one, sign a contract, etc., they may well be willing to make an exception for you. A "technical" workaround that simply breaks the policies encoded in their robots.txt is a high-legal-risk approach that I would never recommend. By the way, how does their robots.txt read?
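For context on that last question: mechanize raises this 403 because it fetches and evaluates robots.txt before each request. You can inspect the same rules yourself with the standard library's `urllib.robotparser` (in modern Python 3; on the Python 2.6 mentioned above the module was called `robotparser`). A minimal sketch, using hypothetical rules rather than Barnes & Noble's actual file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- NOT Barnes & Noble's actual file.
rules = """\
User-agent: *
Disallow: /private/
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)  # parse an iterable of robots.txt lines

# can_fetch(useragent, url) applies the first matching rule for that agent
print(rp.can_fetch("MyBot/1.0", "http://example.com/index.html"))  # True
print(rp.can_fetch("MyBot/1.0", "http://example.com/private/x"))   # False
```

In practice you would call `rp.set_url("http://example.com/robots.txt")` followed by `rp.read()` to fetch the live file, then test the exact URLs and user-agent string your scraper uses to see which rule is blocking you.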
