How To Scrape Product Data From Amazon - A Complete Guide - Oxylabs


Scraping Amazon Product Data: A Complete Guide

Maryia Stsiopkina
2023-02-28 8 min read

Amazon is packed with useful e-commerce data, such as product information, reviews, and prices.
Extracting this data efficiently and putting it to use is imperative for any modern business. Whether
you intend to monitor the performance of your products sold by third-party resellers or track your
competitors, you need reliable web scraping services, like Amazon Scraper, to capture this data for
market analytics.

Amazon scraping, however, has its peculiarities. In this step-by-step guide, we’ll go over every stage
needed to create an Amazon web scraper.

Setting up for scraping


To follow along, you will need Python. If you do not have Python 3.8 or above installed, head to
python.org to download and install it.
Next, create a folder to save your code files for web scraping Amazon. Once you have a folder,
creating a virtual environment is generally a good practice.

The following commands work on macOS and Linux. These commands will create a virtual
environment and activate it:

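The commands themselves were lost in this copy of the article; the standard venv invocation matching the description is:

```shell
# Create a virtual environment named "env" and activate it (macOS/Linux)
python3 -m venv env
source env/bin/activate
```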

If you are on Windows, these commands vary a little: use python -m venv env to create the
environment and env\Scripts\activate to activate it.

The next step is installing the required Python packages.

You will need packages for two broad steps—getting the HTML and parsing the HTML to query
relevant data.

Requests is a popular third-party Python library for making HTTP requests. It provides a simple and
intuitive interface for sending HTTP requests to web servers and receiving responses. It is perhaps
the best-known Python library for web scraping.

The limitation of the Requests library is that it returns the HTML response as a plain string, which is
hard to query for specific elements, such as listing prices, from web scraping code.

This is where Beautiful Soup steps in. Beautiful Soup is a Python library used for web scraping to pull
the data out of HTML and XML files. It allows you to extract information from the page by searching
for tags, attributes, or specific text.

To install these two libraries, you can use the following command:

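The installation command, elided here, would be:

```shell
# Install both libraries (on Windows, use "python" instead of "python3")
python3 -m pip install requests beautifulsoup4
```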
If you are on Windows, use python instead of python3; the rest of the command remains unchanged.

Note that we are installing version 4 of the Beautiful Soup library, which ships as the beautifulsoup4 package.

It's time to try out the Requests scraping library. Create a new file with the name amazon.py and enter
the following code:

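The six-line snippet did not survive in this copy; based on the surrounding text, it is presumably a bare Requests call along these lines (the product URL is the Bose example used later in the guide):

```python
# amazon.py — fetch a product page without custom headers; Amazon will likely block this
import requests

url = ("https://www.amazon.com/Bose-QuietComfort-45-Bluetooth-"
       "Canceling-Headphones/dp/B098FKXT8L")

try:
    response = requests.get(url, timeout=10)
    print(response.status_code)  # typically 503 when Amazon blocks the bare request
    print(response.text)
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```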

Save the file and run it from the terminal.

In most cases, you will not get the desired HTML. Amazon will block this request and return an error
message in the response instead.

If you print response.status_code, you will see that instead of getting 200, which means success,
you get 503, which means an error.

Amazon knows this request was not using a browser and thus blocks it.
It is a common practice employed by many websites. Amazon will block your requests and return an
error code in the 500 range, or sometimes even the 400 range.

The solution is simple: send the same headers along with your request that a browser would send.

Sometimes, sending only the User-Agent header is enough. At other times, you may need to send more
headers. A good example is the Accept-Language header.

To identify the user-agent sent by your browser, press F12 to open the developer tools, switch to the
Network tab, and reload the page. Select the first request and examine its Request Headers.

You can copy this and create a dictionary for the headers.

The following example shows a dictionary with the User-Agent and Accept-Language headers:

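Reconstructed from the description — the user-agent string below is an example, not necessarily the one from the original listing:

```python
headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                   "(KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"),
    "Accept-Language": "en-US,en;q=0.9",
}
```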

You can send this dictionary to the optional headers parameter of the get method as follows:
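A self-contained sketch of that call (the user-agent string is illustrative):

```python
import requests

headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                   "(KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"),
    "Accept-Language": "en-US,en;q=0.9",
}
url = "https://www.amazon.com/dp/B098FKXT8L"

try:
    # Passing the headers dictionary makes the request look like a browser's
    response = requests.get(url, headers=headers, timeout=10)
    print(response.status_code)
except requests.RequestException as exc:
    print(f"Request failed: {exc}")
```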

Executing the code with these changes will show the expected HTML with the product details.

Note that if you send as many browser-like headers as possible, you will often not need JavaScript
rendering. If you do need rendering, you will need tools like Playwright or Selenium.

Scraping Amazon product data


When web scraping Amazon products, typically, you would work with two categories of pages — the
category page and the product details page.

For example, open https://www.amazon.com/b?node=12097479011 or search for Over-Ear
Headphones on Amazon. The page that shows the search results is the category page.

The category page displays the product title, product image, product rating, product price, and,
most importantly, the product page URLs. If you want more details, such as product descriptions, you
will find them only on the product details page.

Let's examine the structure of the product details page.

Open a product URL, such as https://www.amazon.com/Bose-QuietComfort-45-Bluetooth-Canceling-
Headphones/dp/B098FKXT8L, in Chrome or any other modern browser, right-click the product title,
and select Inspect. You will see that the HTML markup of the product title is highlighted.

You will see that it is a span tag with its id attribute set to "productTitle".
Similarly, if you right-click the price and select Inspect, you will see the HTML markup of the price.

You can see that the dollar component of the price is in a span tag with the class "a-price-whole",
and the cents component is in another span tag with the class set to "a-price-fraction".

Similarly, you can locate the rating, image, and description.

Once you have this information, add the following lines to the code we have written so far:

1. Send a GET request with custom headers

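The elided lines presumably fetch the page and build the Soup object; a minimal sketch, using Python's built-in html.parser (the original may have used another parser, such as lxml):

```python
import requests
from bs4 import BeautifulSoup

def get_soup(url, headers):
    """Fetch a page with browser-like headers and parse it into a Soup object."""
    response = requests.get(url, headers=headers)
    return BeautifulSoup(response.text, "html.parser")
```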

Beautiful Soup supports a unique way of selecting tags using its find methods. Alternatively,
Beautiful Soup also supports CSS selectors. You can use either to get the same results. In this
guide, we will use CSS selectors, since they are a universal way to select elements and work with
almost all web scraping tools that can be used for scraping Amazon product data.

We are now ready to use the Soup object to query for specific information.

2. Locate and scrape product name

The product name, or the product title, is located in a span element with its id set to productTitle.
Since ids are unique, it is easy to select the element using its id.

See the following code for example:



We send the CSS selector to the select_one method, which returns an element instance.

We can then extract the element's text using its text attribute.

Upon printing it, you will see that there are a few extra white spaces. To fix that, add a strip() call as
follows:
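Putting the step together, with a snippet of inline HTML standing in for the live page:

```python
from bs4 import BeautifulSoup

# Inline HTML mimicking the product title markup on a live product page
html = '<span id="productTitle">  Bose QuietComfort 45 Headphones  </span>'
soup = BeautifulSoup(html, "html.parser")

title = soup.select_one("#productTitle").text.strip()
print(title)  # Bose QuietComfort 45 Headphones
```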

3. Locate and scrape product rating

Scraping Amazon product ratings needs a little more work.

First, let's create a selector for rating:

Now, the following statement can select the element that contains the rating.

Note that the rating value is actually in the title attribute:


Lastly, we can use the replace method to get the number:
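A sketch of the whole step — #acrPopover is the id Amazon commonly uses for the rating widget (an assumption here), and inline HTML stands in for the page:

```python
from bs4 import BeautifulSoup

html = '<span id="acrPopover" title="4.8 out of 5 stars"></span>'
soup = BeautifulSoup(html, "html.parser")

rating_element = soup.select_one("#acrPopover")
rating_text = rating_element["title"]                 # "4.8 out of 5 stars"
rating = rating_text.replace(" out of 5 stars", "")   # keep only the number
print(rating)  # 4.8
```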

4. Locate and scrape product price

The product price is located in two places — below the product title and also on the Buy Now box.

We can use either of these tags to scrape Amazon product prices.

Let's create a CSS selector for the price:

This CSS selector can be passed to the select_one method of BeautifulSoup as follows:

You can now print the price:
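A sketch of the price step. On current pages the full price also sits in a span with class a-offscreen inside the a-price container (the exact selector is an assumption); inline HTML stands in for the page:

```python
from bs4 import BeautifulSoup

html = ('<span class="a-price">'
        '<span class="a-offscreen">$329.00</span>'
        '<span class="a-price-whole">329</span>'
        '<span class="a-price-fraction">00</span>'
        '</span>')
soup = BeautifulSoup(html, "html.parser")

price = soup.select_one("span.a-price span.a-offscreen").text
print(price)  # $329.00
```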
5. Locate and scrape product image

Let's scrape the default image, which can be selected with the CSS selector #landingImage. With this
information, we can write the following lines of code to get the image URL from the src attribute:

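A sketch with placeholder markup (the image URL is illustrative):

```python
from bs4 import BeautifulSoup

html = '<img id="landingImage" src="https://m.media-amazon.com/images/I/example.jpg">'
soup = BeautifulSoup(html, "html.parser")

image_url = soup.select_one("#landingImage")["src"]
print(image_url)  # https://m.media-amazon.com/images/I/example.jpg
```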

6. Locate and scrape product description

The next step in scraping Amazon product information is scraping the product description.

The methodology remains the same — create a CSS selector and use the select_one method.

The CSS selector for the description is as follows:

It means that we can extract the element as follows:

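On current product pages the description lives in a div with id productDescription (an assumption to verify against the live page); a sketch with inline HTML:

```python
from bs4 import BeautifulSoup

html = ('<div id="productDescription">'
        '<p>  World-class noise cancelling headphones.  </p>'
        '</div>')
soup = BeautifulSoup(html, "html.parser")

description = soup.select_one("#productDescription").text.strip()
print(description)  # World-class noise cancelling headphones.
```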

Handling product listing

So far, we have explored how to scrape product information.

However, to reach the product information, you will begin with product listing or category pages.

For example, https://www.amazon.com/b?node=12097479011 is the category page for over-ear
headphones.

If you examine this page, you will notice that all the products are contained in a div that has a
special attribute, [data-asin]. In that div, all the product links are in an h2 tag.
With this in mind, the CSS selector would be as follows:

We can read the href attribute of this selector and run a loop. However, note that the links will be
relative, so you would need the urljoin method to resolve them into absolute URLs.


Handling pagination

The link to the next page is an anchor that contains the text Next. We can look for this link with the
text-matching pseudo-class that Beautiful Soup's selector engine provides, as follows:

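A sketch using :-soup-contains(), the pseudo-class Beautiful Soup (via soupsieve) supports for matching elements by their text:

```python
from bs4 import BeautifulSoup

html = '<a href="/s?page=2">Next</a>'
soup = BeautifulSoup(html, "html.parser")

next_link = soup.select_one('a:-soup-contains("Next")')
print(next_link["href"])  # /s?page=2
```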

7. Export scraped product data to a CSV file

The data we are scraping is being returned as a dictionary. This is intentional. We can create a list
that contains all the scraped products.

This list can then be used to create a Pandas DataFrame and export it to a CSV file:

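A sketch of the export step, with a hypothetical product dictionary:

```python
import pandas as pd

# Each scraped product is collected as a dictionary
data = [
    {"title": "Bose QuietComfort 45", "price": "$329.00", "rating": "4.8"},
]

df = pd.DataFrame(data)
df.to_csv("amazon_products.csv", index=False)
```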

Reviewing the final script

Putting everything together, the following is the final script:

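The 39-line listing did not survive in this copy. The following is a reconstruction assembled from the steps described in this guide, not the author's original; the selectors and the user-agent string are assumptions based on Amazon's current markup:

```python
import requests
import pandas as pd
from urllib.parse import urljoin
from bs4 import BeautifulSoup

headers = {
    "User-Agent": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                   "(KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"),
    "Accept-Language": "en-US,en;q=0.9",
}

def parse_product(soup):
    """Extract title, price, rating, image, and description from a product page."""
    def text_of(selector):
        element = soup.select_one(selector)
        return element.text.strip() if element else None

    rating_element = soup.select_one("#acrPopover")
    image_element = soup.select_one("#landingImage")
    return {
        "title": text_of("#productTitle"),
        "price": text_of("span.a-price span.a-offscreen"),
        "rating": (rating_element["title"].replace(" out of 5 stars", "")
                   if rating_element else None),
        "image": image_element["src"] if image_element else None,
        "description": text_of("#productDescription"),
    }

def scrape_category(url):
    """Visit every product linked from a category page, following Next links."""
    products = []
    while url:
        soup = BeautifulSoup(requests.get(url, headers=headers).text, "html.parser")
        for link in soup.select("[data-asin] h2 a"):
            product_url = urljoin(url, link["href"])
            product_soup = BeautifulSoup(
                requests.get(product_url, headers=headers).text, "html.parser")
            products.append(parse_product(product_soup))
        next_link = soup.select_one('a:-soup-contains("Next")')
        url = urljoin(url, next_link["href"]) if next_link else None
    return products

# To run the scraper and export the results:
# data = scrape_category("https://www.amazon.com/b?node=12097479011")
# pd.DataFrame(data).to_csv("amazon_products.csv", index=False)
```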
Best practices

Scraping Amazon without proxies or dedicated scraping tools is full of obstacles. Just like many other
popular scraping targets, Amazon has rate-limiting in place, meaning it can block your IP address if
you exceed the established limit. Apart from that, Amazon uses bot-detection algorithms that can
check your HTTP headers for any suspicious details. Also, you should be ready to constantly adapt to
the different page layouts and various HTML structures.

Considering these factors, it's recommended to follow some common practices to prevent getting
detected and blocked by Amazon. Some of the most useful tips are:

1. Use a real User-Agent. It's important to make your User-Agent look as plausible as possible. Here's
the list of the most common user agents.

2. Set your fingerprint. Many websites use Transmission Control Protocol (TCP) and IP fingerprinting to
detect bots. To avoid getting spotted, you need to make sure your fingerprint parameters are always
consistent.

3. Change the crawling pattern. To develop a successful crawling pattern, you should think about
how a regular user would behave while exploring a page and add clicks, scrolls, and mouse
movements accordingly.
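The first tip can be sketched as a small rotation helper — the user-agent strings below are illustrative placeholders, not a vetted list:

```python
import random

# Hypothetical pool; in practice, source current strings from a maintained list
user_agents = [
    ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
     "(KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36"),
    ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
     "(KHTML, like Gecko) Version/16.1 Safari/605.1.15"),
]

def make_headers():
    """Build request headers with a randomly chosen user-agent."""
    return {
        "User-Agent": random.choice(user_agents),
        "Accept-Language": "en-US,en;q=0.9",
    }
```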
Easier solution to extract Amazon data

And this is only a small portion of the requirements you should keep in mind when scraping Amazon.
Alternatively, you can turn to a ready-made scraping solution designed specifically for scraping
Amazon - Amazon Scraper API. With this scraper, you can:

Scrape and parse various Amazon page types, including Search, Product, Offer listing, Questions
& Answers, Reviews, Best Sellers, and Sellers;

Target localized product data in 195 locations worldwide;

Retrieve accurate parsed results in JSON format without installing any other library;

Enjoy multiple handy features, such as bulk scraping and automated jobs.

Let's look at Amazon Scraper API in action.

Extracting product details

Consider the example of getting product data from product pages.

All you need is the product URL — irrespective of the country of the Amazon store. For example, the
following code extracts details for the Bose QC 45 from Amazon.com:

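The listing was lost in this copy. Below is a sketch of what such a request looks like; the endpoint URL, parameter names, and credentials are assumptions to verify against Oxylabs' current documentation:

```python
import requests

# Hypothetical credentials — replace with your own
USERNAME, PASSWORD = "USERNAME", "PASSWORD"

payload = {
    "source": "amazon",
    "url": ("https://www.amazon.com/Bose-QuietComfort-45-Bluetooth-"
            "Canceling-Headphones/dp/B098FKXT8L"),
    "parse": True,  # ask the API to return structured JSON, not raw HTML
}

# response = requests.post(
#     "https://realtime.oxylabs.io/v1/queries",
#     auth=(USERNAME, PASSWORD),
#     json=payload,
#     timeout=60,
# )
# print(response.json())
```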
You will get the complete product data returned in JSON format.

Another way to get the information is by the product's ASIN. The only line you need to modify is the
payload:

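A sketch of the ASIN-based payload (parameter names are assumptions based on Oxylabs' documented format):

```python
payload = {
    "source": "amazon_product",
    "domain": "com",          # optional: target marketplace, e.g. "co.uk"
    "query": "B098FKXT8L",    # the product ASIN
    "parse": True,
}
```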

Note the optional parameter domain. You can use this parameter to get Amazon data from any
domain, such as amazon.co.uk.

Searching products

Searching for products is just as easy.

Again, the only code that changes is the payload. Here is the payload for the search for "bose":
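A sketch of the search payload (again, parameter names are assumptions based on Oxylabs' documented format):

```python
payload = {
    "source": "amazon_search",
    "domain": "com",
    "query": "bose",
    "start_page": 1,
    "pages": 10,
    "context": [
        {"key": "category_id", "value": 12097479011},
    ],
    "parse": True,
}
```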

Notice how it requests 10 pages beginning with page 1. Also, we limit the search to category id
12097479011, which is Amazon's category id for headphones.

Conclusion
You can write code to scrape Amazon products using the Requests and Beautiful Soup libraries. It
may need some effort, but it works. Sending custom headers, rotating user-agents, and proxy rotation
can help bypass bans or rate limiting.

However, the easiest solution to scrape Amazon products is using the Amazon Scraper API. Oxylabs
also allows you to gather data from 50 other marketplaces using its E-Commerce Scraper API.
If you have any questions, do not hesitate to contact us.

Frequently asked questions

Does Amazon allow scraping?

Can scraping be detected?

Does Amazon ban IP?

How to bypass CAPTCHA while scraping Amazon?


About the author

Maryia Stsiopkina
Content Manager

Maryia Stsiopkina is a Content Manager at Oxylabs. As her passion for writing was developing, she was
writing either creepy detective stories or fairy tales at different points in time. Eventually, she found herself in
the tech wonderland with numerous hidden corners to explore. At leisure, she does birdwatching with
binoculars (some people mistake it for stalking), makes flower jewelry, and eats pickles.



All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and
disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be
linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the
particular website's terms of service or receive a scraping license.
