Explore What Pagination Is and How to Implement It Properly
Table of contents
- What is pagination?
- Solution 1. Deleting page pagination from the index with the help of noindex
- Solution 2. “View all” and rel=”canonical”
- Solution 3. Rel=”prev”/”next”
What is pagination?
Pagination is the ordinal numbering of pages, usually placed at the top or bottom of a site's pages.
In most cases, it is used for catalog and section pages.
Let's look at some of the potential problems that can arise when pagination is implemented without attention to detail:
Limited search engine visits to your site
When search engines crawl your site, the depth and number of pages they visit at a time vary depending on the site's trust, the content update rate, and so on. If you have a huge number of pagination pages, the likelihood that search engines will crawl all of them and index all the end pages (products/articles) drops significantly. Worse, the crawl budget is spent on pagination pages instead of the pages of the site that really matter.
Problem with duplicates
Depending on the structure of your pagination pages, it is very likely that some of them contain similar or identical content. On top of this, you will often find that pagination pages share the same title and meta description tags. Such duplicate content makes it harder for search engines to determine the most relevant page for a particular search query.
SEO specialists have developed three main ways to solve this problem.
Solution 1. Deleting page pagination from the index with the help of noindex
In most cases, this method is the first choice and can be implemented quickly. The idea is to exclude all pagination pages from the index except the first one.
It is implemented in the following way:
The meta tag
<meta name="robots" content="noindex, follow" />
is added to the HEAD section of every page except the first. This excludes all pagination pages from the index except the main catalog page, while the “follow” value still lets search engines crawl and index all the products/pages that belong to the catalog. Pay attention to the following nuances:
- If the main catalog page has a descriptive text block, it should still be placed only on the first page.
- You should check that the first page URL is not duplicated. For example, if the pagination links point to a parameterized version of the first page, those links should be replaced with a link to site.com/catalog itself, and 301 redirects from the parameterized first-page URL to site.com/catalog have to be configured.
Pros:
- suitable for Yandex;
- the least difficult of all the solutions;
- a great way to exclude all pagination pages from the index if there is no logical reason to include them.
Cons:
- although it solves the potential pagination problem, it also excludes the paginated content itself from the index;
- if there are many products and you do not use an XML sitemap, products located deep in the catalog will take a long time to be indexed.
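As a rough sketch of this logic (Python is used here purely for illustration; the actual implementation depends entirely on your CMS or templating layer):

```python
def robots_meta(page_number: int) -> str:
    """Return the robots meta tag for a given pagination page.

    Page 1 stays indexable, so it returns an empty string; every
    other page gets "noindex, follow" so that its links are still
    crawled even though the page itself is kept out of the index.
    """
    if page_number < 1:
        raise ValueError("page numbers start at 1")
    if page_number == 1:
        return ""  # first page of the catalog: indexable, no tag needed
    return '<meta name="robots" content="noindex, follow" />'

# Tags for a four-page catalog: only pages 2-4 get the meta tag
for page in range(1, 5):
    print(page, robots_meta(page) or "(indexable)")
```

The key detail is the `follow` value: the page is removed from the index, but its outgoing links to products are still followed.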
Solution 2. “View all” and rel=”canonical”
With this method, recommended by Google, you create a separate “View all” page that displays all the products/pages from the catalog, and place rel=”canonical” pointing to the “View all” page on every pagination page.
Implementation: after you have created the “View all” page (for example, site.com/catalog/view-all.html), place the following in the HEAD section of every pagination page:
<link rel="canonical" href="http://site.com/catalog/view-all.html" />
This tells search engines that each pagination page is, so to speak, part of the “View all” page. Google claims that:
- this is their preferred method;
- users tend to want to view the entire category on one page at once (although this point is rather controversial and depends on the situation).
The “View All” page should load quickly, preferably within 1-3 seconds. This method is therefore well suited to categories with roughly 5 to 20 pagination pages, and not to catalogs with hundreds of them.
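A minimal sketch of emitting this tag (the view-all URL is the example one from above; in a real template it would come from your routing):

```python
def canonical_to_view_all(view_all_url: str) -> str:
    """Build the rel="canonical" tag that every pagination page in
    the category should carry, pointing at the shared "View all" page."""
    return f'<link rel="canonical" href="{view_all_url}" />'

# Every pagination page of the catalog gets the same tag:
print(canonical_to_view_all("http://site.com/catalog/view-all.html"))
```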
Pros:
- the preferred method for Google;
- all the paginated content will end up in the search index through the “View all” page.
Cons:
- not suitable if there are many pages or many high-quality pictures for products/articles;
- a rather complex implementation on most standard CMSs.
Solution 3. Rel=”prev”/”next”
Our last option for solving the pagination problem can be the most confusing, but it is perhaps the most universal method for Google (Yandex does not take these attributes into account). Since the implementation is rather involved, you should be very careful when applying this method. Let's see how it works.
For example, you have 4 pages in a catalog. Using rel=”prev”/”next” you essentially create a chain between all the pages in that catalog. The chain starts from the first page: add this to the HEAD section:
<link rel="next" href="http://site.com/page2.html">
For the first page, this is the only attribute. For the second page, you must specify both the previous and the next page:
<link rel="prev" href="http://site.com/page1.html">
<link rel="next" href="http://site.com/page3.html">
For the third page we do the same as for the second:
<link rel="prev" href="http://site.com/page2.html">
<link rel="next" href="http://site.com/page4.html">
When we are on the last page (in this case, the fourth), we specify only the previous page in the chain:
<link rel="prev" href="http://site.com/page3.html">
Using these rel=”prev”/”next” attributes, Google consolidates the chained pages into a single element in the index. For users this will typically be the first page, since it is usually the most relevant one.
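The chain-building rule above can be sketched as follows (the pageN.html URL pattern mirrors the example; adjust it to your real URLs):

```python
def prev_next_links(page: int, total_pages: int,
                    base: str = "http://site.com/page{}.html") -> list:
    """Return the <link rel="prev"/"next"> tags for one page of a chain.

    First page: only rel="next"; last page: only rel="prev";
    every page in between carries both.
    """
    tags = []
    if page > 1:
        tags.append(f'<link rel="prev" href="{base.format(page - 1)}">')
    if page < total_pages:
        tags.append(f'<link rel="next" href="{base.format(page + 1)}">')
    return tags

# Page 2 of 4 carries both links, exactly as in the example above:
for tag in prev_next_links(2, 4):
    print(tag)
```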
- for Google, rel=”prev” and rel=”next” are hints, not directives;
- both relative and absolute URLs can be used as values (in accordance with the valid values of the <link> tag);
- if the document specifies a base URL (<base href>), relative paths are resolved against it;
- if Google detects errors in your markup (for example, an expected rel=”prev” or rel=”next” value is missing), further indexing and content recognition will fall back to Google's heuristic algorithms;
- as with the previous solutions, check that the first page URL is not duplicated.
Pros:
- this method solves the pagination problem without a “View all” page;
- implementation requires only minor changes to the HTML.
Cons:
- these attributes are not taken into account by Yandex;
- implementation can be quite complex;
- inserting links into the chain of pages must be done very carefully.
You have probably come across endless scrolling on e-commerce sites, where products keep loading as you scroll to the bottom of the screen. Although this is an excellent opportunity to improve usability, it has to be done correctly. Ideally, products should not load automatically on scroll; instead, add a “Show more items” button below the last products. You can see a good implementation of this approach on wikimart.ru in the final branches of the catalog.
Proper use of parameters
When you use the rel=”prev”/”next” attributes, pagination pages can contain parameters that do not change the content:
- session variables;
- parameters that change the number of items per page.
In this case, we get duplicate content. To solve the problem, combine rel=”prev”/”next” with rel=”canonical”.
To do this, first make sure that all pagination pages with rel=”prev”/”next” use the same parameters. Second, for each URL with such a parameter, specify a canonical page without that parameter.
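A hedged sketch of this combination, using hypothetical parameter names (sessionid, per_page) to stand in for the non-content parameters: rel=”prev”/”next” keep the current query string so the whole chain uses the same parameters, while rel=”canonical” strips them.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def head_tags(url: str, page: int, total_pages: int,
              drop_params=("sessionid", "per_page")) -> list:
    """Head tags for a paginated URL carrying non-content parameters.

    prev/next links preserve the current parameters (same chain on
    every page); the canonical link points to the same pagination
    page with the listed parameters removed.
    """
    parts = urlsplit(url)
    params = parse_qsl(parts.query)

    def with_page(n: int) -> str:
        q = [("page", str(n)) if k == "page" else (k, v) for k, v in params]
        return urlunsplit(parts._replace(query=urlencode(q)))

    tags = []
    if page > 1:
        tags.append(f'<link rel="prev" href="{with_page(page - 1)}">')
    if page < total_pages:
        tags.append(f'<link rel="next" href="{with_page(page + 1)}">')
    kept = [(k, v) for k, v in params if k not in drop_params]
    canonical = urlunsplit(parts._replace(query=urlencode(kept)))
    tags.append(f'<link rel="canonical" href="{canonical}">')
    return tags

for tag in head_tags("http://site.com/catalog?page=2&sessionid=abc", 2, 4):
    print(tag)
```

Note that the canonical URL keeps the page parameter: each pagination page is canonicalized to its own parameter-free version, not to page 1.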
Proper use of filters and rel=”prev”/”next”
Now let's look at an example where the parameters do produce unique content, and it is important for us to keep such filtered pages in the index. For example, we have a sneakers category, and we want to create landing pages for organic search for different brands, using parameters in the URL.
In this case:
- you do not need rel=”canonical” pointing to the main category, since the content is unique;
- create a unique rel=”prev”/”next” chain for each brand;
- write a unique and relevant title, description and category text for each filter.
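A sketch of such per-brand chains, with a hypothetical brand parameter in a made-up sneakers URL: each filter keeps its own chain, so page 2 of one brand links only to pages 1 and 3 of that same brand.

```python
def brand_chain_tags(brand: str, page: int, total_pages: int) -> list:
    """prev/next tags for a filtered (brand) listing.

    The brand parameter appears in every URL of the chain, keeping
    each filtered listing as its own independent pagination chain.
    """
    def url(n: int) -> str:
        return f"http://site.com/sneakers?brand={brand}&page={n}"

    tags = []
    if page > 1:
        tags.append(f'<link rel="prev" href="{url(page - 1)}">')
    if page < total_pages:
        tags.append(f'<link rel="next" href="{url(page + 1)}">')
    return tags

# Page 2 of a 3-page "nike" listing links within the nike chain only:
for tag in brand_chain_tags("nike", 2, 3):
    print(tag)
```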
To conclude, here are our recommendations for solving the problem with pagination:
- if you have the technical ability to create a “View all” page (and such pages load quickly and are not too large), you can use this option, since Google recommends it and Yandex understands the rel=”canonical” directive;
- but in most cases the best option is probably to combine the rel=”prev”/”next” attributes (which Google understands) with the robots “noindex, follow” meta tag (which both Google and Yandex understand).