The ‘Noindex in HTML and HTTP header’ issue means that the website has pages where a ‘noindex’ directive is specified both in the meta robots tag and in the HTTP response header (X-Robots-Tag).
The importance of the issue
This directive tells search engines that you don’t want the current URL indexed. The page therefore will not be indexed and will not take part in ranking. Before prohibiting indexing, make sure you clearly understand that a user might still need the page, but it holds no value for the search engine and you don’t plan to generate traffic from it. A typical example is a ‘Thank you’ page shown after a form is completed.
How to check the issue
Any browser is enough to check the issue. Open the source code of the flawed page: right-click anywhere on the page and choose the “View Page Source” option, or use an online tool such as https://codebeautify.org/source-code-viewer.
Find the directive <meta name="robots" content="noindex" />. If the content attribute contains the noindex value, the page has the issue.
To check the server’s response, use https://redbot.org/ or any similar tool. The presence of X-Robots-Tag: noindex in the server’s response means the page has the issue.
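The two checks above can also be scripted. The sketch below is a minimal illustration: the `has_noindex` helper name and the sample HTML and headers are made up for demonstration, and in practice the HTML and response headers would come from a real HTTP request.

```python
import re

def has_noindex(html: str, headers: dict) -> bool:
    """Return True if a noindex directive is found in the meta robots
    tag or in the X-Robots-Tag response header."""
    # Check the meta robots tag in the HTML source (this simple regex
    # assumes the name attribute comes before content, which is the
    # common case but not guaranteed).
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if meta and 'noindex' in meta.group(1).lower():
        return True
    # Check the X-Robots-Tag header; HTTP header names are
    # case-insensitive, so normalize them first.
    tag = {k.lower(): v for k, v in headers.items()}.get('x-robots-tag', '')
    return 'noindex' in tag.lower()

# Sample data for demonstration (not real responses):
print(has_noindex('<meta name="robots" content="noindex" />', {}))      # True
print(has_noindex('<p>ok</p>', {'X-Robots-Tag': 'noindex'}))            # True
print(has_noindex('<p>ok</p>', {}))                                     # False
```

Note that the function checks both sources, mirroring the manual procedure: a page can be blocked by either one independently.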
Detailed directive description: https://developers.google.com/search/reference/robots_meta_tag?hl=en
How to fix this issue
When using <meta name="robots" content="noindex" />, replace the noindex value with index.
When using the X-Robots-Tag HTTP header, remove noindex from the response. Keep in mind that the header may also target a specific crawler, e.g. X-Robots-Tag: googlebot: noindex.
Note that you need to check both options, since using one does not rule out the use of the other.
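The meta-tag fix can be sketched as a small helper. `fix_robots_content` is a hypothetical function name used only for illustration; it replaces the noindex token while preserving any other directives in the content attribute.

```python
def fix_robots_content(value: str) -> str:
    """Replace the noindex token with index, keeping any other
    directives (e.g. 'follow') untouched."""
    tokens = [t.strip() for t in value.split(',')]
    return ', '.join('index' if t.lower() == 'noindex' else t
                     for t in tokens)

print(fix_robots_content('noindex, follow'))  # index, follow
```

Preserving the remaining tokens matters: content="noindex, follow" should become content="index, follow", not lose the follow directive.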
Please note! Excluding pages from indexing is a common practice in website promotion, and a page may be excluded from indexing on purpose. Before changing the indexing settings, make sure the current configuration is actually incorrect. Coordinate your changes with an SEO specialist if necessary.
Detect pages with the noindex directive
Crawl the website to collect all pages with the noindex directive in the HTML and in the HTTP header.
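The collection step can be sketched as follows. `find_noindex_pages` is a hypothetical helper, and the site snapshot is a static dict used only so the example runs without network access; in a real crawl each (html, headers) pair would come from an HTTP request (e.g. via urllib) while following internal links.

```python
import re

def find_noindex_pages(site: dict) -> list:
    """Collect every URL whose HTML or response headers carry a
    noindex directive. `site` maps a URL to an (html, headers) pair."""
    flagged = []
    for url, (html, headers) in site.items():
        # noindex in the meta robots tag (attribute order assumed).
        in_meta = bool(re.search(
            r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
            html, re.IGNORECASE))
        # noindex in the X-Robots-Tag header (names are case-insensitive).
        tag = {k.lower(): v for k, v in headers.items()}.get('x-robots-tag', '')
        if in_meta or 'noindex' in tag.lower():
            flagged.append(url)
    return flagged

# Hypothetical site snapshot for demonstration:
site = {
    'https://example.com/':         ('<p>home</p>', {}),
    'https://example.com/thanks':   ('<meta name="robots" content="noindex" />', {}),
    'https://example.com/internal': ('<p>ok</p>', {'X-Robots-Tag': 'noindex'}),
}
print(find_noindex_pages(site))
```

The resulting list is the report you then review page by page, deciding for each URL whether the noindex directive is intentional or should be removed.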