The "Disallowed by robots.txt" issue means that some of your URLs are blocked from crawling by rules in the robots.txt file.
The importance of the issue
A Disallow rule in the robots.txt file tells search engine crawlers not to visit a specific page. This is a common practice for pages you do not want search robots to crawl. However, if you block pages you plan to generate traffic from, they may not be crawled and indexed properly, which can lead to drops in rankings.
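For example, a robots.txt file like the sketch below (the paths are hypothetical) tells all crawlers to stay away from the /admin/ and /cart/ sections while leaving the rest of the site open to crawling:

User-agent: *
Disallow: /admin/
Disallow: /cart/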
How to check the issue
You can check a specific URL with Google's robots.txt testing tool: https://www.google.com/webmasters/tools/robots-testing-tool. Your website must already be added to Google Search Console.
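If you prefer to check URLs programmatically, here is a minimal sketch using Python's standard urllib.robotparser module. The domain and user agent are placeholders, so replace them with your own values.

# Minimal sketch: check whether a URL is blocked by robots.txt.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")  # placeholder domain
parser.read()  # downloads and parses the robots.txt file

# can_fetch() returns True if the given user agent may crawl the URL
print(parser.can_fetch("Googlebot", "https://www.example.com/blog/my-post/"))
print(parser.can_fetch("Googlebot", "https://www.example.com/admin/"))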
Analyze not only the pages disallowed by robots.txt but your entire site!
Run a full audit to find and fix technical SEO issues and improve your SERP results.
How to fix this issue
Before fixing this issue, make sure the page has not been blocked on purpose, since blocking certain pages from crawling is a common and legitimate SEO practice!
Remove the rule that blocks the useful page. Then confirm that the page can be accessed using Google's robots.txt testing tool (https://www.google.com/webmasters/tools/robots-testing-tool) and that pages which should not be crawled are still blocked. The rule syntax for the robots.txt file is documented in Google's help center: https://support.google.com/webmasters/answer/6062596?hl=en&ref_topic=6061961
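For example, if a useful blog section was blocked by mistake, the fix is to delete only its Disallow line while keeping the rules for private areas. The paths below are hypothetical.

Before:
User-agent: *
Disallow: /blog/
Disallow: /admin/

After:
User-agent: *
Disallow: /admin/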