Robots.txt is a text file that tells web crawlers which parts of a site should not be indexed. It is most often used to exclude pages from search engine results.
To see the page URL, click the “…” button on the page thumbnail and select Settings.
Copy the link from the Page URL field and paste it after the Disallow directive in your robots.txt file.
For example: Disallow: /page_copy2/
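Putting the steps together, a complete robots.txt that hides this page from all crawlers might look like the sketch below (the `User-agent: *` line, which applies the rule to every crawler, is an assumption; the path `/page_copy2/` is the example from above):

```
# Apply the rule to all crawlers
User-agent: *
# Exclude this page from indexing
Disallow: /page_copy2/
```

Each additional page you want to hide gets its own Disallow line under the same User-agent block.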