Robots.txt file
Robots.txt is a text file that contains site indexing rules for web crawlers. It is typically used to prevent certain pages from appearing in search engine results.
To configure your robots.txt file, go to Settings → Robots.txt.
If you want to exclude the whole site from web indexing (i.e. to hide it from all search engines), copy the text below and paste it into the File content field:
User-agent: *
Disallow: /
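If you want to double-check what these rules do before publishing them, you can test them locally with Python's built-in urllib.robotparser module. This is just a quick sketch; example.com stands in for your own domain:

from urllib.robotparser import RobotFileParser

# Parse the same rules as above: block every path for every crawler.
rules = ["User-agent: *", "Disallow: /"]
parser = RobotFileParser()
parser.parse(rules)

# With "Disallow: /" in place, no URL on the site may be crawled.
print(parser.can_fetch("*", "https://example.com/"))               # False
print(parser.can_fetch("Googlebot", "https://example.com/about"))  # False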
If you want to exclude a single page, copy:
User-agent: *
Disallow: /page/
Here, /page/ is the relative URL of the page you want to exclude.
To see the page URL, click the “…” button on the page thumbnail and select Settings.
Copy the address from the Page URL field and paste it after the Disallow: directive, for example:
Disallow: /page_copy2/
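The same kind of local check works for a single-page rule. In this sketch, /page/ stands for the path you copied from the page settings, and example.com is again a placeholder domain:

from urllib.robotparser import RobotFileParser

# Parse a rule that excludes only one page.
rules = ["User-agent: *", "Disallow: /page/"]
parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "https://example.com/page/"))   # False: the page is excluded
print(parser.can_fetch("*", "https://example.com/other/"))  # True: the rest of the site is still crawlable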
Learn more about creating a robots.txt file in this Google article.
Canonical URLs
Search engines may treat several pages with nearly identical content, or a single page available at multiple URLs, as duplicates.
When this happens, the search engine chooses only one of them to show in search results. This page is called the canonical page.
To set a page as canonical, turn on the Enable canonical URLs toggle.
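A canonical URL is usually declared with a <link rel="canonical"> tag in the page's <head>, so one way to confirm that the setting took effect is to fetch the page and look for that tag. Below is a small sketch using only Python's standard library; the page URL is a placeholder, and whether your site builder emits exactly this tag is an assumption worth verifying:

from html.parser import HTMLParser
from urllib.request import urlopen

class CanonicalFinder(HTMLParser):
    # Stores the href of the first <link rel="canonical"> tag found.
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

page = urlopen("https://example.com/page/").read().decode("utf-8", errors="replace")
finder = CanonicalFinder()
finder.feed(page)
print(finder.canonical)  # The declared canonical URL, or None if the tag is absent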