Over the past few years, single-page web applications and their frameworks have gained immense popularity. And no wonder – the advantages for end users and web developers are substantial. These solutions are fast and user-friendly, support RESTful APIs and enable distributing the processing workload between the server and client computers. Finally, it is much easier to convert such a web application into a mobile one.
The Sitemaps protocol allows us to inform search engines about pages on our website that are available for crawling. A Sitemap is an XML file that lists the URLs of a site. For each page you can specify information such as the last update time, change frequency, and how important it is relative to the other URLs on the site. Search engine web crawlers like Googlebot read this file to crawl your site more intelligently.
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://kruschecompany.com/</loc>
    <xhtml:link rel="alternate" hreflang="de" href="http://kruschecompany.com/de/" />
    <xhtml:link rel="alternate" hreflang="en" href="http://kruschecompany.com/" />
    <lastmod>2016-05-17</lastmod>
    <changefreq>yearly</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>
Using a sitemap doesn't guarantee that all the items in it will be crawled and indexed, because Google relies on complex algorithms to schedule crawling. However, in most cases your site will benefit from having a sitemap, and you'll never be penalized for having one.
To tell the search crawler that your application is a single-page one, you have to add the <meta name="fragment" content="!"> tag to the <head> of the site.
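In context, the tag sits alongside the page's other head elements (the title here is just a placeholder). Note that this tag belongs to Google's AJAX crawling scheme, which has since been deprecated in favor of direct JavaScript rendering:

```html
<head>
  <!-- Signals that the SPA serves an "escaped fragment" (_escaped_fragment_) snapshot -->
  <meta name="fragment" content="!">
  <title>My Single-Page App</title>
</head>
```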
But the best way to indicate which pages should be indexed is to use a sitemap.xml file. It's like saying to the search engine: "I'd appreciate it if you could focus on these particular URLs."
The sitemap lets you list the canonical URLs of the site (non-canonical URLs don't belong there) along with page priority, last modification date, change frequency and, crucially for multilanguage websites, the alternate hreflang links.
It is worth mentioning that sitemaps need some promotion of their own, since the first stop for search crawlers is the robots.txt file. Adding the line "Sitemap: http://www.example.com/sitemap.xml" there reveals the sitemap's location on the first and every subsequent visit.
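As a sketch, a minimal robots.txt at the site root (the example.com domain is a placeholder) could look like this:

```text
# robots.txt at http://www.example.com/robots.txt
User-agent: *
Allow: /

# Points crawlers to the sitemap on every visit
Sitemap: http://www.example.com/sitemap.xml
```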
The second step of sitemap promotion is submitting it to the search engines' webmaster tools, which is a good way to prompt a crawl, usually within the next few hours.
Integration with social media is a much-needed feature today, which is why we use protocols such as Open Graph to optimize and structure the information we want to share on social networks.
Created for Facebook, the Open Graph protocol is now used to control how the data is presented when a user shares a link to some website content.
To integrate Open Graph (OG) into your website, all you need to do is put special <meta> tags into the <head> section of the HTML page you want to share.
OG meta tags determine how your web page will look when shared in social media. When a user shares a URL for the first time, the Facebook crawler analyzes the page, collects information about it and creates a graph object, which is then shown on Facebook pages.
There are some required tags for OG:
- og:title - the title (e.g. of the article);
- og:description - a short description of the page content;
- og:type - the type of the page content (the default is "website");
- og:image - the URL of an image to represent the page;
- og:url - the canonical URL of the page.
If the page doesn't have OG <meta> tags, the Facebook crawler will automatically search for the required content and pick the information it finds on your page on its own.
This isn't always suitable, because the crawler can select any information that seems relevant to it, which might not match your intent.
So, adding Open Graph meta tags to your page is the best way to integrate a website with social networks. It is easy to do if you have ever worked with meta tags before.
<head>
  <meta property="og:title" content="Some Title"/>
  <meta property="og:description" content="Short description"/>
  <meta property="og:type" content="article"/>
  <meta property="og:image" content="http://example.com/progressive/image.jpg"/>
  <meta property="og:url" content="http://example.com/current-url"/>
</head>
The rel="canonical" link element is an HTML element that helps developers avoid duplicate content. Using it will improve a site's SEO, as Google's bots don't like it when your website has a lot of similar content.
The idea is simple: if you have several similar versions of the same content, you choose one version, make it "canonical", and inform search engines about this. This solves the duplicate-content problem, where search engines don't know which of the content pieces they should show.
Choosing a proper canonical URL for every set of similar URLs improves the SEO of your site. Because the search engine knows which version is authoritative, it can count links pointing to any of the different versions as links to that single canonical version.
If you want to use the rel="canonical" link element in a single-page application, you have to generate the canonical URL dynamically for each view.
Also, remember that the canonical URLs and the URLs listed in sitemap.xml must match!
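As a sketch (the example.com URLs and the product page are placeholders), a page reachable under several query-string variants would declare one canonical version in its <head>:

```html
<!-- All of these variants serve the same content:
       http://example.com/product?id=42&ref=mail
       http://example.com/product?id=42&sort=price
     Each variant's <head> points to the single canonical URL: -->
<link rel="canonical" href="http://example.com/product/42">
```

The sitemap's <loc> entry for this page must then list exactly http://example.com/product/42, not any of the variant URLs.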