
Following the general guidelines below will help Google find, index, and rank your site.

We strongly encourage you to pay close attention to the quality guidelines below, which describe practices that can lead to a site being removed entirely from Google's index, or otherwise affected by a manual or algorithmic spam action. If a site is affected by a spam action, it may no longer appear in search results on google.com or on any of Google's partner sites.





General guidelines

Help Google find your pages

  • Make sure that all pages on the site can be reached by a link from another findable page. The referring link should include either text or, for images, an alt attribute, that is relevant to the target page.
  • Provide a sitemap file with links that point to the important pages on your site. In addition, provide a page with a human-readable list of links to these pages (sometimes called a site index or site map page).
  • Limit the number of links on a page to a reasonable number (a few thousand at most).
  • Make sure that your web server correctly supports the If-Modified-Since HTTP header. This feature lets your web server tell Google whether your content has changed since it last crawled your site. Supporting this feature saves you bandwidth and overhead.
  • Use a robots.txt file on your web server to manage your crawl budget by preventing the crawling of infinite spaces such as search result pages. Keep your robots.txt file up to date, and check it with the robots.txt testing tool.
How to help Google find your site

Submit a sitemap to Google through Search Console. Google uses your sitemap to learn about the structure of your site and to improve coverage of your web pages.
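As an illustration, a minimal sitemap file has the following shape (the URLs and dates are placeholders, not part of the original text):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about</loc>
  </url>
</urlset>
```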

Make sure that any sites that should know about your pages are aware that your site is online.


Help Google understand your pages

  • Create a useful, information-rich site, and write pages that describe your content clearly and accurately.
  • Think about the words users would type to find your pages, and make sure that your site actually includes those words.
  • Make sure that your <title> elements and alt attributes are descriptive, specific, and accurate.
  • Design your site to have a clear conceptual page hierarchy.
  • Follow the recommended best practices for images, video, and structured data.
  • When using a content management system (e.g., Wix or WordPress), make sure that it creates pages and links that search engines can crawl.
  • To help Google fully understand your site's content, allow all site assets that significantly affect page rendering to be crawled: for example, CSS and JavaScript files that affect how the page is understood. The Google indexing system renders web pages, including their CSS and JavaScript.
  • To see which page assets Googlebot cannot crawl, or to debug directives in your robots.txt file, use the Blocked Resources report in Search Console, along with the Fetch as Google tool and the robots.txt Tester.
  • Allow search bots to crawl your site without session IDs or URL parameters that track their path through the site. These techniques are useful for tracking individual user behavior, but the access patterns of bots are entirely different. Using them may result in incomplete indexing of your site, because bots may not be able to eliminate URLs that look different but actually point to the same page.
  • Make your site's important content visible by default. Google can crawl HTML content hidden inside navigational elements such as tabs or expanding sections, but we consider this content less accessible to users, and you should make sure that your most important information is visible in the default page view.
  • Make a reasonable effort to ensure that advertisement links on your pages do not affect search engine rankings. For example, use robots.txt or rel="nofollow" to prevent advertisement links from being followed by a crawler.
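As an illustration, an advertisement link can carry the nofollow hint directly in its markup (the URL below is a placeholder, not part of the original text):

```html
<!-- rel="nofollow" tells crawlers not to follow this link or pass
     ranking signals through it. -->
<a href="https://ads.example.com/offer" rel="nofollow">Sponsored offer</a>
```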