1. Validating User’s Input
2. Simple Client-side Calculations
3. Greater Control
4. Platform Independence
5. Handling Dates and Time
6. Generating HTML Content
7. Detecting the User’s Browser and OS
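As a sketch of item 1, client-side input validation can be as simple as testing a value against a pattern before the form is submitted. The `isValidUsername` helper and its rule below are hypothetical examples, not a prescribed API:

```javascript
// Minimal client-side validation sketch (hypothetical rule):
// accept 3-16 characters of letters, digits, or underscores.
function isValidUsername(input) {
  return /^[A-Za-z0-9_]{3,16}$/.test(input);
}

console.log(isValidUsername("alice_42")); // true
console.log(isValidUsername("x"));        // false (too short)
```

In a real page, a check like this would run in a form's `submit` handler, rejecting bad input before any request is sent to the server.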
Googlebot crawls, renders, and indexes a page.
When Googlebot takes a URL from the crawling queue, it first checks whether you allow crawling by reading the site's robots.txt file. If that file marks the URL as disallowed, Googlebot skips the HTTP request and drops the URL.
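For illustration, a robots.txt rule that would make Googlebot skip every URL under a path might look like this (the path is a made-up example):

```
User-agent: Googlebot
Disallow: /private/
```

With this rule in place, a URL such as `/private/report.html` would never be fetched, so it could not be rendered or indexed.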