laurynasusas/webcrawler

Brief

Write a simple web crawler in Go. The crawler should be limited to one domain: when crawling, it follows links within that domain but does not follow external links, for example to Facebook or Twitter accounts.

Given a URL, your program should output a site map showing each page's URL, title, static assets, internal links, and external links.

The number of pages crawled should be configurable. We suggest crawling Wikipedia with a limit of 100 pages.
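The brief leaves the exact output shape and configuration open; below is a minimal sketch of what the per-page result and the configurable limit could look like. The type and flag names (`PageResult`, `-url`, `-max-pages`) are illustrative assumptions, not taken from this repository.

```go
package main

import (
	"flag"
	"fmt"
)

// PageResult holds everything the site map prints for one crawled page.
type PageResult struct {
	URL           string   // address of the page
	Title         string   // contents of the <title> tag
	StaticAssets  []string // img, audio, script, video, embed, source URLs
	InternalLinks []string // links within the crawled domain
	ExternalLinks []string // links to other domains, not followed
}

func main() {
	startURL := flag.String("url", "https://en.wikipedia.org", "page to start crawling from")
	maxPages := flag.Int("max-pages", 100, "maximum number of pages to crawl")
	flag.Parse()
	fmt.Printf("crawling %s, limit %d pages\n", *startURL, *maxPages)
}
```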

Approach

  1. Scrape the website.
  2. As soon as an internal link is found, start scraping it concurrently (see the sketch after this list).
  3. Continue extracting internal links, external links, and static assets (tags: "img", "audio", "script", "video", "embed", "source").
  4. Once all of a page's data is scraped, print the result for that page.
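A minimal sketch of this loop, assuming the golang.org/x/net/html tokenizer; the repository may extract tags differently, and the sketch omits the page limit and result printing for brevity:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"sync"

	"golang.org/x/net/html"
)

// staticTags are the tags treated as static assets in step 3.
var staticTags = map[string]bool{
	"img": true, "audio": true, "script": true,
	"video": true, "embed": true, "source": true,
}

type crawler struct {
	domain  string          // only links on this domain are followed
	mu      sync.Mutex      // guards visited
	visited map[string]bool // pages already claimed by a goroutine
	wg      sync.WaitGroup
}

// crawl fetches one page, extracts its links and assets, and spawns a new
// goroutine for every internal link that has not been seen yet (step 2).
func (c *crawler) crawl(pageURL string) {
	defer c.wg.Done()

	c.mu.Lock()
	if c.visited[pageURL] {
		c.mu.Unlock()
		return
	}
	c.visited[pageURL] = true
	c.mu.Unlock()

	resp, err := http.Get(pageURL)
	if err != nil {
		return
	}
	defer resp.Body.Close()

	z := html.NewTokenizer(resp.Body)
	for {
		tt := z.Next()
		if tt == html.ErrorToken {
			return // io.EOF or a parse error ends this page (step 4 would print here)
		}
		if tt != html.StartTagToken && tt != html.SelfClosingTagToken {
			continue
		}
		tok := z.Token()
		for _, a := range tok.Attr {
			switch {
			case tok.Data == "a" && a.Key == "href":
				if link, internal := c.resolve(pageURL, a.Val); internal {
					c.wg.Add(1)
					go c.crawl(link) // follow internal links concurrently
				} // external links would be recorded for the site map, not followed
			case staticTags[tok.Data] && a.Key == "src":
				fmt.Println("asset:", a.Val) // step 3: record the static asset
			}
		}
	}
}

// resolve turns href into an absolute URL and reports whether it is internal.
func (c *crawler) resolve(base, href string) (string, bool) {
	b, err := url.Parse(base)
	if err != nil {
		return "", false
	}
	u, err := b.Parse(href)
	if err != nil || !strings.HasSuffix(u.Host, c.domain) {
		return "", false
	}
	u.Fragment = "" // treat page#section the same as page
	return u.String(), true
}

func main() {
	c := &crawler{domain: "en.wikipedia.org", visited: map[string]bool{}}
	c.wg.Add(1)
	c.crawl("https://en.wikipedia.org/wiki/Main_Page")
	c.wg.Wait()
}
```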

The implementation is aggressively concurrent: it spawns a goroutine per link with no upper bound, so it should be relatively fast, but it needs further testing to make it more stable.

To optimise it further:

  • Add a worker pool to bound concurrency (see the sketch after this list).
  • Reuse HTTP connections.
  • Benchmark the data extraction.
  • Consider GoQuery; regex is possible but not recommended for parsing HTML.
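A sketch of the first two points, assuming a fixed pool of workers fed by a channel and a single shared http.Client, whose default transport pools keep-alive connections for reuse; the worker count and URLs are illustrative:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	const numWorkers = 8      // bound on concurrent fetches
	client := &http.Client{} // shared: its transport reuses keep-alive connections

	jobs := make(chan string)
	var wg sync.WaitGroup

	for i := 0; i < numWorkers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for pageURL := range jobs {
				resp, err := client.Get(pageURL)
				if err != nil {
					continue
				}
				// Fully draining the body lets the transport reuse the connection.
				io.Copy(io.Discard, resp.Body)
				resp.Body.Close()
				fmt.Println("fetched:", pageURL)
			}
		}()
	}

	for _, u := range []string{
		"https://en.wikipedia.org/wiki/Go_(programming_language)",
		"https://en.wikipedia.org/wiki/Web_crawler",
	} {
		jobs <- u
	}
	close(jobs)
	wg.Wait()
}
```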

How to run

$ make runwiki
