Do not Crawl in the DUST: Different URLs with Similar Text

  • Uri Schonfeld, Technion - Israel Institute of Technology, Israel
  • Ziv Bar-Yossef, Technion - Israel Institute of Technology, Israel
  • Idit Keidar, Technion - Israel Institute of Technology, Israel

Track: Posters

We consider the problem of DUST: Different URLs with Similar Text. Such duplicate URLs are prevalent in web sites, since web server software often uses aliases and redirections, translates URLs to a canonical form, and dynamically generates the same page in response to different URL requests. We present a novel algorithm, DustBuster, for uncovering DUST; that is, for discovering rules that transform a given URL into others likely to have similar content. DustBuster detects DUST effectively from previous crawl logs or web server logs, without examining page contents; verifying the discovered rules via sampling requires fetching only a few actual web pages. Search engines can use this information to crawl more effectively, reduce indexing overhead, and improve the quality of popularity statistics such as PageRank.
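To make the notion of a DUST rule concrete, the sketch below applies substring-substitution rules of the form "replace α with β" to canonicalize URLs before crawling. The rule list and function names are illustrative assumptions, not DustBuster's actual output or implementation.

```python
def apply_rules(url, rules):
    """Rewrite `url` by applying each (alpha, beta) substring rule in turn.

    A rule (alpha, beta) asserts that replacing substring alpha with beta
    is likely to yield a URL with similar content.
    """
    for alpha, beta in rules:
        if alpha in url:
            url = url.replace(alpha, beta)
    return url

# Hypothetical rules a DustBuster-style miner might discover from logs:
rules = [
    ("/index.html", "/"),        # trailing index page is an alias for the directory
    ("http://www.", "http://"),  # "www." host prefix is an alias
]

print(apply_rules("http://www.example.com/news/index.html", rules))
# -> http://example.com/news/
```

A crawler that canonicalizes URLs this way before enqueueing them avoids fetching the same content under several names, which is the crawl-efficiency benefit described above.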
