Wednesday, December 24, 2014

How to avoid getting into infinite loops when designing a web crawler | Runhe Tian Coding Practice

First, how does a crawler get into a loop? The answer is simple: by re-parsing an already parsed page. The crawler then revisits all the links found on that page, and the cycle continues indefinitely.
Be careful about what the interviewer considers the "same" page. Is it URL or content? One could easily get redirected to a previously crawled page.
So how do we stop visiting an already visited page? The web is essentially a directed graph, and graphs are commonly traversed with DFS (depth-first search) or BFS (breadth-first search). We can mark already visited pages the same way we would in a BFS/DFS; a sketch follows below.
We can easily prove that this algorithm terminates. Each step of the algorithm parses only new pages, never already visited ones. So if there are N unvisited pages, every step reduces that count by 1 (from N to N-1), which means the algorithm finishes after at most N steps.
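A minimal sketch of this idea in Java, assuming hypothetical fetch and extractLinks helpers as stand-ins for a real HTTP client and HTML parser:

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

public class Crawler {
    // Stand-ins for a real HTTP client and HTML parser.
    static String fetch(String url) { return ""; }
    static List<String> extractLinks(String html) { return List.of(); }

    public static void crawl(String seedUrl) {
        Set<String> visited = new HashSet<>();     // marks pages, as in graph BFS
        Queue<String> frontier = new ArrayDeque<>();
        visited.add(seedUrl);
        frontier.add(seedUrl);

        while (!frontier.isEmpty()) {
            String url = frontier.poll();
            for (String link : extractLinks(fetch(url))) {
                // add() returns false for pages we have already seen,
                // so each page is enqueued at most once -- this is what
                // breaks cycles and guarantees termination.
                if (visited.add(link)) {
                    frontier.add(link);
                }
            }
        }
    }
}

If the interviewer's notion of the "same" page is content rather than URL, the visited set can instead be keyed by a hash of the page body, so that redirects and mirrors collapse to a single entry.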
http://www.hawstein.com/posts/12.5.html

http://www.quora.com/How-do-web-crawlers-avoid-getting-into-infinite-loops
By relying on the canonical URL (specify your canonical), limiting the crawl depth, and in some cases extracting a unique identifier from the scanned page (for example, a product number when crawling an e-commerce site). The same thread suggests further safeguards (see the sketch after this list):
Avoid honeypot traps (pages that link into endless chains of generated URLs)
Don't follow the same crawling pattern
Be polite to your target website
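A minimal sketch of URL canonicalization in Java, assuming absolute http(s) URLs; the specific rules here (lowercasing, dropping fragments and a "www." prefix, trimming trailing slashes) are illustrative choices, not a standard:

import java.net.URI;
import java.net.URISyntaxException;

public class Canonicalizer {
    // Reduce superficially different URLs to one canonical form so the
    // crawler's visited set treats them as the same page.
    public static String canonicalize(String url) throws URISyntaxException {
        URI u = new URI(url).normalize();          // resolves ./ and ../ segments
        String host = u.getHost().toLowerCase();
        if (host.startsWith("www.")) host = host.substring(4);
        String path = (u.getPath() == null || u.getPath().isEmpty()) ? "/" : u.getPath();
        if (path.length() > 1 && path.endsWith("/")) path = path.substring(0, path.length() - 1);
        // Fragments (#section) never change the fetched document, so drop them.
        return u.getScheme().toLowerCase() + "://" + host + path
                + (u.getQuery() == null ? "" : "?" + u.getQuery());
    }
}

Depth limiting fits naturally into the BFS above: store (url, depth) pairs in the queue and skip any link whose depth would exceed a fixed cap.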

https://tianrunhe.wordpress.com/2012/04/08/how-to-avoid-getting-into-infinite-loops-when-designing-a-web-crawler/

http://stackoverflow.com/questions/5834808/designing-a-web-crawler
