Wednesday, October 28, 2015

[CareerCup] 10.7 Simplified Search Engine 简单的搜索引擎 - Grandyang - 博客园



10.7 Imagine a web server for a simplified search engine. This system has 100 machines to respond to search queries, which may then call out using processSearch(string query) to another cluster of machines to actually get the result. The machine which responds to a given query is chosen at random, so you cannot guarantee that the same machine will always respond to the same request. The method processSearch is very expensive. Design a caching mechanism for the most recent queries. Be sure to explain how you would update the cache when data changes.

https://github.com/filipegoncalves/interview-questions/blob/master/systems_design/Cache.md
This problem would be much easier if the same query was always sent to the same machine: we could just build a caching layer on each machine, cache replies locally, and invalidate the cache whenever local information was updated. Unfortunately, this is not the case, so our system will have to be a little bit smarter than that.
Clearly, we need a caching layer manager that sits between the two clusters. This layer is a component in the architecture that acts as an intermediary between the processSearch() request and the cluster that actually processes the request. We can use the cluster's machines to create a distributed caching infrastructure (much like a distributed key-value store), similar to memcached. Memcached uses a client-server model where the client knows every server (this is reasonable, since we're talking about a cluster of 100 machines) and uses that information to compute which server stores the value for a given key (if it's cached). Note that a machine in the cluster may be handed the cached results of a query that was originally answered by another machine; we essentially provide a deterministic hash function that always maps a given query to the same machine. In other words, we build a deterministic system on top of the cluster, but just for the purposes of caching. This is a high-level view of how memcached works. The advantage of this architecture is that the servers don't know about each other, so we have a reasonably sharded, scalable, shared-nothing module (no synchronization to deal with and no shared state, which is easier to manage, debug and maintain).
We do have a single point of failure, which is also a contention point: the caching layer manager. We could explore the possibility of eliminating the caching layer manager entirely and letting the cluster that issues requests deal with the caching infrastructure, but then we would have to worry about synchronizing the caching state across the machines in that cluster, which is not really their job. Another approach is to make the caching layer smarter and distributed: for example, create an overlay network where each node keeps track of O(log(N)) nodes (for a cluster of size N), and requests are iteratively forwarded to the right machine (according to the hash value) in at most O(log(N)) steps; that is, use something like a Chord distributed hash table to associate query keys with the servers that cache the results. This is good because each client no longer needs to keep track of every other server, and it still guarantees that a query, if cached, is deterministically placed on the same machine.
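To make the deterministic query-to-server mapping concrete, here is a minimal C++ sketch of memcached-style client-side server selection using a consistent-hash ring. The names (CacheRing, serverFor) and the choice of std::hash are illustrative assumptions, not anything prescribed by the text above.

```cpp
#include <cstddef>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of memcached-style client-side server selection: the
// client hashes each query onto a ring of virtual nodes, so every client
// deterministically picks the same cache server for the same query.
class CacheRing {
public:
    CacheRing(const std::vector<std::string>& servers, int replicas = 100) {
        for (const auto& s : servers)
            for (int i = 0; i < replicas; ++i)
                ring_[hash_(s + "#" + std::to_string(i))] = s;
    }

    // Map a query to the first virtual node at or after its hash,
    // wrapping around to the start of the ring if necessary.
    const std::string& serverFor(const std::string& query) const {
        auto it = ring_.lower_bound(hash_(query));
        if (it == ring_.end()) it = ring_.begin();
        return it->second;
    }

private:
    std::map<std::size_t, std::string> ring_;  // hash -> server id
    std::hash<std::string> hash_;
};
```

A nice property of the ring over plain hash % N is that adding or removing a cache server only remaps roughly 1/N of the queries, instead of reshuffling almost all of them.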
What about cache invalidation? Cache invalidation is a hard problem. The right approach here depends on where (and how) exactly the writes take place, and what kind of guarantees we want to provide to user code.
There are basically 3 ways to deal with data updates when caching is used:
  • Write-through: in this paradigm, writes are performed on the cache and then passed through to the underlying storage medium. This eliminates any problems that might arise with cache inconsistency, but it comes at the cost of slower writes (assuming that the underlying storage is slow, as is usually the case). However, it has the added benefit of ensuring that writes are not lost if the server fails unexpectedly (assuming, of course, that the underlying storage is reliable). In other write policies, such as write-back, updated data may be lost if the machine fails unexpectedly.
  • Write-back: write-back is fast; updates are only written to the cache, and the data is only updated on disk when that specific cache entry is purged. It can yield huge performance boosts, but it is a problem if servers are expected to fail. This may not be a big deal if we're ok with missing a few minor updates to the search engine database, or it might be a huge deal if we want to offer stronger guarantees (for example, in a service like Dropbox it would be a catastrophe). If all we're looking for is efficiency and we're ok with some data loss / inconsistency, then this is definitely the way to go. For a search engine it might be a good choice, but not necessarily the best.
  • Write-around: write-around takes a different approach and writes directly to the underlying medium, invalidating any affected cache entries. This can be good for services where writes are more common than reads, because we don't keep data around in the cache that is not likely to be read. Yes, writes will be more expensive, but if we don't plan on reading the data back any time soon, why bother? Just purge the cached entry and write directly to disk. For the purposes of a search engine, this doesn't look like a smart choice: search engines have a huge read-to-write ratio, so it is very likely that updated data will be read very, very soon.
Taking into account the high read-to-write ratio, it might seem that write-back would be a good choice here. If we want to offer stronger consistency guarantees, we should look at write-through, or we can try some form of cache-level replication by storing the same query under two different hashes that map to two different machines.
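The three policies differ only in where a write goes first; a toy sketch, assuming a plain in-memory map stands in for the slow storage layer:

```cpp
#include <string>
#include <unordered_map>

// Toy illustration of the three write policies above. `backingStore`
// stands in for the slow storage layer; in the real system it would be
// the search index, not another in-memory map.
struct Cache {
    std::unordered_map<std::string, std::string> entries;        // the cache
    std::unordered_map<std::string, std::string>& backingStore;  // "slow" storage

    // Write-through: update cache and storage together. Writes are slower,
    // but the cache and the storage never disagree, and nothing is lost on
    // a crash (assuming the storage itself is reliable).
    void writeThrough(const std::string& k, const std::string& v) {
        entries[k] = v;
        backingStore[k] = v;
    }

    // Write-back: update only the cache; storage is updated later, when the
    // entry is evicted (eviction not shown). Fast, but a crash loses any
    // updates that were never flushed.
    void writeBack(const std::string& k, const std::string& v) {
        entries[k] = v;
    }

    // Write-around: update storage directly and invalidate the cached copy,
    // so a later read fetches the fresh value.
    void writeAround(const std::string& k, const std::string& v) {
        backingStore[k] = v;
        entries.erase(k);
    }
};
```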


Option 3: Each machine stores a segment of the cache.
A third option is to divide up the cache, such that each machine holds a different part of it. Then, when machine i needs to look up the results for a query, machine i would figure out which machine holds this value, and then ask this other machine (machine j) to look up the query in j's cache.
But how would machine i know which machine holds this part of the hash table?
One option is to assign queries based on the formula hash(query) % N. Then, machine i only needs to apply this formula to know that machine j should store the results for this query.
So, when a new query comes in to machine i, this machine would apply the formula and call out to machine j. Machine j would then return the value from its cache or call processSearch(query) to get the results. Machine j would update its cache and return the results back to i.
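A sketch of that flow in C++, as run on machine i. Only processSearch comes from the problem statement; forwardToMachine and the local-cache helpers are hypothetical names for the RPC and cache layers:

```cpp
#include <functional>
#include <optional>
#include <string>

// Hypothetical helpers (declarations only); processSearch is the expensive
// call from the problem statement.
std::string processSearch(const std::string& query);
std::string forwardToMachine(int machineId, const std::string& query);
std::optional<std::string> localCacheLookup(const std::string& query);
void localCacheInsert(const std::string& query, const std::string& results);

// Handle a query arriving at machine `selfId` in a cluster of N machines.
std::string handleQuery(const std::string& query, int selfId, int N) {
    int owner = static_cast<int>(std::hash<std::string>{}(query) % N);  // machine j
    if (owner != selfId)
        return forwardToMachine(owner, query);   // j runs this same logic
    if (auto cached = localCacheLookup(query))   // our segment of the cache
        return *cached;
    std::string results = processSearch(query);  // expensive fallback
    localCacheInsert(query, results);
    return results;
}
```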


Step 3: Updating results when contents change
If the data doesn't require instant refreshing (which it probably doesn't), we could periodically crawl through the cache stored on each machine to purge queries tied to the updated URLs, as in the sketch below.
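A sketch of that purge pass, assuming we also maintain an inverted index from each URL to the cached queries whose results contain it (the names are illustrative):

```cpp
#include <string>
#include <unordered_map>
#include <unordered_set>

// Alongside the query cache, keep an inverted index URL -> queries. When
// the crawler notices that a URL changed, drop exactly the cache entries
// tied to it; the next lookup recomputes them via processSearch.
struct QueryCache {
    std::unordered_map<std::string, std::string> results;  // query -> cached results
    std::unordered_map<std::string, std::unordered_set<std::string>> queriesByUrl;

    void onUrlUpdated(const std::string& url) {
        auto it = queriesByUrl.find(url);
        if (it == queriesByUrl.end()) return;
        for (const auto& query : it->second)
            results.erase(query);
        queriesByUrl.erase(it);
    }
};
```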

A good way to handle Situation #3 (and likely something we'd want to do anyway) is to implement an "automatic time-out" on the cache. That is, we'd impose a time out where no query, regardless of how popular it is, can sit in the cache for more than x minutes. This will ensure that all data is periodically refreshed.
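A minimal sketch of such a time-out, assuming a TtlCache wrapper (an illustrative name) that stamps every entry with an expiry time and treats stale hits as misses; a real implementation would also purge expired entries in the background:

```cpp
#include <chrono>
#include <optional>
#include <string>
#include <unordered_map>

class TtlCache {
    using Clock = std::chrono::steady_clock;
    struct Entry { std::string value; Clock::time_point expiry; };
    std::unordered_map<std::string, Entry> map_;
    std::chrono::minutes ttl_;  // the "x minutes" from the text

public:
    explicit TtlCache(std::chrono::minutes ttl) : ttl_(ttl) {}

    void put(const std::string& key, std::string value) {
        map_[key] = {std::move(value), Clock::now() + ttl_};
    }

    // A hit past its expiry counts as a miss, forcing a refresh.
    std::optional<std::string> get(const std::string& key) {
        auto it = map_.find(key);
        if (it == map_.end()) return std::nullopt;
        if (Clock::now() >= it->second.expiry) {
            map_.erase(it);
            return std::nullopt;
        }
        return it->second.value;
    }
};
```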

To better support the situation where some queries are very popular, suppose (as an extreme example) that a particular string constitutes 1% of all queries. Rather than machine i forwarding the request to machine j every time, machine i could forward the request just once to j, and then i could store the results in its own cache as well, as shown in the sketch below.
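Building on the TtlCache and handleQuery sketches above (both illustrative), this might look as follows: machine i consults a small local "hot" cache before computing the owner, so a very popular query costs no network hop after the first miss.

```cpp
// A small, short-lived local cache in front of the distributed one.
TtlCache hotCache{std::chrono::minutes(10)};

std::string handleQueryWithHotCache(const std::string& query, int selfId, int N) {
    if (auto local = hotCache.get(query))
        return *local;                          // served entirely locally
    int owner = static_cast<int>(std::hash<std::string>{}(query) % N);
    std::string results = (owner == selfId)
        ? handleQuery(query, selfId, N)         // we own this segment anyway
        : forwardToMachine(owner, query);       // single hop to machine j
    hotCache.put(query, results);               // keep a copy for next time
    return results;
}
```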

The "automatic time out" mechanism: As initially described, this mechanism purges any data after X minutes. However, we may want to update some data (like current news) much more frequently than other data (like historical stock prices). We could implement timeouts based on topic or based on URLs. In the latter situation, each URL would have a time out value based on how frequently the page has been updated in the past. The time out for the query would be the minimum of the time outs for each URL.

This problem describes a web server for a simplified search engine: the system has 100 machines to respond to queries, which may call out to processSearch(string query) on another cluster of machines to actually get the results. The machine that responds to a given query is chosen at random, so the same machine is not guaranteed to handle the same request. Since processSearch is very expensive, we need to design a caching mechanism for recent queries. Following the book, we first make some assumptions:
1. Aside from calling out to processSearch as necessary, all query processing happens on the machine that initially received the query.
2. The number of queries we wish to cache is very large.
3. Calls between machines are relatively fast.
4. The result of a query is an ordered list of URLs, each with a 50-character title and a 200-character summary.
5. The most popular queries are so popular that they would always appear in the cache.

System requirements:
We mainly need to implement two functions:
1. Efficient lookup given a query key.
2. Expiration of old data so that it can be replaced by new data.
We also need to update or purge the cache when search results change. Because some queries are so common that they effectively stay in the cache forever, we cannot simply wait for the cache to expire naturally.

Step 1: Design a cache for a single system
We can implement it with a combination of a linked list and a hash table. We build a linked list in which a node is moved to the front whenever it is accessed, so the tail of the list always holds the oldest data. The hash table maps each query to its node in the list; this lets us not only return cached results efficiently but also move a node to the front in constant time. A sketch of this structure is shown below:
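The code block in the original post did not survive; the following is a minimal reconstruction of the structure it describes, a classic LRU cache built from std::list and std::unordered_map:

```cpp
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>

// Front of the list = most recently used; tail = oldest data. The hash map
// points each query at its list node, so lookup and move-to-front are both
// O(1). Capacity is assumed to be positive.
class LRUCache {
    using Node = std::pair<std::string, std::string>;  // (query, results)
    std::list<Node> items_;                            // recency order
    std::unordered_map<std::string, std::list<Node>::iterator> index_;
    std::size_t capacity_;

public:
    explicit LRUCache(std::size_t capacity) : capacity_(capacity) {}

    // Return cached results (moving the node to the front), or nullptr.
    const std::string* get(const std::string& query) {
        auto it = index_.find(query);
        if (it == index_.end()) return nullptr;
        items_.splice(items_.begin(), items_, it->second);  // move to front
        return &items_.front().second;
    }

    void put(const std::string& query, std::string results) {
        if (auto it = index_.find(query); it != index_.end()) {
            it->second->second = std::move(results);
            items_.splice(items_.begin(), items_, it->second);
            return;
        }
        if (items_.size() == capacity_) {                   // evict the oldest
            index_.erase(items_.back().first);
            items_.pop_back();
        }
        items_.emplace_front(query, std::move(results));
        index_[query] = items_.begin();
    }
};
```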

Step 2: Expand to many machines
With multiple machines, we have several options:
Option 1: Each machine has its own cache. The advantage is speed, since there are no calls between machines; the drawback is that the cache is much less effective, because the same popular query will miss the cache on most machines.
Option 2: Each machine keeps a complete copy of the cache; when a new entry is added to the cache, it is broadcast to every machine. The intent is that common queries live on every machine, but the drawback is that cache space is limited, so we cannot store much data.
Option 3: Each machine stores one segment of the cache. When machine i needs the result of a query, it figures out which machine holds it and fetches the result from that machine. The question is how machine i knows which machine has the result: by assigning queries with hash(query) % N, any machine can quickly compute which machine owns a given query (as in the routing sketch after Option 3 above).

Step 3: Update results when content changes
Some queries are so common that they stay in the cache forever, so we need a mechanism to refresh their cached results when the underlying content changes. There are three main situations:
1. The content of a URL changes.
2. The ranking of a page changes, so the order of the results changes.
3. A new page appears that is relevant to a particular query.
For situations 1 and 2, we can build a separate hash table that tells us which queries are tied to which URLs (as in the purge sketch above); this can be done separately on each machine, but it may require a lot of data. Alternatively, if the data does not need instant refreshing, we can periodically crawl the caches and update them. For situation 3, we can implement an automatic time-out mechanism: we impose a time limit so that no entry, no matter how frequently it is queried, stays in the cache longer than that limit. This guarantees that all data is refreshed periodically.

Step 4: Further enhancements
One optimization targets very popular queries: if, say, a single query accounts for 1% of all traffic, then rather than having machine i forward the request to machine j every time, machine i can forward it once and then also store the result in its own cache, as in the hot-cache sketch above.
Another is to assign incoming queries to machines based on their hash values, rather than at random.
A further optimization refines the automatic time-out mechanism mentioned earlier, which purges data after x minutes: we may want different values of x for different data, so each URL gets a time-out value based on how frequently that page has been updated in the past, and the time-out for a query is the minimum of the time-outs of its URLs.
