Tuesday, March 1, 2016

Crack the System Design Interview



https://medium.com/@theobendixson/interviewing-with-facebook-onsite-part-4-e88ed9168a20
The systems design interview asks you to design a networked iOS application from scratch, discussing all of the separate modules and how they work together.
I think it’s a tough interview because systems design asks you to exercise the part of your mind that doesn’t see things in black and white (notoriously difficult for programmers). Whereas so many programming problems have a clearly defined and logical answer, systems design often has all sorts of grey areas.
While building your product, have you ever asked yourself questions like the following?
  • When viewing a screen displaying my chat messages with another user, how often should the app ping the server to see if there’s a new message?
  • When and how often should the app refresh your friends’ profile pictures?
  • Should every piece of data downloaded from the server go through the persistence layer (a.k.a. Core Data or Realm) before being displayed?
  • Should the app poll the server to determine what relevant data have changed, or should it rely on push notifications instead?
  • What sorts of data should be retrieved while the app is in the background?

Here’s a short list of what goes into each decision you make:
  • Battery life
  • Network connection or lack thereof
  • How approachable your app is to users
  • Aesthetics
  • The cost of running web services
  • Scalability
  • Reliability
  • Speed, or performance
  • Apple’s Human Interface Guidelines
  • The number of expensive developer hours it takes to build the feature
  • The project timeline
The key is to understand that there are always tradeoffs inherent in any system you design.
When your interviewer asks you to design a specific kind of app, try to identify which business goals take the highest priority. The design you decide to implement should make sure these priorities are emphasized above all others.


http://www.tugberkugurlu.com/archive/software-architecture-and-system-design---getting-your-grip-and-some-related-resources


https://hackernoon.com/anatomy-of-a-system-design-interview-4cb57d75a53f

Step 1: Requirement Gathering:

Step 2: System interface definition

Step 4: Defining the data model



https://medium.com/@samson_hu/how-to-pass-technical-interviews-4653ea9220e5
  • The systems design interview follows a well-defined process: requirements, constraints/estimates, high-level design, detailed design. DO NOT SKIP A STEP! Forgetting requirements and estimates and moving immediately to laying out a system is almost a guaranteed fail. Resist the urge to just give an architecture you’re familiar with without doing any digging first.

https://www.interviewbit.com/courses/system-design/
Do note that no design is right or wrong; there are just good designs and bad designs, and which is which depends heavily on the use case.
Hence, it is extremely important to clarify the requirements for the problem asked.
  • Consistency: Assuming you have a storage system with more than one machine, consistency implies that the data is the same across the cluster, so you can read from or write to any node and get the same data.
    • Eventual consistency: Exactly what the name suggests. In a cluster, if multiple machines store the same data, an eventually consistent model implies that all machines will have the same data eventually. It’s possible that at a given instant those machines have different versions of the same data (temporarily inconsistent), but they will eventually reach a state where they all have the same data.
  • Availability: In the context of a database cluster, Availability refers to the ability to always respond to queries ( read or write ) irrespective of nodes going down.
  • Partition Tolerance: In the context of a database cluster, the cluster continues to function even if there is a “partition” (communication break) between two nodes (both nodes are up, but they can’t communicate).
Sharding : With most huge systems, data does not fit on a single machine. In such cases, sharding refers to splitting the very large database into smaller, faster and more manageable parts called data shards.
  • Feature expectations ( First 2 mins ) : 
    As said earlier, there is no wrong design. There are just good and bad designs, and the same solution can be a good design for one use case and a bad one for another. It is therefore extremely important to get a very clear understanding of what the requirements for the question are.
  • Estimations ( 2-5 mins )
    The next step is usually to estimate the scale required for the system. The goal of this step is to understand the level of sharding required (if any) and to zero in on the design goals for the system.
    For example, if the total data required for the system fits on a single machine, we might not need to go into sharding and the complications that come with a distributed system design.
    Or, if the most frequently used data fits on a single machine, caching could be done on a single machine.
  • Design Goals ( 1 min ) 
    Figure out what the most important goals for the system are. It is possible that a system is latency-sensitive, in which case a solution that does not account for latency would be a bad design.
  • Skeleton of the design ( 4 - 5 mins )
    30-40 minutes is not enough time to discuss every single component in detail. A good strategy, then, is to discuss the design at a very high level with the interviewer and deep-dive into components as the interviewer asks about them.
  • Deep dive ( 20-30 mins )
    This is an extension of the previous section.

CAP

Design a distributed key value store which is highly consistent and is network partition tolerant.
This is the first part of any system design interview, coming up with the features which the system should support. As an interviewee, you should try to list down all the features you can think of which our system should support. Try to spend around 2 minutes for this section in the interview. You can use the notes section alongside to remember what you wrote.

  • Q: What would the estimated QPS be for this DB? 
    A: Let's assume around 100k
This is usually the second part of a design interview: coming up with the estimated numbers for how scalable our system should be. Important parameters to remember for this section are the number of queries per second and the amount of data the system will be required to handle.
Try to spend around 5 minutes for this section in the interview. 
Total storage size : 100 TB as estimated earlier
Total estimated QPS : Around 10M 
Q: What is the minimum number of machines required to store the data?
A: Assuming a machine has 10 TB of hard disk, we would need a minimum of 100 TB / 10 TB = 10 machines to store the data. Do note that this is the bare minimum; the actual number might be higher if we decide to have replication, or if we need more shards to lower the QPS load on every shard.
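As a rough sketch of this kind of sizing (the per-machine QPS capacity below is an assumed number, not something given in the question):

    // Rough cluster-sizing sketch. The per-machine capacities are assumptions
    // for illustration; substitute whatever your hardware actually provides.
    public class ClusterSizing {
        public static void main(String[] args) {
            double totalStorageTB = 100;          // total data, from the estimate above
            double perMachineTB   = 10;           // assumed disk per machine
            double totalQps       = 10_000_000;   // estimated QPS, from above
            double perMachineQps  = 50_000;       // assumed QPS one shard can serve

            long machinesForStorage = (long) Math.ceil(totalStorageTB / perMachineTB);
            long machinesForQps     = (long) Math.ceil(totalQps / perMachineQps);

            // Enough shards to satisfy both constraints, before adding replication.
            System.out.println("Minimum machines: " + Math.max(machinesForStorage, machinesForQps));
        }
    }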



  • Latency - Is this problem very latency-sensitive (in other words, is a request with high latency as bad as a failed request)? For example, search typeahead suggestions are useless if they take more than a second.
  • Consistency - Does this problem require tight consistency? Or is it okay if things are eventually consistent?
  • Availability - Does this problem require 100% availability?
There could be more goals depending on the problem. It's possible that all parameters might be important, and some of them might conflict. In that case, you’d need to prioritize one over the other. 
Q: Is Latency a very important metric for us?
A: No, but it would be good to have a lower latency.
Q: Consistency vs Availability?
A: As the question states, we need tight consistency and partition tolerance. Going by the CAP theorem (nicely explained at http://robertgreiner.com/2014/08/cap-theorem-revisited/), we would need to compromise on availability if we want tight consistency and partition tolerance. As is the case with any storage system, data loss is not acceptable.



Let's dig deeper into every component one by one. Discussion of this section will take the majority of the interview time (20-30 minutes).
Note: In questions like these, the interviewer is looking at how you approach designing a solution, so saying “I’ll use a NoSQL DB like HBase” is not an ideal answer. It is okay, however, to discuss the architecture of HBase, for example, with rationale around why its components were designed the way they were.
Q: Is sharding required?
A: Yes. As estimated above, around 100 TB of data will not fit on a single machine, so the best course of action is to shard the data and distribute the load amongst multiple machines.
Q: Should the data stored be normalized?
http://www.studytonight.com/dbms/database-normalization.php
A: If the data is normalized, we need to join across tables and across rows to fetch it. If the data is already sharded across machines, any join across machines is highly undesirable (high latency, poor indexing support).
If we store denormalized information instead, we will be storing the same fields in more than one place. However, all information related to a row (or a key) will live on the same machine, which leads to lower latency.
However, if the sharding criterion is not chosen properly, it can lead to consistency concerns (after all, we are storing the same data in multiple places).
Q: Can I shard the data so that all the data required for answering my most frequent queries lives on a single machine?
A: Most applications are built to store data per user (consider messaging, for example: every user has his or her own mailbox). As such, if you shard with each user as a row, it is okay to store data in a denormalized fashion so that you won’t have to query information across users. In this case, let’s say we go with storing data in a denormalized fashion, as sketched below.
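A minimal sketch of per-user hash sharding under that assumption (the simple modulo scheme below makes resharding painful; consistent hashing is the usual refinement):

    // Minimal hash-based shard router: every row for a user lives on one shard,
    // so denormalized per-user data never needs a cross-machine join.
    public class ShardRouter {
        private final int numShards;

        public ShardRouter(int numShards) {
            this.numShards = numShards;
        }

        public int shardFor(String userId) {
            // floorMod keeps the bucket non-negative even for negative hash codes
            return Math.floorMod(userId.hashCode(), numShards);
        }

        public static void main(String[] args) {
            ShardRouter router = new ShardRouter(16);
            System.out.println("user_42 -> shard " + router.shardFor("user_42"));
        }
    }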





Q: What if the master keeping track of where the blocks are stored dies?
A: To overcome this problem we keep a standby master which, in the failover process, becomes the acting master and keeps unavailability to a minimum. To keep the standby master up to date, we can have a shared network file system. When any namespace modification is performed by the active master, it durably logs a record of the modification to an edit log file stored in the shared directory. The standby node constantly watches this directory for edits, and when edits occur, it applies them to its own namespace. In the event of a failover, the standby will ensure that it has read all of the edits from the shared storage before promoting itself to the active state. This ensures that the namespace state is fully synchronized before a failover occurs.
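A minimal sketch of the standby tailing a shared edit log; EditLog and Namespace are hypothetical stand-ins, not an actual Hadoop API:

    import java.util.List;

    // Hypothetical stand-ins for the shared edit log and the in-memory namespace.
    interface EditLog { List<String> readEntriesAfter(long txId); }
    interface Namespace { void apply(String editRecord); }

    // The standby repeatedly pulls new edit records and applies them, so its
    // namespace stays close to the active master's. On failover it must drain
    // the log completely before it starts serving as the active master.
    class StandbyMaster {
        private long lastAppliedTxId = 0;

        void tail(EditLog log, Namespace ns) {
            for (String edit : log.readEntriesAfter(lastAppliedTxId)) {
                ns.apply(edit);
                lastAppliedTxId++;
            }
        }

        void promoteToActive(EditLog log, Namespace ns) {
            tail(log, ns);  // read every remaining edit before taking over
            // ...then start accepting namespace modifications as the active master
        }
    }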

Design a distributed key value store which is highly available and is network partition tolerant
https://www.interviewbit.com/problems/highly-available-database/
W and R are called write and read consistency number respectively. To recap, W is the minimum number of nodes from which the coordinating node should get an ack before making a write successful and R is the minimum number of nodes from which the coordinating node should get back read values to return them back to the client.
R and W together form the quorum of the system. For a read to be consistent (i.e., return the latest write), we need to keep W + R > P, where P is the number of replicas a key is stored on.
Depending on the feature requirement W and R can be adjusted, for example to have very fast writes we can keep W = 1 and R = P. If our system is read heavy we can keep R = 1 and W = P. If read and write are equally distributed, we can keep both R and W as (P+1)/2.
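A minimal sketch of a coordinating node enforcing W and R; the Replica interface and Versioned record are illustrative placeholders, and real systems issue these calls in parallel with timeouts:

    import java.util.Comparator;
    import java.util.List;

    // Hypothetical replica interface; in a real store these calls cross the network.
    interface Replica {
        boolean write(String key, String value, long version);
        Versioned read(String key);
    }

    record Versioned(String value, long version) {}

    class QuorumCoordinator {
        private final List<Replica> replicas;  // the P replicas holding this key
        private final int w, r;                // write / read consistency numbers

        QuorumCoordinator(List<Replica> replicas, int w, int r) {
            this.replicas = replicas;
            this.w = w;
            this.r = r;
        }

        boolean put(String key, String value, long version) {
            int acks = 0;
            for (Replica rep : replicas) {
                if (rep.write(key, value, version)) acks++;
            }
            return acks >= w;  // the write succeeds only once W replicas acknowledge it
        }

        Versioned get(String key) {
            // Collect R responses and return the freshest; with W + R > P at least
            // one of the R replicas has seen the latest successful write.
            return replicas.stream()
                    .limit(r)
                    .map(rep -> rep.read(key))
                    .max(Comparator.comparingLong(Versioned::version))
                    .orElseThrow();
        }
    }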









Q: What kind of consistency can we provide?
A: If we keep W = P, we can provide strong consistency, but we won't be available for some writes if even one of our DB machines dies.
Earlier we saw in the master-master configuration that in network partition cases our masters may diverge to provide availability. In our current system, essentially all of our nodes are masters, so the fact that they will diverge should be taken for granted, and we should build our system with that in mind.
Therefore we should build for the case where W is less than P. Our writes will then still be propagating, i.e. some machines will have an updated view of the data and some will not, so they are not consistent. The best we can guarantee here is eventual consistency: in due time, all the changes will be applied to every server and in the end they will all have a consistent view of the data.

To achieve eventual consistency, we need to be able to resolve differences between data on two servers. There are a couple of ways to detect and resolve the data conflicts that may arise.
First, if the data (key, value) we store is such that the value is just a single column, we can use the simple criterion of LWW (last write wins) to resolve conflicts. So if two servers have different views of a key, in the resolve step we can update the server holding the stale version with the newer data, and they become consistent.
The other way is to store augmented data for each row indicating all the coordinating nodes that have handled the row so far. To detect and understand a conflict, we compare the augmented data: if one is a subset of the other (all the writes seen by one copy of the row have been seen by the other), we can safely ignore the copy with the smaller augmented data. Otherwise, we have a conflict for our row and need application-level logic to resolve it. This approach is usually required when the value is composed of more than one independent column.
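A minimal sketch of both strategies, last-write-wins and the version-vector subset check; the Row shape is illustrative only:

    import java.util.Map;

    // Value plus a wall-clock timestamp (for LWW) and a per-node version vector
    // (the "augmented data" above).
    record Row(String value, long timestamp, Map<String, Long> versionVector) {}

    class ConflictResolver {
        // Strategy 1: last write wins. Fine when the value is a single column.
        static Row lastWriteWins(Row a, Row b) {
            return a.timestamp() >= b.timestamp() ? a : b;
        }

        // Strategy 2: version vectors. If one vector dominates the other, that copy
        // has seen every write the other has, so the smaller one can be discarded.
        static boolean dominates(Map<String, Long> v1, Map<String, Long> v2) {
            for (Map.Entry<String, Long> e : v2.entrySet()) {
                if (v1.getOrDefault(e.getKey(), 0L) < e.getValue()) return false;
            }
            return true;
        }

        static Row resolve(Row a, Row b) {
            if (dominates(a.versionVector(), b.versionVector())) return a;
            if (dominates(b.versionVector(), a.versionVector())) return b;
            throw new IllegalStateException("Concurrent writes: needs application-level resolution");
        }
    }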





https://puncsky.com/hacking-the-software-engineer-interview/?isAdmin=true
http://www.puncsky.com/blog/2016/02/14/crack-the-system-design-interview/
Interviewers are looking for future teammates they would like to work with. Those future teammates are expected to be, at least, capable of solving problems independently. There are many solutions to any given problem, but not all of them are suited to the given context, so the interviewee has to lay out the different choices and their tradeoffs. To summarize, the system design interview is a happy show-off of our knowledge of technologies and their tradeoffs. Ideally, keep talking about what the interviewer expects throughout the interview, before they even have to ask.


Keeping the conversation going for 45 minutes can be easy, as long as we are armed with the following four steps and three common topics. Take “design Pinterest” as an example.

1.1 Step 1. Clarify Requirements and Specs

First things first, the ultimate goals should always be clear.
Pinterest is a highly scalable photo-sharing service:
  • features: user profile, upload photos, news feed, etc.
  • scaling out: horizontal scalability and micro services.

1.2 Step 2. Sketch Out High Level Design

Do not dive into details before outlining the big picture. Otherwise, going off too far towards a wrong direction would make it harder to even provide a roughly correct solution. We will regret wasting time on irrelevant details when we do not have time to finish the task.
OK, let us sketch out the following diagram without concerning too much about the implementation detail of these components.
(diagram: Pinterest architecture)

1.3 Step 3. Discuss individual components and how they interact in detail

When we truly understand a system, we should be able to identify what each component is and explain how they interact with one another. Take the components in the above diagram and specify them one by one. This could lead to more general discussions, such as the three common topics in Section 2, and to more specific domains, like how to design the photo storage data layout.

1.3.1 Load Balancer

Generally speaking, load balancers fall into three categories:
  • DNS Round Robin (rarely used): clients get a randomly-ordered list of IP addresses.
    • pros: easy to implement and free
    • cons: hard to control and not responsive, since DNS cache needs time to expire
  • L3/L4 Load Balancer: traffic is routed by IP address and port. L3 is the network layer (IP); L4 is the transport layer (TCP).
    • pros: better granularity, simple, responsive
  • L7 Load Balancer: traffic is routed by what is inside the HTTP protocol. L7 is application layer (HTTP).
It is good enough to talk at this level of detail on this topic, but in case the interviewer wants more, we can suggest exact algorithms like round robin, weighted round robin, least loaded, least loaded with slow start, utilization limit, latency, cascade, etc.
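A minimal sketch of two of those algorithms, plain round robin and least-loaded; the Backend bookkeeping is simplified:

    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    class Backend {
        final String host;
        final AtomicInteger activeRequests = new AtomicInteger();
        Backend(String host) { this.host = host; }
    }

    class LoadBalancer {
        private final List<Backend> backends;
        private final AtomicInteger cursor = new AtomicInteger();

        LoadBalancer(List<Backend> backends) { this.backends = backends; }

        // Round robin: rotate through backends regardless of their current load.
        Backend roundRobin() {
            return backends.get(Math.floorMod(cursor.getAndIncrement(), backends.size()));
        }

        // Least loaded: pick the backend with the fewest in-flight requests.
        Backend leastLoaded() {
            return backends.stream()
                    .min(Comparator.comparingInt((Backend b) -> b.activeRequests.get()))
                    .orElseThrow();
        }
    }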

1.3.2 Reverse Proxy

Reverse proxy, like varnish, centralizes internal services and provides unified interfaces to the public. For example, www.example.com/index and www.example.com/sports appear to come from the same domain, but in fact they are from different micro services behind the reverse proxy. Reverse proxy could also help with caching and load balancing.

1.3.3 (Frontend) Web Tier

This is where web pages are served, and usually combined with the service / backend tier in the very early stage of a web service.
Stateless
There are two major bottlenecks for the whole system – requests per second (rps) and bandwidth. We could improve the situation by using a more efficient tech stack, like frameworks with an async, non-blocking reactor pattern, and by enhancing the hardware, either scaling up (aka vertical scaling) or scaling out (aka horizontal scaling).
Internet companies prefer scaling out, since it is more cost-efficient with a huge number of commodity machines. This is also good for recruiting, because the required skillsets are common among engineers; after all, people rarely play with supercomputers or mainframes at home.
Frontend web tier and service tier must be stateless in order to add or remove hosts conveniently, thus achieving horizontal scalability. As for feature switch or configs, there could be a database table / standalone service to keep those states.
Web Application and API
MVC (MVT) or MVVM is the dominant pattern for this layer. Traditionally, the view or template is rendered to HTML by the server at runtime. In the age of mobile computing, the view can be as simple as serving the minimal package of data to be transported to the mobile devices, which is called a web API. People believe that the API can be shared by native clients and browsers, and that is why single-page web applications are becoming more and more popular, especially with the assistance of frontend frameworks like react.js, angular.js, backbone.js, etc.

1.3.4 App Service Tier

The single responsibility principle advocates small and autonomous services that work together, so that each service can do one thing well and not block others. Small teams with small services can plan much more aggressively for the sake of hyper-growth.
Service Discovery
How do these services find each other? ZooKeeper is a popular and centralized choice. Instances with name, address, port, etc. are registered into a path in ZooKeeper for each service. If one service does not know where to find another service, it can query ZooKeeper for the location and cache it until that location becomes unavailable.
ZooKeeper is a CP system in terms of the CAP theorem (see Section 2.3 for more discussion), which means it stays consistent in the case of failures, but the leader of the centralized consensus will be unavailable for registering new services.
In contrast to ZooKeeper, Uber is doing interesting work in a decentralized way, named Hyperbahn, based on the Ringpop consistent hash ring. Read Amazon’s Dynamo paper to understand AP and eventual consistency.
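A minimal consistent-hash-ring sketch in the spirit of Ringpop/Dynamo, with a toy hash function and a handful of virtual nodes per server:

    import java.util.Map;
    import java.util.TreeMap;

    // Toy consistent hash ring: each server is placed on the ring at several
    // virtual-node positions; a key is owned by the first server clockwise from
    // its hash, so adding or removing a server only moves a small slice of keys.
    class ConsistentHashRing {
        private static final int VNODES = 16;
        private final TreeMap<Integer, String> ring = new TreeMap<>();

        void addServer(String server) {
            for (int i = 0; i < VNODES; i++) ring.put(hash(server + "#" + i), server);
        }

        void removeServer(String server) {
            for (int i = 0; i < VNODES; i++) ring.remove(hash(server + "#" + i));
        }

        String serverFor(String key) {
            Map.Entry<Integer, String> owner = ring.ceilingEntry(hash(key));
            return owner != null ? owner.getValue() : ring.firstEntry().getValue();
        }

        private static int hash(String s) {
            return s.hashCode() & 0x7fffffff;  // toy hash; use MD5/Murmur in practice
        }
    }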
Micro Services
For the Pinterest case, these micro services could be user profile, follower, feed, search, spam, etc. Any of those topics could lead to an in-depth discussion. Useful links are listed in Section 3: Future Studies, to help us deal with them.
However, for a general interview question like “design Pinterest”, it is good enough to leave those services as black boxes. If we want to show more passion, elaborating with some sample endpoints/APIs for those services would be great.
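For example, a few hypothetical service interfaces with their would-be endpoints (names and paths are made up for illustration, not an actual Pinterest API):

    import java.util.List;

    // Illustrative micro-service interfaces for the Pinterest example; the
    // endpoints in the comments are hypothetical.
    record UserProfile(long id, String name, String avatarUrl) {}
    record Pin(long id, long authorId, String imageUrl, String caption) {}

    interface UserProfileService {
        UserProfile getProfile(long userId);                           // GET  /v1/users/{id}
    }

    interface FollowerService {
        void follow(long followerId, long followeeId);                 // POST /v1/users/{id}/follow
        List<Long> getFollowers(long userId);                          // GET  /v1/users/{id}/followers
    }

    interface FeedService {
        List<Pin> getHomeFeed(long userId, String cursor, int limit);  // GET  /v1/users/{id}/feed
    }

    interface PinService {
        Pin createPin(long authorId, byte[] image, String caption);    // POST /v1/pins
    }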

1.3.5 Data Tier

Although a relational database can do almost all the storage work, please remember not to save a blob, like a photo, in a relational database, and to choose the right database for the right service. For example, read performance is important for the follower service, therefore it makes sense to use a key-value cache. Feeds are generated as time passes, so HBase/Cassandra's timestamp index is a great fit for this use case. Users have relationships with other users or objects, so a relational database is our default choice for a user profile service.
Data and storage is a rather wide topic, and we will discuss it later in Section 2.2 Storage.

1.4 (Optional) Step 4. Back-of-the-envelope Calculation

The final step, estimating how many machines are required, is optional, because time is probably up after all the discussions above and the three common topics below. In case we run into this topic, we’d better prepare for it as well. It is a little tricky: we need to come up with some variables and a function first, then make some guesses for the values of those variables, and finally calculate the result.
The cost is a function of CPU, RAM, storage, bandwidth, number and size of the images uploaded each day.
  • N = number of users ≈ 10^10
  • i = images per user per day ≈ 10
  • s = size in bytes per image ≈ 10^6
  • v = views per image ≈ 100
  • d = number of days
  • h = requests per second one server can handle ≈ 10^4 (bottleneck)
  • b = bandwidth per server (bottleneck)
  • Server cost: $1000 / server
  • Storage cost: $0.1 / GB
  • Storage = N * i * s * d
Remember the two bottlenecks we mentioned in section 1.3.3 Web Tier? – requests per second (rps) and bandwidth. So the final expression would be
Number of required servers = max(N*i*v / h, N*i*v*s / b)
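A worked sketch that plugs in the sample numbers above; the seconds-per-day conversion and the per-server bandwidth figure are assumptions I am adding, since the note leaves them implicit:

    // Worked version of the estimate above. The seconds-per-day conversion and
    // the per-server bandwidth are assumed values, not part of the original list.
    public class PinterestEstimate {
        public static void main(String[] args) {
            double N = 1e10;   // users
            double i = 10;     // images per user per day
            double s = 1e6;    // bytes per image
            double v = 100;    // views per image
            double h = 1e4;    // requests/sec one server can handle (bottleneck)
            double b = 1e9;    // assumed bytes/sec of bandwidth per server
            double secondsPerDay = 86_400;  // assumed conversion from daily totals

            double viewsPerSec = N * i * v / secondsPerDay;
            double bytesPerSec = viewsPerSec * s;

            double servers      = Math.max(viewsPerSec / h, bytesPerSec / b);
            double storageGbDay = N * i * s / 1e9;

            System.out.printf("Servers ~ %.0f, new storage ~ %.0f GB/day%n", servers, storageGbDay);
        }
    }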

2 Three Common Topics

There are three common topics that could be applied to almost every system design question. They are extracted and summarized in this section.

2.1 Communication

How do different components interact with each other? – communication protocols.
Here is a simple comparison of those protocols.
  • UDP and TCP are both transport layer protocols. TCP is reliable and connection-based. UDP is connectionless and unreliable.
  • HTTP is in the application layer and normally TCP based, since HTTP assumes a reliable transport.
  • RPC, an application layer protocol, is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network), without the programmer explicitly coding the details for this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program, or remote. In an Object-Oriented Programming context, RPC is also called remote invocation or remote method invocation (RMI).
Further discussions
Since RPC is super useful, some interviewers may ask how RPC works. The following picture is a brief answer.
(diagram: how RPC works)
*Stub procedure: a local procedure that marshals the procedure identifier and the arguments into a request message and sends it via its communication module to the server. When the reply message arrives, it unmarshals the results.
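A toy client-side stub to make that concrete; the Transport interface and the string-based marshalling are stand-ins for what an IDL-generated stub would do:

    // Toy client stub: marshals the procedure name and arguments, ships the
    // request through some transport, and unmarshals the reply. Transport is a
    // hypothetical interface; real stubs are generated by Thrift, gRPC, etc.
    interface Transport { String sendAndReceive(String requestMessage); }

    class UserServiceStub {
        private final Transport transport;
        UserServiceStub(Transport transport) { this.transport = transport; }

        // Looks like a local call to the programmer...
        String getUserName(long userId) {
            String request = "getUserName|" + userId;          // marshal id + arguments
            String reply = transport.sendAndReceive(request);  // remote round trip
            return reply;                                      // unmarshal the result
        }
    }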
We do not have to implement our own RPC protocols. There are off-the-shelf frameworks.
  • Google Protobuf: open source; lets you define RPC service APIs but ships no RPC implementation. Smaller serialized data and slightly faster; better documentation and cleaner APIs.
  • Facebook Thrift: supports more languages and richer data structures (list, set, map, etc. that Protobuf does not support). Incomplete documentation and hard to find good examples.
    • Used by: HBase/Cassandra/Hypertable/Scribe/...
  • Apache Avro: Avro is heavily used in the hadoop ecosystem and based on dynamic schemas in Json. It features dynamic typing, untagged data, and no manually-assigned field IDs.
Generally speaking, RPC is used internally by many tech companies for performance reasons, but it is rather hard to debug and not flexible. So for public APIs, we tend to use HTTP APIs, usually following the RESTful style.
  • REST (Representational state transfer of resources)
    • Best practice of HTTP API to interact with resources.
    • URL only decides the location. Headers (Accept and Content-Type, etc.) decide the representation. HTTP methods(GET/POST/PUT/DELETE) decide the state transfer.
    • minimize the coupling between client and server (a huge number of HTTP infras on various clients, data-marshalling).
    • stateless and scaling out.
    • service partitioning feasible.
    • used for public API.

2.2 Storage

2.2.1 Relational Database

Relational database is the default choice for most use cases, by reason of ACID (atomicity, consistency, isolation, and durability). One tricky thing is consistency – it means that any transaction will bring database from one valid state to another, (different from the consistency in CAP, which will be discussed in Section 2.3).
Schema Design and 3rd Normal Form (3NF)
To reduce redundancy and improve consistency, people follow 3NF when designing database schemas:
  • 1NF: tabular, each row-column intersection contains only one value
  • 2NF: only the primary key determines all the attributes
  • 3NF: only the candidate keys determine all the attributes (and non-prime attributes do not depend on each other)
Db Proxy
What if we want to eliminate single point of failure? What if the dataset is too large for one single machine to hold? For MySQL, the answer is to use a DB proxy to distribute data, either by clustering or by sharding.
Clustering is a decentralized solution. Everything is automatic. Data is distributed, moved, rebalanced automatically. Nodes gossip with each other, (though it may cause group isolation).
Sharding is a centralized solution. If we get rid of properties of clustering that we don’t like, sharding is what we get. Data is distributed manually and does not move. Nodes are not aware of each other.

2.2.2 NoSQL

In a regular Internet service, the read write ratio is about 100:1 to 1000:1. However, when reading from a hard disk, a database join operation is time consuming, and 99% of the time is spent on disk seek. Not to mention a distributed join operation across networks.
To optimize the read performance, denormalization is introduced by adding redundant data or by grouping data. These four categories of NoSQL are here to help.
Key-value Store
The abstraction of a KV store is a giant hashtable/hashmap/dictionary.
The main reason we want to use a key-value cache is to reduce the latency of accessing active data: we achieve O(1) read/write performance on fast but expensive media (like memory or SSD), instead of the traditional O(log n) read/write on slow and cheap media (typically hard drives).
There are three major factors to consider when we design the cache.
  1. Pattern: How do we cache? Is it read-through/write-through/write-around/write-back/cache-aside?
  2. Placement: Where to place the cache? client side/distinct layer/server side?
  3. Replacement: When to expire/replace the data? LRU/LFU/ARC?
Out-of-the-box choices: Redis or Memcached? Redis supports data persistence while Memcached does not. Others include Riak, Berkeley DB, HamsterDB, Amazon Dynamo, Project Voldemort, etc.
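A minimal in-process LRU replacement sketch built on LinkedHashMap's access-order mode (a toy local cache, not a distributed one):

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Tiny LRU cache: LinkedHashMap with accessOrder=true moves an entry to the
    // tail on every access, and removeEldestEntry evicts the least recently used.
    class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LruCache(int capacity) {
            super(16, 0.75f, true);  // true = iterate in access order, not insertion order
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }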
Document Store
The abstraction of a document store is like a KV store, but documents, like XML, JSON, BSON, and so on, are stored in the value part of the pair.
The main reason we want to use a document store is flexibility and performance. Flexibility is obtained by schemaless documents, and performance is improved by breaking 3NF. A startup's business requirements change from time to time; a flexible schema empowers them to move fast.
Out-of-box choices: MongoDB, CouchDB, Terrastore, OrientDB, RavenDB, etc.
Column-oriented Store
The abstraction of a column-oriented store is like a giant nested map: ColumnFamily<RowKey, Columns<Name, Value, Timestamp>>.
The main reason we want to use a column-oriented store is that it is distributed, highly-available, and optimized for write.
Out-of-box choices: Cassandra, HBase, Hypertable, Amazon SimpleDB, etc.
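That "giant nested map" abstraction can be sketched directly with nested sorted maps (a toy in-memory model, not how HBase or Cassandra actually store data):

    import java.util.TreeMap;

    // row key -> column name -> timestamp -> value. Sorted maps give range scans
    // over rows and columns, and reads can pick the newest timestamp per column.
    class ColumnFamily {
        private final TreeMap<String, TreeMap<String, TreeMap<Long, byte[]>>> rows = new TreeMap<>();

        void put(String rowKey, String column, long timestamp, byte[] value) {
            rows.computeIfAbsent(rowKey, r -> new TreeMap<>())
                .computeIfAbsent(column, c -> new TreeMap<>())
                .put(timestamp, value);
        }

        byte[] getLatest(String rowKey, String column) {
            TreeMap<String, TreeMap<Long, byte[]>> columns = rows.get(rowKey);
            if (columns == null || !columns.containsKey(column)) return null;
            return columns.get(column).lastEntry().getValue();  // newest timestamp wins
        }
    }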
Graph Database
As the name indicates, this database’s abstraction is a graph. It allows us to store entities and the relationships between them.
If we use a relational database to store the graph, adding/removing relationships may involve schema changes and data movement, which is not the case when using a graph database. On the other hand, when we create tables in a relational database for the graph, we model based on the traversal we want; if the traversal changes, the data will have to change.
Out-of-box choices: Neo4J, Infinitegraph, OrientDB, FlockDB, etc.

2.3 CAP Theorem

When we design a distributed system, trading off among CAP (consistency, availability, and partition tolerance) is almost the first thing we want to consider.
  • Consistency: all nodes see the same data at the same time
  • Availability: a guarantee that every request receives a response about whether it succeeded or failed
  • Partition tolerance: system continues to operate despite arbitrary message loss or failure of part of the system
In a distributed context, the choice is between CP and AP. Unfortunately, CA is just a joke, because single point of failure is a red flag in the real distributed systems world.
To ensure consistency, there are some popular protocols to consider: 2PC, eventual consistency (vector clock + RWN), Paxos, In-Sync Replica, etc.
To ensure availability, we can add replicas for the data. As to components of the whole system, people usually do cold standby, warm standby, hot standby, and active-active to handle the failover.

These are the BASE properties (the looser counterpart of ACID):
  • Basically Available
  • Soft State: between stateful and stateless
  • Eventual Consistency

http://en.wikipedia.org/wiki/Consistency_model 

Consistency models (see the link above) include:
  • Strong consistency
  • Eventual consistency
  • Causal consistency
  • Read-your-writes consistency
  • Session consistency
  • Monotonic read consistency
  • Monotonic write consistency


https://www.ibm.com/developerworks/community/blogs/RohitShetty/entry/high_availability_cold_warm_hot?lang=en
  • Active-Active (Load Balanced): In this method both the primary and secondary systems are active and processing requests in parallel. Data replication happens through software capabilities and would be bi-directional. This generally provides a recovery time that is instantaneous.

https://www.reddit.com/r/cscareerquestions/comments/30y89g/how_do_you_answer_interview_questions_on_large/









