Monday, June 30, 2014

Collected Interview Questions: Database

Prepared Statement Example
Prepared statements use question marks (?) as placeholders where the actual values used in the SQL will be “plugged in”.

In JDBC, when we call connection.prepareStatement, the prepared SQL template is sent to the database with the placeholder values (the “?”) left blank. The database then parses, compiles, and performs query optimization on the template, and stores the optimized query plan.

What are the advantages of using prepared statements?
1. They provide better performance. Even though a prepared statement can be executed many times, it is compiled and optimized only once by the database.
2. They can prevent SQL injection attacks.
This is because the query is compiled and optimized before any user input is added, which makes it impossible for user input to change (and therefore compromise) the structure of the SQL statement; there is no way the data input by an attacker can be interpreted as SQL.
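As a sketch of the second point (the class and query here are invented for illustration, not from any real schema): concatenating user input into SQL lets the input rewrite the statement, while a prepared-statement template is fixed before any input exists.

```java
public class InjectionDemo {
    // Naive string concatenation: the input becomes part of the SQL text.
    static String naiveQuery(String userName) {
        return "SELECT * FROM users WHERE name = '" + userName + "'";
    }

    // Prepared-statement style: the template never changes; the value is
    // bound later (in JDBC, via setString on the PreparedStatement).
    static final String TEMPLATE = "SELECT * FROM users WHERE name = ?";

    public static void main(String[] args) {
        String malicious = "x' OR '1'='1";
        // The concatenated query now contains an always-true condition:
        System.out.println(naiveQuery(malicious));
        // The template is compiled before any input exists, so the input
        // can only ever be treated as data, never as SQL:
        System.out.println(TEMPLATE);
    }
}
```

In real JDBC code the value would be bound with preparedStatement.setString(1, userName), so the database only ever sees the fixed template.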
What are properties of a transaction?
The properties of a transaction can be summarized by the acronym ACID.
1. Atomicity
A transaction may consist of many steps. Either all the steps complete and the result is reflected in the database, or, if any step fails, the whole transaction is rolled back.
2. Consistency
The database will move from one consistent state to another if the transaction succeeds and remain in the original state if the transaction fails.
3. Isolation
Every transaction should operate as if it were the only transaction in the system.
4. Durability
Once a transaction has completed successfully, the updated rows/records must be available to all other transactions on a permanent basis.

What is a primary key?

A primary key is a column whose values uniquely identify every row in a table. Primary key values should never be reused: if a row is deleted from the table, its primary key should not be assigned to any new rows in the future.
No two rows can have the same primary key value.
Every row must have a primary key value.
A primary key field cannot be NULL.
Values in primary key columns should never be modified or updated.

What are the different type of normalization?

1. First Normal Form (1NF)
A relation is said to be in first normal form if and only if all underlying domains contain atomic values only.
2. Second Normal Form (2NF)
A relation is said to be in 2NF if and only if it is in 1NF and every non-key attribute is fully dependent on the primary key.
3. Third Normal Form (3NF)
A relation is said to be in 3NF if and only if it is in 2NF and every non-key attribute is non-transitively dependent on the primary key.

What is a SQL Composite Primary Key?

A Composite primary key is a set of columns whose values uniquely identify every row in a table.

What is a Foreign Key?

A foreign key is a column (or set of columns) in one table whose values must match primary key values in another table. It links related tables together and enforces referential integrity.

What is a Unique Key?
A unique key is the same as a primary key, with the difference being the handling of NULL: a unique key field allows one NULL value.

Define Join and explain different type of joins?

In order to avoid data duplication, data is stored in related tables.
The JOIN keyword is used to fetch data from related tables.
An (inner) join returns rows when there is at least one match in both tables. The types of outer joins are:
Left Join
Returns all rows from the left table, even if there are no matches in the right table.
Right Join
Returns all rows from the right table, even if there are no matches in the left table.
Full Join
Returns rows when there is a match in either of the tables.
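As a rough in-memory sketch of these semantics (the customer/order "tables" are invented for illustration, using plain Java maps rather than SQL):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class JoinDemo {
    // customers: id -> name; orders: orderId -> customerId
    static Map<Integer, String> customers = Map.of(1, "Ann", 2, "Bob", 3, "Cat");
    static Map<Integer, Integer> orders = Map.of(10, 1, 11, 1, 12, 9); // customer 9 does not exist

    // Inner join: only rows with a match in both tables.
    static List<String> innerJoin() {
        List<String> rows = new ArrayList<>();
        for (Map.Entry<Integer, Integer> o : orders.entrySet()) {
            String name = customers.get(o.getValue());
            if (name != null) rows.add(o.getKey() + "-" + name);
        }
        Collections.sort(rows);
        return rows;
    }

    // Left join (orders as the left table): every order, with NULL
    // standing in where no customer matches.
    static List<String> leftJoin() {
        List<String> rows = new ArrayList<>();
        for (Map.Entry<Integer, Integer> o : orders.entrySet()) {
            rows.add(o.getKey() + "-" + customers.getOrDefault(o.getValue(), "NULL"));
        }
        Collections.sort(rows);
        return rows;
    }

    public static void main(String[] args) {
        System.out.println(innerJoin()); // orders 10 and 11 match customer Ann
        System.out.println(leftJoin());  // order 12 appears with NULL
    }
}
```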

What is Self-Join?

A self-join is a query used to join a table to itself. Aliases must be used to distinguish the two references to the same table.
What is Cross Join?
A cross join returns all combinations of rows: each row from the first table combined with each row from the second table (the Cartesian product).

What is a view?

Views are virtual tables. Unlike tables that contain data, views simply contain queries that dynamically retrieve data when used.
What is a materialized view?
A materialized view is also a view, but it is disk based. Materialized views are refreshed periodically, based on the interval specified in the query definition, and they can be indexed.

What are the advantages and disadvantages of views in a database?

Advantages:
1. Views don't store data in a physical location.
2. A view can be used to hide some of the columns of a table.
3. Views can provide access restriction, since data insertion, update and deletion can be disallowed on the view.

Disadvantages:
1. When a table is dropped, the associated view becomes irrelevant.
2. Since a view's query is executed each time the view is used, retrieval through a view is a bit slower.
3. When views are created over large tables, they occupy more memory.

What is a stored procedure?

A stored procedure is a named collection of SQL statements stored in the database. A procedure can take inputs, process them, and send back output.

What is a trigger?

Triggers are sets of commands that are executed automatically when an event (before insert, after insert, on update, on delete of a row) occurs on a table or view.

Explain the difference between DELETE , TRUNCATE and DROP commands?

After a DELETE operation, COMMIT and ROLLBACK can be performed to confirm or undo the change.
After a TRUNCATE statement, COMMIT and ROLLBACK cannot be performed.
A WHERE condition can be used with a DELETE statement, but it cannot be used with TRUNCATE.
The DROP command removes the table itself, along with its keys, such as primary and foreign keys.

What is the difference between Cluster and Non cluster Index?

A clustered index reorders the way records in the table are physically stored. 
There can be only one clustered index per table.
It makes data retrieval faster.
A nonclustered index does not alter the way the data is physically stored; it creates a completely separate object within the table. As a result, INSERT and UPDATE commands are faster than with a clustered index.

What are the UNION, MINUS and INTERSECT operators?

UNION combines the result sets of two queries, removing duplicate rows. The MINUS operator returns rows from the first query that are not returned by the second query. The INTERSECT operator returns only the rows returned by both queries.

What are the type of locks?

1. Shared Lock
When a shared lock is applied to a data item, other transactions can only read the item; they cannot write to it.
2. Exclusive Lock
When an exclusive lock is applied to a data item, other transactions can neither read nor write the data item.

What is SQL ?

Structured Query Language (SQL) is a language designed specifically for communicating with databases. SQL is an ANSI (American National Standards Institute) standard.

What are the different types of SQL statements?

1. DDL – Data Definition Language
DDL is used to define the structure that holds the data.
2. DML – Data Manipulation Language
DML is used for manipulation of the data itself. Typical operations are inserting, deleting, updating and retrieving data from a table.
3. DCL – Data Control Language
DCL is used to control access to data, such as granting database access and setting privileges to create tables.

How are transactions used?

A database transaction takes the database from one consistent state to another. At the end of the transaction, the system must be back in its prior state if the transaction fails, or must reflect successful completion if the transaction goes through.

Transactions allow you to group SQL commands into a single unit. The transaction begins with a certain task and ends when all tasks within it are complete. The transaction completes successfully only if all commands within it complete successfully. The whole thing fails if one command fails. 

The BEGIN TRANSACTION, ROLLBACK TRANSACTION, and COMMIT TRANSACTION statements are used to work with transactions. A group of tasks starts with the begin statement. If any problems occur, the rollback command is executed to abort. If everything goes well, all commands are permanently executed via the commit statement.
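The all-or-nothing behavior can be sketched in plain Java; this is only an analogy (the balances and the transfer rule are invented), with comments marking where BEGIN, COMMIT and ROLLBACK would fall:

```java
public class TransactionDemo {
    // Transfer 'amount' between two balances as one unit of work: returns
    // the new {from, to} balances on COMMIT, or null on ROLLBACK, in which
    // case the caller keeps the old balances and nothing changed.
    static double[] transfer(double from, double to, double amount) {
        double newFrom = from - amount;   // step 1 of the transaction
        double newTo = to + amount;       // step 2 of the transaction
        if (newFrom < 0) {
            return null;                  // a step failed: ROLLBACK everything
        }
        return new double[] { newFrom, newTo }; // all steps succeeded: COMMIT
    }

    public static void main(String[] args) {
        double[] ok = transfer(100, 50, 30);
        System.out.println(ok[0] + " " + ok[1]); // prints 70.0 80.0
        System.out.println(transfer(100, 50, 500)); // prints null: no partial update
    }
}
```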

What is the difference between truncate and delete?

Truncate is a quick way to empty a table. It removes everything without logging each row. Truncate will fail if there are foreign key relationships on the table. Conversely, the delete command removes rows from a table, while logging each deletion and triggering any delete triggers that may be present.

What is the difference between Primary Key and Unique Key?
Both primary and unique keys enforce uniqueness of a column.
A primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index on the column.
Moreover, a primary key doesn't allow NULL values, whereas a unique key allows one NULL value.

What are indexes in a Database. What are the types of indexes?

Indexes are quick references that enable fast retrieval of data from a database. There are two different kinds of indexes.
Clustered Index
Only one per table.
Faster to read than a nonclustered index, as the data is physically stored in index order.
Nonclustered Index
Can be used many times per table.
Quicker than a clustered index for insert and update operations.

What is Heap table?

Tables that reside entirely in memory are called HEAP tables; they are commonly known as memory tables (in MySQL, the MEMORY storage engine). These tables can never have columns with data types like BLOB or TEXT. They use hash indexes by default, which makes them fast.

What is the difference between a clustered and a nonclustered index?

A clustered index affects the way the rows of data in a table are stored on disk. When a clustered index is used, rows are stored in sequential order according to the index column value; for this reason, a table can contain only one clustered index, which is usually created on the primary key.

A nonclustered index does not affect the way data is physically stored; it creates a new object for the index and stores the column(s) designated for indexing with a pointer back to the row containing the indexed values.

SQL Server

What is the default port number for SQL Server?

If enabled, the default instance of Microsoft SQL Server listens on TCP port 1433. Named instances are configured for dynamic ports, so an available port is chosen when SQL Server starts. When connecting to a named instance through a firewall, configure the Database Engine to listen on a specific port, so that the appropriate port can be opened in the firewall.

What is a view? What is the WITH CHECK OPTION clause for a view?

A view is a virtual table that consists of fields from one or more real tables. Views are often used to join multiple tables or to control access to the underlying tables.
The WITH CHECK OPTION clause for a view prevents data modifications that do not conform to the WHERE clause of the view definition. This allows data to be updated via the view, but only if it belongs in the view.

What is a query execution plan?

SQL Server has an optimizer that usually does a great job of optimizing code for the most effective execution. A query execution plan is the breakdown of how the optimizer will run (or ran) a query. There are several ways to view a query execution plan: use the Show Execution Plan option within Query Analyzer, choose Display Estimated Execution Plan on the query dropdown menu, or use the SET SHOWPLAN_TEXT ON command before running a query and capture the execution plan event in a SQL Server Profiler trace.

What does the SQL Server Agent Windows service do?

SQL Server Agent is a Windows service that handles scheduled tasks within the SQL Server environment (aka jobs). The jobs are stored/defined within SQL Server, and they contain one or more steps that define what happens when the job runs. These jobs may run on demand, as well as via a trigger or predefined schedule. This service is very important when determining why a certain job did not run as planned -- often it is as simple as the SQL Server Agent service not running.

As these NoSQL products don't provide strong consistency, they cannot be used where a high level of data consistency is required.

NewSQL has scalability as excellent as NoSQL's, and at the same time it guarantees the ACID properties that an RDBMS provides on a single node.

Spanner is a NewSQL database created by Google. It is a distributed relational database that can distribute and store data in Google's BigTable storage system across multiple data centers. Spanner meets ACID (it supports transactions) and supports SQL.


Stack and Heap in C and C++

The Stack
It's a special region of your computer's memory that stores temporary variables created by each function (including the main() function). The stack is a "FILO" (first in, last out) data structure that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is "pushed" onto the stack. Every time a function exits, all of the variables pushed onto the stack by that function are freed (that is to say, deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.

Variables allocated on the stack, or automatic variables, are stored directly in this memory. Access to this memory is very fast, and its allocation is dealt with when the program is compiled.

1. lives in RAM (random-access memory), but has direct support from the processor via its stack pointer.
2. stack pointer is moved down to create new memory and moved up to release that memory.
3. extremely fast and efficient way to allocate storage, second only to registers.
Every thread requires its own stack; stacks are separate from one another, and each stack may grow independently.
very fast access
don't have to explicitly de-allocate variables
space is managed efficiently by CPU, memory will not become fragmented
local variables only
limit on stack size (OS-dependent)
variables cannot be resized

The place where arguments of a function call are stored
The place where registers of the calling function are saved
The place where local data of called function is allocated
The place where called function leaves result for calling function
Supports recursive function calls

The Heap
Variables allocated on the heap, or dynamic variables, have their memory allocated at run time (ie: as the program is executing). Accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. This memory remains allocated until explicitly freed by the program and, as a result, may be accessed outside of the block in which it was allocated.

Heap grows toward stack
All threads share the same heap
Data structures may be passed from one thread to another.
variables can be accessed globally
no limit on memory size
(relatively) slower access
no guaranteed efficient use of space, memory may become fragmented over time as blocks of memory are allocated, then freed
you must manage memory (you're in charge of allocating and freeing variables)
variables can be resized using realloc()

Difference between the stack and the heap
Both the stack and the heap are stored in RAM.
Every thread has its own stack, but all threads in one application share one heap.
Variable allocation is fast on the stack, whereas on the heap it is slower.
Variables on the stack go out of scope automatically once they are no longer needed, so de-allocation on the stack is automatic. On the heap, in C and C++ we have to de-allocate manually, whereas high-level languages such as Java have garbage collection.
On the stack, we can access variables without the need for pointers, hence it is fast; that is why it is used to store local data, method arguments and the call stack, all of which needs a small amount of memory.
You would use the stack only when you know, before compile time, exactly how much memory your data needs. The heap, on the other hand, can be used without knowing the required amount of memory in advance.
The stack is used for static memory allocation and the heap for dynamic memory allocation.
The stack is thread specific and the heap is application specific.
A memory block on the stack is freed when its thread terminates, while the heap is freed only after application termination.

Stack Overflow and Heap Overflow (OutOfMemory)

Can an object be stored on the stack instead of the heap?
Yes, an object can be stored on the stack. If you create an object inside a function without using the “new” operator, the object is created and stored on the stack, not on the heap. Suppose we have a C++ class called Member, for which we want to create an object: declaring Member m; inside a function creates it on the stack, whereas Member *m = new Member(); creates it on the heap.

Can the stack grow in size? Can the heap grow in size?
The stack is set to a fixed size and can not grow past its fixed size (although some languages have extensions that do allow this). So, if there is not enough room on the stack to handle the memory being assigned to it, a stack overflow occurs. This often happens when a lot of nested functions are being called, or if there is an infinite recursive call.
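In Java, the same condition can be demonstrated directly, since exhausting the stack surfaces as a StackOverflowError rather than a crash (a minimal sketch):

```java
public class StackDemo {
    // Infinite recursion: each call pushes a new frame onto the
    // fixed-size stack until it is exhausted.
    static int recurse(int depth) {
        return recurse(depth + 1);
    }

    static boolean overflows() {
        try {
            recurse(0);
            return false;           // unreachable in practice
        } catch (StackOverflowError e) {
            return true;            // the stack ran out of room
        }
    }

    public static void main(String[] args) {
        System.out.println(overflows()); // prints true
    }
}
```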

If the current size of the heap is too small to accommodate new memory, then more memory can be added to the heap by the operating system.

What can go wrong with the stack and the heap?
If the stack runs out of memory, then this is called a stack overflow – and could cause the program to crash. 
The heap can suffer from fragmentation, which occurs when the available memory on the heap is stored as noncontiguous (disconnected) blocks, because used blocks of memory sit in between the unused blocks. When excessive fragmentation occurs, allocating new memory may be impossible: even though there is enough total free memory for the desired allocation, there may not be one contiguous block large enough.
A heap overflow is generally reported as 'out of memory'.


Memory Layout of C Programs | GeeksforGeeks

A typical memory representation of a C program consists of the following sections.
1. Text Segment:
A text segment, also known as a code segment or simply as text, is one of the sections of a program in an object file or in memory which contains executable instructions.
As a memory region, a text segment may be placed below the heap or stack in order to prevent heap and stack overflows from overwriting it.
Usually, the text segment is sharable so that only a single copy needs to be in memory for frequently executed programs.  Also, the text segment is often read-only, to prevent a program from accidentally modifying its instructions.
2. Initialized Data Segment:
The initialized data segment, usually called simply the data segment, is a portion of the virtual address space of a program which contains the global variables and static variables that are initialized by the programmer.
Note that, data segment is not read-only, since the values of the variables can be altered at run time.
This segment can be further classified into initialized read-only area and initialized read-write area.
3. Uninitialized Data Segment (BSS):
Uninitialized data starts at the end of the data segment and contains all global variables and static variables that are initialized to zero or do not have explicit initialization in source code.
4. Stack:
The stack area traditionally adjoined the heap area and grew the opposite direction; when the stack pointer met the heap pointer, free memory was exhausted. 
The stack area contains the program stack, a LIFO structure, typically located in the higher parts of memory. A “stack pointer” register tracks the top of the stack; it is adjusted each time a value is “pushed” onto the stack. The set of values pushed for one function call is termed a “stack frame”; A stack frame consists at minimum of a return address.
Stack, where automatic variables are stored, along with information that is saved each time a function is called. Each time a function is called, the address of where to return to and certain information about the caller’s environment, such as some of the machine registers, are saved on the stack. The newly called function then allocates room on the stack for its automatic and temporary variables. This is how recursive functions in C can work. Each time a recursive function calls itself, a new stack frame is used, so one set of variables doesn’t interfere with the variables from another instance of the function.
5. Heap:
Heap is the segment where dynamic memory allocation usually takes place.
The heap area begins at the end of the BSS segment and grows to larger addresses from there. The Heap area is shared by all shared libraries and dynamically loaded modules in a process.
Read full article from Memory Layout of C Programs | GeeksforGeeks

Operating System Interview: What is Virtual Memory

What is Virtual Memory
Virtual memory is a feature of an operating system.

  It uses disk space as an extension of RAM so that the effective size of usable memory can be much larger than the actual amount of RAM present. The OS will write the contents of a currently unused block of memory to the hard disk so that the memory can be used for another purpose. When the original contents are needed again, they are read back into memory. This is all made completely transparent to the user.

  It enables a process to use a memory address space that is independent of other processes running in the same system

How Memory Protection Works
When a program starts up, the OS creates a page table for the program and makes sure the disk pages it uses do not conflict with the disk pages of other programs.

How Address translation Works
Whenever a program requests access to a memory address, the CPU always works with it as a virtual memory address:
  1. The CPU splits virtual address into a virtual page number and a page offset.
  2. The CPU looks into the page table. If the page entry says that the page is not in RAM, it initiates a page fault. This will cause the OS to bring the page into memory. After the OS handles the page fault, it returns back to the same instruction so the CPU ends up trying the instruction over again.
  3. Otherwise, the CPU reads the page frame number f from the page table entry and loads from the physical address formed by frame f plus the page offset.
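The split-and-lookup steps can be sketched with a toy page table (the page size and the mappings are invented for illustration):

```java
import java.util.Map;

public class PageTableDemo {
    static final int PAGE_SIZE = 4096; // 2^12: the low 12 bits are the page offset

    // Toy page table: virtual page number -> physical frame number.
    static final Map<Integer, Integer> pageTable = Map.of(0, 5, 1, 9);

    // Translate a virtual address; returns -1 to stand in for a page fault.
    static long translate(long vaddr) {
        int virtualPage = (int) (vaddr / PAGE_SIZE); // step 1: split the address
        int offset = (int) (vaddr % PAGE_SIZE);
        Integer frame = pageTable.get(virtualPage);  // step 2: page-table lookup
        if (frame == null) return -1;                // page fault: OS must load the page
        return (long) frame * PAGE_SIZE + offset;    // step 3: frame base + offset
    }

    public static void main(String[] args) {
        System.out.println(translate(4100));  // page 1, offset 4 -> 9*4096+4 = 36868
        System.out.println(translate(99999)); // unmapped page -> -1 (page fault)
    }
}
```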
  Linux can use either a normal file in the filesystem or a separate partition for swap space. A swap partition is faster, but it is easier to change the size of a swap file (there's no need to repartition the whole hard disk).
  When you know how much swap space you need, you should go for a swap partition, but if you are uncertain, you can use a swap file first, use the system for a while so that you can get a feel for how much swap you need, and then make a swap partition when you're confident about its size.

  Linux allows one to use several swap partitions and/or swap files at the same time. 
Computer science usually distinguishes between swapping (writing the whole process out to swap space) and paging (writing only fixed size parts, usually a few kilobytes, at a time). Paging is usually more efficient, and that's what Linux does.

What is virtual memory:
In computing, virtual memory is a memory management technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit or MMU, automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer.

Operating System Interview: Locks, Deadlock, Starvation and Race Condition
Deadlock happens when a process waits for another process that is using some needed resource (e.g., a file or a database table row) to finish with it, while the other process also waits for the first process to release some other resource.
    static void transfer(BankAccount from, BankAccount to, double amount) {
        synchronized (from) {          // first lock the source account ...
            synchronized (to) {        // ... then the destination account
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    // Two threads transferring in opposite directions can deadlock:
    // each acquires its own "from" lock, then waits forever for the other's.
        final BankAccount fooAccount = new BankAccount(1, 100d);
        final BankAccount barAccount = new BankAccount(2, 100d);
        new Thread() {
            public void run() {
                BankAccount.transfer(fooAccount, barAccount, 10d);
            }
        }.start();
        new Thread() {
            public void run() {
                BankAccount.transfer(barAccount, fooAccount, 10d);
            }
        }.start();


With livelock, each process is waiting “actively”, trying to resolve the problem on its own (for example, reverting its work and retrying). A livelock occurs when the combination of these processes' efforts to resolve the problem makes it impossible for them to ever terminate.

Two threads try to transfer money from one account to another at the same time. But this time, instead of waiting for a lock to be released when a required account is locked, a thread will just revert its work, if any, and retry the whole operation in a loop until it succeeds:
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BankAccount {
    double balance;
    int id;
    Lock lock = new ReentrantLock();

    BankAccount(int id, double balance) {
        this.id = id;
        this.balance = balance;
    }

    boolean withdraw(double amount) {
        if (this.lock.tryLock()) {
            try {
                // Wait to simulate io like database access ...
                try { Thread.sleep(10l); } catch (InterruptedException e) {}
                balance -= amount;
                return true;
            } finally {
                this.lock.unlock();
            }
        }
        return false;
    }

    boolean deposit(double amount) {
        if (this.lock.tryLock()) {
            try {
                // Wait to simulate io like database access ...
                try { Thread.sleep(10l); } catch (InterruptedException e) {}
                balance += amount;
                return true;
            } finally {
                this.lock.unlock();
            }
        }
        return false;
    }

    public boolean tryTransfer(BankAccount destinationAccount, double amount) {
        if (this.withdraw(amount)) {
            if (destinationAccount.deposit(amount)) {
                return true;
            } else {
                // destination account busy, refund source account.
                while (!this.deposit(amount));
            }
        }
        return false;
    }

    public static void main(String[] args) {
        final BankAccount fooAccount = new BankAccount(1, 500d);
        final BankAccount barAccount = new BankAccount(2, 500d);
        new Thread(new Transaction(fooAccount, barAccount, 10d), "transaction-1").start();
        new Thread(new Transaction(barAccount, fooAccount, 10d), "transaction-2").start();
    }
}

class Transaction implements Runnable {
    private BankAccount sourceAccount, destinationAccount;
    private double amount;

    Transaction(BankAccount sourceAccount, BankAccount destinationAccount, double amount) {
        this.sourceAccount = sourceAccount;
        this.destinationAccount = destinationAccount;
        this.amount = amount;
    }

    public void run() {
        // Keep retrying until the transfer succeeds; this busy retry loop,
        // combined with the refund, is where livelock can occur.
        while (!sourceAccount.tryTransfer(destinationAccount, amount))
            ;
        System.out.printf("%s completed%n", Thread.currentThread().getName());
    }
}

Lock Starvation

Lock starvation is all about thread priority. It occurs when a thread with lower priority than the others is constantly waiting for a lock, never able to take it because other thread(s) with higher priority are constantly acquiring it. Consider our bank account example. The bank adds a feature that constantly watches an account's balance and sends an email if that balance goes below zero (a monitor thread). In this implementation, however, the monitor thread has a higher priority than the transaction threads. Because of this, the transaction threads can take a very long time (perhaps forever) to execute.

    public static void main(String[] args) {
        final BankAccount fooAccount = new BankAccount(1, 500d);
        final BankAccount barAccount = new BankAccount(2, 500d);
        Thread balanceMonitorThread1 = new Thread(new BalanceMonitor(fooAccount), "BalanceMonitor");
        Thread transactionThread1 = new Thread(new Transaction(fooAccount, barAccount, 250d), "Transaction-1");
        Thread transactionThread2 = new Thread(new Transaction(fooAccount, barAccount, 250d), "Transaction-2");
        // The monitor runs at a higher priority than the transaction threads.
        balanceMonitorThread1.setPriority(Thread.MAX_PRIORITY);
        transactionThread1.setPriority(Thread.MIN_PRIORITY);
        transactionThread2.setPriority(Thread.MIN_PRIORITY);
        // Start the monitor
        balanceMonitorThread1.start();
        // And later, the transaction threads try to execute.
        try { Thread.sleep(100l); } catch (InterruptedException e) {}
        transactionThread1.start();
        transactionThread2.start();
    }

// Assumes BankAccount exposes a getBalance() method.
class BalanceMonitor implements Runnable {
    private BankAccount account;
    boolean alreadyNotified = false;

    BalanceMonitor(BankAccount account) { this.account = account; }

    public void run() {
        System.out.format("%s started execution%n", Thread.currentThread().getName());
        while (true) {
            if (account.getBalance() <= 0) {
                // send email, or sms, clouds of smoke ...
                break;
            }
        }
        System.out.format("%s : account has gone too low, email sent, exiting.", Thread.currentThread().getName());
    }
}


Starvation describes a situation where a thread is unable to gain regular access to shared resources and is unable to make progress. This happens when shared resources are made unavailable for long periods by "greedy" threads. For example, suppose an object provides a synchronized method that often takes a long time to return. If one thread invokes this method frequently, other threads that also need frequent synchronized access to the same object will often be blocked.


A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result. As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked — they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They're still blocking each other, so...
Bloomberg LP Interview Question:
Disadvantages of locks? What is deadlock? What is starvation?
Disadvantages of locks:
1) They add overhead for each access, even when the chances of collision are very rare.
2) Deadlock, where two threads each hold a lock on a resource that the other needs before releasing its own lock.
3) When a thread is waiting for a lock, it cannot do anything else. If a thread holding a lock is permanently blocked (due to an infinite loop, deadlock, livelock, or other liveness failure), any threads waiting for that lock are blocked forever and can never make progress.
4) Priority inversion: a high-priority thread cannot proceed while a low-priority thread holds the common lock; this effectively downgrades the high-priority thread to the priority of the lower-priority thread.
Optimistic concurrency control:
We proceed with an update, hopeful that it can be completed without interference. This approach relies on collision detection to determine whether there has been interference from other parties during the update, in which case the operation fails and can be retried (or not).
For example, in Java, besides a heavyweight lock or synchronized, we can use atomic variables such as AtomicInteger to improve concurrency.
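A minimal sketch of the optimistic pattern with AtomicInteger: each increment reads the current value and uses compareAndSet to detect interference, retrying on collision instead of holding a lock (the thread and iteration counts are arbitrary).

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticDemo {
    static final AtomicInteger counter = new AtomicInteger(0);

    // Optimistic increment: no lock is held; if another thread changed the
    // value between get() and compareAndSet(), the CAS fails and we retry.
    static void increment() {
        while (true) {
            int current = counter.get();
            if (counter.compareAndSet(current, current + 1)) {
                return;
            }
        }
    }

    static int run(int threads, int perThread) {
        counter.set(0);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) {}
        }
        return counter.get();
    }

    public static void main(String[] args) {
        // Four threads, 10000 increments each: no updates are lost.
        System.out.println(run(4, 10000)); // prints 40000
    }
}
```

AtomicInteger.incrementAndGet() performs this same CAS loop internally.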

A deadlock occurs when two (or more) threads have created a situation where they are all blocking each other. Imagine that threads T1 and T2 need to acquire both resources A and B in order to do their work. If T1 acquires resource A, then T2 acquires resource B, T1 could then be waiting for resource B while T2 was waiting for resource A. In this case, both threads will wait indefinitely for the resource held by the other thread. These threads are said to be deadlocked.

"A deadlock cannot occur unless all of the following conditions are met: 
Protected access to shared resources, which implies waiting. 
No resource preemption, meaning that the system cannot forcibly take a resource from a thread holding it. 
Multiple independent requests, meaning a thread can hold some resources while requesting others. 
Circular dependency graph, meaning that Thread A is waiting for Thread B which is waiting for Thread C which is waiting for Thread D which is waiting for Thread A." 
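Breaking the circular-dependency condition is the classic fix: if every thread acquires locks in the same global order (here, by account id — the Acct class is a stand-in invented for this sketch), the cycle can never form:

```java
public class LockOrderDemo {
    static class Acct {
        final int id;
        double balance;
        Acct(int id, double balance) { this.id = id; this.balance = balance; }
    }

    // Always lock the account with the smaller id first, so two opposite
    // transfers can never hold one lock each while waiting on the other.
    static void transfer(Acct from, Acct to, double amount) {
        Acct first = from.id < to.id ? from : to;
        Acct second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    // Run opposite-direction transfers concurrently; returns final balances.
    static double[] run() {
        Acct a = new Acct(1, 100), b = new Acct(2, 100);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {}
        return new double[] { a.balance, b.balance };
    }

    public static void main(String[] args) {
        double[] balances = run();
        // Both threads complete without deadlock, and money is conserved.
        System.out.println(balances[0] + " " + balances[1]); // prints 100.0 100.0
    }
}
```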

Starvation occurs when a scheduler process (i.e. the operating system) refuses to give a particular thread any quantity of a particular resource (generally CPU). 
If there are too many high-priority threads, a lower priority thread may be starved. This can have negative impacts, though, particularly when the lower-priority thread has a lock on some resource.

Race conditions
A race condition occurs when two threads operate on the same object without proper synchronization and their operations interleave with each other.
The classical example of a race condition is incrementing a counter: since increment is not an atomic operation, if multiple threads try to increment one variable at the same time, a race condition occurs.
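A sketch of this (the thread and iteration counts are arbitrary): the same increment performed with and without a critical section.

```java
public class RaceDemo {
    static int unsafeCount = 0;
    static int safeCount = 0;
    static final Object lock = new Object();

    // Returns {unsafeCount, safeCount} after all threads finish.
    static int[] run(int threads, int perThread) {
        unsafeCount = 0;
        safeCount = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    unsafeCount++;            // read-modify-write, not atomic
                    synchronized (lock) {
                        safeCount++;          // critical section, atomic
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) {}
        }
        return new int[] { unsafeCount, safeCount };
    }

    public static void main(String[] args) {
        int[] counts = run(4, 100000);
        // safeCount is always 400000; unsafeCount is often lower because
        // interleaved increments overwrite each other and get lost.
        System.out.println(counts[0] + " vs " + counts[1]);
    }
}
```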

The situation where two threads compete for the same resource, and the sequence in which the resource is accessed is significant, is called a race condition. A code section that leads to race conditions is called a critical section.



