Thursday, September 10, 2015

Hardware Aware Algorithms (part I) | Balls & Bins, A Mix of Random Thoughts



// m is the matrix dimension; the original post defines it elsewhere,
// so the value here is an assumption for illustration.
const m = 1 << 10

// horizontal order.
type horizontal struct{}

func (p horizontal) compute(in *[m][m]float64) float64 {
    var sum float64
    for i := 0; i < m; i++ {
        for j := 0; j < m; j++ {
            sum += in[i][j] // scan row i left to right: contiguous memory
        }
    }
    return sum
}
 
// vertical order.
type vertical struct{}
 
func (p vertical) compute(in *[m][m]float64) float64 {
    var sum float64
    for i := 0; i < m; i++ {
        for j := 0; j < m; j++ {
            sum += in[j][i] // scan column i top to bottom: consecutive accesses are m elements apart
        }
    }
    return sum
}
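
To measure the gap yourself, here is a minimal benchmark sketch (my addition, not from the original post): put it in a _test.go file alongside the code above and run go test -bench=.

// needs: import "testing"
var input [m][m]float64

func BenchmarkHorizontal(b *testing.B) {
    for n := 0; n < b.N; n++ {
        horizontal{}.compute(&input)
    }
}

func BenchmarkVertical(b *testing.B) {
    for n := 0; n < b.N; n++ {
        vertical{}.compute(&input)
    }
}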
The reason behind this observation is fairly simple. Two major factors explain it:
  • Data Cache: Modern processors typically have two or three levels of data cache: L1D, L2, and L3. Each outer level is larger but has higher latency. To simplify the concept, one can regard a data cache as a hardware hash table with a fixed number of slots per hash value, where items are evicted in an LRU manner. Prefetchers are responsible for loading items into the cache ahead of time. If an item is available when it is needed, that is a cache hit and the item is retrieved from the cache directly; otherwise it is a cache miss, and the request is sent to the next level and may eventually reach main memory, which is several orders of magnitude slower.
  • Translation Lookaside Buffer (TLB): When we program, we work in a virtual address space, and there is a translation between virtual addresses and physical addresses. The virtual memory system achieves this by mapping pages with the help of a page table. Modern processors have a small hardware cache, the TLB, that stores part of this mapping. A TLB miss can be very expensive, because the translation then proceeds by looking up the page table in a process called a page walk. A page walk is slow relative to the processor, as it involves reading the contents of multiple memory locations and using them to compute the physical address. (Concrete numbers for both factors follow this list.)
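To make these costs concrete (using the assumed m = 1024 from the code above): a cache line is typically 64 bytes, or eight float64 values, and a page is typically 4 KB, or 512 values. A horizontal scan therefore misses the cache at most once per eight accesses and crosses a page boundary only once per 512. A vertical scan jumps m*8 = 8 KB between consecutive accesses, so every access lands on a different cache line and a different page, which can thrash both the cache and the TLB.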
Going back to our previous example: since 2D arrays are stored row by row in memory, when we scan the array horizontally, the prefetcher can easily load the next batch of items before we need them. When we scan the array vertically, consecutive accesses are far apart in memory, so we take a cache miss on nearly every access, which leads to the huge performance penalty.
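
A tiny self-contained sketch (illustrative, not from the original post) makes the row-major layout visible by printing element offsets:

package main

import (
    "fmt"
    "unsafe"
)

func main() {
    var a [4][4]float64
    base := uintptr(unsafe.Pointer(&a[0][0]))
    fmt.Println(uintptr(unsafe.Pointer(&a[0][1])) - base) // 8: the next column is adjacent
    fmt.Println(uintptr(unsafe.Pointer(&a[1][0])) - base) // 32: the next row is 4*8 bytes away
}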

Prefetchers nowadays do far more than just fetch contiguous memory locations. If the memory access pattern is regular, the prefetcher may still do a pretty good job even when the locations are not near each other.

Part II (https://ballsandbins.wordpress.com/2015/03/16/hardware-aware-algorithms-part-ii/) compares a naive parallelization with a divide-and-conquer one. The naive approach parallelizes the outer loop, and thus only enjoys good cache behavior in the inner loop. The divide-and-conquer approach recursively splits the computation into local ranges, which makes prefetching and the TLB much more effective, at least for the computation within each of the two halves. Admittedly, this approach requires more thought and the code is less clean. For many applications in big data and the financial industry, performance is everything; there are also less critical scenarios where code simplicity and readability are valued more.
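
As a rough illustration (the task, names, and cutoff here are my assumptions; part II has its own code and measurements), here is what the two strategies can look like for the row-sum from part I:

// needs: import "sync"

// naiveSum parallelizes the outer loop: one goroutine per row.
func naiveSum(in *[m][m]float64) float64 {
    partial := make([]float64, m)
    var wg sync.WaitGroup
    for i := 0; i < m; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            for j := 0; j < m; j++ {
                partial[i] += in[i][j]
            }
        }(i)
    }
    wg.Wait()
    var sum float64
    for _, p := range partial {
        sum += p
    }
    return sum
}

// dcSum recursively halves the row range [lo, hi), so each task walks a
// contiguous block of memory, which keeps prefetchers and the TLB effective.
func dcSum(in *[m][m]float64, lo, hi int) float64 {
    if hi-lo <= 64 { // base case: small enough to sum sequentially
        var sum float64
        for i := lo; i < hi; i++ {
            for j := 0; j < m; j++ {
                sum += in[i][j]
            }
        }
        return sum
    }
    mid := (lo + hi) / 2
    ch := make(chan float64, 1)
    go func() { ch <- dcSum(in, lo, mid) }()
    right := dcSum(in, mid, hi)
    return right + <-ch
}

Calling dcSum(&input, 0, m) keeps each leaf task on a contiguous block of rows, while naiveSum spawns m goroutines whose adjacent partial-sum slots can even share cache lines (false sharing), compounding the cost.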
