Sunday, April 29, 2018

Java 8 Stream 2

String joined = Stream.of("a", "b", "c")
                      .collect(Collectors.joining(","));
In Java 8 how do I transform a Map<K,V> to another Map<K,V> using a lambda?
    Map<String, Column> copy = original.entrySet()
                                  .collect(Collectors.toMap(Map.Entry::getKey,
                                      e -> new Column(e.getValue())));
map.forEach((s, integer) -> map2.put(s, integer));
And because we're just calling an existing method we can use a method reference, which gives us:
map.forEach(map2::put);
Map<String, String> x;
Map<String, Integer> y = x.entrySet()
        .collect(Collectors.toMap(
            e -> e.getKey(),
            e -> Integer.parseInt(e.getValue())));
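A runnable sketch of this Map<String, String> to Map<String, Integer> conversion (the map contents here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class MapTransform {
    // Parse each String value into an Integer, keeping the same keys.
    static Map<String, Integer> parseValues(Map<String, String> x) {
        return x.entrySet()
                .collect(Collectors.toMap(
                        e -> e.getKey(),
                        e -> Integer.parseInt(e.getValue())));
    }

    public static void main(String[] args) {
        Map<String, String> x = new HashMap<>();
        x.put("one", "1");
        x.put("two", "2");
        Map<String, Integer> y = parseValues(x);
        System.out.println(y.get("one") + y.get("two")); // 3
    }
}
```

Note that Collectors.toMap throws IllegalStateException on duplicate keys unless you pass a merge function.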
Stream.iterate(1, i -> i <= 3, i -> i + 1) // the three-argument iterate overload requires Java 9+
    Stream.of("One", "Two", "Three", "Four")

IntStream.iterate(to - 1, i -> i - 1).limit(to - from)
Stream.of("1", "2", "20", "3")
      .collect(Collectors.toCollection(ArrayDeque::new)) // or LinkedList
    Set<String> wordsSet = words.collect(Collectors.toCollection(TreeSet::new));
int s = -> s.length()).sum();
We can also use the Stream.reduce() method, which is more general:
Stream<Integer> lengthStream = -> s.length());
Optional<Integer> sum = lengthStream.reduce((x, y) -> x + y);
Stream.reduce() take input as an accumulator function which takes two parameters: a partial result of the reduction (in this case, the sum of all processed integers so far) and the next element of the stream (in this case, an integer). It returns a new partial result. In this case, the accumulator function is a lambda expression that adds two Integer values and returns an Integer value.
Instead of using a lambda expression, you can also use a method reference:
Optional<Integer> sum = lengthStream.reduce(Integer::sum);
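Put together, a runnable sketch of this sum-of-lengths reduction (`words` is a hypothetical Stream<String>):

```java
import java.util.Optional;

public class ReduceSum {
    // Map each word to its length, then fold the lengths with reduce.
    static Optional<Integer> totalLength( words) {
        return -> s.length())
                    .reduce(Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(totalLength(Stream.of("a", "bb", "ccc"))); // Optional[6]
    }
}
```

The result is Optional because the one-argument reduce has no identity value to fall back on for an empty stream.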
In addition, we can also call the three-argument form of reduce() like the following:
int s = wordStream.reduce(0, (x, y) -> x + y.length(), (x, y) -> x + y);
The three parameters are identity, accumulator, and combiner:
- identity - the initial value of the reduction, and the default result if the stream is empty
- accumulator - function that folds the next element into the partial result
- combiner - function that merges two partial results, used when the reduction runs in parallel
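A runnable sketch of the three-argument reduce above (`words` is again a hypothetical Stream<String>):

```java
public class ThreeArgReduce {
    // identity: 0; accumulator: folds the next word's length into the int total;
    // combiner: merges partial totals (needed when the stream runs in parallel).
    static int totalLength( words) {
        return words.reduce(0,
                (partial, word) -> partial + word.length(),
                (left, right) -> left + right);
    }

    public static void main(String[] args) {
        System.out.println(totalLength(Stream.of("a", "bb", "ccc")));            // 6
        System.out.println(totalLength(Stream.of("a", "bb", "ccc").parallel())); // 6
    }
}
```

The accumulator here mixes types (int partial, String element), which is exactly what the two-argument reduce cannot express.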

T reduce(T identity, BinaryOperator<T> accumulator);

where identity is an initial value of type T and accumulator is a function for combining two values.

        Optional<String> longestString = words
                .reduce((word1, word2) ->
                        word1.length() > word2.length() ? word1 : word2);
        Optional<String> combined = words
                .reduce((str1, str2) -> str1 + "-" + str2);

        int product = IntStream.range(2, 8)
                     .reduce((num1, num2) -> num1 * num2)
                     .getAsInt();
A reduction operation is one which allows you to compute a result using all the elements present in a stream. Reduction operations are also called terminal operations because they are always present at the end of a chain of Stream methods. We’ve already been using a reduction method in our previous examples: the toArray method.

String result = words
                .reduce("", (a, b) -> a + b);
<R,A> R collect(Collector<? super T,A,R> collector)

T reduce(T identity, BinaryOperator<T> accumulator)
So collect returns any R whereas reduce returns T - the type of the Stream.
reduce is a "fold" operation, it applies a binary operator to each element in the stream where the first argument to the operator is the return value of the previous application and the second argument is the current stream element.
collect is an aggregation operation where a "collection" is created and each element is "added" to that collection. Collections built from different parts of the stream are then added together.

  • collect() can only work with mutable result objects.
  • reduce() is designed to work with immutable result objects.
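To make the contrast concrete, here is a small sketch concatenating strings both ways (the stream contents are illustrative):

```java
public class CollectVsReduce {
    // reduce: every step produces a new immutable String.
    static String concatReduce( parts) {
        return parts.reduce("", (a, b) -> a + b);
    }

    // collect: a single mutable StringBuilder is created and appended to in place.
    static String concatCollect( parts) {
        return parts.collect(StringBuilder::new,
                             StringBuilder::append,
                             StringBuilder::append)
                    .toString();
    }

    public static void main(String[] args) {
        System.out.println(concatReduce(Stream.of("a", "b", "c")));  // abc
        System.out.println(concatCollect(Stream.of("a", "b", "c"))); // abc
    }
}
```

The collect version does O(n) work overall, while the reduce version copies the growing String at every step.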
    Double totalSalaryExpense = employees
                               .map(emp -> emp.getSalary())
                               .reduce(0.00, (a, b) -> a + b);
    Optional<Employee> highestPaid = employees
            .reduce((Employee a, Employee b) -> a.getSalary() < b.getSalary() ? b : a);
       int i = IntStream.range(1, 6)
                        .reduce((a, b) -> a * b)
                        .getAsInt();
This method has an extra 'identity' parameter.
  • Identity is the initial value of reduction:
    int i = IntStream.empty()
                     .reduce(1, (a, b) -> a * b);
If we have a look at PriorityQueue.iterator() documentation, we’ll see that, unintuitively, iterator() is not traversing the queue according to its priority order:
Returns an iterator over the elements in this queue. The iterator does not return the elements in any particular order
List<String> result = Stream.generate(queue::poll)
                            .limit(queue.size())
                            .collect(Collectors.toList());
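A complete sketch of draining a PriorityQueue in priority order this way:

```java
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class DrainQueue {
    // poll() always removes the smallest element, so generating from queue::poll
    // yields the elements in priority order (unlike iterator()).
    static List<String> drain(PriorityQueue<String> queue) {
        return Stream.generate(queue::poll)
                     .limit(queue.size())   // size() is read before any poll() runs
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        PriorityQueue<String> queue =
                new PriorityQueue<>(Arrays.asList("banana", "apple", "cherry"));
        System.out.println(drain(queue)); // [apple, banana, cherry]
    }
}
```

This empties the queue as a side effect, and relying on a fixed size() is only safe when no other thread touches the queue.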
Since Java 9, it’ll be possible to rewrite it in a concurrent-friendly manner:

static <T> Stream<T> drainToStream(PriorityQueue<T> queue) {
    return Stream.generate(queue::poll)
                 .takeWhile(Objects::nonNull); // stop once poll() reports an empty queue
}
collectingAndThen adds a single action that is just performed at the end of the collection.
String personWithMaxAge =
        Collectors.maxBy(Comparator.comparing(Person::getAge)),
        (Optional<Person> p) -> p.isPresent() ? p.get().getName() : "none"));
is no different from
Optional<Person> p =;
String personWithMaxAge = p.isPresent() ? p.get().getName() : "none";
The actual advantage of specifying the action within a collector shows when you use the resulting collector as input to another collect, e.g. groupingBy(f1, collectingAndThen(collector, f2)).
Definition of Collectors.collectingAndThen() method
Collectors.collectingAndThen() method is defined with the following signature –
public static<T,A,R,RR> Collector<T,A,RR> collectingAndThen(Collector<T,A,R> downstream, Function<R,RR> finisher)
     – 1st input parameter is downstream which is an instance of a Collector<T,A,R> i.e. the standard definition of a collector. In other words, any collector can be used here.
     – 2nd input parameter is finisher which needs to be an instance of a Function<R,RR> functional interface. This function instance takes as input an object of type R which is the output from downstream collector, and it returns an output of type RR which is the final return type of collectingAndThen collector as well.
     – output is a Collector with finisher(return type) of type RR.
    String avgSalary =
            Collectors.averagingDouble(Employee::getSalary),
            averageSalary -> new DecimalFormat("'$'0.00").format(averageSalary)));
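A self-contained sketch of this averaging-then-formatting collector; the Employee class here is a minimal stand-in invented for the example, and Locale.US is pinned so the formatted output is deterministic:

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;

public class AvgSalary {
    // Hypothetical Employee with only a salary, just for this sketch.
    static class Employee {
        final double salary;
        Employee(double salary) { this.salary = salary; }
        double getSalary() { return salary; }
    }

    static String formattedAverage(List<Employee> employees) {
        DecimalFormat fmt =
                new DecimalFormat("'$'0.00", new DecimalFormatSymbols(Locale.US));
        return
                Collectors.collectingAndThen(
                        Collectors.averagingDouble(Employee::getSalary), // downstream: Double
                        avg -> fmt.format(avg)));                        // finisher: Double -> String
    }

    public static void main(String[] args) {
        List<Employee> staff = Arrays.asList(new Employee(100), new Employee(200));
        System.out.println(formattedAverage(staff)); // $150.00
    }
}
```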

There is no built-in reverse method on Stream.

    public static <T> Stream<T> reverse(Stream<T> stream) {
        LinkedList<T> stack = new LinkedList<>();
        stream.forEach(stack::push); // push() adds to the front, reversing encounter order
        return;
    }

    public static void main(String[] args) {
        Stream<Integer> stream = Stream.of(1, 2, 3, 4, 5);
        Stream<Integer> reverse = reverse(stream);
        reverse.forEach(System.out::println); // 5 4 3 2 1
    }

    public static <T> Collector<T, ?, Stream<T>> reverse() {
        return Collectors.collectingAndThen(Collectors.toList(), list -> {
            Collections.reverse(list);
        });
    }

    public static void main(String[] args) {
        Stream<Integer> stream = Stream.of(1, 2, 3, 4, 5);
        Stream<Integer> reverse = stream.collect(reverse());
        reverse.forEach(System.out::println); // 5 4 3 2 1
    }
Capture of local variables is restricted to those that are effectively final. Lifting this restriction would present implementation difficulties, but it would also be undesirable; its presence prevents the introduction of a new class of multithreading bugs involving local variables. Local variables in Java have until now been immune to race conditions and visibility problems because they are accessible only to the thread executing the method in which they are declared. But a lambda can be passed from the thread that created it to a different thread, and that immunity would therefore be lost if the lambda, evaluated by the second thread, were given the ability to mutate local variables. Even the ability to read the value of mutable local variables from a different thread would introduce the necessity for synchronization or the use of volatile in order to avoid reading stale data.
An alternative way to view this restriction is to consider the use cases that it discourages. Mutating local variables in idioms like this:
 int sum = 0;
 list.forEach(e -> { sum += e.size(); }); // illegal; local variable 'sum' is not effectively final
frustrates a principal purpose of introducing lambdas. The major advantage of passing a function to the forEach method is that it allows strategies that distribute evaluation of the function for different arguments to different threads. The advantage of that is lost if these threads have to be synchronized to avoid reading stale values of the captured variable.
The restriction of capture to effectively immutable variables is intended to direct developers’ attention to more easily parallelizable, naturally thread-safe techniques. For example, in contrast to the accumulation idiom above, the statement
 int sum = -> e.size()).reduce(0, (a, b) -> a+b);
creates a pipeline in which the results of the evaluations of the map method can much more easily be executed in parallel, and subsequently gathered together by the reduce operation.
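A minimal runnable sketch of such a pipeline, assuming `list` holds nested lists whose sizes we sum:

```java
import java.util.Arrays;
import java.util.List;

public class ParallelSum {
    // No shared mutable state: each e.size() can be computed on any thread,
    // and reduce gathers the partial sums afterwards.
    static int totalSize(List<List<Integer>> list) {
        return -> e.size())
                   .reduce(0, (a, b) -> a + b);
    }

    public static void main(String[] args) {
        List<List<Integer>> list = Arrays.asList(
                Arrays.asList(1, 2), Arrays.asList(3, 4, 5));
        System.out.println(totalSize(list)); // 5
    }
}
```

Switching to list.parallelStream() changes nothing in the code, which is exactly the point of avoiding captured mutable locals.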

The restriction on local variables helps to direct developers using lambdas away from idioms involving mutation; it does not prevent them. Mutable fields are always a potential source of concurrency problems if sharing is not properly managed; disallowing field capture by lambda expressions would reduce their usefulness without doing anything to solve this general problem.
Yes, with a qualification: they are instances of object subtypes, but do not necessarily possess a unique identity. A lambda expression is an instance of a functional interface, which is itself a subtype of Object. To see this, consider the legal assignments:
    Runnable r = () -> {};   // creates a lambda expression and assigns a reference to this lambda to r 
    Object o = r;            // ordinary widening conversion 

But note that because lambdas do not necessarily possess a unique identity, the equals method inherited from Object has no consistent semantics.
static IntStream revRange(int from, int to) {
    return IntStream.range(from, to)
                    .map(i -> to - i + from - 1);
}
This avoids boxing and sorting.
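For example, a runnable check of the revRange helper above:

```java
import java.util.Arrays;

public class RevRange {
    // Maps the ascending range(from, to) onto the descending values to-1 .. from.
    static IntStream revRange(int from, int to) {
        return IntStream.range(from, to)
                        .map(i -> to - i + from - 1);
    }

    public static void main(String[] args) {
        // range(2, 6) yields 2,3,4,5; each i maps to (6 - i + 2 - 1) = 7 - i.
        System.out.println(Arrays.toString(revRange(2, 6).toArray())); // [5, 4, 3, 2]
    }
}
```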
In Java 8, a Stream can hold container element types, for example Stream<String[]>, Stream<Set<String>>, or Stream<List<String>>.

But the Stream operations (filter, sum, distinct…) and collectors do not support nested elements directly, so we need flatMap() to do the following conversion:
Stream<String[]>  -> flatMap -> Stream<String>
Stream<Set<String>> -> flatMap -> Stream<String>
Stream<List<String>> -> flatMap -> Stream<String>
Stream<List<Object>> -> flatMap -> Stream<Object>

How flatMap() works :
{ {1,2}, {3,4}, {5,6} } -> flatMap -> {1,2,3,4,5,6}

{ {'a','b'}, {'c','d'}, {'e','f'} } -> flatMap -> {'a','b','c','d','e','f'}

        String[][] data = new String[][]{{"a", "b"}, {"c", "d"}, {"e", "f"}};

        Stream<String[]> temp =;

        //Stream<String>, GOOD!
        Stream<String> stringStream = temp.flatMap(x ->;

        Stream<String> stream = stringStream.filter(x -> "a".equals(x));
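The same flattening works for nested lists; a runnable sketch:

```java
import java.util.Arrays;
import java.util.List;

public class FlatMapDemo {
    // Stream<List<String>> -> Stream<String>: each inner list is replaced by
    // a stream of its elements, and those streams are concatenated in order.
    static List<String> flatten(List<List<String>> nested) {
        return
                     .flatMap(List::stream)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> data = Arrays.asList(
                Arrays.asList("a", "b"), Arrays.asList("c", "d"));
        System.out.println(flatten(data)); // [a, b, c, d]
    }
}
```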

Reduction is a terminal operation that aggregates a stream into a type or a primitive. The Java 8 Stream API contains a set of predefined reduction operations, such as average(), sum(), min(), max(), and count(), which return one value by combining the elements of a stream.

Stream.reduce() is a general-purpose method for generating our custom reduction operations.
Optional<T> reduce(BinaryOperator<T> accumulator)
This method performs a reduction on the elements of this stream, using an associative accumulation function. It returns an Optional describing the reduced value, if any.
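For example, the Optional result distinguishes a real sum from an empty stream:

```java
import java.util.Optional;

public class OptionalReduce {
    // One-argument reduce: no identity, so an empty stream yields Optional.empty.
    static Optional<Integer> sum(Stream<Integer> numbers) {
        return numbers.reduce((a, b) -> a + b);
    }

    public static void main(String[] args) {
        System.out.println(sum(Stream.of(1, 2, 3)));      // Optional[6]
        System.out.println(sum(Stream.<Integer>empty())); // Optional.empty
    }
}
```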
T reduce(T identity, BinaryOperator<T> accumulator)
This method takes two parameters: the identity and the accumulator. The identity element is both the initial value of the reduction and the default result if there are no elements in the stream. The accumulator function takes two parameters: a partial result of the reduction and the next element of the stream. It returns a new partial result. The Stream.reduce() method returns the result of the reduction.
        .reduce((x, y) -> x + "," + y)
        .ifPresent(s -> System.out.println("List to String: " + s));

sum =, (x, y) -> x + y);

