Tuesday, May 10, 2016

Java Memory Leak



https://stackoverflow.com/questions/40119188/memory-leak-on-deleteonexithook
This is a long-standing and well known bug that has been reported to Sun/Oracle MANY times over the years. The current bug number is JDK-4872014.
The issue is that each time you use the delete-on-exit API the File gets stored into a HashMap. Since in a long-running server your code rarely intentionally exits, the map can grow without bounds if you are doing this with lots of temporary files.
Essentially, the API is not usable with long-running servers because it's not really intended to be used that way. If you need this functionality you need to implement it yourself and run the cleanup on a schedule, with some way to know which files can be deleted.
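A minimal sketch of the kind of scheduled cleanup described above, assuming you track the temporary files yourself; the TempFileReaper name and its track/release methods are invented for illustration:
    import java.io.File;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class TempFileReaper {
        private final Set<File> tracked = ConcurrentHashMap.newKeySet();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public TempFileReaper() {
            // sweep periodically instead of accumulating entries until JVM exit
            scheduler.scheduleAtFixedRate(this::sweep, 1, 1, TimeUnit.MINUTES);
        }

        public void track(File f)   { tracked.add(f); }                 // caller registers a temp file
        public void release(File f) { tracked.remove(f); f.delete(); }  // caller says it is done with it

        private void sweep() {
            // drop entries whose files are already gone, so the set cannot grow without bound
            tracked.removeIf(f -> !f.exists());
        }
    }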
https://www.journaldev.com/9113/java-shutdown-hook-runtime-addshutdownhook
  1. Shutdown hooks are initialized but unstarted threads. They start only when JVM shutdown is triggered.
  2. All un-invoked finalizers are executed if finalization-on-exit has been enabled.
  3. There is no guarantee that shutdown hooks will execute, for example after a system crash or a kill -9. So use them only for critical scenarios, such as making sure critical resources are released.
  4. You can remove a hook using the Runtime.getRuntime().removeShutdownHook(hook) method.
  5. Once shutdown hooks have been started, it is no longer possible to add or remove them; you will get an IllegalStateException. (A short sketch follows this list.)
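A minimal sketch of the JDK API described above, using only Runtime.addShutdownHook/removeShutdownHook (the printed message is illustrative):
    Thread hook = new Thread(() -> System.out.println("releasing critical resources"));
    Runtime.getRuntime().addShutdownHook(hook);        // registered as an initialized, unstarted thread
    // ... later, while the JVM is still running normally, the hook can still be removed:
    boolean removed = Runtime.getRuntime().removeShutdownHook(hook);
    // once the shutdown sequence has begun, both calls throw IllegalStateException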
http://www.importnew.com/12901.html
https://stackoverflow.com/questions/6470651/creating-a-memory-leak-with-java
Creating a memory leak with Java



    Below are some non-obvious cases where Java leaks, besides the standard cases of forgotten listeners, static references, bogus/modifiable keys in hashmaps, or just threads stuck without any chance to end their life-cycle.
    • File.deleteOnExit() - always leaks the string; if the string is a substring, the leak is even worse (the underlying char[] is also leaked). In Java 7 substring also copies the char[], so the latter no longer applies.
    I'll concentrate mostly on threads to show the danger of unmanaged threads; I don't wish to even touch Swing.
    • Runtime.addShutdownHook without a matching remove... and then even with removeShutdownHook, due to a bug in the ThreadGroup class regarding unstarted threads, it may not get collected, effectively leaking the ThreadGroup. JGroups has this leak in GossipRouter.
    • Creating, but not starting, a Thread goes into the same category as above.
    • Creating a thread inherits the ContextClassLoader and AccessControlContext, plus the ThreadGroup and any InheritableThreadLocal; all of those references are potential leaks, along with all the classes loaded by the classloader and all their static references. The effect is especially visible with the entire j.u.c.Executor framework, which features a super simple ThreadFactory interface, yet most developers have no clue about the lurking danger. A lot of libraries also start threads upon request (way too many popular libraries do).
    • ThreadLocal caches; those are evil in many cases. I am sure everyone has seen quite a few simple caches based on ThreadLocal. The bad news: if the thread keeps running longer than the life of the context ClassLoader, you have a nice little leak. Do not use ThreadLocal caches unless really needed.
    • Calling ThreadGroup.destroy() when the ThreadGroup has no threads of its own but still keeps child ThreadGroups. A bad leak that prevents the ThreadGroup from being removed from its parent, and all the children become un-enumerable.
    • Using a WeakHashMap where the value (in)directly references the key. This is a hard one to find without a heap dump. The same applies to any extended Weak/SoftReference that might keep a hard reference back to the guarded object.
    • Using java.net.URL with the HTTP(S) protocol and loading the resource from it(!). This one is special: the KeepAliveCache creates a new thread in the system ThreadGroup which leaks the current thread's context classloader. The thread is created upon the first request when no alive thread exists, so you may either get lucky or just leak. The leak is already fixed in Java 7, and the code that creates the thread properly removes the context classloader. There are a few more cases of creating similar threads (like ImageFetcher, also fixed).
    • Using InflaterInputStream and passing new java.util.zip.Inflater() in the constructor (PNGImageDecoder for instance) and not calling end() on the inflater. If you pass in the constructor with just new, there is no chance... And yes, calling close() on the stream does not close the inflater if it was manually passed as a constructor parameter. This is not a true leak since it would be released by the finalizer... when it deems it necessary. Until that moment it eats native memory so badly that it can cause the Linux oom_killer to kill the process with impunity. The main issue is that finalization in Java is very unreliable, and G1 made it worse up to 7.0.2. Moral of the story: release native resources as soon as you can; the finalizer is just too poor.
    • The same case with java.util.zip.Deflater. This one is far worse since Deflater is memory hungry in Java, i.e. it always uses 15 bits (the max) and 8 memory levels (9 is the max), allocating several hundred KB of native memory. Fortunately, Deflater is not widely used and to my knowledge the JDK contains no misuses. Always call end() if you manually create a Deflater or Inflater (see the sketch after this list). The best part of the last two: you can't find them via the normal profiling tools available.
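    A hedged sketch of the Inflater point above: when you pass your own Inflater, closing the stream does not release its native memory, so call end() yourself (the file name below is just an example):
    Inflater inflater = new Inflater();
    try (InputStream in = new InflaterInputStream(new FileInputStream("data.zz"), inflater)) {
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) { /* consume the decompressed data */ }
    } finally {
        inflater.end();   // release the native zlib memory now instead of waiting for the finalizer
    }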

    Static field holding object reference [esp final field]
    class MemorableClass {
        static final ArrayList list = new ArrayList(100);
    }
    Calling String.intern() on lengthy String
    String str = readString(); // read a lengthy string from any source: db, textbox/jsp, etc.
    // This will place the string in the memory pool, from which it can't be removed
    str.intern();
    Unclosed open streams (file, network, etc.)
    try {
        BufferedReader br = new BufferedReader(new FileReader(inputFile));
        ...
        ...
    } catch (Exception e) {
        e.printStackTrace();   // note: br is never closed, not even in a finally block
    }
    Unclosed connections
    try {
        Connection conn = ConnectionFactory.getConnection();
        ...
        ...
    } catch (Exception e) {
        e.printStackTrace();   // note: conn is never closed, not even in a finally block
    }
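    A minimal fix sketch for both snippets above: try-with-resources closes the reader and the connection even when an exception is thrown (ConnectionFactory as in the snippet above):
    try (BufferedReader br = new BufferedReader(new FileReader(inputFile));
         Connection conn = ConnectionFactory.getConnection()) {
        // ... use br and conn; both are closed automatically ...
    } catch (Exception e) {
        e.printStackTrace();
    }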
    Areas that are unreachable from JVM's garbage collector, such as memory allocated through native methods
    In web applications, some objects are stored in application scope until the application is explicitly stopped or removed.
    getServletContext().setAttribute("SOME_MAP", map);
    Incorrect or inappropriate JVM options, such as -Xnoclassgc on the IBM JDK, which prevents garbage collection of unused classes


    https://www.dynatrace.com/news/blog/the-top-java-memory-problems-part-2/

    HTTP Session as Cache

    The Session caching anti-pattern refers to the misuse of the HTTP session as a data cache. The HTTP session is used to store user data or state that needs to survive a single HTTP request. This is referred to as “conversational state” and is found in most web applications that deal with non-trivial user interactions. The HTTP session has several problems. First, as we can have many users, a single web server can have quite a lot of active sessions, so it is important to keep them small. The second problem is that they are not explicitly released by the application at a defined point. Instead, web servers have a session timeout which is often quite high to increase user comfort. This alone can easily lead to quite large memory demands if we consider the number of parallel users. In reality, however, we often see HTTP sessions that are multiple megabytes in size.
    These so-called session caches happen because it is easy and convenient for the developer to simply add objects to the session instead of thinking about other solutions such as a cache. To make matters worse, this is often done in a fire-and-forget mode, meaning data is never removed.
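    A hedged sketch of the contrast: store only the small conversational state and remove it explicitly when the conversation ends (the attribute name below is illustrative):
    HttpSession session = request.getSession();
    session.setAttribute("checkoutState", smallState);   // only what must survive the request
    // ... when the conversation is finished:
    session.removeAttribute("checkoutState");             // do not rely solely on the session timeout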

    Another example is the misuse of the Hibernate session to manage conversational state. The Hibernate session is stored in the HTTP session in order to facilitate quick access to the data. This means far more state is stored than necessary, and with only a couple of users memory usage immediately increases greatly. In modern Ajax applications, it may also be possible to shift the conversational state to the client. In the ideal case, this leads to a state-less or state-poor server application that scales much better.
    Another side effect of big HTTP sessions is that session replication becomes a real problem.

    Wrong Cache Usage

    If the cache is incorrectly configured, it will grow quickly and indefinitely until memory is full. When a GC is initiated, all the soft references in the cache are cleared and their objects garbage collected. Memory usage drops back to the base level, only to start growing again. This phenomenon can easily be mistaken for an incorrectly configured young generation. It looks as if objects get tenured too early, only to be collected by the next major GC. This kind of problem often leads to a GC tuning exercise that cannot succeed.
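    A hedged sketch of a soft-reference cache with the behavior described above (loadExpensively is a hypothetical loader):
    Map<String, SoftReference<byte[]>> cache = new ConcurrentHashMap<>();

    byte[] lookup(String key) {
        SoftReference<byte[]> ref = cache.get(key);
        byte[] value = (ref == null) ? null : ref.get();   // null once the GC has cleared it
        if (value == null) {
            value = loadExpensively(key);                   // reload and re-cache after every clear
            cache.put(key, new SoftReference<>(value));
        }
        return value;
    }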

    Churn Rate and High transactional memory usage

    Java allows us to allocate a large number of objects very quickly. The generational GC is designed for a large number of very short-lived objects, but there is a limit to everything. If transactional memory usage is too high, it can quickly lead to performance or even stability problems. 
    If too many objects are created in too short a time, this naturally leads to an increased number of GCs in the young generation. Young generation GCs are only cheap if most objects die! If a lot of objects survive the GC, it is actually more expensive than an old generation GC would be under similar circumstances. Thus, high memory needs of single transactions might not be a problem in a functional test but can quickly lead to GC thrashing under load. If the load becomes even higher, these transactional objects will be promoted to the old generation as the young generation becomes too small. One could approach this from that angle and increase the size of the young generation; in many cases this simply pushes the problem a little further out and ultimately leads to even longer GC pauses (due to more objects being alive at the time of the GC).
    The worst of all possible scenarios, which we nevertheless see often, is an out-of-memory error due to high transactional memory demand. If memory is already tight, higher transaction load might simply max out the available heap. The tricky part is that once the OutOfMemoryError hits, transactions that wanted to allocate objects but couldn't are aborted. Subsequently a lot of memory is released and garbage collected. In other words, the very reason for the out of memory is hidden by the OutOfMemoryError! As most memory tools only look at the Java memory every couple of seconds, they might not even show 100% memory usage at any point in time.
    Since Java 6 it is possible to trigger a heap dump in the event of an OutOfMemoryError (e.g. via -XX:+HeapDumpOnOutOfMemoryError), which will show the root cause quite nicely in such a case. If there is no OutOfMemoryError, one can use trending or histogram memory dumps (check out jmap or Dynatrace) to identify those classes whose object numbers fluctuate the most. Those are usually the classes that are allocated and garbage collected a lot. The last resort is to do a full-scale allocation analysis.

    Large Temporary Objects

    When working with large documents, it is very important to optimize the processing logic and prevent the whole document from being held in memory at once.

    Large classes


    In the case of the HotSpot JVM, string constants are part of the PermGen, which can then quickly become too small. In one concrete case the application had a separate class for every language it supported, where each class contained every single text constant. Each of these classes was actually too large already. Due to a coding error that happened in a minor release, all languages, meaning all classes, were loaded into memory. The JVM crashed during startup no matter how much memory was given to it.

    Same class in memory multiple times

    Application servers and OSGi containers in particular tend to have a problem with too many loaded classes and the resulting memory usage. Application servers make it possible to load different applications or parts of applications in isolation from one another. One "feature" is that multiple versions of the same class can be loaded in order to run different applications inside the same JVM. Due to incorrect configuration this can quickly double or triple the amount of memory needed for classes.

    Same class loaded again and again

    What many forget is that classes are garbage collected too, in all three large JVMs. The HotSpot JVM does this only during a major GC, whereas both IBM and JRockit can do so during every GC. Therefore, if a class is used for only a short time, it can be removed from memory again immediately. Loading a class is not exactly cheap and usually not optimized for concurrency. If the same class is loaded by multiple threads, Java synchronizes those threads. In one real-world case, the classes of a scripting framework (BeanShell) were loaded and garbage collected repeatedly because they were used for only a short time and the system was under load. Since this took place in multiple threads, the class loader was quickly identified as the bottleneck once analyzed under load. However, the development took place exclusively on the HotSpot JVM, so this problem was not discovered until it was deployed in production.
    In case of the Hotspot JVM this specific problem will only occur under load and memory pressure as it requires a major GC, whereas in the IBM JVM or JRockit this can already happen under moderate load. The class might not even survive the first garbage collection!
    https://www.dynatrace.com/news/blog/the-top-java-memory-problems-part-1/

    Thread Local Variables

    ThreadLocals are used to bind a variable or a state to a thread. Each thread has its own instance of the variable. They are very useful but also very dangerous. They are often used to track a state, like the current transaction id, but sometimes they hold a little more information. A thread local variable is referenced by its thread and as such its lifecycle is bound to it. In most application servers threads are reused via thread pools and thus are never garbage collected. If the application code does not carefully clear the thread local variable, you get a nasty memory leak.


    This kind of memory leak can easily be discovered with a heap dump. Just take a look at the ThreadLocalMap in the heap dump and follow the references.
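    A hedged sketch of the leak pattern: a pooled worker thread outlives the request, so whatever is left in its ThreadLocalMap stays reachable (the names below are illustrative):
    static final ThreadLocal<byte[]> REQUEST_BUFFER =
            ThreadLocal.withInitial(() -> new byte[1_000_000]);

    void handle(ExecutorService pool) {
        pool.submit(() -> {
            byte[] buf = REQUEST_BUFFER.get();   // bound to the pooled thread
            // ... use buf ...
            // missing REQUEST_BUFFER.remove(): the 1 MB array lives as long as the thread does
        });
    }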

    Mutable static fields and collections

    The most common reason for a memory leak is the wrong usage of statics. A static variable is held by its class and subsequently by its classloader. While a class can be garbage collected, that will seldom happen during an application's lifetime. Very often statics are used to hold cache information or to share state across threads. If this is not done diligently, it is very easy to get a memory leak. Static mutable collections in particular should be avoided at all costs for just that reason. A good architectural rule is not to use mutable static objects at all; most of the time there is a better alternative.

    Circular and complex bi-directional references

    This is my favorite memory leak. It is best explained by example:
    org.w3c.dom.Document doc = readXmlDocument();
    org.w3c.dom.Node child = doc.getDocumentElement().getFirstChild();
    doc.getDocumentElement().removeChild(child);
    doc = null;
    At the end of this code snippet we might think that the DOM Document will be garbage collected. That is, however, not the case. A DOM Node object always belongs to a Document. Even when removed from the Document, the Node object still has a reference to its owning Document. As long as we keep the child object, the Document and all the other nodes it contains will not be garbage collected.

    Wrong implementation of equals/hashcode

    It might not be obvious at first glance, but if your equals/hashCode methods violate the equals contract, it will lead to memory leaks when the object is used as a key in a map. A HashMap uses the hashCode to look up an object and verifies that it found it by using the equals method. If two objects are equal they must have the same hashCode, but not the other way around. If you do not explicitly implement hashCode yourself, this is not guaranteed: the default hashCode is based on object identity. Thus, if you use an object without a valid hashCode implementation as a key in a map, you will be able to add entries but you will not find them again. Even worse, if you re-add the object, it will not overwrite the old entry but really add a new one. And just like that you have a memory leak. You will find it easily enough as it keeps growing, but the root cause will be hard to determine unless you remember this one.
    The easiest way to avoid this is to use unit tests and one of the available frameworks that test the equals contract of your classes (e.g. http://code.google.com/p/equalsverifier/).
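    A hedged illustration of the growth described above (RequestKey is an invented example class with no equals/hashCode, so the identity-based defaults apply):
    class RequestKey {
        final String userId;
        RequestKey(String userId) { this.userId = userId; }
    }

    Map<RequestKey, String> cache = new HashMap<>();
    cache.put(new RequestKey("42"), "a");
    cache.get(new RequestKey("42"));        // null: an equal-looking key but a different identity
    cache.put(new RequestKey("42"), "b");   // does not overwrite - a second entry is added
    // every "re-add" grows the map with entries that lookups can no longer reach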

    Classloader Leaks

    Especially in application servers and OSGi containers there is another form of memory leak, the class loader leak. Classes are referenced by their classloader and normally they will not get garbage collected until the classloader itself is collected. That however only happens when the application gets unloaded by the application server or OSGi container. There are two forms of Classloader Leaks that I can describe off the top of my head.
    In the first, an object whose class belongs to the class loader is still referenced by a cache, a thread local or some other means. In that case the whole class loader, and therefore the whole application, cannot be garbage collected. This is something that happens quite a lot in OSGi containers nowadays and used to happen frequently in JEE application servers as well. As it only manifests when the application gets unloaded or redeployed, it does not happen very often.
    The second form is nastier and was introduced by bytecode manipulation frameworks like BCEL and ASM. These frameworks allow the dynamic creation of new classes. If you follow that thought, you will realize that classes, just like objects, can now be forgotten by the developer. The responsible code might create new classes for the same purpose multiple times. As the class is referenced in the current class loader, you get a memory leak that will lead to an out of memory in the permanent generation. The really bad news is that most heap analyzer tools do not point out this problem either; we have to analyze it manually, the hard way. This form of memory leak is much harder to track down.

    https://github.com/square/leakcanary
    https://janatechnology.wordpress.com/2016/01/26/learning-to-let-go/

    https://blog.jooq.org/2015/11/10/beware-of-functional-programming-in-java/

    (“Pure”) Higher order functions MUST be static methods in Java!

    https://discuss.kotlinlang.org/t/lambdas-and-implicit-references-to-the-instance-of-the-enclosing-class/2288
    In Java there are two different situations:
    • Anonymous inner classes always capture an implicit reference to the enclosing instance
    • Java 8 lambdas capture an implicit reference only when they use some method or field of the enclosing class
    https://stackoverflow.com/questions/28446626/do-java8-lambdas-maintain-a-reference-to-their-enclosing-instance-like-anonymous
    Lambda expressions and method references capture a reference to this only if required, i.e. when this is referenced directly or an instance (non-static) member is accessed.
    Of course, if your lambda expression captures the value of a local variable and that value contains a reference to this it implies referencing this as well…
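    A small sketch of the rule above: a lambda that touches no instance state does not capture this, while one that reads an instance field does (Holder is an invented example class):
    class Holder {
        private int count = 0;

        Runnable independent() { return () -> System.out.println("hi"); }    // no reference to this
        Runnable capturing()   { return () -> System.out.println(count); }   // reads a field, so it holds this
    }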
    https://vickychijwani.me/java-8-method-references/
    https://github.com/vickychijwani/quill/commit/bcd033448a154293f23c2e6b389e3fd1dbd0aebc#diff-1dfd375e703dec2928034354f35b8227L221
    Java lambdas are only sugar for anonymous classes! That should've made me more cautious. Oh well.

        public static void main(String[] args) {
            String x = "abc";
            Runnable r1 = x::length;
            Runnable r2 = x::length;
            System.out.println(r1 == r2);
            // => false (!!)
        }
    

    public class MyClass {
        private Handler mHandler;
    
        // for non-Android folks: think of this as a kind of initialization function
        public void onCreate(/* ... */) {
            mHandler = new Handler(Looper.getMainLooper());
            // first we post a message to the Handler...
            mHandler.postDelayed(this::doSomething, 1000);
        }
    
        // sometime later this gets called, when the object is being destroyed...
        public void onDestroy() {
            mHandler.removeCallbacks(this::doSomething);  // uh-oh, doesn't do what we want
        }
    }
    

    Do you see the problem? Every time I write this::doSomething, Java creates a new instance of an anonymous class (a Runnable, in this case), which means this::doSomething != this::doSomething (even though that expression doesn't compile verbatim), which means my pending callbacks are not removed. Ugh. This code doesn't do what I intended.

    In my mind, this is a big design error: it violates the principle of least astonishment for programming language design. Even though this::doSomething looks like an innocuous reference to a method, it actually creates a new instance of Runnable. What's more, the Runnable holds a reference to its enclosing object! This is really bad: it makes wrong code look right.

    The fix here was simple: hold a reference to the created Runnable and pass in the same reference to removeCallbacks. But now I have to always remember that I cannot just pass method references around with impunity, like I'm used to!
    The Runnable created by this::doSomething holds an implicit reference to the enclosing instance, and you know what the enclosing instance is: an Activity! That makes the Activity stick around on the heap until the Runnable is GC'ed.

    Moral of the story: avoid method references, use lambdas instead (or use method references only if you trust yourself to be super careful at all times). You may have to type () -> this.doSomething() instead, but at least it'll be immediately obvious that you're creating a new object, and you won't get an accidental memory leak because of something so simple.
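    A hedged sketch of the fix described above: create the callback once and pass the same instance to both postDelayed and removeCallbacks (the field name is illustrative):
    private final Runnable doSomethingCallback = this::doSomething;

    public void onCreate(/* ... */) {
        mHandler = new Handler(Looper.getMainLooper());
        mHandler.postDelayed(doSomethingCallback, 1000);
    }

    public void onDestroy() {
        mHandler.removeCallbacks(doSomethingCallback);   // same instance, so it really is removed
    }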

    https://janatechnology.wordpress.com/2016/12/30/dealing-with-memory-leaks-from-anonymous-classes-in-android/
    This is all well and good until they last longer than the context around them.
    public class LeakyThreadActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_leak);
            final TextView view = (TextView) findViewById(R.id.textView);
            new Thread() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            view.setText("Hello");
                        }
                    });
                }
            }.start();
        }

    }
    If the activity is destroyed before the thread has finished executing, the activity cannot be garbage collected.
    Creating a thread in this way makes it an anonymous inner class, which therefore has access to the outer class's variables. To maintain this access, it needs a reference to the outer class, or in this case the activity. Thus, if the thread is still active when the activity would normally be removed from memory, that reference keeps the activity reachable and prevents it from ever being cleaned up. This is particularly problematic because activities tend to take up a large amount of memory with things such as view hierarchies.
    Other asynchronous tasks are also culprits of leaks for similar reasons: Handlers, AsyncTasks and more keep references to the outer class that spawned them even after it has reached the end of its lifecycle, preventing it from ever being garbage collected.
    Our inner class needs to hold references to the activity and the TextView so it can change the text as needed, but holding these references causes the activity to leak if it's destroyed while the thread is still running. To fix this, we need to use weak references. Weak references are references that we tell the garbage collector we're not particularly worried about holding on to: if an object is only weakly referenced when a garbage collection runs, it is let go. For our purposes this is just what we want. Instead of passing the context to the inner class as a strong reference, we pass it as a weak reference. Then, when we go to use it within the inner class, we simply check whether it's null (and thus was garbage collected) before we use it. In this way, we allow the activity to be released if it reaches the destruction part of its lifecycle before our inner class has finished executing and been cleaned up.
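    A hedged sketch of that weak-reference approach (UpdateTextTask is an invented name): the view is held only weakly, so the Activity can still be collected:
    static class UpdateTextTask implements Runnable {
        private final WeakReference<TextView> viewRef;

        UpdateTextTask(TextView view) { this.viewRef = new WeakReference<>(view); }

        @Override
        public void run() {
            TextView view = viewRef.get();   // null if the view (and its Activity) was already collected
            if (view != null) {
                view.setText("Hello");
            }
        }
    }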

    http://blog.nimbledroid.com/2016/05/23/memory-leaks.html
    http://www.modelrefactoring.org/smell_catalog/smells/leaking_inner_class.html
    https://www.androiddesignpatterns.com/2013/01/inner-class-handler-memory-leak.html
    public class SampleActivity extends Activity {
    
      private final Handler mLeakyHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
          // ... 
        }
      };
    }
    
    While not readily obvious, this code can cause a massive memory leak. Android Lint will give the following warning:
    In Android, Handler classes should be static or leaks might occur.
    1. When an Android application first starts, the framework creates a Looper object for the application's main thread. A Looper implements a simple message queue, processing Message objects in a loop one after another. All major application framework events (such as Activity lifecycle method calls, button clicks, etc.) are contained inside Message objects, which are added to the Looper's message queue and are processed one-by-one. The main thread's Looper exists throughout the application's lifecycle.
    2. When a Handler is instantiated on the main thread, it is associated with the Looper's message queue. Messages posted to the message queue will hold a reference to the Handler so that the framework can call Handler#handleMessage(Message) when the Looper eventually processes the message.
    3. In Java, non-static inner and anonymous classes hold an implicit reference to their outer class. Static inner classes, on the other hand, do not.
    So where exactly is the memory leak? It’s very subtle, but consider the following code as an example:
    public class SampleActivity extends Activity {
     
      private final Handler mLeakyHandler = new Handler() {
        @Override
        public void handleMessage(Message msg) {
          // ...
        }
      };
     
      @Override
      protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
     
        // Post a message and delay its execution for 10 minutes.
        mLeakyHandler.postDelayed(new Runnable() {
          @Override
          public void run() { /* ... */ }
        }, 1000 * 60 * 10);
     
        // Go back to the previous Activity.
        finish();
      }
    }
    
    When the activity is finished, the delayed message will continue to live in the main thread’s message queue for 10 minutes before it is processed. The message holds a reference to the activity’s Handler, and the Handler holds an implicit reference to its outer class (the SampleActivity, in this case). This reference will persist until the message is processed, thus preventing the activity context from being garbage collected and leaking all of the application’s resources. Note that the same is true with the anonymous Runnable class on line 15. Non-static instances of anonymous classes hold an implicit reference to their outer class, so the context will be leaked.
    To fix the problem, subclass the Handler in a new file or use a static inner class instead. Static inner classes do not hold an implicit reference to their outer class, so the activity will not be leaked. If you need to invoke the outer activity’s methods from within the Handler, have the Handler hold a WeakReference to the activity so you don’t accidentally leak a context. To fix the memory leak that occurs when we instantiate the anonymous Runnable class, we make the variable a static field of the class (since static instances of anonymous classes do not hold an implicit reference to their outer class):
    public class SampleActivity extends Activity {
    
      /**
       * Instances of static inner classes do not hold an implicit
       * reference to their outer class.
       */
      private static class MyHandler extends Handler {
        private final WeakReference<SampleActivity> mActivity;
    
        public MyHandler(SampleActivity activity) {
          mActivity = new WeakReference<SampleActivity>(activity);
        }
    
        @Override
        public void handleMessage(Message msg) {
          SampleActivity activity = mActivity.get();
          if (activity != null) {
            // ...
          }
        }
      }
    
      private final MyHandler mHandler = new MyHandler(this);
    
      /**
       * Instances of anonymous classes do not hold an implicit
       * reference to their outer class when they are "static".
       */
      private static final Runnable sRunnable = new Runnable() {
          @Override
          public void run() { /* ... */ }
      };
    
      @Override
      protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    
        // Post a message and delay its execution for 10 minutes.
        mHandler.postDelayed(sRunnable, 1000 * 60 * 10);
        
        // Go back to the previous Activity.
        finish();
      }
    }
    
    The difference between static and non-static inner classes is subtle, but is something every Android developer should understand. What’s the bottom line? Avoid using non-static inner classes in an activity if instances of the inner class could outlive the activity’s lifecycle. Instead, prefer static inner classes and hold a weak reference to the activity inside.
    https://dzone.com/articles/a-troublesome-legacy-memory-leaks-in-java
    I have been asked how to prevent memory leaks in the real world across several interviews. That made me think: would it not be a more interesting conversation to talk about how you can create a memory leak? 

    Let’s see how we could provoke a few memory leaks:
    • Do not close an open stream. We typically open streams to connect to database pools, to open network connections, or to start reading files. Not closing them creates memory leaks.
    • Use static fields for holding references. A static object is always in memory. If you declare too many static fields for holding references to objects, this will create memory leaks. The bigger the object, the bigger the memory leak.
    • Use a HashSet with an incorrect hashCode() (or none at all) or equals(). That way, the HashSet will keep growing, and objects will be inserted as duplicates! Actually, when I was asked in interviews “what is the purpose of hashCode() and equals() in HashSets”, I always had the same answer: to avoid memory leaks! Maybe not the most pragmatic answer, but it's equally true.
    If you are an Android developer, the possibility of memory leaks increases exponentially. The object Context is mainly used to access and load different resources, and it is passed to many classes and methods as a parameter.
    Imagine the case of a rotating screen. In this scenario, Android destroys the current Activity and tries to recreate the same state it had before the rotation. In many cases, say you do not want to reload a large Bitmap, so you keep a static reference that avoids the Bitmap being reloaded. The problem is that this Bitmap is generally instantiated in a Drawable, which ultimately links to other elements and chains all the way up to the Context, leaking the entire Activity. This is one of the reasons why one should be very careful with static references.
    Non-static inner classes are widely used in Android because they allow us to access the outer class's members without passing its reference directly. However, Android developers often add inner classes to save time, unaware of the effects on memory performance. A simple AsyncTask is created and executed when the Activity is started. But the inner class needs access to the outer class, so a memory leak occurs every time the Activity is destroyed while the AsyncTask is still working. This happens not only when the Activity.finish() method is called, but also when the Activity is destroyed forcibly by the system for configuration changes or memory needs and is then created again. The AsyncTask holds a reference to the Activity, making it unavailable for garbage collection after it is destroyed.
    Think about what happens if the user rotates the device while the task is running: the whole Activity instance needs to stay available until the AsyncTask completes. Moreover, most of the time we want the AsyncTask to put its result on the screen in the AsyncTask.onPostExecute() method. This can lead to crashes because the Activity is destroyed while the task is still working and view references may be null.
    So what is the solution? If we make the inner class static, we cannot access the outer one directly, so we need to pass it a reference. To increase the separation between the two instances and let the garbage collector work properly with the Activity, we use a weak reference to achieve cleaner memory management.
    This way, the classes are separated and the Activity can be collected as soon as it’s no longer used, and the AsyncTask object won’t find the Activity instance inside the WeakReference object and won’t execute the AsyncTask.onPostExecute() method code.
    Together with using References properly, we can use these methods to avoid provoking memory leaks in our code:
    • Avoid using non-static inner classes in your Activities, use a static inner class and make a WeakReference.
    • When you have the option, prefer the Application Context over the Activity Context for long-lived references.
    • In general, never keep long-term references to any kind of Context.
    https://dzone.com/articles/memory-leak-andjava-code
    Example 1: Autoboxing
    
    Long sum = 0L;                 // boxed Long instead of primitive long
    for (long l = 0; l < 1000; l++) {
        sum = sum + l;             // unboxes sum, adds l, and boxes a brand-new Long each iteration
    }
    return sum;
    Example 4: Using CustomKey
    Since in CustomKey we forgot to provide equals() and hashCode() implementations, a key and value stored in the map can't be retrieved later, as the map's get() method checks hashCode() and equals(). But the entry cannot be garbage collected either, as the map still holds a reference to it while the application can no longer reach it. Definitely a memory leak.
    So when you create a custom key, always provide equals() and hashCode() implementations.
    Example 5: Mutable Custom Key
    Although here we provided equals() and hashCode() for the custom key, we made the key mutable and it was unintentionally changed after being stored in the map. Once its properties change, the entry can never be found again by the application, but the map still holds a reference to it, so a memory leak happens.
    Always make your custom key immutable.
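    A hedged sketch of the mutable-key problem (this CustomKey is an illustrative implementation, not the article's exact code):
    class CustomKey {
        String id;
        CustomKey(String id) { this.id = id; }
        @Override public int hashCode() { return id.hashCode(); }
        @Override public boolean equals(Object o) {
            return o instanceof CustomKey && ((CustomKey) o).id.equals(id);
        }
    }

    Map<CustomKey, String> map = new HashMap<>();
    CustomKey key = new CustomKey("a");
    map.put(key, "value");
    key.id = "b";                  // mutated after insertion: the stored hash no longer matches the key
    map.get(new CustomKey("a"));   // null - right bucket, but the stored key no longer equals "a"
    map.get(new CustomKey("b"));   // null (usually) - looks in a different bucket
    // the entry is still referenced by the map but unreachable through get(): a leak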


    https://wiki.apache.org/tomcat/MemoryLeakProtection
    Mar 16, 2010 11:47:24 PM org.apache.catalina.loader.WebappClassLoader clearThreadLocalMap
    SEVERE: A web application created a ThreadLocal with key of type [test.MyThreadLocal] (value [test.MyThreadLocal@4dbb9a58]) and a value of type [test.MyCounter] (value [test.MyCounter@57922f46]) but failed to remove it when the web application was stopped. To prevent a memory leak, the ThreadLocal has been forcibly removed.


    Note: this particular leak was actually already cured by previous versions of Tomcat 6, because static references of classes loaded by the WebappClassLoader are nullified (see later).
    http://blog.xiaohansong.com/2016/08/09/ThreadLocal-leak-analyze
    In Tomcat, the following code, all of it inside the webapp, will cause the WebappClassLoader to leak so that it can never be collected.
    public class MyCounter {
            private int count = 0;
    
            public void increment() {
                    count++;
            }
    
            public int getCount() {
                    return count;
            }
    }
    
    public class MyThreadLocal extends ThreadLocal<MyCounter> {
    }
    
    public class LeakingServlet extends HttpServlet {
            private static MyThreadLocal myThreadLocal = new MyThreadLocal();
    
            protected void doGet(HttpServletRequest request,
                            HttpServletResponse response) throws ServletException, IOException {
    
                    MyCounter counter = myThreadLocal.get();
                    if (counter == null) {
                            counter = new MyCounter();
                            myThreadLocal.set(counter);
                    }
    
                    response.getWriter().println(
                                    "The current thread served this servlet " + counter.getCount()
                                                    + " times");
                    counter.increment();
            }
    }
    
    In the code above, as soon as LeakingServlet has been invoked once and the thread that executed it has not stopped, the WebappClassLoader leaks. Every time you reload the application, one more WebappClassLoader instance accumulates, eventually leading to a PermGen OutOfMemoryError.



    http://www.cnblogs.com/taney/p/5469500.html
    public interface IPublisher
    {
        void Subscribe(ISubscriber sub);
        void UnSubscribe(ISubscriber sub);
        void Notify();
    }
    
    public interface ISubscriber
    {
        void OnNotify();
    }
    
    public class Subscriber : ISubscriber
    {
        public String Name { get; set; }
        public void OnNotify() { Console.WriteLine($"{this.Name} received a notification"); }
    }
    
    public class Publisher : IPublisher
    {
        private List<ISubscriber> _subscribers = new List<ISubscriber>();
        public void Notify() { foreach (var s in this._subscribers) s.OnNotify(); }
        public void Subscribe(ISubscriber sub) { this._subscribers.Add(sub); }
        public void UnSubscribe(ISubscriber sub) { this._subscribers.Remove(sub); }
    }
    
    static void Main(string[] args)
    {
        IPublisher pub = new Publisher();
        AttachSubscribers(pub);
        pub.Notify();
        GC.Collect();
        Console.WriteLine("Garbage collection finished");
        pub.Notify();
        Console.ReadKey();
    }
    
    static void AttachSubscribers(IPublisher pub)
    {
        var sub1 = new Subscriber { Name = "Subscriber A" };
        var sub2 = new Subscriber { Name = "Subscriber B" };
        pub.Subscribe(sub1);
        pub.Subscribe(sub2);
        // Setting these to null makes no real difference here; it is only to emphasize the effect
        sub1 = null;
        sub2 = null;
    }
    In the AttachSubscribers method, two subscribers are created and subscribed. Both are created locally and there is no intention of referencing them from outside, so they should be collected at some point soon. But because they also sit in the publisher's subscriber list, the publisher "owns" them: even though they are no longer useful, they will not be destroyed for now. If the publisher lives forever, these useless subscribers can never be reclaimed. So why not just call UnSubscribe? Because in practice things can be complicated and the right moment to call UnSubscribe may be hard to determine; besides, the publisher's job is to register and notify subscribers, so it should not "own" them or decide their fate. For this situation, a weak reference can be used to achieve "non-owning" registration.

    Weak References

    A weak reference is a wrapper type that gives indirect access to the wrapped object without creating an actual (strong) reference to it, so it does not prevent the wrapped object from being collected.
    public class WeakPublisher : IPublisher
    {
        private List<WeakReference<ISubscriber>> _subscribers = new List<WeakReference<ISubscriber>>();
    
        public void Notify()
        {
            for (var i = 0; i < this._subscribers.Count();)
            {
                ISubscriber s;
                if (this._subscribers[i].TryGetTarget(out s))
                {
                    s.OnNotify();
                    ++i;
                }
                else
                    this._subscribers.RemoveAt(i);
            }
        }
    
        public void Subscribe(ISubscriber sub)
        {
            this._subscribers.Add(new WeakReference<ISubscriber>(sub));
        }
    
        public void UnSubscribe(ISubscriber sub)
        {
            for (var i = 0; i < this._subscribers.Count(); ++i)
            {
                ISubscriber s;
                if (this._subscribers[i].TryGetTarget(out s) && Object.ReferenceEquals(s, sub))
                {
                    this._subscribers.RemoveAt(i);
                    return;
                }
            }
        }
    }

    Why so many char[] instances? It was hard to tell anything from that alone, so I took a heap dump. After reading through a few blog posts, it looked like a class loader problem, so I ran an OQL query. (Because this was Jetty, I queried Jetty's WebAppClassLoader; if you use Tomcat or another container there will be a corresponding loader. You can see which ClassLoader types exist via Saved Queries -> PermGen Analysis -> ClassLoader Types.)
    http://blog.xiaohansong.com/2016/08/06/ThreadLocal-memory-leak/
    Comparing the two cases, we can see that because the ThreadLocalMap has the same lifetime as its Thread, failing to manually remove the corresponding key leads to a memory leak in either case. Using a weak reference, however, adds one extra layer of protection: the weakly referenced ThreadLocal itself does not leak, and the corresponding value is cleared the next time the ThreadLocalMap's set, get or remove is called.
    Therefore the root cause of ThreadLocal memory leaks is that the ThreadLocalMap lives as long as its Thread: if the corresponding key is not removed manually, a leak results. The weak reference is not the cause.


    • Every time you finish using a ThreadLocal, call its remove() method to clear the data.
    When thread pools are in use, failing to clean up ThreadLocals in time is not just a memory leak problem; worse, it can break business logic. So treat ThreadLocal the way you treat a lock: just as you unlock after locking, clean up after using it.
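    A hedged sketch of that rule: clear the ThreadLocal in a finally block when the task runs on a pooled thread (CONTEXT and RequestContext are illustrative names):
    executor.submit(() -> {
        CONTEXT.set(new RequestContext());
        try {
            handleRequest();
        } finally {
            CONTEXT.remove();   // the pooled thread lives on, so remove the value explicitly
        }
    });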
