Monday, February 29, 2016

Testing Misc

%r represents the number of milliseconds since the JVM started, not necessarily when the Layout was created. The value is calculated by calling ManagementFactory.getRuntimeMXBean().getStartTime() when the pattern converter is created, and then subtracting that start time from each event's timestamp. Given that the start time never changes, this value should grow over time, as you are describing.
Log4j doesn't have any way to get the time the request was started. You could capture that in a ThreadContext value and then create your own pattern converter to use that value as the value to subtract from the current system time.
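A minimal sketch of the capture side, using a plain ThreadLocal as a stand-in for Log4j's ThreadContext (RequestTimer is a hypothetical helper; a real solution would store the value with ThreadContext.put and read it back in a custom pattern converter):

```java
public class RequestTimer {
    // stand-in for Log4j's ThreadContext: store the request start time per thread
    private static final ThreadLocal<Long> START = new ThreadLocal<>();

    // call this at the start of each request, e.g. in a servlet filter
    public static void requestStarted() {
        START.set(System.currentTimeMillis());
    }

    // a custom pattern converter would do this subtraction per log event
    public static long elapsedMillis() {
        Long start = START.get();
        return start == null ? -1 : System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        requestStarted();
        System.out.println(elapsedMillis() >= 0);
    }
}
```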
You should use Hamcrest's greaterThan for this case; Mockito's gt is for verifying arguments of method calls on mock objects:
    private List<Integer> list = Mockito.mock(List.class);

    @Test
    public void testGreaterThan() throws Exception {
        assertThat(17, is(org.hamcrest.Matchers.greaterThan(10)));
    }

This is implemented by running the test method in a separate thread. If the test runs longer than the allotted timeout, the test will fail and JUnit will interrupt the thread running the test. If a test times out while executing an interruptible operation, the thread running the test will exit (if the test is in an infinite loop, the thread running the test will run forever, while other tests continue to execute).

The timeout specified in the Timeout rule applies to the entire test fixture, including any @Before or @After methods. If the test method is in an infinite loop (or is otherwise not responsive to interrupts) then @After methods will not be called.
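The timeout mechanism described above can be sketched in plain Java; this is a rough approximation of what JUnit does (run the body in a separate thread, fail and interrupt on timeout), not its actual implementation:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
    // returns true if the body finished within the timeout, false otherwise
    static boolean runWithTimeout(Runnable body, long millis) throws Exception {
        ExecutorService es = Executors.newSingleThreadExecutor();
        Future<?> f = es.submit(body);
        try {
            f.get(millis, TimeUnit.MILLISECONDS);
            return true;                 // finished in time
        } catch (TimeoutException e) {
            f.cancel(true);              // interrupt the thread running the body
            return false;
        } finally {
            es.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        // a fast body passes
        System.out.println(runWithTimeout(() -> {}, 200));
        // a body that sleeps past the timeout fails
        System.out.println(runWithTimeout(() -> {
            try { Thread.sleep(1_000); } catch (InterruptedException ignored) {}
        }, 100));
    }
}
```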
Callable<Long> task = new Callable<Long>() {
            public Long call() {
                return domainObject.nextId();
            }
        };
        List<Callable<Long>> tasks = Collections.nCopies(threadCount, task);
        ExecutorService executorService = Executors.newFixedThreadPool(threadCount);
        List<Future<Long>> futures = executorService.invokeAll(tasks);
        List<Long> resultList = new ArrayList<Long>(futures.size());
        // Check for exceptions
        for (Future<Long> future : futures) {
            // Future.get() throws an ExecutionException if the task threw one.
            resultList.add(future.get());
        }
Tip 1 - Life-cycle Manage Your Objects
Objects that have a managed life-cycle are easier to test: the life-cycle allows for set-up and tear-down, which means you can clean up after your test and no spurious threads are left lying around to pollute other tests.

    private ExecutorService executorService;

    public void start() {
        executorService = Executors.newSingleThreadExecutor();
    }

    public void stop() {
        executorService.shutdown();
    }

Tip 2 - Set a Timeout on Your Tests
Bugs in code (as you'll see below) can result in a multi-threaded test never completing, as (for example) you're waiting on some flag that never gets set. JUnit lets you set a timeout on your test.

@Test(timeout = 100) // in case we never get a notification

Tip 3 - Run Tasks in the Same Thread as Your Test
Typically you'll have an object that runs tasks in a thread pool. This means that your unit test might have to wait for the task to complete, but you have no way of knowing when it will complete, so you end up guessing with sleeps.

A trick is to make the task run synchronously, i.e. in the same thread as the test. Here this can be achieved by injecting the executor:

Then you can use a synchronous executor service (similar in concept to a SynchronousQueue) in the test, giving an updated test that doesn't need to sleep.
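A minimal sketch of such a synchronous executor service, built on the JDK's AbstractExecutorService so that submit() works unchanged (the class name here is an assumption matching the test code later in these notes):

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// an ExecutorService that runs every task immediately in the caller's thread
public class SynchronousExecutorService extends AbstractExecutorService {
    private volatile boolean shutdown;

    @Override public void execute(Runnable command) { command.run(); }
    @Override public void shutdown() { shutdown = true; }
    @Override public List<Runnable> shutdownNow() { shutdown = true; return Collections.emptyList(); }
    @Override public boolean isShutdown() { return shutdown; }
    @Override public boolean isTerminated() { return shutdown; }
    @Override public boolean awaitTermination(long timeout, TimeUnit unit) { return true; }

    public static void main(String[] args) throws Exception {
        SynchronousExecutorService es = new SynchronousExecutorService();
        Future<Integer> f = es.submit(() -> 42);
        // the task has already run by the time submit() returns
        System.out.println(f.isDone()); // true
        System.out.println(f.get());    // 42
    }
}
```

Because execute() runs the task before returning, the test can assert on results right after submitting, with no sleeps or latches.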
Tip 4 - Extract the Work from the Threading
If your thread is waiting for an event, or a time before it does any work, extract the work to its own method and call it directly.
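A sketch of the extraction (Worker and doWork are illustrative names): the run loop handles timing and interruption, while the actual work lives in its own method that a test can call directly without starting any thread.

```java
public class Worker implements Runnable {
    private int processed;

    @Override
    public void run() {
        // the loop only schedules the work; it contains no logic worth testing
        while (!Thread.currentThread().isInterrupted()) {
            doWork();
            try {
                Thread.sleep(1_000);
            } catch (InterruptedException e) {
                return;
            }
        }
    }

    // extracted so a test can exercise the logic synchronously
    void doWork() {
        processed++;
    }

    int processed() {
        return processed;
    }

    public static void main(String[] args) {
        Worker worker = new Worker();
        worker.doWork();
        worker.doWork();
        System.out.println(worker.processed()); // 2
    }
}
```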

Tip 5 - Notify State Change via Events
An alternative to the previous two tips is to use a notification system, so your test can listen to the threaded object.

    public void update(final Observable o, final Object arg) {
        assert o == sut;
    }
Pros:
  1. Creates useful code for listening to the object.
  2. Can take advantage of existing notification code, which makes it a good choice where that already exists.
  3. Is more flexible; it can apply to both tasks and process-oriented code.
  4. It is more cohesive than extracting the work.
Cons:
  1. Listener code can be complex and introduce its own problems, creating additional production code that ought to be tested.
  2. De-couples submission from notification.
  3. Requires you to deal with the scenario where no notification is sent (e.g. due to a bug).
  4. Test code can be quite verbose and therefore prone to having bugs.
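A minimal sketch of the event approach using java.util.Observable (Counter is a hypothetical object under test); the test awaits the notification instead of sleeping:

```java
import java.util.Observable;
import java.util.concurrent.CountDownLatch;

// the object under test notifies observers when its state changes
public class Counter extends Observable {
    private int value;

    public void incr() {
        value++;
        setChanged();
        notifyObservers(value);
    }

    public static void main(String[] args) throws Exception {
        Counter sut = new Counter();
        CountDownLatch done = new CountDownLatch(1);
        // the "test" listens for the state change
        sut.addObserver((o, arg) -> done.countDown());
        sut.incr();
        done.await(); // returns as soon as the notification arrives
        System.out.println("notified with value " + sut.value);
    }
}
```

Combined with a test timeout (Tip 2), this covers the case where the notification never arrives.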
@com.carrotsearch.randomizedtesting.annotations.Repeat(iterations = 5)
The test execution follows a certain life cycle. And each phase of that life cycle that can be extended is represented by an interface. Extensions can express interest in certain phases in that they implement the corresponding interface(s).
With the @ExtendWith annotation a test method or class can declare that it requires a certain extension at runtime. All extensions have a common super interface: ExtensionPoint. The type hierarchy of ExtensionPoint lists all the places where extensions can currently hook in.

Rules are used to add additional functionality which applies to all tests within a test class, but in a more generic way.
For instance, ExternalResource executes code before and after a test method, without having to use @Before and @After. Using an ExternalResource rather than @Before and @After gives opportunities for better code reuse; the same rule can be used from two different test classes.
It is possible to implement such a loop with TestRules (since JUnit 4.9).
A very simple implementation that runs every Test 10 times:
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class SimpleRepeatRule implements TestRule {

    private static class SimpleRepeatStatement extends Statement {

        private final Statement statement;

        private SimpleRepeatStatement(Statement statement) {
            this.statement = statement;
        }

        @Override
        public void evaluate() throws Throwable {
            for (int i = 0; i < 10; i++) {
                statement.evaluate();
            }
        }
    }

    @Override
    public Statement apply(Statement statement, Description description) {
        return new SimpleRepeatStatement(statement);
    }
}

public class Run10TimesTest {

   @Rule
   public SimpleRepeatRule repeatRule = new SimpleRepeatRule();

   @Test
   public void myTest() {...}
}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class RepeatRule implements TestRule {

  @Retention( RetentionPolicy.RUNTIME )
  @Target( {
    ElementType.METHOD
  } )
  public @interface Repeat {
    public abstract int times();
  }

  private static class RepeatStatement extends Statement {

    private final int times;
    private final Statement statement;

    private RepeatStatement( int times, Statement statement ) {
      this.times = times;
      this.statement = statement;
    }

    @Override
    public void evaluate() throws Throwable {
      for( int i = 0; i < times; i++ ) {
        statement.evaluate();
      }
    }
  }

  @Override
  public Statement apply(
    Statement statement, Description description )
  {
    Statement result = statement;
    Repeat repeat = description.getAnnotation( Repeat.class );
    if( repeat != null ) {
      int times = repeat.times();
      result = new RepeatStatement( times, statement );
    }
    return result;
  }
}
This might also be caused if you try to verify a method which expects primitive arguments with any():
For example, if our method has this signature:
method(long l, String s);
And you try to verify it like this, it will fail with the aforementioned message:
verify(service).method(any(), anyString());
Change it to anyLong() and it will work:
verify(service).method(anyLong(), anyString());
As of JUnit 4.7 and Mockito 1.10.17, this functionality is built in; there is an org.mockito.junit.MockitoRule class. You can simply import it and add the line
@Rule public MockitoRule mockitoRule = MockitoJUnit.rule();
For older versions of Mockito (down to 1.10.5 it seems), you have to use: @Rule public MockitoJUnitRule mockito = new MockitoJUnitRule(this);
        thrown.expectMessage(is("Name is empty!"));

        //test detail
        thrown.expect(hasProperty("errCode"));  // make sure getters and setters are defined
        thrown.expect(hasProperty("errCode", is(666)));
1. Testing algorithms together with coordinators. Algorithm-style code (logic and calculations) and coordinator-style code (wiring and delegation) are easier to test when kept separate; mixing them in one method forces every test to deal with both.
2. Mocking too much. Perhaps the greatest benefit of unit tests is that they force you to write code that can be tested in isolation. In other words, your code becomes modular. When you mock the whole world around your objects, there is nothing that forces you to separate the parts. You end up with code where you can’t create anything in isolation – it is all tangled together. From a recent tweet by Bill Wake:  “It’s ironic – the more powerful the mocking framework, the less pressure you feel to improve your design.”
How can you unit test private methods? If you google this question, you find several different suggestions: test them indirectly, extract them into their own class and make them public there, or use reflection to test them. All these solutions have flaws. My preference is to simply remove the private modifier and make the method package private.

Either the private method does something interesting, and then it should be unit tested in its own right, or it doesn’t do anything interesting, and then it doesn’t need to be unit tested at all.
Furthermore, this doesn’t work well if you practice Test Driven Development (TDD). If you take a bottom up approach, and develop and test the building blocks before putting them together, you often don’t have the public method ready when you are developing the helper methods. Thus you don’t get the benefits of testing while developing
$ npm install -g json-server
$ json-server --watch db.json
Gatling tool

The Grinder sometimes competes with JMeter in software companies. Developers usually like The Grinder because it's devops-friendly: you write test plans using code instead of a GUI. This tool is hosted on SourceForge.
Tsung is a multi-protocol, distributed stress testing tool. It is developed in Erlang, an open-source language made by Ericsson for building robust fault-tolerant distributed applications.
Supported protocols include HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP and Jabber/XMPP. Tsung's purpose, like that of any other load and stress testing tool, is to simulate users in order to test the scalability and performance of IP-based client/server applications.

    It can be distributed on several client machines and is able to simulate hundreds of thousands of virtual users concurrently.
    The main idea is this: a test case that runs in an exactly identical way every time it is run only covers that single execution path. Such tests are very good for verifying if any changes of behavior have happened when new code has been introduced (regression testing) or to assert on corner cases. These "fixed" tests do not bring any new insight into how the program behaves for previously unseen combinations of input arguments, components or environmental settings. And because for complex (or any) software such interactions are hard to predict in advance (think those buffer underruns, null pointers, etc.) running your tests on as many different input combinations as possible should over time increase the confidence that the software is robust and reliable.
    The question how to implement the above concept of "different execution every time" and how to assert on conditions in such case can be solved in many ways. RandomizedRunner provides an implementation of java.util.Random which is initialized with a random seed that is reported (injected into a stack trace) in case of a test failure. So if a test fails it should be, at least theoretically, repeatable if started from the same seed.
    In a randomized test case, the execution will be different every time. For the add method we may randomize the arguments (within their contract bounds) and verify if the outcome satisfies some conditions. For this example, let's say the result of adding two non-negative integers shouldn't be smaller than any of the arguments:
    @Test
    public void randomizedTesting() {
      // Here we pick two positive integers. Note superclass utility methods.
      int a = randomIntBetween(0, Integer.MAX_VALUE);
      int b = randomIntBetween(0, Integer.MAX_VALUE);
      int result = Adder.add(a, b);
      assertTrue(result + " < (" + a + " or " + b + ")?", result >= a && result >= b);
    }
    This test passes most of the time, but occasionally it will fail due to integer overflow.
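The overflow failure can be reproduced deterministically in plain Java (this add method and OverflowDemo are illustrative stand-ins for the Adder in the tutorial):

```java
public class OverflowDemo {
    // naive addition, as the Adder under test presumably does it
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        int a = Integer.MAX_VALUE;
        int b = 1;
        int result = add(a, b);
        // the sum wraps around to Integer.MIN_VALUE, a negative number,
        // so the "result >= a && result >= b" assertion fails
        System.out.println(result);                       // -2147483648
        System.out.println(result >= a && result >= b);   // false
    }
}
```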
    Once a failing execution has been caught it's easy to repeat it (that's the whole point!). Note the first line of the stack trace: it contains the master randomization seed picked for the execution: 2300CE9BBBCFF4C8:573D00C2ABB4AD89. The first number is the "master" seed used in the static suite context (class initializers, @BeforeClass and @AfterClass hooks); the second number is the seed derived from the master and used in a test context. To repeat the exact same failing execution we could either override the seeds using system properties, as in:
    or we could annotate the class/method in question and fix the seed to a particular value; for instance by adding an annotation to the class:
    After doing so, we would be set for a debugging session to see what the cause of the problem was. The above example is part of a walk-through tutorial available in progressive difficulty.
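The repeatability rests on java.util.Random being deterministic for a fixed seed, which a few lines demonstrate (reusing the master seed value from the stack trace above purely as an example):

```java
import java.util.Random;

public class SeedDemo {
    public static void main(String[] args) {
        // two generators created from the same seed produce identical sequences
        Random r1 = new Random(0x2300CE9BBBCFF4C8L);
        Random r2 = new Random(0x2300CE9BBBCFF4C8L);
        for (int i = 0; i < 5; i++) {
            System.out.println(r1.nextInt() == r2.nextInt()); // true every time
        }
    }
}
```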
            //true, check null
            assertThat(null, is(nullValue()));
            //true, check not null
            assertThat("a", is(notNullValue()));
    pip install locustio
    brew install libevent

    Locust requires Python 2.7+. It is not currently compatible with Python 3.x.

    Alternatively, use the ExpectedException rule. This rule lets you indicate not only what exception you are expecting, but also the exception message you are expecting:

    @Rule
    public ExpectedException thrown = ExpectedException.none();

    @Test
    public void shouldTestExceptionMessage() throws IndexOutOfBoundsException {
        List<Object> list = new ArrayList<Object>();
        thrown.expectMessage("Index: 0, Size: 0");
        list.get(0); // execution will never get past this line
    }
    @Test(expected = IndexOutOfBoundsException.class)
    public void empty() {
         new ArrayList<Object>().get(0);
    }

    @Test
    public void testExceptionMessage() {
        try {
            new ArrayList<Object>().get(0);
            fail("Expected an IndexOutOfBoundsException to be thrown");
        } catch (IndexOutOfBoundsException anIndexOutOfBoundsException) {
            assertThat(anIndexOutOfBoundsException.getMessage(), is("Index: 0, Size: 0"));
        }
    }
    During development, you may run a single test class repeatedly. To run this through Maven, set the test property to a specific test case.
    mvn -Dtest=TestCircle test

    Spring + JUnit
    JUnit 4.11 doesn't work with the Spring unit test framework. A known-good combination:
    1. JUnit 4.12
    2. Hamcrest 1.3
    3. Spring 4.3.0.RELEASE
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = {AppConfig.class})
    public class MachineLearningTest {}
    Eclipse JUnit - possible causes of seeing “initializationError” in Eclipse window
    You've probably got one of two problems:
    1) You're using JUnit 4.11, which doesn't include hamcrest. Add the hamcrest 1.3 library to your classpath.
    2) You've got hamcrest 1.3 on your classpath, but you've got another version of either junit or hamcrest on your classpath.
    For background, JUnit before 4.11 included a cut-down version of hamcrest 1.1; 4.11 removed these classes.

    @BeforeSuite annotated method represents an event before the suite starts, so all the @BeforeSuite methods will be executed before the first test declared within the test element is invoked.

    When we run testng.xml, we see the order in which the annotated methods are fired. The very first methods that run are the @BeforeSuite methods. Since each of the test classes has one @BeforeSuite, both are run consecutively, and only then are the other annotated methods fired, starting with the @BeforeTest methods.
     public static void testSuite() {
        TestNG testNG = new TestNG();
        List<Class> listenerClasses = new ArrayList<Class>();
        List<String> suiteNameList = new ArrayList<String>();
        Class[] classList = new Class[]{ /* test classes to run */ };
        testNG.setTestClasses(classList);
        testNG.run();
    }
    The class below is a placeholder for the suite annotations; no other implementation is required. Note the @RunWith annotation, which specifies that the JUnit 4 test runner to use is org.junit.runners.Suite for running this particular test class. This works in conjunction with the @Suite.SuiteClasses annotation, which tells the Suite runner which test classes to include in this suite and in which order.

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        TestFeatureLogin.class,
        TestFeatureLogout.class,
        TestFeatureNavigate.class
    })
    public class FeatureTestSuite {
      // the class remains empty,
      // used only as a holder for the above annotations
    }
    • Avoid any_instance in rspec-mocks and mocha. Prefer dependency injection.
    • Avoid its, specify, and before in RSpec.
    • Avoid let (or let!) in RSpec. Prefer extracting helper methods, but do not re-implement the functionality of let.
    • Avoid using subject explicitly inside of an RSpec it block.
    • Avoid using instance variables in tests.
    • Disable real HTTP requests to external services with WebMock.disable_net_connect!.
    • Don't test private methods.
    • Test background jobs with a Delayed::Job matcher.
    • Use stubs and spies (not mocks) in isolated tests.
    • Use a single level of abstraction within scenarios.
    • Use an it example or test method for each execution path through the method.
    • Use assertions about state for incoming messages.
    • Use stubs and spies to assert you sent outgoing messages.
    • Use a Fake to stub requests to external services.
    • Use integration tests to execute the entire app.
    • Use non-SUT methods in expectations when possible.
    5. Set up Clean Environment for Each Test
    6. Use Mock Objects To Test Effectively
    7. Refactor Tests When You Refactor the Code
    8. Write Tests Before Fixing a Bug
    • Unit tests as opposed to say Integration tests, are usually meant to test a single Class.
    • Any class dependencies therefore should be removed from the equation (so that if a class dependency is buggy, the unit test won't be affected by it), which is done by mocking them away.
    • Mocks are "fake" objects, copying all the public methods / members of the original class, while returning "default" values such as null for an object, or 0 for an int.
    • Mocks can be told how they should behave, e.g. what a method should return when called on the mock. This is done with `when/then`.
    • Mocks allow you to test your code's interaction with a dependency, by mocking the dependency and using `verify`.
    • If the class under question is initializing a dependency directly (e.g calling `new Dependency()`), you won't be able to easily test interactions with that dependency. One easy way to fix that is to not initialize that dependency in the class, but rather receive it as a constructor argument, which then can be mocked in the test.
    • One way to make sure a test is good, meaning it actually tests what it's supposed to test, is to momentarily comment out / edit the original code under test in a way that should make the test fail, and make sure the test indeed fails, in an appropriate manner.
    • Naming unit tests properly is important and pretty easy, simply describe what is being tested, e.g. "throwsAnInvalidArgumentExceptionWhenIdIsIllegal"
    • Ideally each test should only have one way to fail, meaning it tests a single thing
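The constructor-injection point above can be sketched as follows (Service, Dependency and FakeDependency are hypothetical names): because the dependency arrives through the constructor instead of being created with `new` inside the class, a test can pass in a fake or a mock.

```java
interface Dependency {
    String name();
}

// receives its dependency instead of constructing it, so tests can substitute one
public class Service {
    private final Dependency dependency;

    public Service(Dependency dependency) {
        this.dependency = dependency;
    }

    public String greet() {
        return "Hello, " + dependency.name();
    }

    public static void main(String[] args) {
        // a hand-written fake stands in for the real dependency
        Service service = new Service(new FakeDependency());
        System.out.println(service.greet()); // Hello, test
    }
}

class FakeDependency implements Dependency {
    public String name() {
        return "test";
    }
}
```

With a mocking framework, `new FakeDependency()` would be replaced by `Mockito.mock(Dependency.class)` plus a `when/then` stub.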
     Test only one code unit at a time
    Don’t make unnecessary assertions
    Make each test independent to all the others
    Mock out all external services and state
    Don’t unit-test configuration settings
    Name your unit tests clearly and consistently
    You must name your test cases for what they actually do and test. A naming convention that derives test-case names from class and method names is never a good idea: every time you change a method name or class name, you end up updating a lot of test cases as well.
    But if your test-case names are logical, i.e. based on operations, you will need almost no modification, because the application logic will most likely remain the same.
    E.g. Test case names should be like:
    1) TestCreateEmployee_NullId_ShouldThrowException
    2) TestCreateEmployee_NegativeId_ShouldThrowException
    3) TestCreateEmployee_DuplicateId_ShouldThrowException
    4) TestCreateEmployee_ValidId_ShouldPass
    Write tests for methods that have the fewest dependencies first, and work your way up
    All methods, regardless of visibility, should have appropriate unit tests
    Aim for each unit test method to perform exactly one assertion
    Create unit tests that target exceptions
    Use the most appropriate assertion methods.
    Put assertion parameters in the proper order
    Assert methods usually take two parameters: the expected value and the actual value. Pass them in that order; this ensures the failure message reads correctly if something goes wrong.
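A minimal sketch of why the order matters; this assertEquals is a simplified stand-in for JUnit's, showing how the failure message labels each argument:

```java
public class AssertOrderDemo {
    // the first value is reported as "expected", the second as "actual"
    static void assertEquals(String message, Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                message + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        try {
            assertEquals("size", 3, new java.util.ArrayList<>().size());
        } catch (AssertionError e) {
            // with swapped arguments this would misreport as "expected <0> but was <3>"
            System.out.println(e.getMessage()); // size: expected <3> but was <0>
        }
    }
}
```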
    Ensure that test code is separated from production code
    Do not print anything out in unit tests
    Do not use static members in a test class
    Do not write your own catch blocks that exist only to fail a test
    Do not rely on indirect testing
    Integrate Testcases with build script
    Do not skip unit tests
    Capture results using the XML formatter
    @Test(timeout = 100) // in case we never get a notification

    Tip 3 - Run Tasks in the Same Thread as Your Test

    Typically you'll have an object that runs tasks in a thread pool. This means that your unit test might have to wait for the task to complete, but you're not able to know when it would complete. You might guess, for example:

        private final AtomicLong foo = new AtomicLong();

        public void incr() {
            executorService.submit(new Runnable() {
                public void run() {
                    foo.incrementAndGet();
                }
            });
        }

        public long get() {
            return foo.get();
        }

    Consider this test:

        private Foo sut; // system under test

        public void setUp() throws Exception {
            sut = new Foo();
        }

        public void tearDown() throws Exception {
            sut.stop();
        }

        public void testGivenFooWhenIncrementGetOne() throws Exception {
            sut.incr();
            Thread.sleep(1000); // yuk - a slow test - don't do this
            assertEquals("foo", 1, sut.get());
        }

    An updated test that doesn't need to sleep:

        private Foo sut; // system under test
        private ExecutorService executorService;

        public void setUp() throws Exception {
            executorService = new SynchronousExecutorService();
            sut = new Foo(executorService);
        }

        public void tearDown() throws Exception {
            executorService.shutdown();
        }

        public void testGivenFooWhenIncrementGetOne() throws Exception {
            sut.incr();
            assertEquals("foo", 1, sut.get());
        }