Sunday, July 12, 2015

Better Logging in Java



https://logback.qos.ch/manual/layouts.html
In PatternLayout, parentheses can be used to group conversion patterns. It follows that '(' and ')' carry special meaning and need to be escaped if intended to be used as literals. The special nature of parentheses is further explained below.
[...]
If you need to treat the parenthesis character as a literal, it needs to be escaped by preceding each parenthesis with a backslash. As in, \(%d{HH:mm:ss.SSS} [%thread]\).
You'll need to escape the parenthesis with a \.
%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36}.%M\(%line\) - %msg%n
https://logback.qos.ch/manual/layouts.html
 L / line
Outputs the line number from where the logging request was issued.
Generating the line number information is not particularly fast. Thus, its use should be avoided unless execution speed is not an issue.
m / msg / message
Outputs the application-supplied message associated with the logging event.
M / method
Outputs the method name where the logging request was issued.
Generating the method name is not particularly fast. Thus, its use should be avoided unless execution speed is not an issue.
n
Outputs the platform dependent line separator character or characters.
This conversion word offers practically the same performance as using non-portable line separator strings such as "\n", or "\r\n". Thus, it is the preferred way of specifying a line separator.

https://henrikwarne.com/2014/01/21/session-based-logging/
if (context.isTraceActive()) {
   context.printTrace("MSISDN is missing or empty.");
}
https://www.loggly.com/blog/logging-tips-for-power-users-contextual-logging/

Thread Context Stack

Using the Thread Context Stack, you can place (push) and remove (pop) values at the top of the stack. The Thread Context is managed by the static ThreadContext class, meaning we can modify the Thread Context without having to initialize a ThreadContext object.

Thread Context Map

Like the Thread Context Stack, the Thread Context Map is managed through the static ThreadContext class. Instead of pushing and popping, use the put() and remove() methods to modify values in the Thread Context:

You can clear the Thread Context before the thread is terminated by using ThreadContext.clearStack(), ThreadContext.clearMap(), or ThreadContext.clearAll() for the stack, the map, or both, respectively.
ThreadContext.push(sessionID.toString());
ThreadContext.push(username);

ThreadContext.pop();
ThreadContext.pop();

ThreadContext.put("sessionID", sessionID.toString());
ThreadContext.put("username", username);

ThreadContext.remove("sessionID");
ThreadContext.remove("username");


https://henrikwarne.com/2014/01/27/a-bug-a-trace-a-test-a-twist/
The exception includes the information that the index was -3. This is better than a lot of other Java exceptions, but not as good as it could be. Had the index been too high (e.g. 4 for an array of length 4), you would not know from the exception how big the array was. A good exception or error message should include as much dynamic information as possible. For an ArrayIndexOutOfBoundsException that means including both the offending index and the valid array range. Not including all dynamic information is unfortunately a very common failing of many Java exceptions.
You also need to know what data the algorithm was operating on. Fortunately, in our system we have the option of using Trace on Error. This is a form of session-based logging (called tracing at work, to distinguish it from traditional logging).
The trace included with the error report showed the complete message received, as well as all key steps taken until the error occurred. Among other things, it included the destination MSISDN (phone number).
The hashCode method can (of course) return negative values as well as positive values. Even though I had tested with several different phone numbers, I hadn't picked one big enough to cause the hashCode method to return a negative value.

https://blog.mythsman.com/2018/02/04/1/
So-called logging frameworks actually fall into two categories:
  • Logging facades, which define a common interface so the same code can work with different implementations. The two main specifications are commons-logging and slf4j.
  • Logging implementations, chiefly log4j, log4j2, logback, slf4j-simple, and the java.util.logging package.
The two kinds are of course used together. Apache projects naturally pair up (log4j, log4j2, commons-logging), and so do the QOS.ch projects (logback, slf4j). This is not absolute, though: a facade should support as many implementations as possible, and when the facade and an implementation have different APIs, an adapter layer (the adapter pattern) bridges them. That is exactly how slf4j supports java.util.logging and log4j.
The benefit of unifying logging frameworks behind a facade is obvious: when writing code we only need to know the facade's API (which is usually simple and consistent), not the implementation details of each framework, and we can decide at runtime which logging framework to use, decoupling business logic from the logging system. In dependency terms, with slf4j we only need slf4j-api.jar at compile time (it contains little more than interfaces and a few utilities), and we can then pick an implementation at run time, e.g. slf4j-simple.jar (a runtime dependency).
This usage is much like binding a driver for SQL; both follow the same idea.
But this raises a question: how is this runtime service discovery implemented? There are at least two approaches. One is a configuration file: write the class to load into a config file, then load that class via a ClassLoader at runtime (commons-logging is configured this way). The other is even more convenient, needing no extra configuration file to load classes dynamically: SPI.
SPI

SPI stands for Service Provider Interface, literally an interface for service providers:
  1. The service consumer defines an interface.
  2. The service provider implements that interface, then creates a file in the META-INF/services folder on the classpath; the file name is the fully qualified name of the interface, and its content is the fully qualified name of the implementation class.
  3. The service consumer is guaranteed to search, via ServiceLoader, all jars on the classpath for implementation classes of the given interface and instantiate them.
Clearly, the consumer generally cannot construct the implementation of the interface directly through a constructor; instead, a static factory encapsulates the instantiation.
Then create the resources/META-INF/services folder (the resources folder is already on the classpath), and in it create a file named com.mythsman.test.Ispi with the content com.mythsman.test.SpiImpl, to register the service.
public static Ispi getSpi() {
    ServiceLoader<Ispi> loader = ServiceLoader.load(Ispi.class);
    List<Ispi> ispiList = new ArrayList<>();
    for (Ispi ispi : loader) {
        ispiList.add(ispi);
    }
    if (ispiList.size() > 1) {
        System.err.println("Multiple implementations found.");
        return new SubstituteSpi();
    } else if (ispiList.size() < 1) {
        System.err.println("No implementations found.");
        return new SubstituteSpi();
    } else {
        return ispiList.get(0);
    }
}

The ServiceLoader here searches the classpath for all implementation classes of the Ispi interface and returns an iterator that instantiates them. Of course, we might find several implementations, or none at all; to keep the code robust we usually write a default implementation, SubstituteSpi.java:
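A self-contained sketch of the pieces described above (Ispi and SubstituteSpi are the names from the notes; SpiDemo and its behavior without a provider file are my own assembly for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// The service interface the consumer defines.
interface Ispi {
    String hello();
}

// Fallback used when zero or multiple providers are found.
class SubstituteSpi implements Ispi {
    public String hello() { return "substitute"; }
}

public class SpiDemo {
    // Static factory wrapping ServiceLoader, mirroring getSpi() above.
    public static Ispi getSpi() {
        ServiceLoader<Ispi> loader = ServiceLoader.load(Ispi.class);
        List<Ispi> found = new ArrayList<>();
        for (Ispi ispi : loader) {
            found.add(ispi);
        }
        // Exactly one provider is the happy path; otherwise fall back.
        return found.size() == 1 ? found.get(0) : new SubstituteSpi();
    }

    public static void main(String[] args) {
        // Without a META-INF/services registration file on the classpath,
        // ServiceLoader finds nothing and the factory returns the fallback.
        System.out.println(getSpi().hello()); // prints "substitute"
    }
}
```

A real provider would ship the META-INF/services file named after the interface so ServiceLoader can discover it.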
When first learning JDBC, we were all told to write something like the following:



Class.forName("com.mysql.cj.jdbc.Driver");

Actually this line does nothing by itself: it just loads a class explicitly, effectively reminding us to add the jar. As long as the jar is on the classpath, the line is unnecessary.
If we look inside the mysql-connector-java package, we find it uses exactly the SPI method, registering its com.mysql.cj.jdbc.Driver class for the java.sql.Driver interface. Loading is likewise done through ServiceLoader.

Lombok

Lombok works on a similar principle: it implements javax.annotation.processing.Processor with its own AnnotationProcessor, which lets it process annotations at compile time.
slf4j builds its ecosystem through the methods described above; under the unified management of slf4j-api-xxx.jar it accommodates multiple implementations:
  • slf4j-log4j12-xxx.jar
  • slf4j-jdk14-xxx.jar
  • slf4j-nop-xxx.jar
  • slf4j-simple-xxx.jar
  • slf4j-jcl-xxx.jar
  • logback-classic-xxx.jar
    Among these, slf4j-nop is rather boring: it implements nothing, so no log output is ever produced. slf4j-simple is small, covers basic use, and needs very little configuration. logback-classic is by the same author as slf4j. The rest are essentially adapter layers that bridge slf4j to other implementations.
So how would we write a framework that supports the slf4j contract? It is actually simple: just implement the org.slf4j.spi.SLF4JServiceProvider interface.
Recall that the slf4j API includes MDC. What is it for? The name (Mapped Diagnostic Context) gives it away: it is a map-like context. It addresses the need to save and compute some ad-hoc per-request data.

Bluntly put, it is just a ThreadLocal-scoped ConcurrentHashMap.
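That claim can be illustrated in a few lines of plain Java; MiniMdc below is a toy stand-in, not the real MDC implementation:

```java
import java.util.HashMap;
import java.util.Map;

// A toy MDC: a per-thread key/value map, roughly how slf4j's MDC behaves.
// The name MiniMdc is made up for illustration.
public class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void remove(String key) { CTX.get().remove(key); }
    public static void clear() { CTX.remove(); }

    public static void main(String[] args) throws InterruptedException {
        put("sessionId", "abc123");
        // Another thread sees its own (empty) map, not ours.
        Thread t = new Thread(() -> System.out.println("child sees: " + get("sessionId")));
        t.start();
        t.join();
        System.out.println("parent sees: " + get("sessionId")); // prints "parent sees: abc123"
    }
}
```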
https://www.ezlippi.com/blog/2018/01/slf4j-log4j2.html
log.warn("hello: {}, {}", "A", "B", new RuntimeException("wtf"));
This will print the message AND the exception stacktrace

hello: A, B

java.lang.RuntimeException: wtf
....
log.warn("hello: {}, {}", "A", new RuntimeException("wtf"));
- this will not print the exception stack trace; the exception is consumed by the second {}.

hello: A, java.lang.RuntimeException: wtf


org.slf4j.helpers.MessageFormatter.arrayFormat(String, Object[])
Throwable throwableCandidate = getThrowableCandidate(argArray);
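The decision slf4j makes here can be sketched in plain Java. This is a simplified model of the observed behavior, not the real MessageFormatter code (the actual getThrowableCandidate only inspects the last argument; the placeholder comparison below models the net effect):

```java
// Sketch: the LAST argument, if it is a Throwable, gets its stack trace
// printed -- but only when it is not consumed by a trailing {} placeholder.
public class ThrowableCandidateDemo {
    static int countPlaceholders(String pattern) {
        int n = 0;
        for (int i = pattern.indexOf("{}"); i >= 0; i = pattern.indexOf("{}", i + 2)) {
            n++;
        }
        return n;
    }

    /** Returns the throwable whose stack trace would be printed, or null. */
    static Throwable throwableCandidate(String pattern, Object... args) {
        if (args.length == 0 || !(args[args.length - 1] instanceof Throwable)) {
            return null;
        }
        // More args than placeholders: the trailing Throwable is left over
        // and is logged as a stack trace.
        return args.length > countPlaceholders(pattern)
                ? (Throwable) args[args.length - 1] : null;
    }

    public static void main(String[] args) {
        RuntimeException e = new RuntimeException("wtf");
        // Two placeholders, three args: stack trace is printed.
        System.out.println(throwableCandidate("hello: {}, {}", "A", "B", e) != null); // true
        // Two placeholders, two args: the exception fills the second {}.
        System.out.println(throwableCandidate("hello: {}, {}", "A", e) != null); // false
    }
}
```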


https://unix.stackexchange.com/questions/88808/empty-the-contents-of-a-file
cp /dev/null filename works.
As for efficient, the most efficient would be truncate -s 0 filename; see here: http://linux.die.net/man/1/truncate.
Otherwise, cp /dev/null filename or > filename are both fine. They both open and then close the file, using the truncate-on-open setting. cp also opens /dev/null, so that makes it marginally slower.
While your application is running (and Logback within your application has an open handle to the log file), if you delete the log file then Logback won't know that the file has been deleted (since the Logback process still has an open file handle), but since the file has been deleted Logback cannot actually write anything to disk, and this situation remains until Logback is re-initialised (and your FileAppender recreates the file). Typically, this will be done on application startup.
There's an open issue against Logback requesting a change in Logback's behaviour in this situation.
https://www.javacodegeeks.com/2012/07/how-to-change-logging-level-in-runtime.html
Changing the logging level at runtime is important mainly in production environments, where you might want to have debug logging for a limited amount of time.
The trick is to remember to get all the loggers in the system and change their logging level too.
Logger root = Logger.getRootLogger();
Enumeration allLoggers = root.getLoggerRepository().getCurrentCategories();

//set logging level of root and all logging instances in the system
if ("FATAL".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.FATAL);
    while (allLoggers.hasMoreElements()){
        Category tmpLogger = (Category) allLoggers.nextElement();
        tmpLogger.setLevel(Level.FATAL);
    }
} else if ("ERROR".equalsIgnoreCase(logLevel)) {
    root.setLevel(Level.ERROR);
    while (allLoggers.hasMoreElements()){
        Category tmpLogger = (Category) allLoggers.nextElement();
        tmpLogger.setLevel(Level.ERROR);
    }
}

slf4j
LEVEL enum
https://github.com/qos-ch/slf4j/blob/master/slf4j-api/src/main/java/org/slf4j/event/Level.java

https://kodejava.org/how-do-i-get-the-current-level-of-a-logger/
import java.util.logging.Logger;
import java.util.logging.Level;
static Level getLevel(Logger logger) {
    // A null level means the level is inherited; note this only checks
    // the immediate parent, not the whole ancestor chain.
    Level level = logger.getLevel();
    if (level == null && logger.getParent() != null) {
        level = logger.getParent().getLevel();
    }
    return level;
}

logback multiple appenders different levels
http://stackoverflow.com/questions/18827633/how-to-create-2-different-root-loggers-with-logback
You will not ever have more than one root logger, so your question is a bit misleading. What you are looking for is how to fine-tune which events each of the appenders logs.
And for that, you add a ThresholdFilter to each of the appenders:
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
  <level>INFO</level>
</filter>
Configure level INFO for the FILE-appender and DEBUG for STDOUT.
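A sketch of the full logback.xml this implies (the appender names, file name, and patterns are my own assumptions, not from the answer):

```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <!-- console gets everything from DEBUG up -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>DEBUG</level>
    </filter>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <!-- file only gets INFO and above -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
      <level>INFO</level>
    </filter>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="DEBUG">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
  </root>
</configuration>
```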
https://www.owasp.org/index.php/Poor_Logging_Practice

Logger Not Declared Static Final

Loggers should be declared to be static and final.
It is good programming practice to share a single logger object between all of the instances of a particular class and to use the same logger for the duration of the program.
The following statement errantly declares a non-static logger.
 private final Logger logger =     
    Logger.getLogger(MyClass.class);
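The corrected declaration, sketched here with java.util.logging (which takes a logger name as a String; log4j's getLogger accepts the Class directly):

```java
import java.util.logging.Logger;

public class MyClass {
    // Correct: static and final -- one logger shared by all instances,
    // for the lifetime of the program.
    private static final Logger LOGGER = Logger.getLogger(MyClass.class.getName());

    void doWork() {
        LOGGER.fine("doing work");
    }

    public static void main(String[] args) {
        new MyClass().doWork();
    }
}
```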

https://akhikhl.wordpress.com/2013/07/11/programmatic-configuration-of-slf4jlogback/
Programmatic configuration of slf4j/logback
A program must open a separate log file for each processed input file.
Solution for the task
Instead of configuring logback via XML, one needs to "manually" instantiate encoders, appenders and loggers, then configure and link them together (note that the snippet below uses Groovy syntax).
Caveat 1
Logback goes crazy on attempt to share encoder (i.e. PatternLayoutEncoder) between appenders.
Solution for caveat 1
Create separate encoder for each appender.
Caveat 2
Logback refuses to log anything, if encoders and appenders are not associated with logging context.
Solution for caveat 2
Call setContext on each encoder and appender, passing LoggerFactory as a parameter.
Caveat 3
Logback refuses to log anything, if encoders and appenders are not started.
Solution for caveat 3
encoders and appenders need to be started in the correct order, i.e. first encoders, then appenders.
Caveat 4
RollingPolicy objects (i.e. TimeBasedRollingPolicy) produce strange error messages like “date format not recognized”, when they are not attached to the same context as appender.
Solution for caveat 4
call setContext on the RollingPolicy the same way as on encoders and appenders.
LoggerContext logCtx = LoggerFactory.getILoggerFactory()

PatternLayoutEncoder logEncoder = new PatternLayoutEncoder()
logEncoder.setContext(logCtx)
logEncoder.setPattern('%-12date{YYYY-MM-dd HH:mm:ss.SSS} %-5level - %msg%n')
logEncoder.start()

ConsoleAppender logConsoleAppender = new ConsoleAppender()
logConsoleAppender.setContext(logCtx)
logConsoleAppender.setName('console')
logConsoleAppender.setEncoder(logEncoder)
logConsoleAppender.start()

logEncoder = new PatternLayoutEncoder()
logEncoder.setContext(logCtx)
logEncoder.setPattern('%-12date{YYYY-MM-dd HH:mm:ss.SSS} %-5level - %msg%n')
logEncoder.start()

RollingFileAppender logFileAppender = new RollingFileAppender()
logFileAppender.setContext(logCtx)
logFileAppender.setName('logFile')
logFileAppender.setEncoder(logEncoder)
logFileAppender.setAppend(true)
logFileAppender.setFile('logs/logfile.log')

TimeBasedRollingPolicy logFilePolicy = new TimeBasedRollingPolicy()
logFilePolicy.setContext(logCtx)
logFilePolicy.setParent(logFileAppender)
logFilePolicy.setFileNamePattern('logs/logfile-%d{yyyy-MM-dd_HH}.log')
logFilePolicy.setMaxHistory(7)
logFilePolicy.start()

logFileAppender.setRollingPolicy(logFilePolicy)
logFileAppender.start()

Logger log = logCtx.getLogger("Main")
log.additive = false
log.level = Level.INFO
log.addAppender(logConsoleAppender)
log.addAppender(logFileAppender)

slf4j bridge
http://shengwangi.blogspot.com/2015/09/using-log-mdc-in-multi-thread-helloworld-example.html
MDC is a map-like structure; it can hold information you want to output to the log, so you don't need to add that information to every logger.info(...) call as a parameter.

MDC is thread-safe, so there is no need to worry about concurrency.
In a multi-threaded environment, if the child thread is created the basic way (Thread + Runnable), it automatically inherits the parent's MDC. But when an executor is used for thread management, MDC inheritance is not guaranteed, as the logback documentation says:
A copy of the mapped diagnostic context can not always be inherited by worker threads from the initiating thread. This is the case when java.util.concurrent.Executors is used for thread management.
public class Child implements Runnable {
  private Logger logger = LoggerFactory.getLogger(Child.class);
   
  // contextMap is set when new Child() is called
  private Map<String,String> contextMap = MDC.getCopyOfContextMap();
   
  public void run() {
    MDC.setContextMap(contextMap);  // set contextMap when thread start
    logger.info("Running in the child thread");
  }
}
The Child.java is very simple; it just implements the Runnable interface. One thing worth mentioning is how the MDC context is passed from the parent thread to the child thread. The Child object instance is created in the parent thread, when new Child() is called, immediately before the executor creates the new thread. So the parent's MDC context is duplicated into a variable called contextMap, then set back into the MDC at the beginning of the run() method on the child thread.
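The same capture-and-restore pattern, sketched end to end with a plain ThreadLocal map standing in for slf4j's MDC (MdcPropagationDemo and its field names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Demonstrates that executor worker threads do not inherit ThreadLocal
// state, so the context must be captured in the constructor (which runs
// on the parent thread) and restored at the start of run().
public class MdcPropagationDemo {
    static final ThreadLocal<Map<String, String>> MDC =
            ThreadLocal.withInitial(HashMap::new);

    static class Child implements Runnable {
        // Captured on the PARENT thread, when new Child() runs.
        private final Map<String, String> contextMap = new HashMap<>(MDC.get());

        public void run() {
            MDC.set(contextMap); // restore on the worker thread
            System.out.println("child sees user=" + MDC.get().get("user"));
        }
    }

    public static void main(String[] args) throws Exception {
        MDC.get().put("user", "alice");
        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(new Child()).get(); // prints "child sees user=alice"
        pool.shutdown();
    }
}
```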



debug
<configuration debug="true"> 
https://logging.apache.org/log4j/2.0/faq.html
-Dorg.apache.logging.log4j.simplelog.StatusLogger.level=TRACE
http://stackoverflow.com/questions/4278438/slf4j-logging-with-jdk-how-to-enable-debug
    System.setProperty("org.slf4j.simpleLogger.defaultLogLevel", "trace");
http://logging.apache.org/log4j/1.2/manual.html
Problem: You cannot find out which log4j.properties file is used.
Try running VM with parameter
-Dlog4j.debug
   export TOMCAT_OPTS="-Dlog4j.debug -Dlog4j.configuration=foobar.xml"
log4j2
https://logging.apache.org/log4j/2.0/log4j-slf4j-impl/index.html
The Log4j 2 SLF4J Binding allows applications coded to the SLF4J API to use Log4j 2 as the implementation.
https://logging.apache.org/log4j/2.0/log4j-to-slf4j/index.html
The Log4j 2 to SLF4J Adapter allows applications coded to the Log4j 2 API to be routed to SLF4J. Use of this adapter may cause some loss of performance as the Log4j 2 Messages must be formatted before they can be passed to SLF4J. With Log4j 2 as the implementation these would normally be formatted only when they are accessed by a Filter or Appender.
http://www.ffbit.com/blog/2014/05/12/a-brand-new-log4j-2-and-its-slf4j-binding/
Here is a list of dependencies which should be added to your pom.xml if you want to use Log4j 2 in conjunction with SLF4J.
<!-- log4j 2 to slf4j bridge -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>2.0-rc1</version>
</dependency>
<!-- core api log4j 2 dependencies -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-api</artifactId>
  <version>2.0-rc1</version>
</dependency>
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-core</artifactId>
  <version>2.0-rc1</version>
</dependency>

https://logging.apache.org/log4net/log4net-1.2.13/release/sdk/log4net.Layout.PatternLayout.html
https://logging.apache.org/log4j/2.x/manual/migration.html
http://slf4j.42922.n3.nabble.com/SLF4J-with-log4j-2-td4025498.html
After some research it seems the best option for a flexible and scalable logging environment is SLF4J and log4j 2.  As I understand it I use SLF4J as the interface (facade) and log4j 2 as the implementation, this also means I can change the implementation (for example to java.util.logging) in the future. 

In summary I needed the following for a simple Ant build: 

slf4j-api-1.7.5.jar 
slf4j-ext-1.7.5.jar 
log4j-slf4j-impl-2.0-beta4.jar 
log4j-api-2.0-beta4.jar 
log4j-core-2.0-beta4.jar 

In a Maven project, the dependencies also downloaded the following: 
http://www.javacodegeeks.com/2015/12/5-java-logging-rules.html
http://www.importnew.com/17315.html
Rule 1. Logging is for readers
For example, let's consider a log message like the following: ERROR: Save failure - SQLException .....
Saving what? This message could mean something to the developer, but it is completely useless for the poor guy who is looking at the production problem.
A much better message is ERROR: Save failure - Entity=Person, Data=[id=123 surname="Mario"] - SQLException....
which explains what you wanted to save (here a Person, a JPA entity) and the relevant contents of the Person instance. Please note the word relevant, instead of the word all: we should not clutter log files with useless info like the complete print of all entity fields. The entity name and its logical keys are usually enough to identify a record in a table.
Rule 2. Match the logging levels with the execution environment
Logging level filtering should depend on the stage of development your code is in: the logging level in production should not be the same as in test/integration environments.
Moreover, the logging level should also depend on the code owner. In general, our own application code should have more detailed logging than any third-party library we use. There is little point in seeing, for example, Apache Commons debug messages in our log files.
I usually configure the logging as following:
  • Production: INFO level for my code and WARN for third party libraries.
  • Test / Integration: DEBUG level for my code and WARN (or INFO if needed) for third party libraries.
  • Development: whatever makes sense
Rule 3. Remove coding help logging before the commit.
Once you have finished developing, remove all temporary and unnecessary logging messages just before committing the code to the SCM system (git, svn..) in use.
This rule does not require removing all DEBUG messages, only those that have no meaning once the application is completed and released; in other words, once we are reasonably sure that the application is working properly.
Rule 4: Check log level before logging DEBUG messages
if (LOGGER.isDebugEnabled()) {
    LOGGER.debug(…….);
}
LOGGER.info("Person name is {}", person.getName());
http://www.javacodegeeks.com/2013/01/effective-logging-in-javajee-mapped-diagnostic-context.html
MDC stands for Mapped Diagnostic Context. It helps you to distinguish inter-leaving logs from multiple sources. 

A non-invasive way of logging is to use MDC. With MDC, you put the user-id in a context-map which is attached to the thread (of each user request) by the logger. MDC is thread-safe and uses a Map internally to store the context information.

NDC stands for Nested Diagnostic Context. It is a stack-based implementation of attaching context information. For all purposes, use MDC over NDC, as MDC is memory efficient. 

For all new application development, use logback.

Used to output the MDC (mapped diagnostic context) associated with the thread that generated the logging event. The X conversion character must be followed by the key for the map placed between braces, as in %X{clientNumber} where clientNumber is the key. The value in the MDC corresponding to the key will be output.

http://www.kdgregory.com/index.php?page=java.logging
Do log decision points. Don't log in short loops.
Only log exceptions if you're going to handle them: logger.warn("an exception occurred!", e);
Don't log just the exception message; pass the exception object to the logger.
logger.debug("in method foo()", new Exception("stack trace")); // for a debug stack trace


SLF4J parameterized logging using varargs method
http://www.slf4j.org/faq.html
logger.debug("The new entry is {}. It replaces {}.", entry, oldEntry);
logger.error("Failed to format {}", s, e);
 If a given error requires attention beyond that allocated for ordinary errors, simply mark the logging statement with a specially designated marker which can be named "FATAL" or any other name to your liking.
    Marker fatal = MarkerFactory.getMarker("FATAL");
      logger.error(fatal, "Failed to obtain JDBC connection", e);

http://stackoverflow.com/questions/17716381/slf4j-parameterized-logging-using-varargs-method
the only way to get a compile error for
logger.error("I wanna log {} and {} and {} with backtrace", x, y, 3, e);
and not for
logger.error("I wanna log {} and {} and {} with backtrace", new Object[]{x, y, 3, e});
http://www.javacodegeeks.com/2011/01/10-tips-proper-application-logging.html
4) Avoid side effects
==> may throw NPE 
    log.trace("Id=" + request.getUser().getId() + " accesses " + manager.getPage().getUrl().toString());
5) Be concise and descriptive
9) Log exceptions properly

log.error("Error reading configuration file", e);

Tools
Perf4J is to System.currentTimeMillis() as log4j is to System.out.println()
LogStash
https://pfichtner.wordpress.com/2010/08/12/why-you-should-not-use-log-isdebugenabled/
logger.debug("Sum of {} and {} is {}.", sum, a, b);
What will happen here? The compiler will transform this under the hood into:
logger.debug("Sum of {} and {} is {}.", new Object[] { Integer.valueOf(sum), Integer.valueOf(a), Integer.valueOf(b) });
So there are 4 objects created that you won't see in your source code (3 Integers and one Object[]).
Let me repeat myself: the code should be wrapped in isXXXEnabled to prevent the Object[] creation.
https://wiki.base22.com/display/btg/Java+Logging+Standards+and+Guidelines
Miscs
Async logging

Logback
One novel feature of Logback is SiftingAppender (JavaDoc). In short it's a proxy appender that creates one child appender per each unique value of a given runtime property. Typically this property is taken from MDC
http://www.nurkiewicz.com/2013/04/siftingappender-logging-different.html
    <appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
        <discriminator>
            <key>userid</key>
            <defaultValue>unknown</defaultValue>
        </discriminator>
        <sift>
            <appender name="FILE-${userid}" class="ch.qos.logback.core.FileAppender">
                <file>user-${userid}.log</file>
                <layout class="ch.qos.logback.classic.PatternLayout">
                    <pattern>%d{HH:mm:ss:SSS} | %-5level | %thread | %logger{20} | %msg%n%rEx</pattern>
                </layout>
            </appender>
        </sift>
    </appender>
Notice that the <file> property is parameterized with ${userid} property. Where does this property come from? It has to be placed in MDC.
Running your web application with this configuration will quickly create several log files like user-alice.log, user-bob.log, and user-unknown.log in case the MDC property is not set.
Another use case is using thread name rather than MDC property.
https://www.mkyong.com/logging/logback-different-log-file-for-each-thread/

https://www.mkyong.com/logging/logback-duplicate-log-messages/
https://www.mkyong.com/logging/logback-xml-example/
This is caused by the "Appenders accumulate" problem; read this article.
To fix it, add additivity="false" to the application logger:
<logger name="com.mkyong.test" level="debug" additivity="false">
 <appender-ref ref="STDOUT" />
</logger>

<root level="error">
 <appender-ref ref="STDOUT" />
</root>
http://logback.qos.ch/manual/configuration.html#cumulative

By default, appenders are cumulative: a logger will log to the appenders attached to itself (if any) as well as all the appenders attached to its ancestors. Thus, attaching the same appender to multiple loggers will cause logging output to be duplicated.

Appender additivity is not intended as a trap for new users. It is quite a convenient logback feature. For instance, you can configure logging such that log messages appear on the console (for all loggers in the system) while messages only from some specific set of loggers flow into a specific appender.

http://logback.qos.ch/manual/layouts.html
c{length}
lo{length}
logger{length}
Outputs the name of the logger at the origin of the logging event.
This conversion word takes an integer as its first and only option. The converter's abbreviation algorithm will shorten the logger name, usually without significant loss of meaning. Setting the value of the length option to zero constitutes an exception: it will cause the conversion word to return the sub-string to the right of the rightmost dot character in the logger name. The next table provides examples of the abbreviation algorithm in action.
C{length}
class{length}
Outputs the fully-qualified class name of the caller issuing the logging request.

http://examples.javacodegeeks.com/enterprise-java/logback/logback-rollingfileappender-example/
fileNamePattern (String, mandatory): defines the name of the rolled-over (archived) log files. Its value should consist of the name of the file, plus a suitably placed %d conversion specifier which may contain a date-and-time pattern. The date-and-time pattern, as found within the accolades of %d{}, follows java.text.SimpleDateFormat conventions. The forward slash '/' or backward slash '\' characters anywhere within the fileNamePattern property or within the date-and-time pattern will be interpreted as directory separators. The rollover period is inferred from the value of fileNamePattern.
maxHistory (int, optional): controls the maximum number of archive files to keep, deleting older files. For example, if you specify monthly rollover and set maxHistory to 8, then 8 months' worth of archive files will be kept, with files older than 8 months deleted.
TimeBasedRollingPolicy is possibly the most popular rolling policy. It defines a rollover policy based on time. 
fileNamePattern / Rollover schedule / Description:
  • /logs/example.%d : Daily rollover. The time-and-date pattern for the %d token specifier is omitted, so the default pattern "yyyy-MM-dd" applies, which corresponds to daily rollover.
  • /logs/example.%d{yyyy-MM}.log : Monthly rollover. The rolled-over files look like: example.2015-03.log, example.2015-04.log
  • /logs/example.%d{yyyy-MM-dd_HH}.log : Hourly rollover. The rolled-over files look like: example.2015-05-17_09.log, example.2015-05-17_10.log
  • /logs/%d{yyyy/MM}/example.log : Monthly rollover. Notice that the date-and-time pattern is in the directory name. The current log file path is "logs/example.log"; the directories of the rolled-over files are "logs/2015/04/example.log", "logs/2015/03/example.log", ...
  • /logs/example.%d{yyyy-ww}.zip : Weekly rollover. Notice that the file pattern ends with "zip", so the rolled-over files are compressed. Logback applies automatic file compression if the value of the fileNamePattern option ends with .gz or .zip.
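Combining the options above, a minimal RollingFileAppender sketch (the file names, history count, and encoder pattern are assumptions for illustration):

```xml
<appender name="ROLLING" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/example.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
    <!-- daily rollover, inferred from the %d pattern; a .zip or .gz
         suffix would additionally compress the archives -->
    <fileNamePattern>logs/example.%d{yyyy-MM-dd}.log</fileNamePattern>
    <!-- keep 30 days of archives, delete older files -->
    <maxHistory>30</maxHistory>
  </rollingPolicy>
  <encoder>
    <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
  </encoder>
</appender>
```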
%X{client} N:%X{number}

<property name="USER_HOME" value="/home/sebastien" />
<file>${USER_HOME}/myApp.log</file>

<pattern><![CDATA[]]></pattern>

Include and Share Common Configuration
<configuration>
  <include file="src/main/java/chapters/configuration/includedConfig.xml"/>

  <root level="DEBUG">
    <appender-ref ref="includedConsole" />
  </root>
</configuration>

<included>
  <appender name="includedConsole" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>"%d - %m%n"</pattern>
    </encoder>
  </appender>

</included>

Include: using resource

ch.qos.logback.core.joran.action.IncludeAction.getInputURL(InterpretationContext, Attributes)
    String fileAttribute = attributes.getValue("file");
    String urlAttribute = attributes.getValue("url");

    String resourceAttribute = attributes.getValue("resource");


<configuration>
  <include file="includedFile.xml" />

  <!-- STDOUT appender stuff -->

  <root level="${root.level:-DEBUG}">
    <appender-ref ref="STDOUT" />
  </root>
</configuration>
Log4j
https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
http://www.codejava.net/coding/configure-log4j-for-creating-daily-rolling-log-files
Schedule: DatePattern (example of log file's name)
  • Minutely: '.'yyyy-MM-dd-HH-mm (app.log.2012-11-04-21-54)
  • Hourly: '.'yyyy-MM-dd-HH (app.log.2012-11-04-22)
  • Half-daily: '.'yyyy-MM-dd-a (app.log.2012-11-04-AM, app.log.2012-11-04-PM)
  • Daily: '.'yyyy-MM-dd (app.log.2012-11-04)
  • Weekly: '.'yyyy-ww (app.log.2012-45, app.log.2012-46)
  • Monthly: '.'yyyy-MM (app.log.2012-10, app.log.2012-11)
if you want to add literal text to the date pattern, you must escape it in a pair of single quotes:
log4j.appender.Appender2.DatePattern='_'yyyy-MM-dd'.log'

log4j Layout ConversionPattern
http://www.codejava.net/coding/common-conversion-patterns-for-log4js-patternlayout
What to print (conversion character): performance
  • Category name (or logger name) (c): fast
  • Fully qualified class name (C): slow
  • Date and time (d, d{format}): slow if using the JDK's formatters, fast if using log4j's formatters
  • File name of the Java class (F): extremely slow
  • Location, i.e. class, method and line number (l): extremely slow
  • Line number only (L): extremely slow
  • Log message (m): fast
  • Method name (M): extremely slow
  • Priority (level) (p): fast
  • New line separator (n): fast
  • Thread name (t): fast
  • Time elapsed in milliseconds (r): fast
  • Thread's nested diagnostic context (x): fast
  • Thread's mapped diagnostic context (X): fast
  • Percent sign (%%): fast

Per request debugging with Log4j 2 filters
    <DynamicThresholdFilter key="x-debug-enabled" onMatch="ACCEPT" onMismatch="NEUTRAL">
        <KeyValuePair key="true" value="TRACE"/>
    </DynamicThresholdFilter>
Great Programmers Write Debuggable Code

Session-based Logging
https://blog.oio.de/2010/11/09/logging-additional-information-like-sessionid-in-every-log4j-message/
Because the MDC is thread-local and every request in a web application has its own thread, you have to put the sessionID into the MDC at the beginning of each request. You can do this by using a servlet filter.

%X{sessionId}
MDC.put("sessionId", sessionId)

Change Log Level Programmatically
How to change logging level in runtime
How do I change my Log4J settings while my web application is running?

How Logback works
http://skyao.github.io/2014/09/11/logback-process/
http://kyfxbl.iteye.com/blog/1170283
Logback's Filter mechanism
Logback: How to configure loggers in runtime?

Better Performing Non-Logging Logger Calls in Log4j2
logger.debug("PARAMETERIZED: {}", slow);
logger.debug("{}", expensiveOperation());
When the parameterized logging examples above execute at a log level more specific than DEBUG, the implicit toString() on the "slow" variable is never attempted, thanks to parameterized logging. However, parameterized logging cannot help the second case: the method expensiveOperation() is invoked anyway, because its return value must be computed before the logger call. Note also that while parameterized logging helps with implicit toString() calls, it does not help with explicit toString() calls: even if the logging level is more specific than DEBUG, a call to slow.toString() in a logger statement still incurs a performance hit.
logger.debug("LAMBDA: {}", () -> slow.toString());
logger.debug("{}", this::expensiveOperation);
Log4j 2.4 introduces a lambda-based mechanism that defers invocation of methods passed to a logger call, so that they are not executed at all if the statement's level is less specific than the current log level.
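The deferral can be seen without a Log4j dependency. Below is a self-contained sketch that mimics the supplier-based API with java.util.function.Supplier; the debug() method, the debugEnabled flag, and the call counter are illustrative stand-ins, not Log4j API:

```java
import java.util.function.Supplier;

public class LazyLoggingDemo {
    static boolean debugEnabled = false;  // stands in for the logger's effective level
    static int calls = 0;                 // counts invocations of the expensive method

    static String expensiveOperation() {
        calls++;
        return "expensive result";
    }

    // Mimics Logger.debug(String, Supplier): the supplier is only
    // invoked when the DEBUG level is actually enabled.
    static void debug(String pattern, Supplier<Object> arg) {
        if (debugEnabled) {
            System.out.println(pattern.replace("{}", String.valueOf(arg.get())));
        }
    }

    public static void main(String[] args) {
        debug("{}", LazyLoggingDemo::expensiveOperation); // level disabled
        System.out.println(calls);                        // prints 0: never invoked
        debugEnabled = true;
        debug("{}", LazyLoggingDemo::expensiveOperation); // level enabled
        System.out.println(calls);                        // prints 1
    }
}
```

Contrast this with `logger.debug("{}", expensiveOperation())`, where the method runs before debug() can check the level.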

http://codepub.cn/2016/04/15/Log4j-print-exception-stack-information-to-the-log/
  1. Calling log.error(e) prints the exception's message and type, but still not the stack trace. The Javadoc for this method reads "Logs a message object with the ERROR level.", so it clearly does not record the stack trace: the exception is treated as the message object.
  2. In log4j you only need to call log.error(Object message, Throwable t) to get the exception together with its stack trace. Its Javadoc reads: "Logs a message at the ERROR level including the stack trace of the Throwable t passed as parameter."
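The difference can be demonstrated with the plain JDK; stackTraceOf() below is a hypothetical helper that reproduces the stack-trace text that log.error(message, t) appends, versus the message-only text that log.error(e) records:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceDemo {
    // Render a Throwable's full stack trace as a String, the same text
    // log.error(message, throwable) appends to the log entry.
    static String stackTraceOf(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return sw.toString();
    }

    public static void main(String[] args) {
        Throwable t = new IllegalStateException("connection lost");
        String messageOnly = String.valueOf(t);  // what log.error(t) records
        String full = stackTraceOf(t);           // what log.error("msg", t) adds
        System.out.println(messageOnly.contains("\tat "));  // false: no stack frames
        System.out.println(full.contains("\tat "));         // true: frames included
    }
}
```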
http://logging.apache.org/log4j/2.x/manual/api.html
private static final Logger logger = LogManager.getLogger();

Logback:
        <if condition='!property("app.environment").equals("dev") &amp;&amp;
!property("app.environment").equals("build")'>
You need to write &amp;&amp; instead of && inside the XML configuration file; the ampersands must be escaped as XML entities. That's all...

Logging is an I/O-intensive activity and can indeed hurt application performance, especially with the traditional approach of writing to a FileAppender on the application thread. On top of that, logging code causes significant heap allocation and garbage collection, for example:
 if (logger.isDebugEnabled()) {
     logger.debug("name " + person.name + " age " + person.age + " address " + person.address);
 }
Moreover, the internal call into the Log4j appender's doAppend() method is synchronized for thread safety. This means application threads are not only performing large amounts of disk I/O synchronously, they also block each other while writing logs! On one e-government portal, one of the worst performance scenarios was a thread dump showing threads blocked while log records were being written to a single file appender.
Code reviews must be made efficient: when senior developers or team leads review logging code, they should make sure useless log statements are removed.



Avoid logging errors repeatedly with long stack traces
Where possible, a log entry should contain enough information to show where the error occurred, avoiding huge stack traces. Of course this does not cover unexpected cases like a NullPointerException, but it works for easily identified, application-specific errors. Also, when a recurring problem persists over a long period, such as a connectivity issue with an email, SMS, or database server, log it at an interval (for example every 5 minutes) rather than filling the log with a huge stack trace every few seconds.
Don't inject logging blindly with AOP, especially in production
For beginners, the classic AOP textbook example is logging: since logging is a cross-cutting concern, advice can be injected before entering or after exiting a method. Think seriously about this example before applying it to a production application. As the examples above have established, logging is no small matter; it deserves detailed planning, like most other non-functional requirements (NFRs).
Don't treat logging as a substitute for other monitoring tools
Use the Mapped Diagnostic Context and Nested Diagnostic Context
Log4j's Mapped Diagnostic Context (MDC) and Nested Diagnostic Context (NDC) use ThreadLocal storage for context-specific information. They can hold values such as a user name or transaction ID, identifying everything done by a particular user or transaction without having to pass that context through classes and methods just for logging. With PatternLayout's %X or %X{key}, the stored values appear in the log output.
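A self-contained sketch of the ThreadLocal mechanism described above; MiniMdc is an illustrative stand-in, not the real MDC class:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for MDC: each thread gets its own key/value map,
// so context set for one request/thread never leaks into another.
public class MiniMdc {
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void remove(String key) { CTX.get().remove(key); }

    public static void main(String[] args) throws InterruptedException {
        put("user", "alice");
        Thread other = new Thread(() -> {
            // A different thread sees its own, empty context.
            System.out.println(get("user"));  // prints null
            put("user", "bob");
        });
        other.start();
        other.join();
        System.out.println(get("user"));      // prints alice: unaffected by the other thread
    }
}
```

This per-thread isolation is why a servlet filter must put the value at the start of each request and remove it at the end, particularly when threads are pooled and reused.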
Plan the log life cycle and maintenance
This includes planning the log file size and the maximum number of files kept before logs roll over. Why plan? Because log files often grow so large that a text editor cannot even open them! Just as scripts back up the database periodically, there should also be scripts to back up and archive logs. When disk space runs out, compressing log files is also a good idea, since compressed files are easier to transfer to a remote machine.
Representing errors with short codes is common practice in nearly all major products, such as Oracle, WebSphere, and Microsoft. For example, when a Microsoft Office application fails, the error dialog shows a hard-to-read integer code, which is then sent to Microsoft for diagnosis.
When reviewing errors, the various error codes can be translated into full strings for further interpretation.
The benefits include:
  • Saving disk space, which extends log retention and shrinks file sizes
  • Not exposing the application's internal design or security measures in the logs; when viewed offline, the logs can be translated to full text with a lookup tool
  • Avoiding the construction and transmission of long strings, further reducing memory use
