Python Logging

//TODO: [ ] warnings.warn, for example from the elasticsearch package: https://docs.python.org/3/howto/logging.html#when-to-use-logging

Default handler for a child logger: when no handler is set on a child logger and you call logger.info() and so on, there is a last resort, logging.lastResort:

import logging

# effective level inherits from root logger: warning
logger = logging.getLogger(__name__)
# where do they go? see below
logger.warning("hello")
logger.warning("world")

# <_StderrHandler <stderr> (WARNING)>
print(logging.lastResort)
# setting it to None drops unhandled messages; a one-time
# "No handlers could be found for logger ..." message is
# printed to stderr instead
logging.lastResort = None

From experience, for complex application logging:

  • inherit from the logging.Logger class to create a customized logger class
  • create the log output folder if it does not exist
  • update the default logging dict config and apply it
  • set the custom logger class (logging.setLoggerClass())
  • get the logger and return it to the module

That is my logging framework for complex applications; for simple applications, just use code configuration to set up each module's logger layout.
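The steps above can be sketched as follows; the class name, folder name, and config contents here are illustrative assumptions, not from any particular project:

```python
import logging
import logging.config
import os

class AppLogger(logging.Logger):
    """Customized logger class; add helper methods as needed."""

    def banner(self, msg):
        # example helper that emphasizes a message
        self.info("==== %s ====", msg)

LOG_DIR = "logs"  # hypothetical output folder

DEFAULT_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "default": {"format": "[%(asctime)s, %(levelname)s] %(message)s"},
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": os.path.join(LOG_DIR, "app.log"),
            "formatter": "default",
        },
    },
    "root": {"level": "DEBUG", "handlers": ["file"]},
}

def get_logger(name):
    # create the log output folder if it does not exist
    os.makedirs(LOG_DIR, exist_ok=True)
    # apply the default dict config
    logging.config.dictConfig(DEFAULT_CONFIG)
    # set the custom logger class before getLogger() creates the logger
    logging.setLoggerClass(AppLogger)
    # get the logger and return it to the module
    return logging.getLogger(name)

logger = get_logger("myapp")
logger.banner("started")
```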

Introduction

Read the official documentation carefully.

Default output: by default, no destination is set for logging messages. The library checks whether a destination has been configured; if none is set, it falls back to the console (sys.stderr).

First, know when to use logging.

Logging flow

Multiple calls to getLogger() with the same name will return a reference to the same logger object (singleton).

Loggers have a concept of effective level. If a level is not explicitly set on a logger, the level of its parent is used instead as its effective level. If the parent has no explicit level set, its parent is examined, and so on. When deciding whether to process an event, the effective level of the logger is used to determine whether the event is passed to the logger’s handlers.

Child loggers propagate messages up to the handlers associated with their ancestor loggers. Because of this, it is unnecessary to define and configure handlers for all the loggers an application uses. It is sufficient to configure handlers for a top-level logger and create child loggers as needed. (You can, however, turn off propagation by setting the propagate attribute of a logger to False.) Note that propagation is not affected by the parent logger's level: the parent's handlers receive every record that a child passes up.
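A minimal demonstration of this (the logger names are illustrative): even though the parent logger's level is ERROR, its handler still receives the child's INFO record, because ancestor logger levels are not consulted during propagation:

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)

parent = logging.getLogger("apple")
parent.setLevel(logging.ERROR)   # does NOT filter propagated records
parent.addHandler(handler)

child = logging.getLogger("apple.pear")
child.setLevel(logging.INFO)

child.info("from child")
print(stream.getvalue())  # from child
```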

Thread safety: the module achieves this through threading locks; there is one lock to serialize access to the module's shared data, and each handler also creates a lock to serialize access to its underlying I/O.

Cookbook highlights:

  1. Logger names can be chained with dots via logging.getLogger: apple, apple.pear, apple.pear.peach; child log records propagate up to the parent.
  2. The module supports threads.
  3. One logger can have multiple handlers, each with its own formatter. Sometimes it is beneficial for an application to log all messages of all severities to a text file while simultaneously logging errors or above to the console. The logger's level should be <= the handler's level, otherwise the handler will not receive those messages.
  4. Console messages may not need a timestamp, but log file entries should have one.
  5. To change the log configuration dynamically, you can use a signal handler, or a log config server that listens on a port for new configurations.
  6. Pass the log level to CLI applications as an option.
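Item 3 can be sketched like this, logging everything to a file while only errors reach the console (the file name and logger name are illustrative):

```python
import logging
import sys

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)          # logger level must be <= handler levels

file_handler = logging.FileHandler("all.log")
file_handler.setLevel(logging.DEBUG)    # everything goes to the file
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

console_handler = logging.StreamHandler(sys.stderr)
console_handler.setLevel(logging.ERROR)  # only errors show on the console
console_handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

logger.addHandler(file_handler)
logger.addHandler(console_handler)

logger.debug("goes to the file only")
logger.error("goes to both")
```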

Logging levels: DEBUG, INFO, WARNING, ERROR, CRITICAL, in increasing order of severity. The benefit of levels is that you can filter output accordingly: messages below the configured level are not recorded. For example, with the level set to INFO, logging.debug() output is suppressed.

The default level is WARNING, which means that only events of this level and above will be tracked, unless the logging package is configured to do otherwise.

Basic Usage

See the Basic Logging Tutorial. This approach is not good for practical use, since basicConfig() configures the root logger and therefore affects other imported modules as well, but for simple usage it is fine.

logging.basicConfig only initializes root logger.

import logging
import sys
import pprint

# see logging record attributes
# https://docs.python.org/3/library/logging.html#logrecord-attributes
FORMAT = "[%(threadName)s, %(asctime)s, %(levelname)s] %(message)s"

# filename: write log to a file
logging.basicConfig(filename='/opt/logfile.log',
                    level=logging.DEBUG,
                    format=FORMAT)
# by default the log appends to the file;
# if you want to overwrite, set filemode='w'
logging.basicConfig(filename='/opt/logfile.log',
                    filemode='w',
                    level=logging.DEBUG,
                    format=FORMAT,
                    # custom timestamp
                    datefmt='%m/%d/%Y %H:%M:%S')

# stream: write to system stdout
logging.basicConfig(stream=sys.stdout,
                    level=logging.DEBUG,
                    format=FORMAT)

# use lazy %s formatting in logging functions
logging.debug("%s", "something")
logging.info("%s", "something")
logging.warning("%s", "something")
logging.error("%s", "something")
logging.critical("%s", "something")

# pretty-print nested data (e.g. parsed JSON or YAML)
data = {"key": ["value1", "value2"]}
logging.info("%s", pprint.pformat(data))

For JSON-format logs, there is an open-source module: https://github.com/madzak/python-json-logger

NOTE: The desired logging.basicConfig() call should come first, and it only takes effect once; subsequent calls are ignored while the root logger already has handlers.
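A quick illustration of the once-only behavior; on Python 3.8+, basicConfig() accepts force=True to replace the existing configuration:

```python
import logging

logging.basicConfig(level=logging.INFO, format="first: %(message)s")
# a second call is silently ignored: the root logger already has handlers
logging.basicConfig(level=logging.DEBUG, format="second: %(message)s")
print(logging.getLogger().level == logging.INFO)   # True

# Python 3.8+: force=True removes existing root handlers and reconfigures
logging.basicConfig(level=logging.DEBUG, format="second: %(message)s",
                    force=True)
print(logging.getLogger().level == logging.DEBUG)  # True
```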

Advanced Usage

For more advanced usage, see: https://docs.python.org/3.8/howto/logging.html#advanced-logging-tutorial

Module-Level Functions

Configuring a separate logger for each module.

import logging
import sys
import traceback

import elasticsearch

# for the current module
logger = logging.getLogger(__name__)
# without setLevel, the effective level is inherited from the
# root logger (WARNING), so lower levels would be dropped
logger.setLevel(logging.DEBUG)

# set a stream handler; you can choose other handlers
stream_handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter(
    '[%(threadName)s, %(asctime)s, %(levelname)s] %(message)s')
stream_handler.setFormatter(formatter)
# if handler level > logger level, messages in the gap will not show;
# if handler level <= logger level, everything the logger allows shows
stream_handler.setLevel(logging.DEBUG)
logger.addHandler(stream_handler)

# use logger instead of logging
# use lazy % formatting in logger functions
logger.debug("%s", "something")

# set a different level for the imported elasticsearch module;
# elasticsearch.logger was found via dir(elasticsearch)
es_logger = elasticsearch.logger
# you can enrich or suppress logs from elasticsearch
es_logger.setLevel(logging.INFO)
es_logger.addHandler(stream_handler)

# logging exception details
try:
    a = [1, 2]
    b = a[10]
except IndexError as e:
    logger.error(e, exc_info=True)

# if you don't know the exception type
try:
    pass
except Exception:
    logger.error("Exception is %s", traceback.format_exc())

Log Rotation

import logging
# size based
from logging.handlers import RotatingFileHandler
# time based
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# rotate the log file when its size exceeds 1000 bytes,
# keeping 2 backup logs: file_app.log.1, file_app.log.2
file_r = RotatingFileHandler('file_app.log', maxBytes=1000, backupCount=2)
# rotate every 2 days, keeping 5 backups
time_r = TimedRotatingFileHandler('time_app.log', when='d', interval=2,
                                  backupCount=5)

logger.addHandler(file_r)
logger.addHandler(time_r)

for i in range(1000):
    logger.info("this is %d", i)

Log Config File

For complex logging configurations, we can write a config file in either ini style or dict style. This way we don't need to hard-code the configuration or change it in the code.

import logging
import logging.config

# ini-style config file
logging.config.fileConfig('logging.ini')

# dictConfig takes a dict object, not a file name; if the config
# lives in a JSON/YAML file, load it into a dict first
LOGGING_CONFIG = {'version': 1}
logging.config.dictConfig(LOGGING_CONFIG)

# 'simplelogger' is defined in the config file
logger = logging.getLogger('simplelogger')
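For reference, a minimal dict config that defines a 'simplelogger' might look like this; the handler and format choices are illustrative:

```python
import logging
import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(name)s %(levelname)s %(message)s"},
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "simple",
            "level": "DEBUG",
        },
    },
    "loggers": {
        "simplelogger": {
            "handlers": ["console"],
            "level": "DEBUG",
            "propagate": False,
        },
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger("simplelogger")
logger.debug("configured via dictConfig")
```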

Signal Trap

In production we can use signals to change logging level dynamically, for example:

import logging
import signal

logger = logging.getLogger(__name__)

# include signal_num and frame, mark them as unused, and del them if not needed
def switch_logging_level(unused_signal_num, unused_frame):
    '''Toggle the logging level when SIGUSR1 is received.

    For example: kill -10 <PID>
    '''
    del unused_signal_num
    del unused_frame

    if logger.isEnabledFor(logging.DEBUG):
        logger.setLevel(logging.INFO)
        logger.info('Disable logging.DEBUG level')
    else:
        logger.setLevel(logging.DEBUG)
        logger.info('Enable logging.DEBUG level')

signal.signal(signal.SIGUSR1, switch_logging_level)

Depending on how you initialized the logger, you may also need to update the handlers' logging levels:

import logging

logger = logging.getLogger()
logger.setLevel(level)
...
for handler in logger.handlers:
    handler.setLevel(logging.DEBUG)

For Docker containers, run the signal-handling script as the init process, or use tini to forward signals.
