As a developer, it is essential to stay on top of what is happening in your application during development and in production. Logging enables you to track events and errors, and when they occur.
This tutorial will take you from setting up basic logging to more advanced concepts and will cover:
what logging is
why logging is important
the Python logging module
logging levels
creating loggers
storing logs
formatting logs
customising logs using the colorlog package
Prerequisites
To follow along seamlessly, you may need to have the following:
a code editor such as VS Code or an online IDE such as Replit
basic Python programming experience
Python and pip installed if you're using a local code editor
What is logging?
Logging is the process of recording information about an application's execution. This information may include and is not limited to:
events such as user actions, errors
performance metrics such as response times and memory usage
security events such as login attempts
Why logging is important
Logging contributes significantly towards application health and here's why it should be configured:
Error tracking. Logging provides detailed information about errors and where and when they occur in your application, making it easy for you to track and fix them.
Debugging. You can identify and fix bugs faster and more efficiently when you log information about your application's execution.
Auditing and monitoring. Logs provide information about how your application is being used and by whom.
Security. Logging information on user actions, system events and other security-related information enables you to notice security breaches and respond to them in a timely fashion.
Compliance. Regulatory and compliance requirements may require companies to provide a record of activity and changes to their systems, which can be referenced using the application's logs.
The logging module
The logging module in Python provides the classes and methods required to set up logging functionality for your applications. It is included in the Python standard library and can be used without installing anything. To use the logging module, import it as follows:
#import the logging module
import logging
The above code makes the methods and attributes of the logging module available for you to use. You can explore these attributes and methods by calling the built-in dir() function on the logging module and printing the result to the console as shown below:
# print the logging module directory
print(dir(logging))
The above code outputs a list containing log levels, methods and classes to the console as shown in the following image:
You can use the following criteria to make sense of the returned list (a short snippet after the list shows how to filter it):
Logging levels are indicated by uppercase names. These are generally treated as constants, for example DEBUG, ERROR, CRITICAL.
Classes are written in Pascal case, for example RootLogger, LoggerAdapter.
Methods are written in camel case, for example getLogger, getLevelName.
Private attributes are indicated by a leading underscore, for example _showwarning, _srcfile.
Special methods are indicated by leading and trailing double underscores.
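As a quick illustration of these conventions, the snippet below filters the returned list to keep only the uppercase names, most of which are the log level constants:
# keep only the uppercase names, which are mostly the log level constants
print([name for name in dir(logging) if name.isupper()])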
While there's so much to work with in the logging module, this tutorial will use the logging levels and some of the classes and methods to show you how to configure logging for your application.
Logging levels
Python uses logging levels to indicate the severity of a log message. This allows you to filter log messages based on their importance, for example isolating critical logs to a different destination. The following table shows the log levels defined in the logging module, their numeric values and when you should use them; the snippet after the table shows how these values appear in code.
Level | Numeric value | Use case |
CRITICAL | 50 | Indicating a severe error |
ERROR | 40 | Recording an error |
WARNING | 30 | Indicating potential issues |
INFO | 20 | Confirming expected behaviour |
DEBUG | 10 | Testing and debugging in development |
NOTSET | 0 | Delegating the effective level to ancestor loggers |
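Because the level constants are plain integers on the logging module, you can inspect them directly. Here's a quick check using the module's getLevelName() helper:
# the level constants are plain integers
print(logging.WARNING)           # 30
print(logging.DEBUG)             # 10
# getLevelName() maps a numeric value back to its name
print(logging.getLevelName(40))  # ERROR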
In addition to the above constants, the logging module provides methods that can be used to indicate the severity of a log. These are lowercase versions of the above levels and are used by calling the method on the logging module with a message as an argument, as shown in the snippet below:
logging.critical('This is a critical message')
logging.error('This is an error message')
logging.warning('This is a warning message')
logging.info('This is an info message')
logging.debug('This is a debugging message')
Running the above code produces the following result in the console:
From the result in the console, you can observe that:
The INFO and DEBUG logs do not appear in the console. This is because the logging level is set to 30, the numeric value for WARNING, by default. The logging module only outputs logs at levels greater than or equal to the set level.
The logs are printed in the format level:logger name:message. The logger name is set to root by default, implying the root logger is used when a logger name is not provided.
The output logs don't provide information on when the message was logged.
The logs are output to the console because a log destination is not set.
The logs shown above don't provide much context into what is happening. The default behaviour can be improved by properly configuring the logger.
Configuring logging
The key steps involved in configuring standard logging include:
creating a logger
storing the logs
formatting the logs
Creating a logger
Creating a logger provides the basis on which the other logger configurations are built. The getLogger() method returns an existing logger with the given name or creates one if it doesn't exist. Just below your import statement, copy and paste the following code snippet:
# create a logger
logger = logging.getLogger('basic_logger')
print(logger)  # print the logger
The above code creates a logger and stores it in the logger variable. It also confirms the logger is created by printing the logger to the console. This should be your console result:
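The printed logger should look roughly like this:
<Logger basic_logger (WARNING)>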
A basic_logger has been created, as confirmed by the console result. However, the result also shows that the default log level is set to WARNING, implying only log levels greater than or equal to WARNING will be printed. For illustration purposes, we'll print all the log levels. To change the current level to DEBUG, call the setLevel() method on the logger as seen in this snippet:
# create a logger
logger = logging.getLogger('basic_logger')
# set the current level to DEBUG
logger.setLevel(logging.DEBUG)
print(logger)  # print the logger
print(logger.level)  # print the level
Running the above code should confirm that the log level is set to DEBUG, and this should be the result:
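With the level changed, the printed output should look roughly like this, where 10 is the numeric value of DEBUG:
<Logger basic_logger (DEBUG)>
10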
Now that you've successfully created the logger, you can add additional configurations to the logger variable.
Storing logs
The logging module provides the Handler class for determining where logs are stored. Several destinations may be used to store logs depending on the application and the developer's needs. Some of these destinations include and are not limited to:
HTTP: logs may be sent via a POST request to a web server
email: some developers may choose to have critical logs sent to an email address
file: a file can be created on the server to which all the logs are written.
A handler is created by initialising one of the destination handler classes and supplying it with a log level. A handler only outputs logs at or above its specified level; if no level is set, it handles every record the logger passes to it.
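As a minimal sketch of this pattern, the snippet below creates a console handler (using the StreamHandler class) that only handles ERROR logs and above; the file-based example that follows uses exactly the same steps:
# a console handler that only handles ERROR logs and above
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)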
This example illustrates how to store logs in a file using the FileHandler class. Copy this snippet and paste it below the logger you created.
# create a file handler
file_handler = logging.FileHandler(filename='D:\\python\\pythonLogs.log',
                                   mode='w',
                                   encoding='utf-8')
#set the log level of the file handler
file_handler.setLevel(logging.DEBUG)
# add file handler to logger
logger.addHandler(file_handler)
Breaking down the code snippet:
A file handler is created by initializing the FileHandler class with the path to the preferred destination for the logs, and it is assigned to the file_handler variable. In this example, the file handler writes the logs in write mode. If you'd like to preserve the history of your logs and not overwrite them, you can omit the mode and keep the default append mode.
The log level of the handler is set to DEBUG, meaning this handler will store all the log levels in the same file. If you'd like to store different log levels separately, you can create different handlers with different destinations for each of the log levels, as sketched just after this list.
The handler is added to the logger.
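For instance, here is a minimal sketch of a second handler that keeps only errors in a separate file (the errors.log filename is just an illustration):
# a separate file handler that records only ERROR and CRITICAL logs
error_handler = logging.FileHandler(filename='errors.log', encoding='utf-8')
error_handler.setLevel(logging.ERROR)
logger.addHandler(error_handler)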
With the handler added to the logger, add the following test logs to your editor and run:
#print some example logging messages
logger.critical('This is a critical message')
logger.error('This is an error message')
logger.warning('This is a warning message')
logger.info('This is an info message')
logger.debug('This is a debugging message')
The result should be a pythonLogs.log file created at the specified destination with the logging messages as seen in the screenshot below:
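Since no formatter has been attached yet, only the raw messages are recorded, so the file contents should look roughly like this:
This is a critical message
This is an error message
This is a warning message
This is an info message
This is a debugging message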
Formatting logs
Up until this point, the stored logs only show the messages; however, information on the log level and when the log was created would be useful.
The logging module provides the Formatter class for specifying the appearance of logs. The Formatter class is initialized with a format string created using the accepted LogRecord attributes and passed to the handler to print the logs in the specified format, in the destination specified by the handler, as shown below:
# create a formatter
formatter = logging.Formatter(
    '%(levelname)s- %(name)s-%(asctime)s - %(message)s')
# add formatter to file_handler
file_handler.setFormatter(formatter)
# add file handler to logger
logger.addHandler(file_handler)
In the above code, the formatter is created using the log level name, the logger name, the time and the message. The formatter is added to the handler and the handler is then added to the logger.
The following code snippet uses a Dog class to demonstrate the functionality of the logging system we've created. The class is initialized with a dog name and logs the number of times the dog has barked. If the dog is made to bark more than 3 times, an exception is raised. Add it to your editor and run it:
class Dog:
    def __init__(self, dogname):
        self.dogname = dogname
        self.count = 0

    def bark(self):
        try:
            self.count += 1
            logger.info(f"{self.dogname} barked {self.count} times")
            if self.count > 3:
                raise Exception(f"{self.dogname} cannot bark more than 3 times.")
            else:
                print("Woof!")
        except Exception as e:
            logger.error(e)
            raise

logger.info('initialising the Dog class')
mendy = Dog('Mendy')
mendy.bark()
mendy.bark()
mendy.bark()
mendy.bark()
You should have the following results in your console and pythonLogs.log file respectively:
In the pythonLogs.log file, you can observe a change in the format of the logs; they now appear in the level-logger name-time-message format.
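For reference, the entries produced by the Dog class should look roughly like this (the timestamps will differ on your machine):
INFO- basic_logger-2024-05-01 10:15:02,123 - initialising the Dog class
INFO- basic_logger-2024-05-01 10:15:02,124 - Mendy barked 1 times
INFO- basic_logger-2024-05-01 10:15:02,124 - Mendy barked 2 times
INFO- basic_logger-2024-05-01 10:15:02,125 - Mendy barked 3 times
INFO- basic_logger-2024-05-01 10:15:02,125 - Mendy barked 4 times
ERROR- basic_logger-2024-05-01 10:15:02,126 - Mendy cannot bark more than 3 times.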
Still on formatting, you may have noticed that Python logs appear in a uniform colour, and it may be difficult for a developer to distinguish between the logs, especially during development. A third-party package called colorlog improves this behaviour.
The colorlog package
The colorlog package improves on the built-in Python logging module by enabling colour configurations for different log levels. It uses handlers and formatters that work in a similar way to those in the logging module. The colorlog package is a third-party library and has to be installed before use as follows:
pip install colorlog
After you've installed colorlog, go ahead and import it in your file as shown below:
import colorlog
With the colorlog methods now available, copy and paste the colorlog configuration below into your editor:
# Create a colored formatter
formatter = colorlog.ColoredFormatter(
    '%(log_color)s%(levelname)s:%(name)s:%(message)s',
    log_colors={
        'DEBUG': 'cyan',
        'INFO': 'green',
        'WARNING': 'yellow',
        'ERROR': 'red',
        'CRITICAL': 'red',
    }
)
# Create formatted logger
log = colorlog.getLogger('formatted_logger')
log.setLevel("DEBUG")
# Add a handler to the formatted_logger
handler = colorlog.StreamHandler()
#add formatter to handler
handler.setFormatter(formatter)
#add handler to logger
log.addHandler(handler)
# Log some messages
log.debug("This is a debug message")
log.info("This is an info message")
log.warning("This is a warning message")
log.error("This is an error message")
log.critical("This is a critical message")
The code snippet above creates a formatter and adds log colours to the format string using the log_colors argument. The log_colors dictionary contains key-value pairs of the log levels and the colours they should appear in. A StreamHandler is initialized to output the logs to the console. The formatter is added to the handler and the handler is then added to the logger. When you run this code, you should have coloured logs appear in your console as in the screenshot below:
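Colours aside, the text of each entry should look roughly like this, with every line rendered in the colour assigned to its level:
DEBUG:formatted_logger:This is a debug message
INFO:formatted_logger:This is an info message
WARNING:formatted_logger:This is a warning message
ERROR:formatted_logger:This is an error message
CRITICAL:formatted_logger:This is a critical message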
Conclusion
You've learnt how to configure logging using the logging module and how to add colour formatting to your logs using the colorlog package. This article illustrates only one of the ways logging can be configured with the logging module. You can challenge yourself to use a YAML file where you want multiple handlers, or the basicConfig() method where you need just a single handler. You can also explore the various handlers to send logs to destinations that were not illustrated in this article. Check out the Python docs for more on logging and find the source code used for this article on Replit here.
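As a starting point for that challenge, here is a minimal sketch of the basicConfig() approach, which configures the root logger with a single file handler in one call (the app.log filename and the format string are only illustrative):
import logging

# configure the root logger with a file handler, a level and a format in one call
logging.basicConfig(
    filename='app.log',
    level=logging.DEBUG,
    format='%(levelname)s- %(name)s-%(asctime)s - %(message)s'
)

logging.debug('basicConfig is now handling this message')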