How to setup an Elasticsearch cluster with Logstash on Ubuntu 12.04

Hey there !

I’ve recently hit the limitations of a one node elasticsearch cluster in my ELK setup, see my previous blog post: Centralized logging with an ELK stack (Elasticsearch-Logstash-Kibana) on Ubuntu

After more research, I’ve decided to upgrade the stack architecture, more precisely the elasticsearch cluster and the logstash integration with the cluster.

I’ve been using the following software versions:

  • Elasticsearch 1.4.1
  • Logstash 1.4.2

Setup the Elasticsearch cluster

You’ll need to apply this procedure on each elasticsearch node.

Java

I’ve decided to install the Oracle JDK in place of OpenJDK using the following PPA:

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update && sudo apt-get install oracle-java7-installer

In case you’re missing the add-apt-repository command, make sure you have the package python-software-properties installed:

$ sudo apt-get install python-software-properties

Install via Elasticsearch repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install elasticsearch

You can also decide to start the elasticsearch service on boot using the following command:

$ sudo update-rc.d elasticsearch defaults 95 10

Configuration

You’ll need to edit the elasticsearch configuration file in /etc/elasticsearch/elasticsearch.yml and update the following parameters:

  • cluster.name: my-cluster-name

I suggest replacing the default cluster name with an explicit one, especially if you want to run another cluster on your network with multicast enabled.

  • index.number_of_replicas: 2

This will ensure a copy of your data on every node of your cluster. Set this property to N-1 where N is the number of nodes in your cluster.

  • gateway.recover_after_nodes: 2

This will ensure the recovery process will start after at least 2 nodes in the cluster have been started.

  • discovery.zen.minimum_master_nodes: 2

Should be set to something like N/2 + 1 where N is the number of nodes in your cluster. This is to avoid the “split-brain” scenario.

See this post for more information on this scenario: http://blog.trifork.com/2013/10/24/how-to-avoid-the-split-brain-problem-in-elasticsearch/

Disabling multicast

Multicast is not recommended in production; disabling it gives you more control over your cluster:

  • discovery.zen.ping.multicast.enabled: false
  • discovery.zen.ping.unicast.hosts: ["host-1", "host-2"]

Of course, you’ll need to specify the two other hosts for each node in your cluster:

  • host-1 will communicate with host-2 & host-3
  • host-2 will communicate with host-1 & host-3
  • host-3 will communicate with host-1 & host-2
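
Putting these settings together, here is a sketch of what /etc/elasticsearch/elasticsearch.yml could look like on host-1 of a three node cluster (node.name is optional but convenient, and the host names are placeholders):

cluster.name: my-cluster-name
node.name: host-1
index.number_of_replicas: 2
gateway.recover_after_nodes: 2
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["host-2", "host-3"]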

Cluster overview via Marvel

It’s free for development use ! See marvel’s homepage for more info.

Install it:

$ /usr/share/elasticsearch/bin/plugin -i elasticsearch/marvel/latest

Start (or restart) the elasticsearch service so the configuration and the plugin are taken into account:

$ sudo service elasticsearch start

Now you can access the marvel UI via your browser on any of your elasticsearch nodes.
For example the first node: http://elasticsearch-host-a:9200/_plugin/marvel
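
If you prefer the command line over the Marvel UI, you can also query the cluster health API on any node (the host name below is a placeholder):

$ curl 'http://elasticsearch-host-a:9200/_cluster/health?pretty'

The response should report the expected number of nodes, and the status should turn green once all replicas are allocated.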

Automatic index cleaning via Curator

This tool only needs to be installed on one node.

You can use the curator program to delete indexes. See more information in the github repository: https://github.com/elasticsearch/curator

You’ll need pip in order to install curator:

$ sudo apt-get install python-pip

Once it’s done, you can install curator:

$ sudo pip install elasticsearch-curator

Now, it’s easy to setup a cron to delete the indexes older than 30 days in /etc/cron.d/elasticsearch_curator:

@midnight     root        curator delete --older-than 30 >> /var/log/curator.log 2>&1
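
To list the indexes currently present (and later verify that curator really removes the old ones), you can use the cat API, assuming elasticsearch answers on localhost:9200:

$ curl 'http://localhost:9200/_cat/indices?v'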

Setup the Logstash node

Java

Logstash runs on Java, so you need to ensure you’ve got a JDK installed on your system. Use either OpenJDK or the Oracle JDK.

Install via repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/logstash/1.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install logstash

Generate an SSL certificate

Use the following command to generate a self-signed SSL certificate and private key in /etc/ssl:

$ openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/logstash.key -out /etc/ssl/logstash.pub -nodes -days 1095
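
You can quickly verify the generated certificate (its subject and validity dates) with:

$ openssl x509 -in /etc/ssl/logstash.pub -noout -subject -dates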

Configuration

I’ll skip the configuration of inputs and filters, and only specify the output configuration for the communication with the elasticsearch cluster.

We’re not going to specify an elasticsearch host anymore; instead we will specify that this logstash instance needs to communicate with the cluster.

/etc/logstash/conf.d/10_output.conf

output {
       elasticsearch { }
}

Then we’ll edit the logstash init script in /etc/init/logstash.conf and update the LS_JAVA_OPTS variable with:

LS_JAVA_OPTS="-Djava.io.tmpdir=${LS_HOME} -Des.config=/etc/logstash/elasticsearch.yml"

And create the file /etc/logstash/elasticsearch.yml with the following content:

cluster.name: my-cluster-name
node.name: logstash-indexer-01

If you’ve disabled multicast, then you’ll need to add the following line:

discovery.zen.ping.unicast.hosts: ["elasticsearch-node-1", "elasticsearch-node-2", "elasticsearch-node-3"]

And start logstash:

$ sudo service logstash start

The logstash node will automatically be added to your elasticsearch cluster.
You can verify that by checking the node count and version in the marvel UI.
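
If you don’t use Marvel, you can also check from any elasticsearch host that the logstash node has joined the cluster (the host name is a placeholder):

$ curl 'http://elasticsearch-node-1:9200/_cat/nodes?v'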

Logstash – Debug configuration

This post is a continuation of my previous post about the ELK stack setup, see here: how to setup an ELK stack.

I’ll show you how I’m using the logstash indexer component to start a debug process in order to test the logstash filters.

The aim is to have the indexer parse stdin, so you can type inputs on the command line and see the result directly on stdout.

Update: I’ve recently created a tool to start a volatile ELK stack, you can also use it to test your filters: check it here.

Configuration

You’ll need to set up a configuration equivalent to the default one in /etc/logstash/conf.d, let’s say in /etc/logstash/debug.d, with the following files:

  • 00_input.conf
  • 02_filter_debug.conf
  • 10_output.conf

Input section

We are going to tell logstash to use stdin as its input.

/etc/logstash/debug.d/00_input.conf

input {
    stdin { }
}

Filter section

In the 02_filter_debug.conf file, you’ll define the filters you want to test.

filter {
    # grok, mutate, drop, ...
}
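
For example, a minimal 02_filter_debug.conf to test a grok pattern could look like this (the pattern below is only an illustration, replace it with the filters you actually want to validate):

filter {
    grok {
        # extract a client IP, an HTTP verb and a request path from the input line
        match => [ "message", "%{IP:client_ip} %{WORD:method} %{URIPATHPARAM:request}" ]
    }
    mutate {
        # normalize the HTTP verb to lowercase
        lowercase => [ "method" ]
    }
}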

Output section

The output will be stdout, so you can see the result (in JSON) of the filter processing directly in the console. We will also tell logstash to duplicate the output into a file.

/etc/logstash/debug.d/10_output.conf

output {
    stdout {
        codec => "json"
    }
    file {
        codec => "json"
        path => "/tmp/debug-filters.json"
    }
}

Debugging

Now that the configuration is done, you’ll need to start the logstash binary with the debug configuration folder as a parameter:

$ /opt/logstash/bin/logstash -f /etc/logstash/debug.d -l /var/log/logstash/logstash-debug.log

The agent will take a few seconds to start, and then you’re ready for debug ! All you’ve got to do is paste your text into the command line; logstash will apply the filters defined in the filter section to it, then output the result on the command line.
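
You can also pipe a single test line into the process instead of typing it interactively, for example (the line below matches the sample grok pattern shown earlier):

$ echo '192.168.0.1 GET /index.php' | /opt/logstash/bin/logstash -f /etc/logstash/debug.d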

To interrupt the logstash process, use the following key combinations: Ctrl+C and then Ctrl+D.

Have fun with your logs !

Logstash recipe – MySQL slow log

I’ll describe here how to use logstash and logstash-forwarder to harvest the mysql slow log on a database server so you can centralize it in elasticsearch and kibana.

It is recommended to check my previous post for the software setup: Centralized logging with an ELK stack (Elasticsearch-Logstash-Kibana) on Ubuntu

Setup logstash-forwarder

You can follow the steps described in my previous post to install logstash-forwarder on your database server.

Once installed, you’ll need to create the configuration file /etc/logstash-forwarder with the following content:

{
        "network": {
                "servers": [ "logstash-node:5001" ],
                "ssl ca": "/etc/ssl/logstash.pub",
                "timeout": 15
        },
        "files": [
                {
                        "paths": [
                                "/var/log/mysql/slow_query.log"
                        ],
                        "fields": { "type": "mysql" }
                }
        ]
}
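
This assumes the MySQL slow query log is enabled and written to that path. If it isn’t, something along these lines in your MySQL configuration enables it (exact variable names can vary with your MySQL version):

[mysqld]
# log queries slower than 1 second to the file harvested above
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow_query.log
long_query_time     = 1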

Process the logs in the logstash indexer

Add the following input definition to the configuration file:

/etc/logstash/conf.d/00_input.conf

input {
    lumberjack {
        port => 5001
        tags => ["mysql_slow_log"]
        ssl_certificate => "/etc/ssl/logstash.pub"
        ssl_key => "/etc/ssl/logstash.key"
    }
}

This configuration tells logstash to receive events via the lumberjack protocol. It will listen on port 5001 and automatically add the tag mysql_slow_log to each event.

See http://logstash.net/docs/1.4.1/inputs/lumberjack for more information on this input.

/etc/logstash/conf.d/02_filter_mysql.conf

filter {
    if "mysql_slow_log" in [tags] {

        if [message] =~ "^# Time:.*$" {
                  drop {}
        }


        multiline {
                  pattern => "^# User@Host:.*$"
                  negate => true
                  what => "previous"
        }

        grok {
             match => [
                      "message", "(?m)^# User@Host: %{GREEDYDATA:user}\[%{GREEDYDATA}\] @ \[%{IP:client_ip}\]\s*# Query_time: %{NUMBER:query_time:float} Lock_time: %{NUMBER:query_lock_time:float} Rows_sent: %{NUMBER:query_rows_sent:int} Rows_examined: %{NUMBER:query_rows_examined:int}\s*SET timestamp=%{NUMBER:log_timestamp};\s*%{GREEDYDATA:query}$",
                      "message", "(?m)^# User@Host: %{GREEDYDATA:user}\[%{GREEDYDATA}\] @ \[%{IP:client_ip}\]\s*# Query_time: %{NUMBER:query_time:float} Lock_time: %{NUMBER:query_lock_time:float} Rows_sent: %{NUMBER:query_rows_sent:int} Rows_examined: %{NUMBER:query_rows_examined:int}\s*use %{GREEDYDATA:database};\s*SET timestamp=%{NUMBER:log_timestamp};\s*%{GREEDYDATA:query}$",
                      "message", "(?m)^# User@Host: %{GREEDYDATA:user}\[%{GREEDYDATA}\] @ %{GREEDYDATA:client_ip} \[\]\s*# Query_time: %{NUMBER:query_time:float} Lock_time: %{NUMBER:query_lock_time:float} Rows_sent: %{NUMBER:query_rows_sent:int} Rows_examined: %{NUMBER:query_rows_examined:int}\s*SET timestamp=%{NUMBER:log_timestamp};\s*%{GREEDYDATA:query}$"
                      ]
             }

        date {
             match => ["log_timestamp", "UNIX"]
        }

        mutate {
               remove_field => "log_timestamp"
        }
    }
}

This configuration file will apply some filters on events tagged as mysql_slow_log.
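
For reference, the grok patterns above expect entries that look roughly like this (the values are purely illustrative); the “# Time:” lines that sometimes precede them are dropped by the first conditional:

# User@Host: appuser[appuser] @ [192.168.1.10]
# Query_time: 2.345678 Lock_time: 0.000123 Rows_sent: 42 Rows_examined: 12345
SET timestamp=1418745600;
SELECT * FROM orders WHERE customer_id = 7;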

Now, you need to restart logstash to apply the changes:

$ sudo service logstash restart

Logstash recipe – Apache access log

I’ll describe here how to use logstash and logstash-forwarder to harvest the apache access logs on a web server so you can centralize it in elasticsearch and kibana.

It is recommended to check my previous post for the software setup: Centralized logging with an ELK stack (Elasticsearch-Logstash-Kibana) on Ubuntu

Apache log format

Logstash is able to parse JSON using a specific input codec, so we’ll define a new JSON logging format for the apache access logs; this way we don’t have to define grok patterns in the indexer.

Create the file /etc/apache2/conf.d/apache-json-logging on your web server with the following content:

LogFormat "{ \
            \"@timestamp\": \"%{%Y-%m-%dT%H:%M:%S%z}t\", \
            \"@version\": \"1\", \
            \"tags\":[\"apache\"], \
            \"message\": \"%h %l %u %t \\\"%r\\\" %>s %b\", \
            \"clientip\": \"%a\", \
            \"duration\": %D, \
            \"status\": %>s, \
            \"request\": \"%U%q\", \
            \"urlpath\": \"%U\", \
            \"urlquery\": \"%q\", \
            \"bytes\": %B, \
            \"method\": \"%m\", \
            \"site\": \"%{Host}i\", \
            \"referer\": \"%{Referer}i\", \
            \"useragent\": \"%{User-agent}i\" \
           }" apache_access_json

See http://httpd.apache.org/docs/2.2/mod/mod_log_config.html for more information on log format variables.

Now you need to tell one or more of your defined VHosts to use this custom log format. Edit one of your available sites and add the following line:

CustomLog       ${APACHE_LOG_DIR}/my_site.com_access.log.json apache_access_json

Restart the web server:

$ sudo service apache2 restart
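
Once some traffic has hit the site, you can check that the new log file really contains valid JSON by pretty-printing its last line (the path matches the CustomLog directive above):

$ tail -n 1 /var/log/apache2/my_site.com_access.log.json | python -m json.tool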

Setup logstash-forwarder

You can follow the steps described in my previous post to install logstash-forwarder on your web server.

Once installed, you’ll need to create the configuration file /etc/logstash-forwarder with the following content:

{
    "network": {
        "servers": [
            "logstash-node:5000"
        ],
        "ssl ca": "/etc/ssl/logstash.pub",
        "timeout": 15
    },
    "files": [
        {
            "paths": [
                "/var/log/apache2/my_site.com_access.log.json"
            ],
            "fields": {
                "type": "apache"
            }
        }
    ]
}

Restart the service to reload the configuration:

$ sudo service logstash-forwarder restart

Process the logs in the logstash indexer

Add the following input definition to the configuration file:

/etc/logstash/conf.d/00_input.conf

input {
    lumberjack {
        port => 5000
        tags => ["apache_access_json"]
        codec => "json"
        ssl_certificate => "/etc/ssl/logstash.pub"
        ssl_key => "/etc/ssl/logstash.key"
    }
}

This configuration tells logstash to receive events via the lumberjack protocol. It will listen on port 5000, use the json codec to process events, and automatically add the tag apache_access_json to each event.

See http://logstash.net/docs/1.4.1/inputs/lumberjack for more information on this input.

/etc/logstash/conf.d/02_filter_apache.conf

filter {
       if "apache_access_json" in [tags] {

          if [useragent] != "-" and [useragent] != "" {
             useragent {
                       add_tag => [ "UA" ]
                       source => "useragent"
                       prefix => "UA-"
             }
          }

          mutate {
                 convert => ['duration', 'float']
          }

          ruby {
               code => "event['duration']/=1000000"
          }

          if [bytes] == 0 { mutate { remove_field => "[bytes]" } }
          if [urlquery]              == "" { mutate { remove_field => "urlquery" } }
          if [method]    =~ "(HEAD|OPTIONS)" { mutate { remove_field => "method" } }
          if [useragent] == "-"              { mutate { remove_field => "useragent" } }
          if [referer]   == "-"              { mutate { remove_field => "referer" } }

          if "UA" in [tags] {
             if [device] == "Other" { mutate { remove_field => "device" } }
             if [name]   == "Other" { mutate { remove_field => "name" } }
             if [os]     == "Other" { mutate { remove_field => "os" } }
          }
       }
}

This configuration file will apply some filters on events tagged as apache_access_json.

You’ll notice fields such as bytes, useragent, duration… These fields are automatically set by logstash during event reception thanks to the json codec.

Now, you need to restart logstash to apply the changes:

$ sudo service logstash restart

Centralized logging with an ELK stack (Elasticsearch-Logstash-Kibana) on Ubuntu

Update 22/12/2015

I’ve reviewed the book Learning ELK Stack by Packt Publishing; it’s available online for only $5: https://www.packtpub.com/big-data-and-business-intelligence/learning-elk-stack/?utm_source=DD-deviantonywp&utm_medium=referral&utm_campaign=OME5D2015

I’ve recently setup an ELK stack in order to centralize the logs of many services in my company, and it’s just amazing !

I’ve used the following software versions on Ubuntu 12.04 (they also work on Ubuntu 14.04):

  • Elasticsearch 1.4.1
  • Kibana 3.1.2
  • Logstash 1.4.2
  • Logstash-forwarder 0.3.1

About the software

Elasticsearch

Elasticsearch is a RESTful distributed search engine using a NoSQL database and based on the Apache Lucene engine. It is developed by the Elasticsearch company, which also maintains Kibana and Logstash.

Elasticsearch Homepage

Logstash

Logstash is a tool used to harvest and filter logs; it runs on the Java Virtual Machine and is released under the Apache 2.0 license.

Logstash Homepage

Logstash-forwarder

Logstash-forwarder (previously named Lumberjack) is one of the many log shippers compatible with Logstash.

It has the following advantages:

  • a light footprint (written in Go, no need for a Java Virtual Machine to harvest logs)
  • uses a data compression algorithm
  • uses encryption to send data over the network

Logstash-forwarder Homepage

Kibana

Kibana is a web UI that allows you to search and display the data stored by Logstash in Elasticsearch.

Kibana Homepage

Architecture

Here is a simple schema of the expected architecture: we will use logstash-forwarder (using the lumberjack protocol) on each server where we want to harvest the logs. These nodes will send data to the indexer, logstash, which will process the events using filters and send the formatted data to elasticsearch.

Kibana, the UI, will allow you to display and aggregate the data. This architecture is scalable: you can quickly add more indexer nodes by adding logstash instances. The same goes for elasticsearch, which works as a one node cluster by default.

Setup the Elasticsearch node

NOTE: A one node elasticsearch cluster is not recommended for production, I’ve added another blog post describing the setup of a three node cluster on Ubuntu 12.04, see How to setup an Elasticsearch cluster with Logstash on Ubuntu 12.04.

Requirements

Elasticsearch runs on Java, so you need to ensure you’ve got a JDK installed on your system and that it is available in the PATH.

Install via repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/elasticsearch/1.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install elasticsearch

You can also decide to start the elasticsearch service on boot using the following command:

$ sudo update-rc.d elasticsearch defaults 95 10

Configuration

To start the service, you must specify the path to the JDK in the file /etc/default/elasticsearch by adding the following variable:

JAVA_HOME=/path/to/java/JDK

If you want to tune your elasticsearch installation, the configuration is available in the file /etc/elasticsearch/elasticsearch.yml.

You can now start the service:

$ sudo service elasticsearch start
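
A quick way to check that the node is up is to query its HTTP interface, which answers on port 9200 by default:

$ curl http://localhost:9200/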

Automatic index cleaning via Curator

You can use the curator program to delete indexes. See more information in the github repository: https://github.com/elasticsearch/curator

You’ll need pip in order to install curator:

$ sudo apt-get install python-pip

Once it’s done, you can install curator:

$ sudo pip install elasticsearch-curator

Now, it’s easy to setup a cron to delete the indexes older than 30 days in /etc/cron.d/elasticsearch_curator:

@midnight     root        curator delete --older-than 30 >> /var/log/curator.log 2>&1

Cluster overview via Marvel

NOTE: marvel needs to be installed on each node of an elasticsearch cluster in order to supervise the whole cluster.

See marvel’s homepage for more info.

Install it:

$ /usr/share/elasticsearch/bin/plugin -i elasticsearch/marvel/latest

Restart the elasticsearch service:

$ sudo service elasticsearch restart

Now you can access the marvel UI via your browser on : http://elasticsearch-host:9200/_plugin/marvel

Setup the Logstash node

Requirements

Logstash runs on Java, so you need to ensure you’ve got a JDK installed on your system and that it is available in the PATH.

Install via repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/logstash/1.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install logstash

Generate an SSL certificate

Use the following command to generate a self-signed SSL certificate and private key in /etc/ssl:

$ openssl req -x509 -newkey rsa:2048 -keyout /etc/ssl/logstash.key -out /etc/ssl/logstash.pub -nodes -days 1095

Configuration

The logstash configuration lives in the /etc/logstash/conf.d directory by default. As the configuration can become quite messy over time, I’ve split it into multiple files:

  • 00_input.conf
  • 02_filter_*.conf
  • 10_output.conf

This allows you to define separate sections for the logstash configuration:

Input section

You’ll define here all the inputs for the indexer; an input is a source from which logstash will read events. It can be a file, a messaging queue connection… We are going to use the lumberjack input to communicate with the logstash-forwarder harvesters, as sketched below.

See: http://logstash.net/docs/1.4.2/inputs/lumberjack for more information.
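
As a sketch, a minimal 00_input.conf using the certificate generated above could look like this (the port number is an arbitrary example):

input {
    lumberjack {
        # listen for logstash-forwarder connections
        port => 5000
        ssl_certificate => "/etc/ssl/logstash.pub"
        ssl_key => "/etc/ssl/logstash.key"
    }
}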

Filter section

Filters are processing methods you will apply to the received events. For example, you can apply a calculation to some numeric value, or drop a specific event based on its text value… There are a LOT of filters you can use with logstash, see the documentation for more information.

I recommend using a specific configuration file for each service you want to process: 02_filter_apache.conf, 02_filter_mysql.conf

Output section

NOTE: There is another way to configure the logstash integration with an elasticsearch cluster, which is more suitable if you have more than one node in your cluster, see How to setup an Elasticsearch cluster with Logstash on Ubuntu 12.04.

The output defines where logstash will send the processed events. We are going to use elasticsearch as the output destination:

/etc/logstash/conf.d/10_output.conf

output {
       elasticsearch {
                     host => "elasticsearch-node"
                     protocol => "http"
       }
}
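
Once events start flowing, you can verify that they are indexed by searching the logstash indexes directly on the elasticsearch node (logstash creates one index per day, named logstash-YYYY.MM.DD by default):

$ curl 'http://elasticsearch-node:9200/logstash-*/_search?size=1&pretty'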

Optional: Logstash contrib plugins

Starting from version 1.4 of logstash, some plugins have been separated from the project. A new project was born: logstash-contrib, gathering a lot of plugins inside one bundle.

See http://logstash.net/docs/1.4.2/contrib-plugins for more information. It’s also available on Github: https://github.com/elasticsearch/logstash-contrib.

Installation:

$ /opt/logstash/bin/plugin install contrib

Setup the Kibana node

Requirements

You’ll need a web server to use kibana; I’ve chosen apache:

$ sudo apt-get install apache2

Installation

You can retrieve the latest archive for kibana here: http://www.elasticsearch.org/overview/kibana/installation/

Download and extract the archive in your webserver root (replace VERSION with the right version):

$ wget https://download.elasticsearch.org/kibana/kibana/kibana-VERSION.tar.gz
$ tar xvf kibana-*.tar.gz -C /var/www
$ sudo mv /var/www/kibana-VERSION /var/www/kibana

Now setup the default dashboard:

$ cp /var/www/kibana/app/dashboards/default.json /var/www/kibana/app/dashboards/default.json.bak
$ mv /var/www/kibana/app/dashboards/logstash.json /var/www/kibana/app/dashboards/default.json

Update the elasticsearch value in /var/www/kibana/config.js to match your elasticsearch node:

      elasticsearch: "http://elasticsearch-host:9200",

You can now access the kibana UI: http://kibana-host/kibana

Setup the Logstash forwarder on a node

Install via repository

$ wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -
$ echo "deb http://packages.elasticsearch.org/logstashforwarder/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch.list
$ sudo apt-get update && sudo apt-get install logstash-forwarder

Configuration

You’ll need to copy the certificate you’ve generated previously on the logstash node into the same directory: /etc/ssl/logstash.pub
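
For example, something like this from the forwarder node should do (the host name is a placeholder, and the temporary copy avoids needing root for the scp itself):

$ scp user@logstash-node:/etc/ssl/logstash.pub /tmp/
$ sudo mv /tmp/logstash.pub /etc/ssl/logstash.pub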

The configuration for the logstash-forwarder is defined in the file /etc/logstash-forwarder. You can see examples in my logstash recipes posts.

Logstash recipes

Here is a list of posts I’ve made for logstash recipes; they contain both logstash and logstash-forwarder configuration samples:

You can also check my post on how to debug your logstash filters.

Enjoy your logs !

Manage logging with Logback and Apache Tomcat

This article will help people who want to abstract the logging part of their web application in the Apache Tomcat servlet container. We will put all the required libraries into Tomcat’s lib folder and tweak a few configuration files in order to deliver a WAR file without logging libraries (you may still need log4j-over-slf4j though).

Required libraries

You can retrieve the latest versions of the required libraries using the following links:

Installation procedure

  • Install the libraries into the ${CATALINA_HOME}/lib folder
$ sudo mv slf4j-api-*.jar $CATALINA_HOME/lib/
$ sudo mv jul-to-slf4j-*.jar $CATALINA_HOME/lib/
$ sudo mv logback-classic-*.jar $CATALINA_HOME/lib/
$ sudo mv logback-core-*.jar $CATALINA_HOME/lib/
$ sudo chown -R tomcat:tomcat $CATALINA_HOME/lib
  • Update the file ${CATALINA_HOME}/conf/logging.properties 

You need to replace the content of the logging.properties file.

$ sudo cp $CATALINA_HOME/conf/logging.properties $CATALINA_HOME/conf/logging.properties-bak
$ echo "handlers = org.slf4j.bridge.SLF4JBridgeHandler" | sudo tee $CATALINA_HOME/conf/logging.properties
$ sudo chown tomcat:tomcat $CATALINA_HOME/conf/logging.properties
$ sudo chmod 600 $CATALINA_HOME/conf/logging.properties
  • Create a logback configuration folder in ${CATALINA_HOME}/conf

Next step is to create a folder for the logback configuration.

$ sudo mkdir $CATALINA_HOME/conf/logback
$ sudo chown tomcat:tomcat $CATALINA_HOME/conf/logback

You can now add your logback.xml file into that folder.

  • Create a setenv.sh file

The file needs to have the following content:

CLASSPATH=$CATALINA_HOME/lib/jul-to-slf4j-VERSION.jar:$CATALINA_HOME/lib/slf4j-api-VERSION.jar:$CATALINA_HOME/lib/logback-classic-VERSION.jar:$CATALINA_HOME/lib/logback-core-VERSION.jar:$CATALINA_HOME/conf/logback/

Then put this file into the $CATALINA_HOME/bin folder.

$ sudo mv setenv.sh $CATALINA_HOME/bin
$ sudo chown tomcat:tomcat $CATALINA_HOME/bin/setenv.sh
$ sudo chmod 755 $CATALINA_HOME/bin/setenv.sh
  • Restart the Tomcat server

Now you just need to restart the server and voilà ! The logback.xml file located in $CATALINA_HOME/conf/logback will be loaded.
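
If you need a starting point, a minimal logback.xml could look like the following (the appender, file path and pattern are only an example):

<configuration>
  <!-- daily rolling file under Tomcat's logs directory -->
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${catalina.base}/logs/application.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${catalina.base}/logs/application.%d{yyyy-MM-dd}.log</fileNamePattern>
      <maxHistory>30</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE" />
  </root>
</configuration>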