Operational Logs

Saagie generates logs for all components so that IT teams can monitor component behavior.

The logs are generated from the standard output of a component. The IT team in charge of maintaining your Kubernetes cluster can collect these logs.

Saagie JVM components use the following tools to format logs:

  • Logback logging framework, with an XML file to configure common patterns.

  • Specific tools from the Elasticsearch suite, although you can use other similar tools.

JVM components use an additional configuration to modify the log levels via an XML file.

The project-k8s-controller pod uses Golang and shell scripts. It does not use a log formatting tool, but the logs it generates follow the same format as those of the other components, except for {THREAD}.

Logs are in Coordinated Universal Time (UTC) exclusively.

GDPR Compliance

Saagie complies with the General Data Protection Regulation (GDPR). Operational logs collect specific personal data from users to ensure the security and traceability of the product.

Saagie operational logs collect usernames. The collected data can be viewed in the logs of each microservice.

Log Levels

Log levels are ordered by criticality.

Table 1. Log Levels

  Level   Description

  DEBUG   A general debugging event that adds detail to a log, such as object details for Saagie support.
  INFO    The default setting. Tracks the execution of components by monitoring inputs, outputs, and instructions from the main service.
  WARN    An event that could result in an error and require intervention.
  ERROR   An error in the component.

Log Patterns

Saagie’s operational logs follow this pattern:

[OPERATIONAL-{LOG_VERSION}] {DATEFORMAT_PATTERN_UTC} {LOG_LEVEL} [{THREAD}] - {COMPONENT_NAME}[{PACKAGE.CLASS}:{LINE_NUMBER}] {LOG_MESSAGE} -[{LOG_METADATA}]- {EXCEPTION}

Where:

  • {LOG_VERSION} is the current version of the log.

  • {DATEFORMAT_PATTERN_UTC} is the time of the log. As a reminder, all log times are in UTC.

  • {LOG_LEVEL} can be INFO, DEBUG, WARN, or ERROR. For more information, see Log Levels.

  • {THREAD} is the thread name.

  • {COMPONENT_NAME} is the component for which you retrieve the logs.

  • {PACKAGE.CLASS}:{LINE_NUMBER} is the package and class, followed by the line number.

  • {LOG_MESSAGE} describes the ongoing activity.

  • {LOG_METADATA} describes the metadata, such as realm, id, and action.

  • {EXCEPTION} is the stack trace of an error. It is present only when an error occurs.

Regardless of the logging level, sensitive information such as passwords is not logged for security reasons.
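As an illustration, a hypothetical log line following this pattern can be picked apart with standard shell tools. Every value in the line below is invented for the example; only the overall layout comes from the pattern above.

```shell
# A hypothetical log line following the documented pattern (component,
# class, message, and metadata values are illustrative only).
line='[OPERATIONAL-V1] 2024-01-01T12:00:00.000Z INFO [main] - projects-and-jobs[io.saagie.Example:42] Job started -[realm=saagie, id=1, action=RUN]- '

# The log level is the third whitespace-separated field.
printf '%s\n' "$line" | awk '{print $3}'   # prints INFO
```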

Retrieving Logs

You can manually retrieve the operational logs generated by a component.

To retrieve them, run the following kubectl command:

# See the logs generated by the component's pods.
kubectl -n <namespace> logs <pod_name> | grep "OPERATIONAL" (1) (2)

Where:

1 <namespace> must be replaced with the name of your namespace.
2 <pod_name> must be replaced with the name of the pod for which you want to retrieve logs.
Components can be run on multiple pods simultaneously. Review the logs for each pod for complete information about your component.
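To review every pod of a component at once, the per-pod command can be wrapped in a small loop. This is a sketch: the label selector passed to kubectl is an assumption and must match the labels actually set on your pods.

```shell
# Collect OPERATIONAL logs from every pod of a component in one pass.
# The selector (e.g. "app=<component_name>") is an assumption; adjust it
# to the labels used in your cluster.
collect_operational_logs() {
  ns="$1"; selector="$2"
  for pod in $(kubectl -n "$ns" get pods -l "$selector" -o name); do
    echo "=== $pod ==="
    kubectl -n "$ns" logs "$pod" | grep "OPERATIONAL"
  done
}
```

For example, `collect_operational_logs saagie-common app=projects-and-jobs` (a hypothetical selector) prints the operational logs of each matching pod, separated by a header line.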

Working with Logs

There are several external tools that facilitate automatic log retrieval and make it easier for you to use the logs.

Examples

Here are a few examples that must be adapted to your needs.

  • Parse Logs

  • Multiline Logs

You can retrieve parsed logs using Logstash. The following pattern can be used by third-party applications to retrieve the relevant information:

\[%{WORD:log_type}-%{WORD:log_type_version}\] %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log_level} ?( )\[%{DATA:thread}\] - %{NOTSPACE:component_name}\[%{DATA:class}\] (?m)%{GREEDYDATA:message} -\[%{DATA:logs_metadata}\]-( (?m)%{GREEDYDATA:exception})?

We use the Logstash kv filter plugin to generate key-value pairs from the payload named logs_metadata:

  kv{
    source => "logs_metadata"
    value_split => "="
    trim_value => ","
  }
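For quick inspection outside Logstash, the effect of the kv filter can be approximated with standard shell tools. A sketch using a hypothetical log line; sed extracts the payload between -[ and ]-, and tr splits it into one key=value pair per line:

```shell
# A hypothetical log line; the -[...]- section carries the metadata.
line='[OPERATIONAL-V1] 2024-01-01T12:00:00.000Z INFO [main] - projects-and-jobs[io.saagie.Example:42] Job started -[realm=saagie, id=1, action=RUN]- '

# Extract the metadata payload and print one key=value pair per line.
printf '%s\n' "$line" | sed 's/.*-\[\(.*\)\]-.*/\1/' | tr ',' '\n' | tr -d ' '
```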

Some logs, such as the stack traces of exceptions, can span multiple lines. The following example shows how to retrieve logs, including multiline logs, using a Filebeat configuration:

  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - "/var/log/containers/*_<installationId>_*.log" (1)
      include_lines: ['^\[AUDIT-V[0-9]+]', '^\[OPERATIONAL-V[0-9]+]']
      multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
      multiline.negate: false
      multiline.match: after

    output.logstash:
      hosts: ["logstash:8080"]
      ssl.enabled: false

Where:

1 <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
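You can check what the multiline.pattern captures by running it against sample lines with grep -E (note that \b is a GNU grep extension; the stack-trace lines below are hypothetical):

```shell
# The multiline.pattern from the Filebeat configuration, tried against
# three hypothetical lines: two stack-trace continuation lines (which
# Filebeat merges into the previous event) and one fresh log line.
pattern='^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'

printf '%s\n' \
  '    at io.saagie.Example.run(Example.java:42)' \
  'Caused by: java.lang.IllegalStateException' \
  '[OPERATIONAL-V1] 2024-01-01T12:00:00.000Z INFO [main] - new event' \
| grep -c -E "$pattern"   # prints 2: only the stack-trace lines match
```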

You can now parse these logs using a tool like Logstash.

You are not limited to Elasticsearch tools; feel free to replace them with your favorite tools. For example, you can use Kibana or a similar tool to view and work with your logs.

Modifying Log Configuration

Modifying the Log Level

The XML files that configure log levels are mounted in the Kubernetes ConfigMap named <component_name>-config.

  1. Find your ConfigMap by running the following command line:

    # List all ConfigMaps with settings to modify log levels.
    kubectl -n <namespace> get configmap (1)

    Where:

    1 <namespace> must be replaced with the name of your namespace.
  2. Find your ConfigMap on the generated list.

    ConfigMaps are listed in the format <component_name>-config.
  3. Open your ConfigMap file.

    # Open the ConfigMap to modify it.
    kubectl -n <namespace> edit configmap <component_name>-config (1) (2)

    Where:

    1 <namespace> must be replaced with the name of your namespace.
    2 <component_name> must be replaced with the name of your component.
    Example

    Here is an example with the namespace saagie-common and the component projects-and-jobs:

    kubectl -n saagie-common edit configmap saagie-common-projects-and-jobs-config

  4. Modify your ConfigMap's log level. Your XML file should look like the following:

      <logger name="io.saagie" level="info" additivity="false"> (1)
        <appender-ref ref="OPERATIONAL"/>
      </logger>

    Where:

    1 The value of the level attribute is info.
  5. Change the level value to a different level, like debug.

  6. Save your changes.

  7. Restart the pod.
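For step 7, one option is to delete the pod and let the controller that manages it recreate it with the new configuration. This is a sketch assuming the component is owned by a controller such as a Deployment; a bare pod would simply be deleted.

```shell
# Delete the component's pod so that the controller managing it
# (assumed here: a Deployment or similar) recreates it, picking up
# the modified ConfigMap.
restart_component_pod() {
  ns="$1"; pod="$2"
  kubectl -n "$ns" delete pod "$pod"
}
```

For example, `restart_component_pod saagie-common projects-and-jobs-0` (a hypothetical pod name).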

Configuring Logs According to the Package or Class

Logs are highly customizable. Here are two examples of customized configurations.

Example 1. Retrieving logs only for specific packages and classes.
  <logger name="io.saagie.projectsandjobs.infra.adapter.primary.graphql.resolver" level="debug" additivity="false"> (1)
    <appender-ref ref="OPERATIONAL"/>
  </logger>
1 You only retrieve debug level logs for the package io.saagie.projectsandjobs.infra.adapter.primary.graphql.resolver.
Example 2. Setting different default log levels for the logs of different packages and classes.
<loggers>
  <logger name="io.saagie" level="info" additivity="false"> (1)
    <appender-ref ref="OPERATIONAL"/>
  </logger>

  <logger name="io.saagie.projectsandjobs.infra.adapter.primary.graphql.resolver" level="debug" additivity="false"> (2)
    <appender-ref ref="OPERATIONAL"/>
  </logger>
</loggers>
1 You retrieve info level logs for everything under io.saagie, that is, the entire platform.
2 You also retrieve debug level logs for the specific package.
You must restart the pod to apply these changes.
For more information about possible configurations, see the Logback documentation.