Operational Logs
The logs are generated from the standard output of a component. The IT team in charge of maintaining your Kubernetes cluster can collect these logs.
Saagie JVM components use the following tools to format logs:
- The Logback logging framework, with an XML file to configure common patterns.
- Specific tools from the Elasticsearch suite, but you can use other similar tools.
JVM components use an additional configuration to modify the log levels via an XML file.
The project-k8s-controller pod uses Golang and shell scripts. It does not use log formatting tools, but its generated logs follow the same format as those of other components, except for {THREAD}.
| Logs are in Coordinated Universal Time (UTC) exclusively. | 
GDPR Compliance
Saagie operational logs collect usernames. The collected data can be viewed in the logs of each microservice.
Log Levels
| Level | Description | 
|---|---|
| DEBUG | A general debugging event providing precision to a log, such as object details for Saagie support. | 
| INFO | Monitors inputs, outputs, and instructions from the main service to track the execution of components. | 
| WARN | An event that could result in an error and require intervention. | 
| ERROR | An error in the component. | 
Log Patterns
The specific pattern of Saagie's operational logs is as follows:
[OPERATIONAL-{LOG_VERSION}] {DATEFORMAT_PATTERN_UTC} {LOG_LEVEL} [{THREAD}] - {COMPONENT_NAME}[{PACKAGE.CLASS}:{LINE_NUMBER}] {LOG_MESSAGE} -[{LOG_METADATA}]- {EXCEPTION}
Where:
- {LOG_VERSION} is the current version of the log.
- {DATEFORMAT_PATTERN_UTC} is the time of the log. As a reminder, all log times are in UTC.
- {LOG_LEVEL} can be INFO, DEBUG, WARN, or ERROR. For more information, see Log Levels.
- {THREAD} is the thread name.
- {COMPONENT_NAME} is the component for which you retrieve the logs.
- {PACKAGE.CLASS}:{LINE_NUMBER} is the package and class, followed by the line number.
- {LOG_MESSAGE} describes the ongoing activity.
- {LOG_METADATA} describes the metadata, such as realm, id, and action.
- {EXCEPTION} is the stack trace of an error. It is present only in case of error.
| Regardless of the logging level, sensitive information such as passwords is not logged for security reasons. | 
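For illustration only, a log line following this pattern could look like the one below. The version, component, class, thread, message, and metadata values are hypothetical examples rather than actual Saagie output:
[OPERATIONAL-V1] 2023-05-10T14:23:45.123Z INFO [main] - projects-and-jobs[io.saagie.projectsandjobs.SomeClass:42] Job execution requested -[realm=saagie, id=1234, action=RUN_JOB]-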
Retrieving Logs
To retrieve the operational logs generated by a component, run the following kubectl command:
# See the logs generated by a component's pods.
kubectl -n <namespace> logs <pod_name> | grep "OPERATIONAL" (1) (2)
Where:
| 1 | <namespace> must be replaced with the name of your namespace. | 
| 2 | <pod_name> must be replaced with the name of the pod for which you want to retrieve logs. | 
| Components can be run on multiple pods simultaneously. Review the logs for each pod for complete information about your component. | 
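Because a component can run on several pods, you may want to loop over them. The following is a minimal shell sketch; it assumes that the pod names contain the component name, so adapt the grep filter to your naming:
# List the pods whose name contains the component name.
kubectl -n <namespace> get pods -o name | grep <component_name>

# Retrieve the operational logs of each matching pod.
for pod in $(kubectl -n <namespace> get pods -o name | grep <component_name>); do
  echo "--- $pod ---"
  kubectl -n <namespace> logs "$pod" | grep "OPERATIONAL"
done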
Working with Logs
Examples
Here are a few examples that must be adapted to your needs.
You can retrieve parsed logs using Logstash. The following pattern can be used by third-party applications to retrieve the relevant information:
\[%{WORD:log_type}-%{WORD:log_type_version}\] %{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:log_level} ?( )\[%{DATA:thread}\] - %{NOTSPACE:component_name}\[%{DATA:class}\] (?m)%{GREEDYDATA:message} -\[%{DATA:logs_metadata}\]-( (?m)%{GREEDYDATA:exception})?
We use a Logstash plugin called kv to generate key and value pairs from the payload named logs_metadata:
  kv{
    source => "logs_metadata"
    value_split => "="
    trim_value => ","
  }
Some logs can be multiline, such as stack traces of exceptions. The following example shows how to retrieve logs, including multiline logs, using a Filebeat configuration:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - "/var/log/containers/*_<installationId>_*.log" (1)
      include_lines: ['^\[AUDIT-V[0-9]+]', '^\[OPERATIONAL-V[0-9]+]']
      multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
      multiline.negate: false
      multiline.match: after
    output.logstash:
      hosts: ["logstash:8080"]
      ssl.enabled: false
Where:
| 1 | <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry. | 
You can now parse these logs using a tool like Logstash.
| You are not limited to Elasticsearch tools; feel free to replace them with your favorite tools. For example, you can use Kibana or a similar tool to view and work with your logs. | 
Modifying Log Configuration
Modifying Log Level
- Find your ConfigMap by running the following command line:
# List all ConfigMaps with settings to modify log levels.
kubectl -n <namespace> get configmap (1)
Where:
| 1 | <namespace> must be replaced with the name of your namespace. | 
- Find your ConfigMap on the generated list. ConfigMaps are listed in the format <component_name>-config.
- Open your ConfigMap file.
# Open the ConfigMap to modify it.
kubectl -n <namespace> edit configmap <component_name>-config (1) (2)
Where:
| 1 | <namespace> must be replaced with the name of your namespace. | 
| 2 | <component_name> must be replaced with the name of your component. | 
Example
Here is an example with the namespace saagie-common and the component projects-and-jobs:
kubectl -n <installationId> edit configmap saagie-common-projects-and-jobs-config
Where:
- <installationId> must be replaced with your installation ID. It must match the prefix you have determined for your DNS entry.
- Modify your ConfigMap's log level. Your XML file should look like the following:
<logger name="io.saagie" level="info" additivity="false"> (1)
  <appender-ref ref="OPERATIONAL"/>
</logger>
Where:
| 1 | The value of the level attribute is info. | 
- Change the level value to a different level, such as debug.
- Save your changes.
- Restart the pod. One way to do this is sketched below.
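The following is a minimal sketch of how you could check that your changes were saved and then restart the component. It assumes the component is managed by a Deployment with the same name as the component; adapt the resource type and names to your installation.
# Display the ConfigMap to check that your changes were saved.
kubectl -n <namespace> get configmap <component_name>-config -o yaml

# Restart the component by restarting its Deployment; the pods are recreated with the new configuration.
kubectl -n <namespace> rollout restart deployment <component_name>

# Alternatively, delete the pod so that its controller recreates it.
kubectl -n <namespace> delete pod <pod_name>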
Configuring Logs According to the Package or Class
Logs are highly customizable. Here are two examples of customized configurations.
  <logger name="io.saagie.projectsandjobs.infra.adapter.primary.graphql.resolver" level="debug" additivity="false"> (1)
    <appender-ref ref="OPERATIONAL"/>
  </logger>
| 1 | You only retrieve the debug-level logs for this specific package. | 
<loggers>
  <logger name="io.saagie" level="info" additivity="false"> (1)
    <appender-ref ref="OPERATIONAL"/>
  </logger>
  <logger name="io.saagie.projectsandjobs.infra.adapter.primary.graphql.resolver" level="debug" additivity="false"> (2)
    <appender-ref ref="OPERATIONAL"/>
  </logger>
</loggers>
| 1 | You retrieve all info-level logs for the entire platform. | 
| 2 | You also retrieve debug-level logs for the specific package. | 
| You must restart the pod to apply these changes. | 
| For more information about possible configurations, see the Logback documentation. |