
In OpenShift Aggregated Logging (https://github.com/openshift/origin-aggregated-logging) the Fluentd pipeline tries very hard to ensure that the data is correct, because the output section depends on clean data to construct the Elasticsearch index names. If the fields and values are not correct, the index name construction fails with an unhelpful error like this:

2017-09-28 13:22:22 -0400 [warn]: temporarily failed to flush the buffer. next_retry=2017-09-28 13:22:23 -0400 error_class="NoMethodError"
error="undefined method `[]' for nil:NilClass" plugin_id="object:1c0bd1c"
2017-09-28 13:22:22 -0400 [warn]: /opt/app-root/src/gems/fluent-plugin-elasticsearch-1.9.5.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:240:in `eval'
2017-09-28 13:22:22 -0400 [warn]: /opt/app-root/src/gems/fluent-plugin-elasticsearch-1.9.5.1/lib/fluent/plugin/out_elasticsearch_dynamic.rb:240:in `eval'
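
The project index names have the form project.<namespace_name>.<namespace_id>.<date>, and the elasticsearch_dynamic output builds them by eval'ing ${...} expressions against each record, which is where the eval in that traceback comes from. A rough illustration of the pattern, not the actual OpenShift output configuration:

          <match **>
            @type elasticsearch_dynamic
            # ${...} is eval'd per record; if record['kubernetes'] is nil, the
            # ['namespace_name'] lookup raises the NoMethodError shown above
            index_name project.${record['kubernetes']['namespace_name']}.${record['kubernetes']['namespace_id']}
            ...
          </match>

Any nil value along the way produces the same one-line message.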

There is no context about which field might be missing, which tag matched, or even which plugin raised the error, the operations output or the applications output (although you do get the plugin_id, which can be used to look up the actual plugin configuration if Fluentd monitoring is enabled).
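Fluentd's built-in monitor_agent input can do that plugin_id lookup. A minimal sketch of enabling it, in case it is not already in the image; this source is an addition, not part of the stock configuration:

          <source>
            @type monitor_agent
            bind 127.0.0.1
            port 24220
          </source>
          # then, from inside the Fluentd pod:
          #   curl http://localhost:24220/api/plugins.json

That tells you which output raised the error, but still not which field in which record was missing.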
One solution is to edit the logging-fluentd ConfigMap (oc edit configmap logging-fluentd) and add a stdout filter in the right place:
## matches
          <filter **>
            @type stdout
          </filter>
          @include configs.d/openshift/output-pre-*.conf
          ...

and dump the time, tag, and record just before the outputs. The problem with this is that it creates a feedback loop, since Fluentd reads its own pod log: every record the stdout filter prints becomes another record to process. The solution is to also throw away the Fluentd pod logs:
## filters
          @include configs.d/openshift/filter-pre-*.conf
          @include configs.d/openshift/filter-retag-journal.conf
          <match kubernetes.journal.container.fluentd kubernetes.var.log.containers.fluentd**>
            @type null
          </match>

This must come after filter-retag-journal.conf, which identifies and tags the Fluentd pod log records. Then restart Fluentd so the new configuration takes effect (oc delete pod $fluentd_pod, relabel the node, etc.), as sketched below.
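Assuming the logging-infra-fluentd=true node label that the Fluentd daemonset of this era selects on (the label name is an assumption here; $fluentd_pod and $node are placeholders):

          # simplest: delete the pod and let the daemonset recreate it
          oc delete pod $fluentd_pod

          # or toggle the node label that the daemonset selects on
          oc label node $node logging-infra-fluentd=false --overwrite
          oc label node $node logging-infra-fluentd=true --overwrite

The Fluentd pod log will now contain data like this: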
2017-09-28 13:44:47 -0400 output_tag: {"type":"response","@timestamp":"2017-09-28T17:44:19.524989+00:00","pid":8,"method":"head","statusCode":200,
"req":{"url":"/","method":"head","headers":{"user-agent":"curl/7.29.0","host":"localhost:5601","accept":"*/*"},"remoteAddress":"127.0.0.1","userAgent":"127.0.0.1"},
"res":{"statusCode":200,"responseTime":2,"contentLength":9},
"message":"HEAD / 200 2ms - 9.0B",
"docker":{"container_id":"e1cc1b22d04683645b00de53c0891e284c492358fd2830142f4523ad29eec060"},
"kubernetes":{"container_name":"kibana","namespace_name":"logging","pod_name":"logging-kibana-1-t9tvv",
"pod_id":"358622d8-a467-11e7-ab9a-0e43285e8fce","labels":{"component":"kibana","deployment":"logging-kibana-1",
"deploymentconfig":"logging-kibana","logging-infra":"kibana","provider":"openshift"},
"host":"ip-172-18-0-133.ec2.internal","master_url":"https://kubernetes.default.svc.cluster.local",
"namespace_id":"9dbd679c-a466-11e7-ab9a-0e43285e8fce"},...

Now, if you see a record that is missing @timestamp, or a container record that is missing kubernetes.namespace_name or kubernetes.namespace_id, you know that the exception is caused by one of those missing fields.
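
To narrow it down without reading every line, grep the stdout filter output; a rough sketch, assuming the records of interest carry the output_tag tag as in the example above, with $fluentd_pod as a placeholder:

          # records reaching the outputs without an @timestamp field
          oc logs $fluentd_pod | grep 'output_tag:' | grep -v '"@timestamp"'

          # container records missing kubernetes.namespace_name
          oc logs $fluentd_pod | grep 'output_tag:' | grep '"kubernetes"' | grep -v '"namespace_name"'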
