This has been fixed in openshift-elasticsearch-plugin, which is used with the 1.4.1/3.4.1 versions of OpenShift. If you are using an earlier version, the warning below applies.

There is no actual authentication check performed on each request. Do not use the approach below if you need real security and authentication; it is for throwaway dev environments only. If you need a secure method, you must instead use a passthrough route with mutual (i.e. client certificate) authentication.


The Elasticsearch deployed with OpenShift aggregated logging is not accessible from outside the logging cluster by default. The intention is that Kibana will be used to access the data, and the various ways to deploy/install OpenShift with logging allow you to specify the externally visible hostname that Kibana (including the one for the separate operations cluster) will use. However, many tools want to access the data directly from Elasticsearch. This post describes how to enable a route for external access to Elasticsearch.

You will first need an FQDN for Elasticsearch (and a separate FQDN for the Elasticsearch ops instance, if using the separate operations cluster). I am testing with an all-in-one (OpenShift master + node + logging components) install on an OpenStack machine, which has a private IP and hostname, and a public (floating) IP and hostname. In a real deployment, the public IP addresses and hostnames for the Elasticsearch services will need to be added to DNS.
private host: host-192-168-78-2.openstacklocal
public host, IP: run-logging-source.oshift.rmeggins.test.novalocal, 10.x.y.z

I have done the following on my local machine and on the all-in-one machine, by hacking /etc/hosts.

All-in-one machine:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal kibana.run-logging-source.oshift.rmeggins.test kibana-ops.run-logging-source.oshift.rmeggins.test es.run-logging-source.oshift.rmeggins.test es-ops.run-logging-source.oshift.rmeggins.test

My local machine:
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal run-logging-source.oshift.rmeggins.test kibana.run-logging-source.oshift.rmeggins.test kibana-ops.run-logging-source.oshift.rmeggins.test es.run-logging-source.oshift.rmeggins.test es-ops.run-logging-source.oshift.rmeggins.test
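A quick way to confirm the /etc/hosts hacking took effect is getent, which consults /etc/hosts via NSS (unlike DNS-only tools such as dig). This is just a sketch; check_host is my own name, not part of OpenShift:

```shell
# Return success if the given name resolves (via /etc/hosts or DNS).
check_host() {
    getent hosts "$1" > /dev/null
}

# check_host es.run-logging-source.oshift.rmeggins.test
```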

I set up a router after installing OpenShift:
$ oc create serviceaccount router -n default
$ oadm policy add-scc-to-user privileged system:serviceaccount:default:router
$ oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router
$ oadm router --create --namespace default --service-account=router \
     --credentials $MASTER_CONFIG_DIR/openshift-router.kubeconfig

$ oc get pods -n default
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-7z0gq   1/1       Running   0          35m
router-1-8bp88            1/1       Running   0          24m

$ oc logs -n default router-1-8bp88
I1010 19:57:57.815578       1 router.go:161] Router is including routes in all namespaces
I1010 19:57:57.922277       1 router.go:404] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).

Logging setup should have already created services for Elasticsearch:
$ oc project logging
$ oc get svc
NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
logging-es                    <none>        9200/TCP   33m
logging-es-ops                <none>        9200/TCP   33m

The route is a reencrypt route. The --dest-ca-cert argument value below is the CA cert of the CA that issued the Elasticsearch server cert; it is used to re-encrypt the connection from the router to Elasticsearch. In this case it is the same as the admin-ca cert, so we can just use that (extracted from the secret using the method from the previous posting). By default, the route will use the server cert created by the OpenShift master CA. If you want a real server cert with the actual external Elasticsearch hostname, you will need to create one; an example of how to do this with OpenShift is shown below (marked #optional). The route allows us to use username/password/token authentication to Elasticsearch - the auth is proxied through the router to SearchGuard/Elasticsearch.
$ ca=`mktemp`
$ cert=`mktemp`
$ key=`mktemp`
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-ca"}}' | base64 -d > $ca
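Before creating the route, it is worth sanity-checking that the extracted file really is a PEM certificate (a small sketch; check_pem_cert is my own helper name, not an OpenShift command):

```shell
# Fails (non-zero exit) if the file is not a parseable X.509 certificate,
# e.g. if the secret template or the base64 decode step went wrong.
check_pem_cert() {
    openssl x509 -in "$1" -noout -subject -enddate
}

# check_pem_cert "$ca"
```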
#optional - MASTER_CONFIG_DIR e.g. /etc/origin/master
$ openshift admin ca create-server-cert --key=es.key \
          --cert=es.crt --hostnames=es.fqdn.hostname \
          --signer-cert=$MASTER_CONFIG_DIR/ca.crt \
          --signer-key=$MASTER_CONFIG_DIR/ca.key \
          --signer-serial=$MASTER_CONFIG_DIR/ca.serial.txt
#optional - MASTER_CONFIG_DIR e.g. /etc/origin/master
$ openshift admin ca create-server-cert --key=es-ops.key \
          --cert=es-ops.crt --hostnames=es-ops.fqdn.hostname \
          --signer-cert=$MASTER_CONFIG_DIR/ca.crt \
          --signer-key=$MASTER_CONFIG_DIR/ca.key \
          --signer-serial=$MASTER_CONFIG_DIR/ca.serial.txt
$ oc create route -n logging reencrypt --service logging-es \
                        --port 9200 --hostname es.fqdn.hostname \
                        --dest-ca-cert=$ca
#optional - add to the command above:
#    --ca-cert=$MASTER_CONFIG_DIR/ca.crt --cert=es.crt --key=es.key
$ oc create route -n logging reencrypt --service logging-es-ops \
                         --port 9200 --hostname es-ops.fqdn.hostname \
                         --dest-ca-cert=$ca
#optional - add to the command above:
#    --ca-cert=$MASTER_CONFIG_DIR/ca.crt --cert=es-ops.crt --key=es-ops.key

I'm using the AllowAll identity provider so I can just create users/passwords with oc login (for testing):
$ more /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml
  ...
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider

I create a user called "kibtest" (I also use this user for kibana testing) that has cluster admin rights. Logging in as kibtest creates the user; I then switch back to system:admin to grant the role:
$ oc login --username=system:admin
$ oc login --username=kibtest --password=kibtest
$ oc login --username=system:admin
$ oadm policy add-cluster-role-to-user cluster-admin kibtest

I get the username and token for kibtest:
$ oc login --username=kibtest --password=kibtest
$ test_token="$(oc whoami -t)"
$ test_name="$(oc whoami)"
$ test_ip=""
$ oc login --username=system:admin

Now I can use curl like this:
$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For:" https://es.fqdn.hostname
{
  "name" : "Sugar Man",
  "cluster_name" : "logging-es",
  "version" : {
    "number" : "2.3.5",
    "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
    "build_timestamp" : "2016-07-27T10:36:52Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
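To script against this endpoint, the version number can be pulled out of the response without jq (a sketch; get_es_version is my own name, and it assumes python3 is available on the client):

```shell
# Read the Elasticsearch root-endpoint JSON on stdin, print version.number.
get_es_version() {
    python3 -c 'import json,sys; print(json.load(sys.stdin)["version"]["number"])'
}

# curl -s -k -H "X-Proxy-Remote-User: $test_name" \
#      -H "Authorization: Bearer $test_token" -H "X-Forwarded-For:" \
#      https://es.fqdn.hostname | get_es_version
```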

$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For:" https://es-ops.fqdn.hostname/.operations.*/_search?q=message:centos | python -mjson.tool | more
{
    "_shards": {
        "failed": 0,
        "successful": 1,
        "total": 1
    },
    "hits": {
        "hits": [
            {
                "_id": "AVewK5inAJ6n02oOdaIc",
                "_index": ".operations.2016.10.10",
                "_score": 11.1106205,
                "_source": {
                    "@timestamp": "2016-10-10T19:46:43.000000+00:00",
                    "hostname": "host-192-168-78-2.openstacklocal",
                    "ident": "docker-current",
                    "ipaddr4": "",
                    "ipaddr6": "fe80::42:acff:fe11:5",
                    "message": "time=\"2016-10-10T19:46:43.564686094Z\" level=in....."
...

Works the same from my local machine.
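Since every request needs the same three headers, a tiny wrapper keeps the invocations readable. This is only a sketch: es_curl_cmd is my own name, and it prints the curl command rather than running it, so you can inspect it (or eval it) first:

```shell
# Assemble the curl command for an authenticated Elasticsearch request.
# Expects $test_name and $test_token to be set as shown above.
es_curl_cmd() {
    host=$1
    path=$2
    printf 'curl -s -k -H "X-Proxy-Remote-User: %s" -H "Authorization: Bearer %s" -H "X-Forwarded-For:" https://%s%s\n' \
        "$test_name" "$test_token" "$host" "$path"
}

# es_curl_cmd es.fqdn.hostname /_cat/indices
```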


Oct. 30th, 2017 06:52 am (UTC)
PUT operation doesn't work
Hi Rich, nice post, and it was really helpful in creating an external route for the elasticsearch cluster. But a few operations fail, such as getting the list of indices, putting a repository, etc.

I am using the cluster admin user and token, but I still get the error:

{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for cluster:admin/repository/put"}],"type":"security_exception","reason":"no permissions for cluster:admin/repository/put"},"status":403}

Could you help please ?
Oct. 30th, 2017 09:12 pm (UTC)
Re: PUT operation doesn't work
What is the exact operation you are attempting? What is the URL and POST/PUT arguments?