Fluentd has a monitor input plugin: http://docs.fluentd.org/articles/monitoring

Unfortunately, the documentation is pretty scant, and some of the useful, interesting endpoints and options are not documented. I've captured some of that missing information below, and shown how it can be used to monitor the Elasticsearch output plugin.
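
To use these endpoints, the monitor_agent input plugin has to be enabled in your fluentd configuration; a minimal source block (matching the bind and port used in the examples below) looks like this:
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>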

Endpoints

/api/plugins

Provides information about each plugin in a text-based columnar format:
$ curl -s http://localhost:24220/api/plugins
plugin_id:object:1dce4b0        plugin_category:input   type:monitor_agent      output_plugin:false     retry_count:
plugin_id:object:11b4120        plugin_category:input   type:systemd    output_plugin:false     retry_count:
plugin_id:object:19fb914        plugin_category:output  type:rewrite_tag_filter output_plugin:true      retry_count:
...

/api/plugins.json

Same as /api/plugins except in JSON format:
$ curl -s http://localhost:24220/api/plugins.json | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        },
...

/api/config

Provides basic fluentd configuration information in text format:
$ curl -s http://localhost:24220/api/config
pid:19  ppid:1  config_path:/etc/fluent/fluent.conf     pid_file:       plugin_dirs:["/etc/fluent/plugin"]      log_path:

/api/config.json

Provides basic fluentd configuration information in JSON format:
$ curl -s http://localhost:24220/api/config.json | python -mjson.tool
{
    "config_path": "/etc/fluent/fluent.conf",
    "log_path": null,
    "pid": 19,
    "pid_file": null,
    "plugin_dirs": [
        "/etc/fluent/plugin"
    ],
    "ppid": 1
}

Query String Options

debug

For plugins, this will print all of the instance variables:
$ curl -s http://localhost:24220/api/plugins.json\?debug=1 | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "instance_variables": {
                "bind": "0.0.0.0",
                "emit_config": false,
                "emit_interval": 60,
...

@type

Search for a plugin by @type:
$ curl -s http://localhost:24220/api/plugins.json\?@type=monitor_agent | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        }
    ]
}

@id

Search for a plugin by @id. For example, in the above output, there is "plugin_id": "object:1dce4b0". Once you have identified the id, you can use it to display only the information for that particular plugin:
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1dce4b0 | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        }
    ]
}

tag

Match a tag and get the info from the matching output plugin. This only works on output plugins. I unfortunately don't have an example, but I suppose you could use something like this to find the output plugins whose match block matches sendtoforwarder:
$ curl -s http://localhost:24220/api/plugins.json\?tag=prefix_sendtoforwarder_suffix | python -mjson.tool
{
    "plugins": [
        {
...

Debugging the Fluentd Elasticsearch plugin


First, identify the output plugin in question to get the plugin id:
$ curl -s http://localhost:24220/api/plugins.json\?@type=elasticsearch_dynamic | python -mjson.tool
{
    "plugins": [
        {
            "buffer_queue_length": 0,
            "buffer_total_queued_size": 0,
            "config": {
                "@type": "elasticsearch_dynamic",
...
                "index_name": ".operations.${record['@timestamp'].nil? ? Time.at
(time).getutc.strftime(@logstash_dateformat) : Time.parse(record['@timestamp']).
getutc.strftime(@logstash_dateformat)}",
...
            "plugin_id": "object:1b4cc64",
...

This is the one I'm looking for, which has a plugin id of object:1b4cc64. Next, I can use the @id parameter in conjunction with the debug one to get some interesting statistics:
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
  python -mjson.tool | \
  egrep 'buffer_total_queued_size|emit_count'
            "buffer_total_queued_size": 0,
                "emit_count": 3164,

I can even put this in a simple loop to see how the queue size and emit count change over time:
$ while true ; do
  date
  curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
    python -mjson.tool | egrep 'buffer_total_queued_size|emit_count'
  sleep 1
done
Wed Dec  7 23:56:18 UTC 2016
            "buffer_total_queued_size": 0,
                "emit_count": 3318,
Wed Dec  7 23:56:21 UTC 2016
            "buffer_total_queued_size": 1654,
                "emit_count": 3322,
Wed Dec  7 23:56:23 UTC 2016
            "buffer_total_queued_size": 2146,
                "emit_count": 3324,
Wed Dec  7 23:56:25 UTC 2016
            "buffer_total_queued_size": 0,
                "emit_count": 3326,

This tells me that the plugin is working, the queues are being flushed regularly, and the emit count (roughly, the number of times fluentd flushes the queued output, which is the number of times a request is made to Elasticsearch) is steadily increasing.

The Elasticsearch deployed with OpenShift aggregated logging is not accessible from outside the logging cluster by default. The intention is that Kibana will be used to access the data, and the various ways to deploy/install OpenShift with logging allow you to specify the externally visible hostname that Kibana (including the separate operations cluster) will use. However, there are many tools that want to access the data from Elasticsearch. This post describes how to enable a route for external access to Elasticsearch.


You will first need an FQDN for Elasticsearch (and a separate FQDN for the Elasticsearch ops instance if using the separate operations cluster). I am testing with an all-in-one (OpenShift master + node + logging components) install on an OpenStack machine, which has a private IP and hostname, and a public (floating) IP and hostname. In a real deployment, the public IP addresses and hostnames for the Elasticsearch services will need to be added to DNS.
private host, IP: host-192-168-78-2.openstacklocal, 192.168.78.2
public host, IP: run-logging-source.oshift.rmeggins.test.novalocal, 10.x.y.z 

I have done the following on my local machine and on the all-in-one machine by hacking /etc/hosts. All-in-one machine:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal kibana.run-logging-source.oshift.rmeggins.test kibana-ops.run-logging-source.oshift.rmeggins.test es.run-logging-source.oshift.rmeggins.test es-ops.run-logging-source.oshift.rmeggins.test

My local machine:
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal run-logging-source.oshift.rmeggins.test kibana.run-logging-source.oshift.rmeggins.test kibana-ops.run-logging-source.oshift.rmeggins.test es.run-logging-source.oshift.rmeggins.test es-ops.run-logging-source.oshift.rmeggins.test

I set up a router after installing OpenShift:
$ oc create serviceaccount router -n default
$ oadm policy add-scc-to-user privileged system:serviceaccount:default:router
$ oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router
$ oadm router --create --namespace default --service-account=router \
     --credentials $MASTER_CONFIG_DIR/openshift-router.kubeconfig

$ oc get pods -n default
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-7z0gq   1/1       Running   0          35m
router-1-8bp88            1/1       Running   0          24m

$ oc logs -n default router-1-8bp88
I1010 19:57:57.815578       1 router.go:161] Router is including routes in all namespaces
I1010 19:57:57.922277       1 router.go:404] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
...

Logging setup should have already created services for Elasticsearch:
$ oc project logging
$ oc get svc
NAME                     CLUSTER-IP       EXTERNAL-IP   PORT(S)  AGE
logging-es               172.30.76.153    none          9200/TCP 33m
logging-es-ops           172.30.128.108   none          9200/TCP 33m

The route is a reencrypt route: TLS is terminated at the router, then reencrypted using client cert auth to Elasticsearch (SearchGuard is configured to require client cert auth). We use the admin cert/key, extracted using the method from the previous posting. This allows us to use username/password/token authentication to Elasticsearch; the auth is proxied through the router to SearchGuard/Elasticsearch.
$ ca=`mktemp`
$ cert=`mktemp`
$ key=`mktemp`
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-ca"}}' | base64 -d > $ca
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-cert"}}' | base64 -d > $cert
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-key"}}' | base64 -d > $key
$ oc create route -n logging reencrypt --service logging-es \
                        --port 9200 --hostname es.run-logging-source.oshift.rmeggins.test \
                        --dest-ca-cert=$ca --ca-cert=$ca --cert=$cert --key=$key
$ oc create route -n logging reencrypt --service logging-es-ops \
                         --port 9200 --hostname es-ops.run-logging-source.oshift.rmeggins.test \
                         --dest-ca-cert=$ca --ca-cert=$ca --cert=$cert --key=$key

I'm using the AllowAll identity provider so I can just create users/passwords with oc login (for testing):
$ more /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml
...
oauthConfig:
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider

I create a user called "kibtest" (I also use this user for kibana testing) that has cluster admin rights:
$ oc login --username=system:admin
$ oc login --username=kibtest --password=kibtest
$ oc login --username=system:admin
$ oadm policy add-cluster-role-to-user cluster-admin kibtest

I get the username and token for kibtest:
$ oc login --username=kibtest --password=kibtest
$ test_token="$(oc whoami -t)"
$ test_name="$(oc whoami)"
$ test_ip="127.0.0.1"
$ oc login --username=system:admin

Now I can use curl like this:
$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For: 127.0.0.1" https://es.run-logging-source.oshift.rmeggins.test
{
  "name" : "Sugar Man",
  "cluster_name" : "logging-es",
  "version" : {
    "number" : "2.3.5",
    "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
    "build_timestamp" : "2016-07-27T10:36:52Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For: 127.0.0.1" https://es-ops.run-logging-source.oshift.rmeggins.test/.operations.*/_search?q=message:centos | python -mjson.tool | more
{
    "_shards": {
        "failed": 0,
        "successful": 1,
        "total": 1
    },
    "hits": {
        "hits": [
            {
                "_id": "AVewK5inAJ6n02oOdaIc",
                "_index": ".operations.2016.10.10",
                "_score": 11.1106205,
                "_source": {
                    "@timestamp": "2016-10-10T19:46:43.000000+00:00",
                    "hostname": "host-192-168-78-2.openstacklocal",
                    "ident": "docker-current",
                    "ipaddr4": "172.17.0.5",
                    "ipaddr6": "fe80::42:acff:fe11:5",
                    "message": "time=\"2016-10-10T19:46:43.564686094Z\" level=in....."
...

Works the same from my local machine.

For example, let's say your OpenShift secret has been created like this:
$ oc secrets new logging-elasticsearch \
        key=$dir/keystore.jks truststore=$dir/truststore.jks \
        searchguard.key=$dir/searchguard_node_key \
        searchguard.truststore=$dir/searchguard_node_truststore \
        admin-key=$dir/${admin_user}.key admin-cert=$dir/${admin_user}.crt \
        admin-ca=$dir/ca.crt \
        admin.jks=$dir/${admin_user}.jks

Now you want to extract the CA cert:
$ oc get secret logging-elasticsearch --template='{{.data.admin-ca}}'
error: error parsing template {{.data.admin-ca}}, template: output:1: bad character U+002D '-'

It doesn't like the - character in the field name. You can work around this using index like so:
$ oc get secret logging-elasticsearch --template='{{index .data "admin-ca"}}' |base64 -d > ca
$ openssl x509 -in ca -text|more
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=logging-signer-20160915173520
        Validity
            Not Before: Sep 15 17:35:19 2016 GMT
            Not After : Sep 14 17:35:20 2021 GMT
        Subject: CN=logging-signer-20160915173520
        Subject Public Key Info:

setdefault is a very useful Python dict method.
$ python
Python 2.7.11 (default, Jul  8 2016, 19:45:00) 
[GCC 5.3.1 20160406 (Red Hat 5.3.1-6)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> dd = {}
>>> dd.setdefault('a', {}).setdefault('b', {})['c'] = 'd'
>>> dd
{'a': {'b': {'c': 'd'}}}
>>> dd.setdefault('a', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}}
>>> dd.setdefault('g', {}).setdefault('b', {})['e'] = 'f'
>>> dd
{'a': {'b': {'c': 'd', 'e': 'f'}}, 'g': {'b': {'e': 'f'}}}

You can do the same thing in ruby with a little hackery.
$ irb
irb(main):001:0> dd = {}
=> {}
irb(main):002:0> ((dd['a'] ||= {})['b'] ||= {})['c'] = 'd'
=> "d"
irb(main):003:0> dd
=> {"a"=>{"b"=>{"c"=>"d"}}}
irb(main):004:0> ((dd['a'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):005:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}}
irb(main):006:0> ((dd['g'] ||= {})['b'] ||= {})['e'] = 'f'
=> "f"
irb(main):007:0> dd
=> {"a"=>{"b"=>{"c"=>"d", "e"=>"f"}}, "g"=>{"b"=>{"e"=>"f"}}}
Using ruby 2.2.5p319 (2016-04-26 revision 54774) [x86_64-linux]
gem2rpm 0.11.3
gem 2.4.8

I'm trying to convert gems to RPMs. Unfortunately, gem2rpm -d does not separate/classify the dependencies. What I really need is a separate list of run-time dependencies. I can get this with gem spec --ruby. For example:
$ gem spec --ruby systemd-journal-1.2.2.gem
# -*- encoding: utf-8 -*-
# stub: systemd-journal 1.2.2 ruby lib

Gem::Specification.new do |s|
  s.name = "systemd-journal"
  s.version = "1.2.2"
...
  if s.respond_to? :specification_version then
    s.specification_version = 4

    if Gem::Version.new(Gem::VERSION) >= Gem::Version.new('1.2.0') then
      s.add_runtime_dependency(%q<ffi>, ["~> 1.9.0"])
      s.add_development_dependency(%q<rspec>, ["~> 3.1"])
      s.add_development_dependency(%q<simplecov>, ["~> 0.9"])
      s.add_development_dependency(%q<rubocop>, ["~> 0.26"])
      s.add_development_dependency(%q<rake>, ["~> 10.3"])
      s.add_development_dependency(%q<yard>, ["~> 0.8.7"])
      s.add_development_dependency(%q<pry>, ["~> 0.10"])
    else

So I need to add Requires: rubygem(ffi) to the spec.
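
If you just want the runtime dependency list in a form you can paste into a spec file, a one-liner like this should work (a sketch using the RubyGems API directly; the gem file is the one from the example above):
$ ruby -rrubygems/package -e '
    spec = Gem::Package.new("systemd-journal-1.2.2.gem").spec
    spec.runtime_dependencies.each { |d| puts "Requires: rubygem(#{d.name}) #{d.requirement}" }
  '
Requires: rubygem(ffi) ~> 1.9.0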

I've been having this problem on Fedora 23 with docker 1.9.1 build ee06d03/1.9.1.  When I would use docker pull, it would give me a cert error:
 # docker pull some/image:tag
 Trying to pull repository docker.io/some/image ... failed
 Error while pulling image: Get https://index.docker.io/v1/repositories/some/image/images: x509: certificate signed by unknown authority
Not sure why docker can't just use the system cert bundle. Looking at the code (https://github.com/docker/docker/blob/1061c56a5fc126a76344ea9dca9aa5f5e75eb902/registry/registry.go#L102), docker looks under /etc/docker/certs.d/$hostname for a CA cert bundle. So I just did this:
 # cd /etc/docker/certs.d
 # mkdir docker.io
 # cd docker.io
 # ln -s /etc/pki/tls/certs/ca-bundle.crt
 # ln -s /etc/pki/tls/certs/ca-bundle.trust.crt
 # systemctl restart docker
Now docker pull works fine for the Docker Hub repo.

ViaQ - https://github.com/ViaQ

Modern environments become more and more complex every year. When many applications and services collaborate to perform a single task, finding the cause of a problem is like looking for a needle in a haystack, and good tools are needed to help. There are some that do a very good job of collecting logs, alerts, or notifications, but they focus on a specific problem and not on the problem space as a whole. Collecting just logs, alerts, or statistical data is not enough. There needs to be a way to combine the data and let it speak, so that data from many different applications can be correlated from end to end, and from high to low levels. ViaQ is a new project that aims to create a framework for connecting data aggregation, processing, and analytic technologies that already exist into a coherent and flexible solution adaptable to multiple use cases.

There are some efforts that we want to leverage:

  • OpenShift has begun shipping an EFK stack as containers - we want to leverage this work to provide our solution as containers, but perhaps not dependent on OpenShift

  • There has been a lot of investigation of collecting event data such as logs using a message bus and feeding that data into analysis tools such as Apache Storm and Apache Spark - we would like to use a message bus based approach so that we can not only feed data to an EFK stack but at the same time feed data to an analytics tool, data warehouses, or any other application requiring a live stream of data

  • There has been a lot of work done to describe a common data format so that logs from OpenStack (all of the various components and log formats if different from oslo logging), Ceph/Gluster, and syslog can be correlated together (e.g. timestamps, hostnames, node identifiers, etc.)

  • Use the new CentOS infrastructure to build upstream images based on CentOS, use the CentOS CI, and eventually use the CentOS container image build and repository systems

Please check out https://github.com/ViaQ/efk-atomicapp for an example application using atomic, or https://github.com/ViaQ/integration-tests for a simple-test.sh shell script which uses just plain Docker containers.

Demonstration



This is a demonstration of how to use RHEL Identity Management to automatically join VMs created with OSP7 (OpenStack) Nova, automatically assign new VMs to hostgroups, and automatically create DNS records when a floating IP address is assigned to a VM.

NOTE: The demo shows the ipaotp in the server instance metadata. The latest code at https://github.com/richm/rdo-vm-factory/blob/master/rdo-ipa-nova uses the inject_files method to inject a file containing the OTP into the new VM, which means the OTP is not available to be queried, and the VM can erase it as soon as possible.


How it works


OpenStack Nova provides hooks (http://docs.openstack.org/developer/nova/hooks.html) which allow developers to write custom code, using the internal Nova APIs, that performs actions in response to Nova operations. The demonstration makes use of the build_instance and instance_network_info hooks. Here is the source of the hook implementation: https://github.com/richm/rdo-vm-factory/blob/master/rdo-ipa-nova/novahooks.py.

The build_instance.pre hook calls Identity Management with the host-add command. This essentially "reserves" a slot for the new host, but the new host will not be fully joined (i.e. able to use Kerberos, SSH, SSSD, etc.) until ipa-client-install completes. The build_instance.pre hook creates the parameters that it needs to pass as arguments to host-add. It generates a One Time Password (OTP) and stores the OTP as a file named "/tmp/ipaotp" in the list of injected files in the new VM. This allows the VM to pass the OTP as the -w argument of ipa-client-install, then delete the OTP after it has been used. The OTP is used as the userpassword parameter for the host-add call.

The ipaclass metadata item was set by using the --property argument with openstack server create (an illustrative command is shown after this paragraph). The value of that item becomes the userclass parameter for the host-add call, which in the demo is used to automatically assign the new VM to a hostgroup. The fully qualified hostname is constructed by using the VM name as the leftmost component of the FQDN; the domain is the Nova dhcp_domain setting if available, or an IPA-specific domain configuration parameter. The force parameter is set to True because we want host-add to add the host even though we don't have a "real" public IP address yet, only the private IP address assigned by OpenStack networking. The other parameters are provided to show what options are available when calling host-add.
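
For reference, setting the ipaclass metadata item when creating a VM would look something like this (the image, flavor, and value here are illustrative, not taken from the demo):
$ openstack server create --image rhel-guest-image --flavor m1.small \
      --property ipaclass=webserver myvm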

The VM image provided in the demo uses cloud-init, and Nova has been set up to provide certain data for the VM to use with cloud-init to call ipa-client-install with the OTP. The demo sets up the Nova vendordata_jsonfile_path with a JSON file containing the list of Identity Management client packages to install in the VM, and a runcmd to run a shell script that will run ipa-client-install. The build_instance.pre hook has been configured to add that shell script to the list of injected files in the new VM. The shell script extracts the OTP from /tmp/ipaotp, erases the file, then runs ipa-client-install -w $ipaotp -U. Once this command completes successfully, the VM is fully joined to Identity Management, and users can SSH into the new machine.
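
A minimal sketch of what that injected script does (the real script lives in the rdo-vm-factory repository; this just illustrates the OTP handling described above):
#!/bin/sh
# read the OTP that the build_instance.pre hook injected, then remove it
ipaotp=$(cat /tmp/ipaotp)
rm -f /tmp/ipaotp
# enroll the host with Identity Management unattended, using the OTP
ipa-client-install -w "$ipaotp" -U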

The instance_network_info.post hook is called after Nova handles networking-related events. If the hook detects that there is a floating IP assignment, it calls dnsrecord-add to add the record for the floating IP address to the host in Identity Management.
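
The effect is roughly what you would get by running dnsrecord-add by hand; for example (the zone, hostname, and address here are illustrative):
$ ipa dnsrecord-add example.test myvm --a-rec=10.16.19.66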

Configuration


The hook uses a file called /etc/nova/ipaclient.conf to store its configuration. It requires the following configuration parameters (an example is sketched after this list):

  • service_name - The name of the Kerberos principal of the Identity Management HTTP JSON API service

  • url - The URL of the Identity Management HTTP JSON API service

  • cacert - The name of the file containing the certificate of the CA of the Identity Management HTTP JSON API service

  • keytab - The hook requires a user account in Identity Management that has the ability to add hosts and create DNS records.  The hook must be provided with a keytab file for this user.

  • connect_retries - How many times the hook will retry an API call

  • json_rpc_version - The version of the Identity Management HTTP JSON API that the hook is using

  • inject_files - Files to inject into the VM. The format is "/localpath/to/file[ /path/to/file/invm]".  If /path/to/file/invm is not given, then the path in the VM is assumed to be the same as the path in the local machine.
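
Putting the parameters together, a hypothetical /etc/nova/ipaclient.conf might look like the following; all values (and the INI-style layout) are illustrative, so check the novahooks.py source for the exact format:
[DEFAULT]
service_name = HTTP/ipa.example.test@EXAMPLE.TEST
url = https://ipa.example.test/ipa/json
cacert = /etc/nova/ipa-ca.crt
keytab = /etc/nova/ipauser.keytab
connect_retries = 3
json_rpc_version = 2.156
inject_files = /etc/nova/ipaclient-setup.sh /tmp/ipaclient-setup.sh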

New project: Nunc Stans

The poll()-based event framework for 389 is being revamped to use a new framework based on the Nunc Stans project. Nunc Stans is a thread-pool-based event framework wrapper. That is, it provides a layer on top of event frameworks like libevent, tevent, libev, etc., or directly on top of epoll() (if we didn't want to use an event framework). Nunc Stans has some features that are interesting to 389:

  • It provides thread safe access to the event framework, allowing the use of non-thread safe event frameworks.

  • It provides timed I/O events to facilitate idle timeouts.

  • It provides the ability to run event callbacks in a thread pool or in the main event loop thread.

  • It uses lock free data structures to reduce thread contention.

The project is hosted at http://fedorahosted.org/nunc-stans. Information about how 389 plans to use nunc-stans is here: http://www.port389.org/docs/389ds/design/nunc-stans.html

By default, the Red Hat/Fedora/RDO openstack-keystone package provides a systemd service file, openstack-keystone.service, for managing the eventlet-based Keystone daemons. If you configure Keystone to run under httpd via Apache mod_wsgi, openstack-keystone.service no longer works, and you get strange errors if you try to use it:
 # systemctl status openstack-keystone
 Failed: error ....
 # systemctl start openstack-keystone
 Failed: port is in use: 35357
 Failed: port is in use: 5000
 ... other errors ...

See https://bugzilla.redhat.com/show_bug.cgi?id=1213149 for more details.

The service doesn't know that Keystone is no longer a standalone service, but a webapp controlled by httpd. You can use this "trick" to "alias" openstack-keystone to httpd:
 # ln -s /usr/lib/systemd/system/httpd.service /etc/systemd/system/openstack-keystone.service
 # systemctl daemon-reload

This will override the /usr/lib/systemd/system/openstack-keystone.service provided with the openstack-keystone package. Now, when you execute a systemctl command for openstack-keystone, it is redirected to httpd, and you will see the status of httpd instead.
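
Once the symlink is in place, systemctl commands against the openstack-keystone name should act on httpd; for example:
 # systemctl status openstack-keystone
 # systemctl restart openstack-keystone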