A company in the Streaming & Media sector needed a new monitoring system for its Oracle Database.
PREREQUISITES
- A Kubernetes cluster up and running.
- The Prometheus Operator set up and working.
- The database running on a separate VM, on bare metal, or on a cloud provider offering, with network connectivity from the Kubernetes cluster to the DB.
- SSH access to the DB host.
THE CHALLENGE
We all know what the Prometheus Operator is and, more or less, how to use it to monitor services running inside a Kubernetes cluster. But what if we have a database running outside the cluster, on bare metal or on a separate virtual machine?
Here is how we accomplished this.
THE SOLUTION
First, SSH to the database host and download the Prometheus node exporter (version 1.0.1 at the time of writing):
# wget https://github.com/prometheus/node_exporter/releases/download/v1.0.1/node_exporter-1.0.1.linux-amd64.tar.gz
and extract it:
# tar zxvf node_exporter-1.0.1.linux-amd64.tar.gz
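The systemd unit below expects the node_exporter binary under /opt/node_exporter (that path is taken from its ExecStart line), so move the extracted directory there:
# mv node_exporter-1.0.1.linux-amd64 /opt/node_exporter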
Prepare a systemd service:
# cat /etc/systemd/system/node_exporter.service
[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
ExecStart=/opt/node_exporter/node_exporter

[Install]
WantedBy=default.target
Reload systemd so it picks up the new unit:
# systemctl daemon-reload
Start and enable the service:
# systemctl start node_exporter
# systemctl enable node_exporter
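At this point the exporter should be serving metrics on its default port, 9100. A quick local check (piped through head just to confirm that metrics are coming back):
# curl -s http://localhost:9100/metrics | head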
So far, so good. Note that with some cloud providers, if the firewall is not configured correctly, this service may end up exposed to the public internet. Check whether the exporter is reachable from outside; if it is, lock it down with firewall rules, either at the cloud provider level or on the host itself, since you most likely do not want it publicly exposed. Example of adding a rule on the host:
# iptables -I INPUT -p tcp -s [kubernetes_ip_in_cidr] -m tcp --dport 9100 -j ACCEPT
# service iptables save
# service iptables reload
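To confirm the rule behaves as intended, you can try reaching the exporter from a machine outside the allowed CIDR (the address below is a placeholder); the connection should time out, while the same request from inside the Kubernetes cluster should still succeed:
# curl --connect-timeout 5 http://[public_ip_of_the_db]:9100/metrics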
Now we need a service that Prometheus can scrape. But here is the hidden pitfall: which Service should we use when the database runs outside the Kubernetes cluster, and how do we reach it? This is where a Kubernetes Service of type 'ExternalName', combined with a manually defined 'Endpoints' object and a standard Prometheus 'ServiceMonitor', comes in.
---
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
spec:
  type: ExternalName
  externalName: [ip_address_of_the_db]
  clusterIP: ""
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
    targetPort: 9100
---
apiVersion: v1
kind: Endpoints
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    k8s-app: node-exporter
subsets:
- addresses:
  - ip: [ip_address_of_the_db]
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    k8s-app: node-exporter
    release: prometheus
  name: node-exporter
  namespace: monitoring
spec:
  endpoints:
  - honorLabels: true
    interval: 5s
    path: /metrics
    port: metrics
    relabelings:
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    scheme: http
    scrapeTimeout: 3s
  jobLabel: node-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      k8s-app: node-exporter
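Assuming the three manifests above are saved to a single file (the filename below is just an example), apply them and verify that the Endpoints object points at the database host:
# kubectl apply -f node-exporter-external.yaml
# kubectl -n monitoring get endpoints node-exporter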
The important part here is that the Service and the Endpoints must have the same name, so Kubernetes 'knows' how to direct traffic through that Service. It also makes sense to create these resources in the same Kubernetes namespace where Prometheus is running ('monitoring' by default). Open your Prometheus dashboard and, after a short while, you should see the new metrics from the database host.
Those are the host metrics, but what about the DB metrics? For these we can use one of the Oracle DB exporters available for Prometheus, such as https://github.com/iamseth/oracledb_exporter. Let's see how that looks:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracledb-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: oracledb-exporter
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: oracledb-exporter
        release: oci-exporter
    spec:
      containers:
      - env:
        - name: DATA_SOURCE_NAME
          value: [dbusername]/[dbpassword]@//[service_address_of_the_database]:1521/[dbservice]
        image: iamseth/oracledb_exporter:latest
        imagePullPolicy: IfNotPresent
        name: oracledb-exporter
        ports:
        - containerPort: 9161
          name: scrape
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oracledb-exporter
  name: oracledb-exporter
  namespace: monitoring
spec:
  ports:
  - name: scrape
    port: 9161
    protocol: TCP
    targetPort: 9161
  selector:
    app: oracledb-exporter
    release: oci-exporter
  type: ClusterIP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    release: prometheus
  name: oracledb-exporter
  namespace: monitoring
spec:
  endpoints:
  - honorLabels: true
    interval: 15s
    path: /metrics
    port: scrape
    relabelings:
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    scheme: http
    scrapeTimeout: 10s
  jobLabel: oracledb-exporter
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app: oracledb-exporter
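Before wiring up dashboards, it is worth a quick sanity check that the exporter can actually reach the database. One way (a sketch, assuming the resources above are deployed in the monitoring namespace) is to port-forward to the Deployment and read its metrics endpoint; the exporter exposes an oracledb_up metric that should report 1 when the connection works:
# kubectl -n monitoring port-forward deploy/oracledb-exporter 9161:9161 &
# curl -s http://localhost:9161/metrics | grep oracledb_up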
For every Prometheus 'ServiceMonitor' it is important to carry the 'release: prometheus' label, as this is the label the operator watches for in order to pick the monitor up. If you like, you can import Grafana dashboards for both exporters. The node exporter should work as is: its targets are labelled with the database host IP, which becomes selectable in the default Prometheus/Grafana node dashboards. For the database exporter you will need a separate dashboard and some fine-tuning of the queries built into the default Docker image, but with a little help from a DBA you can end up with fully working DB queries. In our case, the whole setup is automated via a Helm chart for easy deployment updates.
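As an illustration of that fine-tuning: the oracledb_exporter image can load extra queries from a custom metrics file, typically referenced through the CUSTOM_METRICS environment variable (check the project README for the exact mechanism in your version). A minimal, hypothetical example with a single trivial query:
# cat custom-metrics.toml
[[metric]]
context = "example"
metricsdesc = { value = "Trivial custom query that always returns 1." }
request = "SELECT 1 AS value FROM DUAL"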
THE CONCLUSION
You can follow our monitoring procedures as described here. If you have questions about any step of this configuration, contact us and we will get back to you with answers.