In a Kubernetes environment, monitoring is essential for ensuring the health and performance of the cluster and its applications. Prometheus, a popular monitoring tool, relies on access to the Kubernetes API to function effectively. For Prometheus (or any similar monitoring tool) to perform its duties, it must be granted access to the Kubernetes API through a service account with the appropriate permissions.
Prometheus needs access to the Kubernetes API to dynamically discover resources, such as pods, services, and nodes, which it monitors. By querying the Kubernetes API, Prometheus can collect metadata about these resources, allowing it to identify where to scrape metrics. This capability is critical in a dynamic environment like Kubernetes, where resources are constantly being created, updated, or deleted.
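As a concrete illustration, this dynamic discovery is what Prometheus's `kubernetes_sd_configs` block enables. A minimal scrape-configuration sketch (the job name is illustrative):

```yaml
scrape_configs:
  - job_name: "kubernetes-pods"   # illustrative job name
    kubernetes_sd_configs:
      - role: pod                 # discover every pod via the Kubernetes API
```

With `role: pod`, Prometheus watches the API server for pod changes and updates its scrape targets automatically, which is exactly why it needs API access in the first place.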
When deploying Prometheus in a Kubernetes cluster, it typically uses a service account associated with specific permissions defined through Kubernetes Role-Based Access Control (RBAC) policies. However, the details of how the Prometheus pod actually communicates with the Kubernetes API are usually hidden from view.
To gain a deeper understanding of how this communication works, let's explore the process of accessing the Kubernetes API from within a pod, using concrete examples.
Every pod in Kubernetes is automatically assigned a service account unless specified otherwise. This service account is linked to a token, which is automatically mounted inside the pod at the path `/var/run/secrets/kubernetes.io/serviceaccount/token`. This token is a JSON Web Token (JWT) that the pod uses to authenticate itself when communicating with the Kubernetes API.
Alongside the token, Kubernetes also mounts the cluster’s Certificate Authority (CA) certificate inside the pod at `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`. This CA certificate is used to verify the identity of the Kubernetes API server, ensuring that the communication between the pod and the API server is secure and trusted.
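Because the token is a JWT, you can decode its payload to inspect the identity claims it carries. Below is a minimal Python sketch; it decodes without verifying the signature, and the exact claims present vary by cluster and Kubernetes version:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT without verifying it."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Inside a pod you would read the token from the mounted path, e.g.:
#   with open("/var/run/secrets/kubernetes.io/serviceaccount/token") as f:
#       print(jwt_payload(f.read().strip()))
```

On a service account token, the `sub` claim typically looks like `system:serviceaccount:<namespace>:<name>`, which is the identity the API server sees.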
When the Prometheus pod needs to interact with the Kubernetes API, it makes HTTPS requests to the API server, typically reachable at `https://kubernetes.default.svc`. This DNS name resolves to the cluster IP of the `kubernetes` service in the `default` namespace, which routes traffic to the API server.
The pod includes its service account token in the `Authorization` header of each request to authenticate itself. The connection itself is made over TLS, and the pod uses the mounted CA certificate to verify the identity of the API server. Together, these provide both authentication and secure communication.
Here’s an example of how this interaction can be executed using `curl`:
```shell
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://kubernetes.default.svc/api/v1/namespaces/default/pods
```
Let's break down the command:
- `--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt`: Specifies the CA certificate to verify the API server's identity.
- `--header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"`: Adds the service account token to the request, allowing the pod to authenticate with the API server.
- `https://kubernetes.default.svc/api/v1/namespaces/default/pods`: This is the endpoint in the Kubernetes API that lists the pods in the `default` namespace.
The result of this `curl` command would be a JSON response containing information about the pods in the `default` namespace.
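The same interaction can be expressed in a few lines of Python using only the standard library. This is a sketch under the assumptions described above (default in-cluster mount paths and service DNS name); the `list_pods` function only works when run inside a pod:

```python
import json
import ssl
import urllib.request

# Default in-cluster credential paths and API endpoint, as described above.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
API_SERVER = "https://kubernetes.default.svc"

def build_pod_list_request(token: str, namespace: str = "default") -> urllib.request.Request:
    """Build the same HTTPS request the curl example sends."""
    url = f"{API_SERVER}/api/v1/namespaces/{namespace}/pods"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})

def list_pods(namespace: str = "default") -> dict:
    """Read the mounted credentials and list pods (only works inside a pod)."""
    with open(f"{SA_DIR}/token") as f:
        token = f.read().strip()
    # Trust the API server via the mounted cluster CA certificate.
    ctx = ssl.create_default_context(cafile=f"{SA_DIR}/ca.crt")
    with urllib.request.urlopen(build_pod_list_request(token, namespace), context=ctx) as resp:
        return json.load(resp)
```

In production you would normally reach for an official client library instead, but this spells out exactly what those libraries do under the hood: read the token, trust the CA, send a bearer-authenticated HTTPS request.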
In the example above, we assumed that the service account had the permissions required to query the API endpoint. For a monitoring application such as Prometheus, you would create a custom cluster role that allows the service account to read certain resources without modifying them. Typically, you would use a command like this:
```shell
oc create clusterrole prometheus-cluster-role \
  --verb=get,list,watch \
  --resource=pods,services,endpoints,nodes,namespaces
```
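The imperative `oc` command corresponds roughly to the declarative manifest below. The `ClusterRoleBinding` at the end is a hypothetical example that grants the role to a `prometheus` service account in a `monitoring` namespace; adjust the names to match your deployment:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-cluster-role
rules:
  - apiGroups: [""]   # the core API group
    resources: ["pods", "services", "endpoints", "nodes", "namespaces"]
    verbs: ["get", "list", "watch"]   # read-only: no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-cluster-role
subjects:
  - kind: ServiceAccount
    name: prometheus        # illustrative service account name
    namespace: monitoring   # illustrative namespace
```

Without the binding, the role grants nothing: it is the binding that connects the permissions to the service account whose token is mounted in the Prometheus pod.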