
Scale Kubernetes pods based on domain events

Some domain events can trigger many users hitting a website at once, for example when a notification about popular content is sent out to a large number of devices. At the moment the notifications are sent, it is already clear that a scale-up will be needed. Instead of waiting for users to hit the website and scaling up then, why not scale up beforehand?

Domain events

Before a notification is sent out, count the number of target devices and increase a notification_scheduled counter by that amount.
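As a minimal sketch of that step, using the official Python Prometheus client (the function and metric names here are illustrative, not from this note):

```python
# Count the devices and increment the counter *before* sending,
# so the autoscaler can react ahead of the traffic spike.
from prometheus_client import Counter, REGISTRY

notification_scheduled = Counter(
    "notification_scheduled",
    "Number of devices a notification was scheduled for",
)

def schedule_notification(device_tokens):
    # Record the expected load first ...
    notification_scheduled.inc(len(device_tokens))
    # ... then hand the tokens over to the actual push service here.

schedule_notification(["token-a", "token-b", "token-c"])
print(REGISTRY.get_sample_value("notification_scheduled_total"))  # -> 3.0
```

In a real application the counter would be exposed on a metrics endpoint that Prometheus scrapes.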


To scale based on a metric, the metric needs to be made available to Kubernetes. With Prometheus, that can be done using the k8s-prometheus-adapter.

The Prometheus query is made available as notification_scheduled in the k8s-prometheus-adapter configuration.

- seriesQuery: '{__name__=~"^website:notification_scheduled$"}'
  name:
    as: "notification_scheduled"
  metricsQuery: 'website:notification_scheduled'

To check whether it works, the raw external metrics can be retrieved like this:

kubectl get --raw /apis/external.metrics.k8s.io/v1beta1 | jq .

This should list the new metric. If it does not show up, the metric may not be available in Prometheus. The debug mode of the k8s-prometheus-adapter helps to find the exact query that is executed; to enable it, start the container with the argument --v=10.

It can also happen that the metric sometimes shows up and is sometimes missing. In runtimes such as PHP, where each request starts from a clean state, metrics are not registered ahead of time: after a new deployment, a metric only exists once it has been incremented at least once. To work around that, a recording rule can be defined in Prometheus that periodically evaluates a query and provides a default value when the metric is missing. Such a rule can be added to the Prometheus configuration:

groups:
- name: website
  interval: "1s"
  rules:
  - record: website:notification_scheduled
    expr: sum(avg_over_time(notification_scheduled[1m])) or vector(0)


Kubernetes has a Horizontal Pod Autoscaler (HPA) that is responsible for scaling based on metrics, and HPAs can use external metrics to scale.
A condition can be added to the HPA to scale up when the average value of a domain metric rises above a target.

- type: External
  external:
    metricName: notification_scheduled
    # The targetAverageValue controls the scaling factor.
    # Think of it as "how many notifications one pod can handle".
    # Example: for a spike of 10 000 notifications,
    # 10 000 / targetAverageValue pods will be started.
    targetAverageValue: 600
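Put together, a complete HPA manifest might look like the sketch below. The resource names, replica bounds, and the older autoscaling/v2beta1 API version are assumptions; adjust them to the cluster at hand.

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: website            # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: website          # assumed deployment name
  minReplicas: 2
  maxReplicas: 30
  metrics:
  - type: External
    external:
      metricName: notification_scheduled
      targetAverageValue: 600
```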

The average will eventually go down again; pods can then be scaled down if other conditions allow it.

Measure success

When there is a notification spike, response times should stay roughly constant because additional instances are started to handle the traffic. To verify that the changes work, these metrics can be put next to each other:

  • Request queuing duration
  • Requests per minute
  • Number of pods started
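As a sketch, those three metrics could be charted side by side with Prometheus queries along these lines. The metric names (request_queue_duration_seconds, http_requests_total, kube_pod_info) and labels are assumptions; the actual names depend on the instrumentation and on kube-state-metrics being installed.

```
# Request queuing duration (p95), assuming a histogram metric exists:
histogram_quantile(0.95, sum(rate(request_queue_duration_seconds_bucket[5m])) by (le))

# Requests per minute:
sum(rate(http_requests_total{app="website"}[1m])) * 60

# Number of running pods, via kube-state-metrics:
count(kube_pod_info{created_by_name=~"website.*"})
```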

See Optimizing auto-scaling on Kubernetes for more general optimizations of auto-scaling.



© 2021 by Adrian Philipp