Managing Configuration

Applications often need more than just code to run — they also need configuration data. In OpenShift, there are multiple ways to supply this configuration to your workloads in a secure, flexible, and dynamic way.

In this module, we’ll explore how you can use:

  • ConfigMaps to inject environment-specific data.

  • Secrets to safely store sensitive values like API keys or credentials.

  • Persistent Volumes to manage state and external configuration files.

These tools give you powerful ways to separate configuration from code — which is key for portability, reuse, and secure operations.

ConfigMaps

ConfigMaps are a way to provide non-sensitive configuration data to your applications. Think of them as key-value pairs you can inject into your workloads — without baking that data into your container image.

You can use a ConfigMap in two main ways:

  • As environment variables — values from the ConfigMap are exposed as environment variables inside your container.

  • As mounted files — the key-value pairs are mounted into the container’s filesystem, with each key becoming a filename and each value becoming the file’s contents.

Using ConfigMaps as Environment Variables

This is one of the most common approaches. For example, you might create a ConfigMap like this:

oc create configmap app-config \
  --from-literal=APP_MODE=production \
  --from-literal=LOG_LEVEL=debug

And then reference it in a deployment:

envFrom:
  - configMapRef:
      name: app-config

Inside your container, APP_MODE and LOG_LEVEL will be available just like any other environment variables.
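
You can confirm this from inside a running pod. A quick check, assuming a Deployment named my-app that uses the envFrom block above (the name is hypothetical):

oc exec deploy/my-app -- env | grep -E 'APP_MODE|LOG_LEVEL'
# APP_MODE=production
# LOG_LEVEL=debug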

But there’s a catch: environment variables are baked into the container at startup. That means if you update the ConfigMap later, running pods won’t see the new values. You’ll need to redeploy the pod to pick up the change.
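
The usual way to force that is a rollout restart, again assuming the hypothetical my-app Deployment:

oc rollout restart deployment/my-app

Each new pod will then read the current ConfigMap values at startup.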

Using ConfigMaps as Files

If you mount the ConfigMap as a volume, each key becomes a file. For example:

volumeMounts:
  - name: config-volume
    mountPath: /etc/config

volumes:
  - name: config-volume
    configMap:
      name: app-config

Inside the container, you’ll get files like /etc/config/APP_MODE and /etc/config/LOG_LEVEL.

In this case, OpenShift will update the files on disk if the ConfigMap changes — but not immediately. These updates depend on the kubelet's sync frequency (the syncFrequency setting), which defaults to one minute. If the ConfigMap is updated in the cluster, the kubelet will eventually detect the change and update the mounted files on the node.

You can find the current sync frequency with the following two commands:

In one terminal:

# Don't forget to turn this off when you are done!
oc proxy --port 8001

In a second terminal (replace NAME_OF_NODE with an actual node name from oc get nodes):

curl "http://localhost:8001/api/v1/nodes/NAME_OF_NODE/proxy/configz"

This setting can be modified by a cluster administrator using the KubeletConfig resource.
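
As an illustration, here is a sketch of a KubeletConfig that lowers the sync frequency on worker nodes. The pool selector label shown is the standard worker pool label, but verify it against your cluster, and note that applying a KubeletConfig causes the Machine Config Operator to roll the affected nodes:

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: faster-sync
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/worker: ""
  kubeletConfig:
    syncFrequency: 30s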


Here are three resources you can use to test this behavior.

Original ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo
  namespace: default
data:
  foo: bar
binaryData: {}
immutable: false

Pod with environment and file mounts:
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: httpd
  namespace: default
spec:
  volumes:
    - name: configmap
      configMap:
        name: foo
  containers:
    - name: httpd
      image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
      envFrom:
        - configMapRef:
            name: foo
      ports:
        - containerPort: 8080
      volumeMounts:
        - mountPath: /mnt/configmap
          name: configmap

Updated ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo
  namespace: default
data:
  foo: bar-bar
binaryData: {}
immutable: false

Despite this automatic sync, it’s still up to your application to watch for changes and reload them — OpenShift won’t restart the pod automatically.
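
With the example pod running, you can watch the two behaviors diverge after applying the updated ConfigMap:

oc exec example -- printenv foo               # still prints "bar" (env vars don't refresh)
oc exec example -- cat /mnt/configmap/foo     # prints "bar-bar" once the kubelet syncs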

When choosing between environment variables and files, consider how your application loads and refreshes configuration. For static config, environment variables are fine. For dynamic updates, file mounts give you more flexibility — if your app can react to changes.

Secrets

Despite the name, Kubernetes Secrets aren’t as secret as you might think.

By default, a Secret is just a base64-encoded blob stored in etcd — which means it’s not encrypted unless you explicitly enable encryption at rest. Anyone with the right permissions (or enough access to the cluster) can decode and read its contents.
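
It's easy to verify. A sketch using a throwaway Secret (the name and value are made up):

oc create secret generic demo-secret --from-literal=password=s3cr3t
oc get secret demo-secret -o jsonpath='{.data.password}' | base64 -d
# s3cr3t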

That said, Secrets are still a useful tool for storing sensitive data like:

  • API keys

  • Database credentials

  • TLS certificates

Just like ConfigMaps, you can mount Secrets into a container in two main ways:

  • As environment variables — where each key becomes an environment variable.

  • As files via volume mounts — where each key is exposed as a file in a specified path.
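
Here's a sketch of a pod spec showing both patterns side by side, using a hypothetical Secret named db-credentials with a password key:

apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
  containers:
    - name: app
      image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
      volumeMounts:
        - mountPath: /etc/creds
          name: creds
          readOnly: true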

But remember: just mounting a Secret doesn’t make it secure — especially if the pod or container has elevated access.

Better Secret Management

For production workloads, especially in regulated or security-conscious environments, it’s common to avoid storing long-lived secrets directly in the cluster.

Instead, you can pull secrets from more secure sources like:

  • HashiCorp Vault

  • AWS Secrets Manager

  • Azure Key Vault

  • Google Secret Manager

To bridge these external systems with Kubernetes, there are well-supported tools, such as:

  • The External Secrets Operator, which syncs secrets from external providers into native Kubernetes Secrets.

  • The Secrets Store CSI Driver, which mounts secrets from external providers directly into pods as volumes.

Using Kubernetes Secrets is fine for many basic use cases, but it’s a best practice to layer on a dedicated secrets management solution — especially if you’re dealing with sensitive, rotating, or regulated credentials.
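
As an illustration, here is a sketch of an ExternalSecret from the External Secrets Operator that syncs a value from Vault into a regular Kubernetes Secret. The store name, path, and keys are hypothetical, and the exact apiVersion depends on the operator release you install:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend        # hypothetical SecretStore pointing at Vault
    kind: SecretStore
  target:
    name: db-credentials       # the Kubernetes Secret the operator will create
  data:
    - secretKey: password
      remoteRef:
        key: secret/data/myapp
        property: password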

Persistent Volumes

Not all configuration fits neatly into key-value pairs.

Sometimes, your application expects to read full files, entire directories, or even structured data that changes over time. In those cases, Persistent Volumes (PVs) can provide a more flexible approach.

A Persistent Volume is essentially a chunk of storage that’s provisioned for your workload. It can come from many backends — NFS, cloud block storage, local disks, or more — and it stays around even if the pod using it gets deleted.

Configuration Use Cases

While Persistent Volumes are often associated with stateful apps (like databases), they can also be used for configuration. For example:

  • Mounting large or structured config files that don’t fit into a ConfigMap.

  • Mounting shared configuration across multiple pods or nodes.

  • Supplying application plugins, rules, or custom scripts packaged in a volume.

A single ConfigMap has a maximum size of 1 MiB. If your configuration files are larger than that, or if you need to include binary data, Persistent Volumes are often a better fit.

Unlike ConfigMaps and Secrets, Persistent Volumes usually aren't created directly by developers — provisioning them requires additional coordination with the cluster admin or storage provider.

How Volume Claims Work

As a developer, you don’t create Persistent Volumes directly. Instead, you create a PersistentVolumeClaim (PVC), which describes how much storage you need and how you plan to use it.

Here’s a simple example:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: config-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

When this claim is created, OpenShift looks for a StorageClass that defines how to provision the volume. The provisioner named in that StorageClass (typically a CSI driver) talks to the underlying storage system — like AWS EBS, Ceph, or a local storage driver — and creates a volume that satisfies the request.

A StorageClass acts like a blueprint for dynamic volume provisioning. It tells OpenShift:

  • Which storage backend to use (e.g., AWS EBS, Ceph, or local disk).

  • What parameters to apply (e.g., volume type, IOPS, encryption).

  • Whether volumes should be retained or deleted when the PVC is removed.

Different StorageClass objects can offer different performance levels, availability zones, or backup policies — making it easy to match the right storage to the right workload.
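
For example, here is a sketch of a StorageClass backed by the AWS EBS CSI driver; the name and parameters are illustrative:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

A PVC opts in by setting storageClassName: fast-gp3 in its spec; if no storageClassName is given, the cluster's default StorageClass is used.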

The volume is then bound to the PVC and can be mounted into your workload like this:

volumeMounts:
  - name: config
    mountPath: /etc/app/config

volumes:
  - name: config
    persistentVolumeClaim:
      claimName: config-storage

From the container’s perspective, this is just another directory. But under the hood, it’s backed by real storage, with durability, capacity, and IOPS defined by the backend.

Unlike Secrets and ConfigMaps, the data inside a Persistent Volume can be modified from within the container. Any changes your application makes to files on the volume will persist.

Also, depending on the access mode (like ReadWriteMany), a single volume can sometimes be mounted by multiple pods at the same time — which is useful for shared config, caches, or collaboration between services.
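
For example, a claim requesting shared access; this will only bind if the backing storage supports ReadWriteMany (file-based backends like NFS or CephFS commonly do, plain block storage usually doesn't):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-config
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi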

Knowledge Check

  • What are the two ways a ConfigMap can be mounted into a pod?

  • What happens when you update a ConfigMap that is used as an environment variable?

  • How does mounting a ConfigMap as a file behave differently from mounting it as an environment variable?

  • Why are Kubernetes Secrets not considered fully secure by default?

  • What tools can help integrate external secret management systems with OpenShift?

  • When might you use a Persistent Volume instead of a ConfigMap or Secret?

  • What is a PersistentVolumeClaim, and how does it get fulfilled in OpenShift?

  • What is the role of a StorageClass in dynamic volume provisioning?

  • Can data inside a Persistent Volume be modified by the pod? Why or why not?

  • Under what circumstances can multiple pods share the same volume?