Relying solely on native Kubernetes Secrets doesn’t work in all scenarios. So I’m sharing a different technique that we use at Adevinta to manage Secrets in Kubernetes.
Kubernetes Secrets
Kubernetes natively provides Secrets to store your credentials, whether they are database passwords, API keys or other confidential information that you want to keep private.
They are stored in etcd, the backing database of Kubernetes, and you can persist them as Base64-encoded text through the Kubernetes API. However, Base64 is an encoding, not encryption: anyone with the relevant RBAC permissions on the namespace where the Secrets live can retrieve and decode them easily.
# example of a Secret object in Kubernetes; the data is only Base64 encoded
apiVersion: v1
kind: Secret
metadata:
  name: dotfile-secret
data:
  .secret-file: xxxxxxx= # this can be decoded easily
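To see for yourself that Base64 offers no protection, you can round-trip any value with standard tools (the value here is a made-up example):

```shell
# Base64 is a reversible encoding, not encryption
encoded=$(printf '%s' 's3cr3t-value' | base64)
echo "$encoded"                     # czNjcjN0LXZhbHVl
printf '%s' "$encoded" | base64 -d  # prints s3cr3t-value
```

This is exactly what anyone with read access to the Secret object can do.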
What do we miss by using only native Kubernetes Secrets?
Of course, you can store your Secrets natively in Kubernetes, and the use cases I am going to show you next are still built upon this native foundation of Kubernetes Secrets.
However, there are still some scenarios in which the native Secrets cannot entirely provide what we need.
Here are the scenarios that we will cover in this blog post.
Storing Secrets for your application in the remote source control
Nowadays, most of us manage our infrastructure by storing our applications in a reproducible manner. This means storing our application manifests in the Git repository and having some automation to deploy them to the cluster using the usual CI/CD workflow or a GitOps tool like ArgoCD.
As explained earlier, Kubernetes native Secrets are kept as Base64-encoded text. So we can't store our Secrets alongside the associated manifests (Deployments, Services, ConfigMaps, etc.), as this would allow anyone who can see the code to retrieve them.
This prevents us from having fully reproducible infrastructure, as we still need a separate mechanism to install those Secrets into a destination namespace whenever we want to move, recreate or migrate our workloads to a different cluster.
Sealed Secrets to the rescue
Sealed-secrets is the project that can help us overcome this problem: it lets you encrypt a Secret into a SealedSecret that is safe to store, even in a public repository.
So, here’s a typical scenario where you need to store Secrets or keys for your application:
- ACCESS_KEY_ID and SECRET_ACCESS_KEY for your Route53
- Token or API key for external services, for example, Datadog, GitHub, etc.
- Application keys used in the properties
Instead of manually putting them into a native Kubernetes Secret in a cluster, you can use Sealed Secrets to store these Secrets as code in your Git repository and deploy them together as a unit to recreate the whole stack of your application.
With Sealed Secrets, your application itself doesn’t need to change and can still be configured to use native Kubernetes Secrets as is.
How does it work?
To put it simply, we can use a Sealed Secrets client called `kubeseal` to encrypt our Secrets, for example:
echo -n "<secrets-text>" | kubectl create secret generic <your-secret-name> -n <your-namespace> --dry-run=client --from-file=foo=/dev/stdin -o json > secret.json
kubeseal --cert <your-public-key-location> --namespace <your-namespace> < secret.json > my-sealedsecret.json
Sealed Secrets encrypts Secrets using the specified public key (follow the official documentation to set it up), creating a SealedSecret Kubernetes object that you can store safely in your Git repository.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  annotations:
    sealedsecrets.bitnami.com/namespace-wide: "true"
  name: <secret>
  namespace: <namespace>
spec:
  encryptedData:
    <key>: <encrypted-data>
  template:
    metadata:
      annotations:
        sealedsecrets.bitnami.com/namespace-wide: "true"
      name: <secret>
      namespace: <namespace>
    type: Opaque
Then you deploy this along with your application manifest to your Kubernetes cluster that has the Sealed Secrets controller running.
The Sealed Secrets controller running in the Kubernetes cluster is the only entity that can decrypt the Secrets, using a private key that is securely stored on the cluster, into native Kubernetes Secrets.
This approach allows for secure and scalable management of Secrets across multiple clusters and teams.
We store the real Secrets in a secret management system: does this mean we need to keep it consistent with the code all the time?
This is an interesting scenario: even with Sealed Secrets to rescue us and everything stored as code, not every aspect of Secrets management is covered. There are cases that solely using Sealed Secrets to manage our credentials does not address:
- We store the real secrets in a secret management system, such as HashiCorp Vault or AWS Secrets Manager, as a source of truth to segregate roles and responsibilities, but with a lot of secrets it is impractical to keep them consistently aligned with Sealed Secrets
- We use a Secret management system to generate rotating credentials for better security. It rotates the Secrets every seven days, because manually managing them one by one would not be practical
Introducing External Secrets
External-secrets is a Kubernetes operator that integrates external secret management systems such as AWS Secrets Manager, HashiCorp Vault, Google Secret Manager and Azure Key Vault, reading values from their APIs and automatically injecting them into native Kubernetes Secrets.
How does it work?
External Secrets works by using Kubernetes controllers to automatically synchronise secrets between the External Secret store and Kubernetes. When a secret is added or updated in the External Secret store, the controller detects the change and updates the corresponding Kubernetes Secret object.
Let’s dig a bit deeper into how it works in the real world.
There are two main objects you work with when using External Secrets:
- SecretStore — this object defines “How” to sync the Secrets
- ExternalSecret — this object defines “What” Secrets need to be synced
First, we need a SecretStore. In this case, we use AWS SecretsManager as an example, so what we need is an AWS access key generated from an entity in AWS:
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: secretstore
spec:
  provider:
    aws:
      service: SecretsManager
      region: eu-west-1
      auth:
        secretRef:
          accessKeyIDSecretRef:
            name: aws-secret
            key: access-key
          secretAccessKeySecretRef:
            name: aws-secret
            key: secret-access-key
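The secretRef above points at a plain Kubernetes Secret holding the AWS credentials, which must exist in the same namespace before the SecretStore can validate. A minimal sketch of that Secret (the name and keys match the SecretStore above; the values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret # referenced by the SecretStore's secretRef
type: Opaque
stringData:
  access-key: <your-aws-access-key-id>
  secret-access-key: <your-aws-secret-access-key>
```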
Then after you apply this SecretStore object, you can try to describe the object:
kubectl describe secretstore <your-secretstore-name>
You should see the status of the SecretStore as Valid:
Status:
  Conditions:
    Last Transition Time:  xxxx
    Message:               store validated
    Reason:                Valid
    Status:                True
    Type:                  Ready
Events:
  Type    Reason  Age    From          Message
  ----    ------  ----   ----          -------
  Normal  Valid   4m58s  secret-store  store validated
Secondly, create an ExternalSecret object defining which Secret name and key in Secrets Manager you want to sync, and the name of the Kubernetes Secret you want to be generated:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: external-secret
spec:
  refreshInterval: 12h # how frequently you want your Secret to be refreshed
  secretStoreRef:
    name: secretstore # must match the SecretStore created above
    kind: SecretStore
  target:
    name: kubernetes-secret # name of the generated Kubernetes Secret
    creationPolicy: Owner
  data:
    - secretKey: kubernetes-secret-key # key name in the Kubernetes Secret
      remoteRef:
        key: aws-secret-1
        property: SECRET_A
After applying, check by describing the object:
kubectl describe externalsecret <your-externalsecret-name>
You should see the status as Ready. Otherwise, make sure the credentials in your SecretStore have permission to fetch the Secrets:
Status:
  Conditions:
    Last Transition Time:   xxxxxx
    Message:                Secret was synced
    Reason:                 SecretSynced
    Status:                 True
    Type:                   Ready
  Refresh Time:             xxxxxx
  Synced Resource Version:  xxxxxx
Events:
  Type    Reason   Age  From              Message
  ----    ------   ---  ----              -------
  Normal  Updated  23s  external-secrets  Updated Secret
Verify that your native Kubernetes Secret is synced and generated:
kubectl get secret kubernetes-secret -o yaml
apiVersion: v1
kind: Secret
metadata:
  annotations:
    reconcile.external-secrets.io/data-hash: xxxxxxxxx
  name: kubernetes-secret
type: Opaque
data:
  kubernetes-secret-key: xxxxxxxxx
immutable: false
Now, you can use this Secret natively with your application without any hassle.
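For example, a Deployment can consume the generated Secret through an environment variable. A minimal sketch (the Deployment name, image and variable name are illustrative; the Secret name and key come from the ExternalSecret above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest # illustrative image
          env:
            - name: APP_SECRET
              valueFrom:
                secretKeyRef:
                  name: kubernetes-secret    # Secret generated by External Secrets
                  key: kubernetes-secret-key # key defined in the ExternalSecret
```

The application itself only sees a regular environment variable and doesn't need to know where the value came from.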
Also, notice this field in the ExternalSecret above:
refreshInterval: 12h # how frequently you want your secret to be refreshed
This is a very useful capability of External Secrets: it periodically re-syncs your Secrets to keep them up to date, which also covers the use case where you are using rotated Secrets.
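If you would rather sync every key of a remote Secret without listing them one by one, External Secrets also supports dataFrom. A sketch under the same store name as earlier (the target name here is illustrative):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: external-secret-all-keys
spec:
  refreshInterval: 12h
  secretStoreRef:
    name: secretstore
    kind: SecretStore
  target:
    name: kubernetes-secret-all # illustrative name
    creationPolicy: Owner
  dataFrom:
    - extract:
        key: aws-secret-1 # pull all properties of this remote Secret
```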
What else?
Looks like we found a solution for all of our scenarios, doesn’t it?
Actually, there is one last missing part that would glue our solutions together neatly, which is:
My Secret is now safely stored and synced from the External Secrets store, but how does my application know and use the newly refreshed Secrets?
As you know, once your application's pod has started with a Secret mounted or injected, picking up a refreshed value natively requires restarting the pod so it reads the new Secret.
This requires us to re-trigger our deployment pipeline every time the Secrets are refreshed or work around it so the new Secrets can be used.
It doesn’t sound like this would integrate well with our desired scenario, where Secrets can be rotated anytime without human intervention.
Meet Stakater's Reloader: a reloader for your ConfigMaps and Secrets
Stakater's Reloader watches for changes in ConfigMaps and Secrets and performs rolling upgrades on pods via their associated DeploymentConfigs, Deployments, DaemonSets and StatefulSets.
This tool helps us integrate the whole solution together so that when we have our credentials refreshed in the Secret management tool, External Secrets syncs them into native Kubernetes Secrets, then the reloader jumps in and rolls out your Deployment’s pod so the new Secret is immediately being used.
How does the reloader work?
From the user perspective, you simply apply this annotation on your Deployment objects:
kind: Deployment # or DaemonSet, StatefulSet
metadata:
  annotations:
    reloader.stakater.com/auto: "true" # this is where the magic happens
    # or you can reload based only on specific Secrets:
    # secret.reloader.stakater.com/reload: "foo-secret,bar-secret,baz-secret"
spec:
  template:
    metadata: xxxx
Reloader watches for changes in ConfigMap and Secret data, then forwards the changed objects to an update handler that looks for a workload with an environment variable matching the ConfigMap's or Secret's name.
If the environment variable is found, Reloader compares its value with the SHA1 hash of the new data; if they differ, it updates the variable. If the environment variable does not exist, Reloader creates it with the latest hash. Either way, modifying the pod template triggers a rolling update of the relevant Deployment, DaemonSet or StatefulSet.
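The comparison step can be sketched roughly like this (a simplified illustration, not Reloader's actual code; the helper names are made up): hash the Secret's data and compare it with the hash recorded in the workload's environment variable.

```python
import hashlib

def data_hash(data: dict) -> str:
    """SHA1 over the Secret's data, with keys sorted so the hash is stable."""
    h = hashlib.sha1()
    for key in sorted(data):
        h.update(key.encode())
        h.update(data[key].encode())
    return h.hexdigest()

def needs_reload(env_value, data: dict) -> bool:
    """True if the recorded hash is missing or differs from the current data."""
    return env_value != data_hash(data)

old = {"password": "aGVsbG8="}
new = {"password": "d29ybGQ="}
recorded = data_hash(old)           # hash stored in the Deployment's env var
print(needs_reload(recorded, old))  # False: data unchanged, no rollout
print(needs_reload(recorded, new))  # True: secret rotated, trigger rollout
```

Because the hash lives in the pod template as an environment variable, updating it is itself the change that makes Kubernetes roll the pods.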
Wrap up
To summarise briefly, we have covered two main scenarios:
- Storing Secrets for your application in the remote source control
- Managing your Secrets from External Secrets provider and syncing them into Kubernetes
Both scenarios can be achieved with the help of Sealed Secrets and External Secrets, which will make your Secret management in the Kubernetes ecosystem more robust and secure.
Additionally, there is the Reloader tool that helps you integrate the solution completely with your workloads.
If you have any questions or techniques that you would like to share, feel free to post in a comment.