- You can use Valkey instead of Redis as caching server for Ghost
- Kustomize project folders for Ghost and Valkey
- Valkey configuration file
- Valkey secrets
- Valkey persistent storage claim
- Valkey StatefulSet
- Valkey Service
- Valkey Kustomize project
- Do not deploy this Valkey project on its own
- Relevant system paths
- References
- Navigation
This second part of the Ghost deployment procedure is where you begin working with the Kustomize project for the whole platform's setup. In particular, you will prepare the Kustomize subproject of the caching server for Ghost. The official Ghost documentation mentions Redis, but it is possible to use Valkey instead.
You need a main Kustomize project for the deployment of your Ghost platform. It will contain the subprojects for its components like the Valkey cache server. Start by executing the following mkdir command to create the necessary project folder structure for this part:
```console
$ mkdir -p $HOME/k8sprjs/ghost/components/cache-valkey/{configs,resources,secrets}
```

The main folder for the Valkey Kustomize subproject, `cache-valkey`, is named following the pattern `<component function>-<software name>`, which this guide also uses to name the main directories of the remaining component subprojects. Furthermore, there are `configs`, `resources` and `secrets` subfolders to better differentiate the YAML manifests declaring Kubernetes resources from those related to configurations or secrets.
You need to tune Valkey to your needs, and the best way is by setting its parameters in a configuration file:
1. In the `configs` subfolder of the Valkey project, create a `valkey.conf` file:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/configs/valkey.conf
    ```

    The name `valkey.conf` is the default one for the Valkey configuration file.

2. Enter the custom configuration for Valkey in the `configs/valkey.conf` file:

    ```properties
    # Custom Valkey configuration
    bind 0.0.0.0
    protected-mode no
    port 6379
    maxmemory 64mb
    maxmemory-policy allkeys-lru
    aclfile /etc/valkey/users.acl
    dir /data
    ```

    The parameters above mean the following:
    - `bind`

      Makes the Valkey server listen on specific interfaces. With `0.0.0.0`, it listens on all the available ones.

      > [!NOTE]
      > **Do not specify here the cluster IP you chose for the Valkey service**
      >
      > It is better to leave this parameter with a "flexible" value to avoid worrying about putting a particular IP in several places.

    - `protected-mode`

      Security option that restricts Valkey to listening only on localhost. It is enabled by default, and disabled here with the value `no`. This way, Valkey can listen through the interface that will have the corresponding service's cluster IP assigned.

    - `port`

      The default Valkey port is `6379`, specified here just for clarity.

    - `maxmemory`

      Limits the memory used by the Valkey server. When the limit is reached, it tries to remove keys according to the eviction policy set in the `maxmemory-policy` parameter.

    - `maxmemory-policy`

      Policy for evicting keys from memory when the `maxmemory` limit is reached. Here it is set to `allkeys-lru`, enabling Valkey to remove any key according to an LRU (Least Recently Used) algorithm.

    - `aclfile`

      Path to the file containing the Access Control List (ACL) specifying the users authorized to use this Valkey instance. The path used is the default one, but it is set here as a reminder of where it is.

    - `dir`

      The working directory of Valkey, where it stores its own database and logs (if configured to be stored). The `/data` path is the one already set in the Valkey container image. It is specified in this configuration as a reminder of where this working directory is.

    > [!NOTE]
    > **The Valkey configuration parameters are described in the official example configuration file**
    >
    > Each Valkey release has its own example `valkey.conf` file; the version this guide deploys is the 9.0 one.
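Be aware that the size suffixes in parameters like `maxmemory` follow the Redis/Valkey convention: suffixes ending in `b` (`kb`, `mb`, `gb`) are powers of 2, while the bare forms (`k`, `m`, `g`) are powers of 10. A quick arithmetic check of what `64mb` amounts to (plain shell math applying that convention, not output from Valkey itself):

```shell
# Redis/Valkey size suffix convention: 1m = 1000*1000 bytes, 1mb = 1024*1024 bytes
echo "64m  -> $((64 * 1000 * 1000)) bytes"
echo "64mb -> $((64 * 1024 * 1024)) bytes"
```

So `maxmemory 64mb` caps the instance at 67108864 bytes, slightly more than the 64000000 bytes that `64m` would allow.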
To secure access to this Valkey instance, you need to create a couple of users, stored in a secret resource of your Kubernetes cluster.
> [!NOTE]
> **Your K3s Kubernetes cluster encrypts secrets automatically**
>
> Remember that your K3s cluster's server node already has the option for encrypting secrets at rest (`secrets-encryption`) enabled, avoiding having them stored as clear text within the cluster.
Valkey comes with a default user that you could use, but it is better to declare a more specific one for Ghost. Since Valkey supports Access Control Lists, you can declare the users you need in an ACL file:
1. Create a new `users.acl` file in the `secrets` folder:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/secrets/users.acl
    ```

2. In `secrets/users.acl`, enter the ACL rules redefining the `default` user and specifying the user for Ghost:

    ```properties
    user default on ~* &* +@all >P4s5W0rd_FOr_7h3_DeF4u1t_uSEr
    user ghostcache on ~ghost:* &* allcommands >pAS2wORT_f0r_T#e_Gh05T_Us3R
    ```

    Each rule declared above has a different purpose in this setup:
    - `user default`

      Valkey's `default` user comes enabled with no password by default:

      - `on`

        Enables the `default` user, allowing authentication with it in the Valkey instance. This user will be used by the Prometheus metrics exporter module running in the same pod as your Valkey instance.

      - `~*`

        Indicates that this `default` user can access all the keys stored in the Valkey instance.

      - `&*`

        Allows the `default` user to access all channels existing in the Valkey instance.

      - `+@all`

        Enables the `default` user to use all commands.

      - `>P4s5W0rd_FOr_7h3_DeF4u1t_uSEr`

        A clear string specifying the password for the `default` user. This user has no password by default, so it is better to harden it with one.

        Also notice the initial `>` character: it is not part of the password string, it just indicates that the string is the user's password in the rule.
    - `user ghostcache`

      Declares a specific `ghostcache` user meant only for the Ghost platform:

      - `on`

        Enables the `ghostcache` user for authentication in the Valkey instance.

      - `~ghost:*`

        Restricts the keys the `ghostcache` user can access to only those having the `ghost:` prefix.

      - `&*`

        Allows the `ghostcache` user to access all channels existing in the Valkey instance.

      - `allcommands`

        Alias for the `+@all` option. Enables the `ghostcache` user to use all commands.

      - `>pAS2wORT_f0r_T#e_Gh05T_Us3R`

        A clear string specifying the password for the `ghostcache` user.

        Remember that the initial `>` character is not part of the password; it just indicates that the following string is the user's password within the rule.
> [!WARNING]
> **The passwords in this `secrets/users.acl` file are plain unencrypted strings**
>
> Be careful of who can access this `users.acl` file.
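The `~ghost:*` key pattern in the rules above uses glob-style matching. As a rough illustration of those semantics, this snippet mimics the check with plain shell globbing (an analogy only, not Valkey's actual matcher):

```shell
# Illustrative only: shell glob matching mimicking the ~ghost:* ACL key pattern
for key in 'ghost:session:abc123' 'ghost:page:home' 'members:profile:42'; do
  case "$key" in
    ghost:*) echo "$key -> accessible to ghostcache" ;;
    *)       echo "$key -> denied to ghostcache" ;;
  esac
done
```

The `ghostcache` user would reach the first two keys but not the third, while the `default` user's `~*` pattern matches any key.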
Running in the same pod as the Valkey server, there is going to be a Prometheus metrics exporter module using the default Valkey user to access the metrics from the Valkey instance. The problem is that you have to duplicate the default user's credentials to make them available as environment variables for this exporter module:
1. Create a `default_user_env.properties` file under the `secrets` folder:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/secrets/default_user_env.properties
    ```

2. Enter the `default` user's name and password in `secrets/default_user_env.properties` as environment variables:

    ```properties
    REDIS_USER=default
    REDIS_PASSWORD=P4s5W0rd_FOr_7h3_DeF4u1t_uSEr
    ```
This file declares two environment variables that the Prometheus metrics exporter module recognizes:

- `REDIS_USER`

  The name of the user, in this case Valkey's `default` one.

- `REDIS_PASSWORD`

  The user's password string. It must be the same one previously specified for the `default` user in the ACL file.

> [!WARNING]
> **The password in this `secrets/default_user_env.properties` file is a plain unencrypted string**
>
> Be careful of who can access this `default_user_env.properties` file.
Storage in Kubernetes has two sides: enabling storage as persistent volumes (PVs), and the claims (PVCs) on those persistent volumes. For your Ghost's Valkey instance you need one persistent volume (to be declared in the last part of this Ghost deployment procedure) and the claim on that particular PV. Next, declare the PersistentVolumeClaim resource for your Valkey instance:
1. A persistent volume claim is a resource. Create a `cache-valkey.persistentvolumeclaim.yaml` file under the `resources` folder:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.persistentvolumeclaim.yaml
    ```

2. Declare Valkey's `PersistentVolumeClaim` in the `resources/cache-valkey.persistentvolumeclaim.yaml` file:

    ```yaml
    # Ghost Valkey claim of persistent storage
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cache-valkey
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: local-path
      volumeName: ghost-ssd-cache
      resources:
        requests:
          storage: 2.8G
    ```
There are a few details to understand in the PVC above:

- The `spec.accessModes` is specified. This is mandatory in a claim, and it cannot demand a mode that is not enabled in the persistent volume (PV) itself.

- The `spec.storageClassName` parameter indicates what storage profile (a particular set of properties) to use with the PV. K3s comes with just `local-path` included by default, something you can check on your own K3s cluster with `kubectl`:

    ```console
    $ kubectl get storageclass
    NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  63d
    ```

- The `spec.volumeName` is the name of the persistent volume this claim binds itself to.

- In a claim it is also mandatory to specify how much storage is requested, hence the `spec.resources.requests.storage` parameter. Be careful not to request more space than is truly available in the volume.
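Also note that `2.8G` uses the decimal suffix (`G`, a power of 10), not the binary one (`Gi`, a power of 2); Kubernetes treats them differently. A quick shell comparison of the two interpretations (2^30 written out as 1073741824):

```shell
# Kubernetes quantity suffixes: 2.8G = 2.8 * 10^9 bytes, 2.8Gi = 2.8 * 2^30 bytes
echo "2.8G  -> $((28 * 1000000000 / 10)) bytes"
echo "2.8Gi -> $((28 * 1073741824 / 10)) bytes (rounded down)"
```

Requesting `2.8Gi` would claim roughly 200 MB more than `2.8G`, so keep the suffixes straight when sizing claims against the actual volume.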
The next thing to do is set up the StatefulSet declaration for deploying Valkey in your K3s cluster. It has to be a StatefulSet rather than a Deployment because stateful sets are the Kubernetes resources meant for apps or services that persist data (their state) in persistent storage. Valkey could run purely in memory, but that would force it to repopulate its database every time, leading to some delay when booting up:
1. Create a `cache-valkey.statefulset.yaml` file under the `resources` subfolder:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.statefulset.yaml
    ```

2. Declare the `StatefulSet` resource for the Valkey instance in `resources/cache-valkey.statefulset.yaml`:

    ```yaml
    # Ghost Valkey StatefulSet for a sidecar server pod
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: cache-valkey
    spec:
      replicas: 1
      serviceName: cache-valkey
      template:
        spec:
          containers:
          - name: server
            image: valkey/valkey:9.0-alpine
            command:
            - valkey-server
            - "/etc/valkey/valkey.conf"
            ports:
            - name: server
              containerPort: 6379
            resources:
              requests:
                cpu: "0.5"
                memory: 64Mi
            volumeMounts:
            - name: valkey-storage
              mountPath: /data
            - name: valkey-config
              readOnly: true
              subPath: valkey.conf
              mountPath: /etc/valkey/valkey.conf
            - name: valkey-acl
              readOnly: true
              subPath: users.acl
              mountPath: /etc/valkey/users.acl
          - name: metrics
            image: oliver006/redis_exporter:v1.80.0-alpine
            envFrom:
            - secretRef:
                name: cache-valkey-exporter-user
            resources:
              requests:
                cpu: "0.25"
                memory: 16Mi
            ports:
            - name: metrics
              containerPort: 9121
          volumes:
          - name: valkey-storage
            persistentVolumeClaim:
              claimName: cache-valkey
          - name: valkey-config
            configMap:
              name: cache-valkey-config
              defaultMode: 444
              items:
              - key: valkey.conf
                path: valkey.conf
          - name: valkey-acl
            secret:
              secretName: cache-valkey-acl
              defaultMode: 444
              items:
              - key: users.acl
                path: users.acl
    ```
This `StatefulSet` resource describes the template for the pod containing the Valkey server and its Prometheus metrics exporter service, each running in its own container:

- `replicas`

  Given the limitations of the cluster, only one replica of the Valkey pod is requested.

- `serviceName`

  Links the pod deployed by this `StatefulSet` to the specified headless `Service`.

  > [!IMPORTANT]
  > **The pod gets a predictable hostname within the cluster**
  >
  > Check out the section about the corresponding `Service` resource for more information. In particular, read about the `spec.clusterIP` parameter to understand what the pod's predictable hostname looks like.
template
Describes how the pod resulting from thisStatefulSetshould be:-
spec.containers
This pod template has two containers running in it, arranged in what is known as a sidecar pattern:-
Container
server
Container that runs the Valkey server itself:-
The container
imageused is the Alpine Linux variant of the most recent 9.0 version. -
In the
commandsection you can see how the configuration file path is directly specified to the service. -
The
containerPortis the same as theportset in thevalkey.conffile. ThiscontainerPorthas anamethat allows invoking it by name rather than by port number directly.[!NOTE] The
containerPortdeclaration is mostly informative
ThecontainerPortdeclarations do not actually determine what ports are opened in a pod. That's up to the applications or services running within the pod. The optionalnameattribute is what makes thecontainerPortuseful, because it allows you to call the port by name rather than by number. This enables changing the port number when necessary without affecting theServiceresource that calls the port by name.Check this thread to know more about this technicality.
-
The
resources.requestsdeclares a minimum requirement of CPU and memory resources to grant to the container when it starts. If the container needs more resources, the Kubernetes control plane takes care of assigning them if they are available.
[!NOTE] It is better to set minimum requirements, not upper limits
According to this article, setting upper usage limits affects negatively the performance of apps or services and leads to a waste of unused resources. It is better to leave the Kubernetes control plane to handle the bursts of activity that may happen in the cluster.-
      - The `volumeMounts` indicate which volumes are to be mounted in the Valkey container:

        - `valkey-storage` enables the storage volume for Valkey's `/data` working directory.
        - `valkey-config` enables the volume containing Valkey's configuration file.
        - `valkey-acl` enables the file where the Valkey users are declared in an ACL.

        Notice how the `valkey-config` and `valkey-acl` entries have a `readOnly` option enabled to ensure those configuration files remain unchanged in the container.
    - Container `metrics`

      Container that runs a service specialized in getting statistics from the Valkey server in a format that a Prometheus server can read:

      - The Docker `image` is an Alpine Linux variant of this exporter's 1.80 version.

      - By default, this exporter tries to connect to `redis://localhost:6379`, which fits the configuration applied to the Valkey service.

      - In the `envFrom` section, the `cache-valkey-exporter-user` `Secret` resource contains the `default_user_env.properties` file where the `default` username and password are declared for this Prometheus metrics exporter. You will declare the `Secret` in the Kustomize declaration for this Ghost's Valkey subproject.

      - This container also has minimum requirements of RAM and CPU `resources`. Its `containerPort` has a `name` too, and its number is the exporter's default, matching the one you will declare in the next section within the corresponding Valkey `Service` resource.
  - `spec.volumes`

    This section declares the volumes that are to be mounted in the pod. In particular, here all the volumes mounted in the Valkey containers are enabled:

    - `valkey-storage`

      Invokes the `cache-valkey` `PersistentVolumeClaim` declared earlier, enabling the persistent storage that will contain Valkey's working data.

    - `valkey-config`

      Enables the `valkey.conf` file kept in a yet-to-be-defined `cache-valkey-config` `ConfigMap` object as a volume, so it can be mounted by the `server` container in its `volumeMounts` section.

    - `valkey-acl`

      Enables the `users.acl` file kept in a yet-to-be-defined `cache-valkey-acl` `Secret` object as a volume, so it can be mounted by the `server` container in its `volumeMounts` section.

    Pay attention to the `defaultMode` parameter in the `valkey-config` and `valkey-acl` entries. It sets a particular default permission mode for the items contained in them. In both cases, the parameter sets a read-only permission for all users with the `444` mode, but only for the listed `items`.

    Also know that, in the `items` list, the `key` parameter is the name identifying the file inside the `ConfigMap` or `Secret` object, and the `path` is the relative path assigned to the item.
You have declared the pod executing the containers running the Valkey server and its Prometheus statistics exporter. Now you need to define the Service resource that enables access to them:
1. Generate a new file named `cache-valkey.service.yaml`, also under the `resources` subfolder:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.service.yaml
    ```

2. Declare the Valkey `Service` resource in `resources/cache-valkey.service.yaml`:

    ```yaml
    # Ghost Valkey headless service
    apiVersion: v1
    kind: Service
    metadata:
      name: cache-valkey
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9121"
    spec:
      type: ClusterIP
      clusterIP: None
      ports:
      - port: 6379
        targetPort: server
        protocol: TCP
        name: server
      - port: 9121
        targetPort: metrics
        protocol: TCP
        name: metrics
    ```
This `Service` resource specifies how to access the services running in the Valkey pod's containers:

- `metadata.annotations`

  Two annotations required by the Prometheus data-scraping service (the Prometheus deployment is covered in a later chapter). These annotations tell Prometheus which port to scrape for your Valkey service's metrics, which are the data provided by the specialized metrics service running in the `metrics` container of the Valkey pod.

- `spec.type`

  By default, any `Service` resource is of type `ClusterIP`, meaning that the service is only reachable from within the cluster's internal network. You can omit this parameter altogether from the YAML when you are using this default type.

- `spec.clusterIP`

  `StatefulSet`s are limited to using headless services, which are services with no cluster IP assigned (which explains the `None` value here). This type of service is reachable by the FQDN it has assigned within the cluster. Learn more about this FQDN in the next subsection.

- `spec.ports`

  Describes the ports open in this service. Notice how the `name` and `port` of each port in this `Service` match the ones already defined for the containers in the previous `StatefulSet` resource.

  Also see how the `targetPort` parameters invoke the ports in the containers by name, not by number. This technique allows you to change the port number in the containers without affecting this `Service` declaration.
Kubernetes assigns to each pod and service a DNS record or predictable Fully Qualified Domain Name (FQDN). This is particularly important for reaching headless services such as the one for your Valkey server instance. The predictable FQDN for services is constructed following this template:

```
<service name>.<namespace>.svc.<cluster domain name>
```

The placeholders between `<>` in the template are rather self-explanatory:

- The `<service name>` refers to the string specified in the `metadata.name` attribute of the `Service` declaration. In the case of the Valkey service, it is `cache-valkey`.

- The `<namespace>` for the whole Ghost setup is going to be `ghost`.

- The `<cluster domain name>` in a Kubernetes cluster is `cluster.local` by default. But remember that this guide changed this value to `homelab.cluster` by setting the `cluster-domain` parameter in the K3s server node's configuration.

Therefore, the Valkey headless service will have the following absolute FQDN:

```
cache-valkey.ghost.svc.homelab.cluster.
```

You will use this absolute FQDN to make the Ghost platform connect with its Valkey server.
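If you later script any configuration that references this service, you can assemble the FQDN from its parts rather than hardcoding it. This snippet just expands the template with this guide's values (the final dot makes the name absolute):

```shell
# Assemble the predictable FQDN of the Valkey headless service
SERVICE_NAME='cache-valkey'
NAMESPACE='ghost'
CLUSTER_DOMAIN='homelab.cluster'

echo "${SERVICE_NAME}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}."
```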
> [!NOTE]
> **An absolute FQDN is one with a final dot at its end, indicating that it is the complete DNS record and there is no need to initiate a DNS search**
>
> Using absolute FQDNs improves the cluster's performance by avoiding DNS searches when connecting with pods or services.
What remains to declare is the main `kustomization.yaml` file that describes the whole Valkey Kustomize subproject:

1. In the main `cache-valkey` folder, create a `kustomization.yaml` file:

    ```console
    $ touch $HOME/k8sprjs/ghost/components/cache-valkey/kustomization.yaml
    ```

2. Enter the following `Kustomization` declaration in the `kustomization.yaml` file:

    ```yaml
    # Ghost Valkey setup
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    labels:
    - pairs:
        app: cache-valkey
      includeSelectors: true
      includeTemplates: true

    resources:
    - resources/cache-valkey.persistentvolumeclaim.yaml
    - resources/cache-valkey.statefulset.yaml
    - resources/cache-valkey.service.yaml

    replicas:
    - name: cache-valkey
      count: 1

    images:
    - name: valkey/valkey
      newTag: 9.0-alpine
    - name: oliver006/redis_exporter
      newTag: v1.80.0-alpine

    configMapGenerator:
    - name: cache-valkey-config
      files:
      - configs/valkey.conf

    secretGenerator:
    - name: cache-valkey-exporter-user
      envs:
      - secrets/default_user_env.properties
    - name: cache-valkey-acl
      files:
      - secrets/users.acl
    ```
This `kustomization.yaml` file has elements you have already seen in previous deployments, plus a few extra ones:

- With `labels` you can apply labels to all the resources generated from this `kustomization.yaml` file. In this case, there is only one label, `app: cache-valkey`, indicating that the resources declared in this Kustomize project belong to the Valkey caching server.

  The `includeSelectors` and `includeTemplates` parameters control whether the labels must also be included within the `spec.selector` and `spec.template` blocks of resource declarations such as the `StatefulSet` you have for your Valkey server.

- The `replicas` section allows you to handle the number of replicas you want for deployments, overriding whatever number is already set in their base declaration. In this case you only have one deployment listed, and the value put here is the same as the one set in the `cache-valkey` `StatefulSet` definition.

- The `images` block gives you a handy way of changing the images specified within the `StatefulSet` resource, particularly useful when you want to upgrade to newer minor versions without changing anything else in the deployment declaration.

- There are two details to notice about the `configMapGenerator` and `secretGenerator`:

  - The `cache-valkey-exporter-user` secret turns the values declared in `secrets/default_user_env.properties` into environment variables that can be loaded in any container that invokes this secret.

  - Neither of these generator blocks has the `disableNameSuffixHash` option enabled, because the names of the resources they generate are only used in standard Kubernetes parameters that Kustomize recognizes.
With everything in place, you can check out the YAML resulting from the Ghost Valkey's Kustomize subproject:
1. Execute the `kubectl kustomize` command on the Ghost Valkey Kustomize subproject's root folder, piped to `less` (or your favorite text editor) to get the output paginated:

    ```console
    $ kubectl kustomize $HOME/k8sprjs/ghost/components/cache-valkey | less
    ```

    Alternatively, you could just dump the YAML output into a file, called `cache-valkey.k.output.yaml` for instance:

    ```console
    $ kubectl kustomize $HOME/k8sprjs/ghost/components/cache-valkey > cache-valkey.k.output.yaml
    ```
2. The resulting YAML should look like this one:

    ```yaml
    apiVersion: v1
    data:
      valkey.conf: |-
        # Custom Valkey configuration
        bind 0.0.0.0
        protected-mode no
        port 6379
        maxmemory 64mb
        maxmemory-policy allkeys-lru
        aclfile /etc/valkey/users.acl
        dir /data
    kind: ConfigMap
    metadata:
      labels:
        app: cache-valkey
      name: cache-valkey-config-c86dc4fh5d
    ---
    apiVersion: v1
    data:
      users.acl: |
        dXNlciBkZWZhdWx0IG9uIH4qICYqICtAYWxsID5QNHM1VzByZF9GT3JfN2gzX0RlRjR1MX
        RfdVNFcgp1c2VyIGdob3N0Y2FjaGUgb24gfmdob3N0OiogJiogYWxsY29tbWFuZHMgPnBB
        UzJ3T1JUX2Ywcl9UI2VfR2gwNVRfVXMzUg==
    kind: Secret
    metadata:
      labels:
        app: cache-valkey
      name: cache-valkey-acl-bcc5gh9d6g
    type: Opaque
    ---
    apiVersion: v1
    data:
      REDIS_PASSWORD: UDRzNVcwcmRfRk9yXzdoM19EZUY0dTF0X3VTRXI=
      REDIS_USER: ZGVmYXVsdA==
    kind: Secret
    metadata:
      labels:
        app: cache-valkey
      name: cache-valkey-exporter-user-6mdd99ft8d
    type: Opaque
    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        prometheus.io/port: "9121"
        prometheus.io/scrape: "true"
      labels:
        app: cache-valkey
      name: cache-valkey
    spec:
      clusterIP: None
      ports:
      - name: server
        port: 6379
        protocol: TCP
        targetPort: server
      - name: metrics
        port: 9121
        protocol: TCP
        targetPort: metrics
      selector:
        app: cache-valkey
      type: ClusterIP
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        app: cache-valkey
      name: cache-valkey
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2.8G
      storageClassName: local-path
      volumeName: ghost-ssd-cache
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      labels:
        app: cache-valkey
      name: cache-valkey
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cache-valkey
      serviceName: cache-valkey
      template:
        metadata:
          labels:
            app: cache-valkey
        spec:
          containers:
          - command:
            - valkey-server
            - /etc/valkey/valkey.conf
            image: valkey/valkey:9.0-alpine
            name: server
            ports:
            - containerPort: 6379
              name: server
            resources:
              requests:
                cpu: "0.5"
                memory: 64Mi
            volumeMounts:
            - mountPath: /data
              name: valkey-storage
            - mountPath: /etc/valkey/valkey.conf
              name: valkey-config
              readOnly: true
              subPath: valkey.conf
            - mountPath: /etc/valkey/users.acl
              name: valkey-acl
              readOnly: true
              subPath: users.acl
          - envFrom:
            - secretRef:
                name: cache-valkey-exporter-user-6mdd99ft8d
            image: oliver006/redis_exporter:v1.80.0-alpine
            name: metrics
            ports:
            - containerPort: 9121
              name: metrics
            resources:
              requests:
                cpu: "0.25"
                memory: 16Mi
          volumes:
          - name: valkey-storage
            persistentVolumeClaim:
              claimName: cache-valkey
          - configMap:
              defaultMode: 444
              items:
              - key: valkey.conf
                path: valkey.conf
              name: cache-valkey-config-c86dc4fh5d
            name: valkey-config
          - name: valkey-acl
            secret:
              defaultMode: 444
              items:
              - key: users.acl
                path: users.acl
              secretName: cache-valkey-acl-bcc5gh9d6g
    ```
There are a few things to highlight in the YAML output above:

- You might have noticed this in the Kustomize projects you have deployed before, but the generated YAML output has the parameters within each resource sorted alphabetically. Be aware of this when you compare this output with the files you have created and your expected results.

- The names of the `cache-valkey-config` config map and the `cache-valkey-acl` and `cache-valkey-exporter-user` secrets have a hash as a suffix, added by Kustomize. The hash is calculated from the content of the renamed resources.

- Both the `cache-valkey-exporter-user` and `cache-valkey-acl` secrets are printed obfuscated in base64 format, but in slightly different ways:

  - Since the `cache-valkey-exporter-user` secret is a set of environment variables, only their values are obfuscated.

  - The `cache-valkey-acl` secret is declared as a file, so its whole content is obfuscated.

- Another detail to notice is how the label `app: cache-valkey` appears not only as a label in the `metadata` section of all the resources, but Kustomize has also set it as a `selector` in both the `Service` and the `StatefulSet` resource declarations.
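Bear in mind that base64 is an encoding, not encryption: anyone who can read these generated manifests can recover the original values with the standard `base64` tool. For instance, decoding the `REDIS_PASSWORD` value shown in the output above:

```shell
# base64 only obfuscates; decoding reveals the original secret value
echo 'UDRzNVcwcmRfRk9yXzdoM19EZUY0dTF0X3VTRXI=' | base64 --decode
```

This prints the `P4s5W0rd_FOr_7h3_DeF4u1t_uSEr` password in clear, which is why the secrets-encryption-at-rest option mentioned earlier in this chapter matters.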
If you installed the `kubeconform` command in your `kubectl` client system (as explained in chapter G026), you can validate the Kustomize output with it. So, assuming you have dumped the output in a `cache-valkey.k.output.yaml` file, execute the following:

```console
$ kubeconform -summary cache-valkey.k.output.yaml

Summary: 6 resources found in 1 file - Valid: 6, Invalid: 0, Errors: 0, Skipped: 0
```

Notice the `-summary` option in the shell command above. It is what makes the `kubeconform` command print a results summary when it finishes.

> [!NOTE]
> **`kubeconform` does not produce an output when it considers the input valid**
>
> With a completely valid input as in this case and no option specified, `kubeconform` does not print anything in the shell.
>
> On the other hand, `kubeconform` (at least in version `0.7.0`, the one installed with this guide) is not yet able to understand Kustomize projects and ends up finding errors in them.
This Valkey setup is missing one critical element: the persistent volume it needs to store its working directory data. Do not confuse it with the claim you have configured for your Valkey cache server. That PV and other elements are to be declared in the main Kustomize project you will define in the final part of this Ghost deployment procedure. Until then, do not deploy this Valkey subproject.
_Folders_

- `$HOME/k8sprjs/ghost`
- `$HOME/k8sprjs/ghost/components`
- `$HOME/k8sprjs/ghost/components/cache-valkey`
- `$HOME/k8sprjs/ghost/components/cache-valkey/configs`
- `$HOME/k8sprjs/ghost/components/cache-valkey/resources`
- `$HOME/k8sprjs/ghost/components/cache-valkey/secrets`

_Files_

- `$HOME/k8sprjs/ghost/components/cache-valkey/kustomization.yaml`
- `$HOME/k8sprjs/ghost/components/cache-valkey/configs/valkey.conf`
- `$HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.persistentvolumeclaim.yaml`
- `$HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.service.yaml`
- `$HOME/k8sprjs/ghost/components/cache-valkey/resources/cache-valkey.statefulset.yaml`
- `$HOME/k8sprjs/ghost/components/cache-valkey/secrets/default_user_env.properties`
- `$HOME/k8sprjs/ghost/components/cache-valkey/secrets/users.acl`
- rpi4cluster. K3s Kubernetes. Redis
- Daniel Cushing. Simple Redis Cache on Kubernetes with Prometheus Metrics
- Mark Lu. Deploy and Operate a Redis Cluster in Kubernetes
- Suse Rancher Blog. Deploying Redis Cluster on Top of Kubernetes
- StackOverflow. Redis sentinel vs clustering
- Kubernetes Documentation. Tasks. Configure Pods and Containers
- Kubernetes Documentation. Tasks. Inject Data Into Applications
- Kubernetes Documentation. Concepts. Scheduling, Preemption and Eviction
- Kubernetes Documentation. Reference. Kubernetes API. Workload Resources
- TheNewStack. Strategies for Kubernetes Pod Placement and Scheduling
- TheNewStack. Implement Node and Pod Affinity/Anti-Affinity in Kubernetes: A Practical Example
- TheNewStack. Tutorial: Apply the Sidecar Pattern to Deploy Redis in Kubernetes
- Opensource.com. An Introduction to Kubernetes Secrets and ConfigMaps
- Dev. Kubernetes - Using ConfigMap SubPaths to Mount Files
- GoLinuxCloud. Kubernetes Secrets | Declare confidential data with examples
- StackOverflow. Import data to config map from kubernetes secret
- Baeldung. CPU Requests and Limits in Kubernetes
- DEV. Kubernetes CPU Limits: The Silent Killer of Performance (And How to Fix It)
<< Previous (G033. Deploying services 02. Ghost Part 1) | +Table Of Contents+ | Next (G033. Deploying services 02. Ghost Part 3) >>