Today, we’re officially archiving this project. First and foremost, Thank You. 🙏
What started as an idea grew into something much bigger because of this community. Your pull requests, bug reports, feature suggestions, stars, blog posts, tweets, and words of encouragement made this project what it is. The trust you placed in us and the time you invested here truly meant a lot.
As we focus on something new, the repository will remain available in read-only mode for anyone who finds it useful. If you’d like to fork it, build on it, or take it in a new direction, we wholeheartedly encourage that. We are also changing the license to a more permissive Apache 2.0 license.
Open source is about shared learning and shared progress — and we’re deeply grateful to have been part of that journey with you.
Thank you again for the support and the collaboration. 🙏
SigLens Helm Chart provides a simple deployment for a highly performant, low-overhead observability solution that supports automatic exporting of Kubernetes events and container logs.
Please ensure that Helm is installed, then run:

```shell
helm repo add siglens-repo https://siglens.github.io/charts
helm install siglens siglens-repo/siglens
```
To install SigLens from source:

```shell
git clone
cd charts/siglens
helm install siglens .
```

Important configs in values.yaml
| Values | Description |
|---|---|
| siglens.configs | Server configs for siglens |
| siglens.storage | Defines storage class to use for siglens StatefulSet |
| siglens.storage.size | Storage size for persistent volume claim. Recommended to be half of license limit |
| siglens.ingest.service | Configurations to expose an ingest service |
| siglens.query.service | Configurations to expose a query service |
| k8sExporter.enabled | Enable automatic exporting of k8s events using an exporting tool |
| k8sExporter.configs.index | Output index name for kubernetes events |
| logsExporter.enabled | Enable automatic exporting of logs using a Daemonset fluentd |
| logsExporter.configs.indexPrefix | Prefix of index name used by logStash. Suffix will be namespace of log source |
| affinity | Affinity rules for pod scheduling. |
| tolerations | Tolerations for pod scheduling. |
| ingress.enabled | Enable or disable ingress for the service. |
| ingress.className | Ingress class to use. |
| ingress.annotations | Annotations for the ingress resource. |
| ingress.hosts | List of hosts for the ingress. |
| ingress.tls | TLS configuration for the ingress. |
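As a sketch, a `custom-values.yaml` using some of the keys from the table above might look like the following. All values here are illustrative, not chart defaults, and the `hosts` entry format is an assumption:

```yaml
# Illustrative overrides for the values listed in the table above.
siglens:
  storage:
    size: 20Gi            # half of license limit, per the recommendation above
k8sExporter:
  enabled: true
  configs:
    index: k8s-events     # output index for Kubernetes events
logsExporter:
  enabled: true
  configs:
    indexPrefix: logs-    # suffix will be the namespace of the log source
ingress:
  enabled: true
  className: nginx
  hosts:
    - siglens.example.com # hypothetical hostname
```

Pass it at install time with `helm install siglens siglens-repo/siglens -f custom-values.yaml`.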
If k8sExporter or logsExporter is enabled, a ClusterRole will be created with get/watch/list access to all resources in all apiGroups. The resources and apiGroups can be edited in serviceAccount.yaml.
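As a rough sketch of what such a ClusterRole grants (the name and exact manifest here are illustrative; the real one is templated by the chart in serviceAccount.yaml):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: siglens-exporter # illustrative name; the chart generates its own
rules:
  - apiGroups: ["*"]     # narrow these in serviceAccount.yaml if needed
    resources: ["*"]
    verbs: ["get", "watch", "list"]
```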
Currently, only awsEBS and local storage class provisioners can be configured, by setting storage.defaultClass: false and setting the required configs. To add more types of storage classes, add the necessary provisioner info to storage.yaml.
It is recommended to use a storage class that supports volume expansion.
Example configuration to use an EBS storage class:

```yaml
storage:
  defaultClass: false
  size: 20Gi
  awsEBS:
    parameters:
      type: "gp2"
      fsType: "ext4"
```
Example configuration to use a local storage class:

```yaml
storage:
  defaultClass: false
  size: 20Gi
  local:
    hostname: minikube
    capacity: 5Gi
    path: /data # must be present on local machine
```
To add AWS credentials, add the following configuration:

```yaml
serviceAccount:
  annotations:
    eks.amazonaws.com/role-arn: <<arn-of-role-to-use>>
```
If issues with AWS credentials are encountered, refer to this guide.
To use abc.txt as a license, create the following ConfigMap:

```shell
kubectl create configmap siglens-license --from-file=license.txt=abc.txt
```

Then set the following config:

```yaml
siglens:
  configs:
    license: abc.txt
```
GKE Cluster Setup
Install kubectx/kubectl:
- kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
- kubectx: https://github.com/ahmetb/kubectx
Install Auth Plugin:
- Install gke-gcloud-auth-plugin:

  ```shell
  brew install google-cloud-sdk
  gcloud components install gke-gcloud-auth-plugin
  ```

  See https://stackoverflow.com/a/74733176/16662168 if you have issues.
- Run `gke-gcloud-auth-plugin --version` to verify it's installed
Get Permissions:
- Go to the IAM page: https://console.cloud.google.com/iam-admin
- Add these permissions for yourself if you don't have them:
- Kubernetes Engine Admin
- Kubernetes Engine Cluster Admin
- Go to the GKE page: https://console.cloud.google.com/kubernetes
- Click Create, and choose the Standard option
- For the region, select the same region you want the GCS bucket in (although it will still work if the GKE cluster and the GCS bucket are in different regions)
- Use Standard Tier
- In the Node Pools → default-pool tab, set the number of nodes per zone
- In the Nodes tab for the default-pool, choose an instance type with enough CPU/RAM, and configure the amount of disk space
- Click Create (you don't need to alter the options in the other tabs)
- On the main GKE page, click your cluster to go to the page for it
- On the top, click Connect
- Copy the Command Line Access command, and run it
- In the terminal, run `kubectx` to verify you're on your new cluster (if you have multiple clusters, it highlights the one you're on)
- Wait for the cluster to finish getting set up
Note: If you don't create the Service Account with the correct roles, it's unclear how to update the roles afterward; you may need to make a new Service Account with the proper roles.
- Go to the IAM page: https://console.cloud.google.com/iam-admin
- Click the Service Accounts tab
- Click Create Service Account
- Set the name, then click Create and Continue
- Add these roles:
  - Storage Admin
  - Storage Object Admin

  Note: There are several other roles similar to these, but when I tried those, they didn't give me enough permissions.
- Click Done
Note: If you're repeating this step on your cluster, first run `kubectl delete secrets gcs-key`. You might also want to delete any old keys in your Downloads folder.
- Run `kubectl create namespace siglensent`
- (Optional) Run `kubectl config set-context --current --namespace=siglensent` to make siglensent your default namespace (kubectl only searches in your default namespace unless you specify a different namespace, or specify to search all namespaces)
- On the Service Accounts page, click your new Service Account
- Go to the Keys tab
- Click Add Key → Create New Key
- Select the JSON key, and click Create to download it
- In the terminal, run the following. Make sure to use the absolute path to your key:

  ```shell
  kubectl create secret generic gcs-key \
    --from-file=key.json=/path/to/your-key.json \
    --namespace=siglensent
  ```
EKS Cluster Setup
Install kubectl:
- kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
Install AWS CLI:
- AWS CLI: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
- Configure AWS credentials:

  ```shell
  aws configure
  ```
- Begin setting up a new EKS cluster on the AWS console
  - Use "Custom Configuration"
  - Disable EKS Auto Mode
  - Name the cluster
  - Click "Create recommended role" to create the Cluster IAM role. Leave the default settings, and assign your new role
- Continue using the default settings for the next few pages. Stop when you get to the Select Add-ons page
- Use these add-ons:
  - CoreDNS
  - Kube-proxy
  - Amazon VPC CNI
  - Amazon EBS CSI Driver
  - Amazon EKS Pod Identity Agent
- For the VPC CNI and EBS CSI add-ons, click "Create recommended role", keep the defaults, and then add that IAM role to the add-on
- Click Next and create the cluster
- Wait for the cluster to finish getting created
- Go to the Compute tab in your cluster and click "Add node group"
- Name the node group, and use "Create recommended role" to create a new role and assign it
- Click Next
- Select the desired instance type and min/max/desired nodes
- Leave the rest of the settings at their default, and create the node group
- Wait until the node group is Active
Connect to your cluster using the AWS CLI:

```shell
aws eks update-kubeconfig --region <region> --name <cluster-name>
```

The following step is optional. If you won't configure SigLens to run with S3, you don't need it. If you want to give S3 permissions via an AWS access key and secret access key, you can also skip it.
- Get the OpenID Connect provider URL on the Overview tab of your EKS cluster
- Go to IAM → Identity Providers → Add provider
- Configure the provider:
  - Use OpenID Connect
  - Paste your OpenID URL as the Provider URL
  - Use `sts.amazonaws.com` as the Audience
- Go to IAM → Roles → Create role. Configure the role:
  - Use Web identity
  - Use your newly created Identity Provider as the identity provider
  - Use `sts.amazonaws.com` as the Audience
  - Click Add Condition
  - Use `<IDENTITY_PROVIDER>:sub` as the Key
  - Use StringEquals as the Condition
  - Use `system:serviceaccount:<NAMESPACE>:<RELEASE_NAME>-service-account` as the Value
    - The namespace is the namespace you'll install the Helm chart into. It will be "siglensent" unless you change it in the values.yaml config file
    - The release name is what you'll install the chart as with Helm. It will be "siglensent" unless you change it
- Click Next
- Add S3 full access permissions
- Click Next
- Name the role, add an optional description, and click Create Role
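The resulting role's trust policy should look roughly like the sketch below. The account ID, provider path, namespace, and release name are placeholders; the exact JSON is what the AWS console generates for you:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<IDENTITY_PROVIDER>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<IDENTITY_PROVIDER>:aud": "sts.amazonaws.com",
          "<IDENTITY_PROVIDER>:sub": "system:serviceaccount:<NAMESPACE>:<RELEASE_NAME>-service-account"
        }
      }
    }
  ]
}
```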
- Prepare Configuration:
  - Begin by creating a `custom-values.yaml` file, where you'll provide your license key and other necessary configurations
  - Please look at this sample `values.yaml` file for all the available config
  - By default, the Helm chart installs in the `siglensent` namespace. If needed, you can change this in your `custom-values.yaml`, or manually create the namespace with the command: `kubectl create namespace siglensent`
- Add Helm Repository:

  Add the SigLens Helm repository with the following command:

  ```shell
  helm repo add siglens-repo https://siglens.github.io/charts
  ```

  If you've previously added the repository, ensure it's updated:

  ```shell
  helm repo update siglens-repo
  ```
- Update License and TLS Settings:
  - Update your `licenseBase64` with your Base64-encoded license key. For a license key, please reach out at support@sigscalr.io
  - If TLS is enabled, ensure you also update `acme.registrationEmail`, `ingestHost`, and `queryHost` in your configuration
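One way to produce the Base64-encoded license key is sketched below. The key string and file name are placeholders; `licenseBase64` is the chart value named above:

```shell
# Write a placeholder license key to a file (replace with your real key).
printf 'YOUR-LICENSE-KEY' > license.txt
# Base64-encode it; `tr -d '\n'` guards against line wrapping on GNU base64.
LICENSE_B64=$(base64 < license.txt | tr -d '\n')
echo "$LICENSE_B64"
```

Paste the printed value into `licenseBase64` in your `custom-values.yaml`.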
- Apply Cert-Manager (If TLS is enabled): If TLS is enabled, apply the Cert-Manager CRDs using the following command:

  ```shell
  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.crds.yaml
  ```
- Update the Resources Config:
  - Update the CPU and memory resources for both raft and worker nodes: `raft.deployment.cpu.request`, `raft.deployment.memory.request`, `worker.deployment.cpu.request`, `worker.deployment.memory.request`, `worker.deployment.replicas`
  - Set the required storage size for the PVC of the worker node: `pvc.size`, and the storage class type: `storageClass.diskType`
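A `custom-values.yaml` fragment setting these might look like the following. The numbers are illustrative, and the nesting assumes it mirrors the dotted key names above:

```yaml
# Illustrative resource settings; tune to your workload.
raft:
  deployment:
    cpu:
      request: "2"
    memory:
      request: 8Gi
worker:
  deployment:
    replicas: 2
    cpu:
      request: "4"
    memory:
      request: 16Gi
pvc:
  size: 100Gi            # storage size for the worker node PVC
storageClass:
  diskType: pd-ssd       # storage class disk type
```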
5.5. (Optional) Customize Storage Classes:
By default, Siglensent uses GCP Persistent Disk (pd.csi.storage.gke.io) as the provisioner and defines two StorageClass objects:
- `gcp-pd-rwo` — used for worker node volumes (`pd-ssd` by default)
- `gcp-pd-standard-rwo` — used for raft node volumes (`pd-standard` by default)
These are customizable through the storageClass section in your custom-values.yaml.
You can define shared defaults under storageClass.<key> and override per component (worker, raft) only if needed.
```yaml
storageClass:
  provisioner: pd.csi.storage.gke.io # GCP default provisioner
  diskType: pd-standard # Default disk type
  reclaimPolicy: Retain
  allowVolumeExpansion: true
  volumeBindingMode: WaitForFirstConsumer
  worker:
    name: gcp-pd-rwo # name for worker PVCs
  raft:
    name: gcp-pd-standard-rwo # name for raft PVCs
```

An equivalent configuration using AWS EBS:

```yaml
storageClass:
  provisioner: ebs.csi.aws.com # AWS EBS CSI driver
  diskType: gp2
  reclaimPolicy: Delete
  worker:
    name: aws-ebs-gp2-rwo-worker
    diskType: gp3 # Override the default gp2 value
  raft:
    name: aws-ebs-gp2-rwo-raft
```

💡 You only need to override fields that differ from the shared defaults.

⚠️ Important: Avoid changing the `name` of an existing `StorageClass` in a running cluster unless you know what you're doing. Doing so may break existing PersistentVolumeClaims and lead to data loss or pod scheduling issues.
- Update the RBAC Database Config (If SaaS is Enabled):

  ```yaml
  config:
    rbac:
      provider: "postgresql" # Valid options are: postgresql, sqlite
      dbname: db1
      # Postgres configuration for RBAC
      host: "postgresDbHost"
      port: 5432
      user: "username"
      password: "password"
  ```
- (Optional) Enable Blob Storage:
  - Use S3:
    - Update Config: Update the `config` section in `values.yaml`:

      ```yaml
      config:
        # ... other config params
        blobStoreMode: "S3"
        s3:
          enabled: true
          bucketName: "bucketName"
          bucketPrefix: "subdir"
          regionName: "us-east-1"
        # ... other config params
      ```
    - Setup Permissions

      Option 1: AWS access keys:
      - Create a secret with IAM keys that have access to S3 using the below command:

        ```shell
        kubectl create secret generic aws-keys \
          --from-literal=aws_access_key_id=<accessKey> \
          --from-literal=aws_secret_access_key=<secretKey> \
          --namespace=siglensent
        ```
      - Set `s3.accessThroughAwsKeys: true` in your `custom-values.yaml`
      Option 2: IAM Role:
      - Get the OpenID Connect provider URL for your cluster
      - Go to IAM → Identity providers → Add provider, and set up a new OpenID Connect provider with that URL and the audience as `sts.amazonaws.com`
      - Set up a role:
        - Go to IAM → Roles → Create role, and select the OIDC provider you just created
        - Add the condition `<IDENTITY_PROVIDER>:sub = system:serviceaccount:<NAMESPACE>:<RELEASE_NAME>-service-account`. The `<NAMESPACE>` and `<RELEASE_NAME>` are the namespace and release name of your Helm chart; they'll both be `siglensent` if you follow this README exactly.
        - Add S3 full access permissions, and create the role
      - Add the service account to the `serviceAccountAnnotations` section in `values.yaml`
      - Ensure your `custom-values.yaml` has `s3.accessThroughAwsKeys: false`
  - Use GCS:
    - Update Config: Update the `config` section in the `values.yaml`:

      ```yaml
      config:
        # ... other config params
        blobStoreMode: "GCS"
        s3:
          enabled: true
        gcs:
          bucketName: "bucketName"
          bucketPrefix: "subdir"
          regionName: "us-east1"
        # ... other config params
      ```
    - Create GCS secret:
      - Create a service account with these permissions:
        - Storage Admin
        - Storage Object Admin
      - Create a key for the service account and download the JSON file
      - Create a secret with the key using the below command (use the absolute path):

        ```shell
        kubectl create secret generic gcs-key \
          --from-file=key.json=/path/to/your-key.json \
          --namespace=siglensent
        ```
      - Add the service account to the `serviceAccountAnnotations` section in `values.yaml`
- Install Siglensent: Install Siglensent using Helm with your custom configuration file:

  ```shell
  helm install siglensent siglens-repo/siglensent -f custom-values.yaml --namespace siglensent
  ```
- Update DNS for TLS (If Applicable):
  - Run:

    ```shell
    kubectl get svc -n siglensent
    ```
  - Find the External IP of the `ingress-nginx-controller` service. Then create two A records in your DNS to point to this IP; one for `ingestHost` and one for `queryHost` as defined in your `custom-values.yaml`
Note: If you uninstall and reinstall the chart, you'll need to update your DNS again. But if you do a `helm upgrade` instead, the ingress controller will persist, so you won't have to update your DNS.