A single-binary CLI for creating S3-compatible buckets and issuing bucket-scoped credentials.
s3ctl is for teams that need repeatable bucket provisioning without manual
storage and IAM setup. It creates buckets, issues scoped credentials, rotates
OVHcloud keys, deletes empty buckets safely, and is available as release
archives, Debian packages, a signed APT repository, and a container image.
Links: Releases · GHCR · Release Hub / APT · Examples

- Overview
- Capabilities
- Quick Start
- Distribution
- Website Preview
- Batch Provisioning
- Configuration
- Built-In Templates
- IAM Notes
- Deleting Buckets
- OVHcloud Notes
- Container
- Development
- Release Process
## Overview

s3ctl provisions S3-compatible buckets and automatically issues
bucket-scoped access credentials. It can work with a normal S3/IAM-compatible
provider, or with OVHcloud Public Cloud Object Storage where buckets are exposed
as S3-compatible containers.
It is designed for the common operational workflow:
- create one or many buckets
- optionally enable versioning
- optionally apply a bucket policy from a built-in template or JSON file
- create a fresh access key and secret key for each bucket
- attach a generated policy so each credential only has access to its own bucket
- rotate existing OVHcloud S3 credentials by bucket name
- delete empty buckets safely, or delete non-empty buckets with an explicit force guard
- drive the same workflow from flags, JSON config, or CSV batch input
```mermaid
flowchart LR
    input["Flags, JSON config, or CSV batch"] --> plan["s3ctl builds a per-bucket plan"]
    plan --> provider{"Provider"}
    provider -->|s3| s3api["S3 API creates and configures buckets"]
    provider -->|s3 with scoped credentials| iamapi["IAM API creates users, policies, and access keys"]
    provider -->|ovh| ovhapi["OVHcloud API creates containers, users, policies, and S3 keys"]
    s3api --> output["Text or JSON output"]
    iamapi --> output
    ovhapi --> output
    output --> operator["Endpoint, region, and scoped credentials"]
```
- Put shared provider settings in `~/.config/s3ctl/config.json`.
- Run `s3ctl --bucket app-data --dry-run --output json`.
- Confirm the endpoint, region, and credential scope in the plan.
- Run `s3ctl --bucket app-data --output json`.
- Store the returned access key and secret securely; secrets are only printed once.
## Capabilities

- Bucket provisioning: creates one bucket, many buckets, or CSV-driven batches
- Scoped credentials: creates bucket-specific IAM-style users and access keys
- OVHcloud support: creates containers, Public Cloud users, S3 keys, policies, and optional encryption
- Credential rotation: rotates OVHcloud S3 keypairs by bucket/user name
- OVHcloud policy repair: reapplies scoped S3 user policies to existing bucket users
- Safe deletion: deletes empty buckets without `--force` and requires `--force` for non-empty buckets
- JSON output: emits success and error payloads for machine workflows
- Install options: provides release archives, Debian packages, a signed APT repository, and GHCR images
- Validated releases: publishes stable builds after release-candidate validation
## Quick Start

Build locally:

```sh
make build
./dist/s3ctl --help
```

`s3ctl --help` is a short operator quick reference. Use `s3ctl --help-full`
when you need the complete flag, template, and batch CSV reference.
Install the latest published binary:
```sh
curl -fsSL https://soakes.github.io/s3ctl/install.sh | bash
```

On macOS, use the installer instead of manually unpacking the release archive. The published macOS binaries are not Apple-notarized yet, so manually extracted downloads may be blocked by Gatekeeper unless the quarantine marker is removed. The installer handles that step after placing the binary in a user-owned bin directory.
Plan a single bucket with generated scoped credentials:
```sh
s3ctl \
  --bucket app-data \
  --endpoint https://objects.example.com \
  --region us-east-1 \
  --create-scoped-credentials \
  --credential-policy-template bucket-readwrite \
  --dry-run
```

Provision an OVHcloud Object Storage container and a dedicated S3 key:
```sh
s3ctl \
  --provider ovh \
  --bucket app-data \
  --region UK \
  --ovh-service-name PUBLIC_CLOUD_PROJECT_ID \
  --output json
```

Rotate an existing OVHcloud bucket keypair:
```sh
s3ctl \
  --provider ovh \
  --bucket app-data \
  --ovh-rotate-credentials \
  --output json
```

Repair OVHcloud bucket scoping for an existing bucket user:
```sh
s3ctl \
  --provider ovh \
  --bucket app-data \
  --ovh-repair-policies \
  --output json
```

Preview a bucket delete:
```sh
s3ctl \
  --provider ovh \
  --bucket app-data \
  --delete \
  --dry-run
```

Delete an empty bucket after checking the dry-run output:
```sh
s3ctl \
  --provider ovh \
  --bucket app-data \
  --delete
```

Delete a non-empty bucket after checking the dry-run output:
```sh
s3ctl \
  --provider ovh \
  --bucket app-data \
  --delete \
  --force
```

Show focused bucket workflow help:

```sh
s3ctl --bucket app-data --help
```

Show the full CLI reference:

```sh
s3ctl --help-full
```

Plan multiple buckets from repeated flags:
```sh
s3ctl \
  --bucket app-data \
  --bucket logs-archive \
  --create-scoped-credentials \
  --dry-run \
  --output json
```

Plan a batch from CSV:
```sh
s3ctl \
  --batch-file ./examples/s3ctl-batch.csv \
  --endpoint https://objects.example.com \
  --region us-east-1 \
  --create-scoped-credentials \
  --dry-run \
  --output json
```

## Distribution

Published artifacts cover the supported installation paths:
- GitHub release archives for `linux/amd64`, `linux/arm64`, `linux/arm/v7`, `darwin/amd64`, and `darwin/arm64`
- Debian `.deb` packages for `amd64`, `arm64`, and `armhf`
- a GitHub Pages release hub with install commands and release metadata
- a signed APT repository
- a multi-arch GHCR image
Release candidates use tags like `v1.2.3-rc.1`. They are useful for testing a
version before it reaches the stable installer, stable APT channel, or
`:latest` container tag.
Direct installer, recommended for macOS:
```sh
curl -fsSL https://soakes.github.io/s3ctl/install.sh | bash
```

On macOS, install via this script unless you specifically need to handle the
archive yourself. The installer defaults to a user-owned bin directory, prefers
an existing home bin path already present in `PATH` such as `$HOME/.local/bin`,
`$HOME/bin`, or `$HOME/.bin`, and otherwise uses `$HOME/.local/bin` with a `PATH`
hint. It also clears the macOS download quarantine marker from the installed
binary.
If you download and extract a macOS archive manually, Finder may block the binary because the release is not Apple-notarized yet. Prefer the installer, or clear the quarantine marker yourself after verifying the checksum:
```sh
xattr -d com.apple.quarantine ./s3ctl-darwin-arm64
```

Pinned installer run:
```sh
curl -fsSL https://soakes.github.io/s3ctl/install.sh | bash -s -- --version v1.2.3
```

Custom install location:
```sh
curl -fsSL https://soakes.github.io/s3ctl/install.sh | bash -s -- --install-dir "$HOME/.local/bin"
```

Direct Debian package:
```sh
curl -fsSLO https://github.com/soakes/s3ctl/releases/latest/download/s3ctl_1.2.3_amd64.deb
sudo apt install ./s3ctl_1.2.3_amd64.deb
```

Signed APT repository:
```sh
sudo install -d -m 0755 /etc/apt/keyrings
curl -fsSL https://soakes.github.io/s3ctl/apt/s3ctl-archive-keyring.gpg \
  | sudo tee /etc/apt/keyrings/s3ctl-archive-keyring.gpg >/dev/null
sudo tee /etc/apt/sources.list.d/s3ctl.sources >/dev/null <<'EOF'
Types: deb
URIs: https://soakes.github.io/s3ctl/apt/
Suites: stable
Components: main
Signed-By: /etc/apt/keyrings/s3ctl-archive-keyring.gpg
EOF
sudo apt update && sudo apt install s3ctl
```

## Website Preview

Render the release hub locally with real browser screenshots:
```sh
make website-install
make website-check
make website-build
make website-capture
```

Desktop and mobile captures are written to `website/.captures/`.
The website is built with Vite, and the local preview flow falls back to
`website/preview-metadata.json` when generated release metadata is not yet present.
## Batch Provisioning

For bulk runs, the normal pattern is:
- Define the shared provider settings once with flags or config.
- Feed the bucket list in with repeated `--bucket` flags or `--batch-file`.
- Let `s3ctl` generate one scoped user and one access key pair per bucket.
Supported batch CSV columns:
`bucket`, `iam_user_name`, `enable_versioning`, `bucket_policy_file`, `bucket_policy_template`, `create_scoped_credentials`, `credential_policy_template`
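A batch file using these columns can be generated from a simple inventory. This sketch uses Python's `csv` module and assumes nothing beyond the documented column names:

```python
import csv
import io

# A subset of the supported batch columns; any documented column may be added.
COLUMNS = [
    "bucket",
    "create_scoped_credentials",
    "credential_policy_template",
    "enable_versioning",
]


def write_batch(rows):
    """Render batch rows as CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


batch = write_batch([
    {"bucket": "app-data", "create_scoped_credentials": "true",
     "credential_policy_template": "bucket-readwrite", "enable_versioning": "true"},
])
```

Writing the file through `csv.DictWriter` keeps column order and quoting correct as the inventory grows.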
Example CSV:
```csv
bucket,create_scoped_credentials,credential_policy_template,enable_versioning
app-data,true,bucket-readwrite,true
logs-archive,true,bucket-readonly,false
```

## Configuration

Configuration is resolved in this order:
- CLI flags
- JSON config file
- Built-in defaults
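The precedence can be sketched as a simple merge in which later sources fill only unset keys. This is an illustration of the resolution order, not s3ctl's actual code:

```python
def resolve_config(flags, config_file, defaults):
    """Merge settings: CLI flags win, then the JSON config file, then defaults."""
    resolved = dict(defaults)
    # Only keys that were actually provided (non-None) override lower layers.
    resolved.update({k: v for k, v in config_file.items() if v is not None})
    resolved.update({k: v for k, v in flags.items() if v is not None})
    return resolved


settings = resolve_config(
    flags={"bucket": "app-data", "region": None},          # --region not passed
    config_file={"region": "us-east-1", "endpoint": "https://objects.example.com"},
    defaults={"output": "text", "region": "us-east-1"},
)
```

In this sketch the unset `--region` flag falls through to the config file value, while the default output format survives because neither higher layer sets it.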
Example config:
```json
{
  "endpoint": "https://objects.example.com",
  "region": "us-east-1",
  "enable_versioning": true,
  "create_scoped_credentials": true,
  "credential_policy_template": "bucket-readwrite",
  "bucket_policy_template": "deny-insecure-transport",
  "batch_file": "./s3ctl-batch.csv"
}
```

Run it:

```sh
s3ctl --config ./examples/s3ctl.json --dry-run --output json
```

When `--output json` or `"output": "json"` is set, command failures are also
written to stdout as JSON. The process still exits non-zero, but automation can
read the `error.code`, `error.message`, and
optional `error.detail` fields instead of scraping text:
```json
{
  "operation": "delete",
  "dry_run": false,
  "config_file": "/home/operator/.config/s3ctl/config.json",
  "resource_count": 1,
  "error": {
    "code": "not_found",
    "message": "OVH bucket/container \"app-data\" does not exist in region \"UK\"; nothing was deleted",
    "detail": "OVHcloud API error ..."
  }
}
```

Example OVHcloud config with OAuth2 service account credentials:
```json
{
  "provider": "ovh",
  "ovh_service_name": "PUBLIC_CLOUD_PROJECT_ID",
  "ovh_client_id": "CLIENT_ID",
  "ovh_client_secret": "CLIENT_SECRET",
  "region": "UK",
  "enable_versioning": true,
  "ovh_encrypt_data": true,
  "ovh_storage_policy_role": "readWrite",
  "output": "json"
}
```

Classic OVH API application credentials are also supported:
```json
{
  "provider": "ovh",
  "ovh_service_name": "PROJECT_ID",
  "ovh_application_key": "APPLICATION_KEY",
  "ovh_application_secret": "APPLICATION_SECRET",
  "ovh_consumer_key": "CONSUMER_KEY",
  "region": "GRA"
}
```

With that saved in your default config, this is enough:

```sh
s3ctl --bucket app-data
```

Relative paths inside the config file are resolved from the config file directory, so config-local batch files and policy documents work cleanly.
Default user config path:
`$XDG_CONFIG_HOME/s3ctl/config.json`, falling back to `$HOME/.config/s3ctl/config.json`
When `--config` is unset, `s3ctl` automatically loads that default file if
it exists. This is the right place for shared operator settings such as
provider, endpoint, region, profile, credentials, IAM/OVH defaults, and output
preferences.
Example default user config:
```json
{
  "endpoint": "https://objects.example.com",
  "region": "us-east-1",
  "access_key": "MASTER_ACCESS_KEY_ID",
  "secret_key": "MASTER_SECRET_ACCESS_KEY",
  "create_scoped_credentials": true,
  "credential_policy_template": "bucket-readwrite"
}
```

Use either `profile` or explicit `access_key` and `secret_key` values, not both.
Add `session_token` when your master credentials are temporary. If those values
are not set in `s3ctl`, the AWS SDK still uses its normal credential and profile
discovery. If you keep secrets in the default user config, store that file
outside the repository and restrict its permissions.
Install that as your per-user default:
```sh
install -d -m 700 "${XDG_CONFIG_HOME:-$HOME/.config}/s3ctl"
install -m 600 ./examples/user-config.json "${XDG_CONFIG_HOME:-$HOME/.config}/s3ctl/config.json"
```

## Built-In Templates

Bucket policy templates:
| Template | Coverage |
|---|---|
| `deny-insecure-transport` | Denies all S3 actions against the bucket and objects when requests do not use secure transport. |
| `public-read` | Allows public `s3:GetObject` access to objects in the bucket. |
Scoped credential policy templates:
| Template | Coverage |
|---|---|
| `bucket-readonly` | Allows bucket location lookup, bucket listing, and object reads for one bucket. |
| `bucket-readwrite` | Allows bucket location lookup, bucket listing, object reads, writes, deletes, and multipart upload operations for one bucket. |
| `bucket-admin` | Allows all S3 actions against one bucket and its objects. |
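The template documents themselves ship inside s3ctl and are not reproduced here, but a policy with the described `bucket-readonly` coverage would look roughly like this sketch, assuming AWS-style ARNs and a bucket named `app-data`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::app-data"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::app-data/*"
    }
  ]
}
```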
By default, generated scoped credentials use `bucket-readwrite`, generated IAM
user names are derived directly from bucket names, and no IAM path is set.
Configure `iam_user_prefix` or `--iam-user-prefix` when generated user names
should share a prefix. Configure `iam_path` or `--iam-path` when generated
users should be created under an IAM path.
## IAM Notes

Scoped credential provisioning uses the IAM API in addition to the S3 API. The principal running `s3ctl` therefore needs permission to:
- create buckets and apply bucket configuration in S3
- create IAM users
- attach inline IAM policies
- create IAM access keys
AWS IAM is the default target. When you need an IAM-compatible alternative, use
`--iam-endpoint` or `iam_endpoint` in JSON config.
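As an illustration, the permissions listed above could be granted to the provisioning principal with an AWS-style policy like the following sketch. The action names are standard AWS S3 and IAM actions; an IAM-compatible alternative may differ, and the resource scope should be tightened for production:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:CreateBucket",
        "s3:PutBucketVersioning",
        "s3:PutBucketPolicy",
        "iam:CreateUser",
        "iam:PutUserPolicy",
        "iam:CreateAccessKey"
      ],
      "Resource": "*"
    }
  ]
}
```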
## Deleting Buckets

Use `--delete` with one or more `--bucket` values to remove buckets instead of
creating them. Empty buckets can be deleted without `--force`. Non-empty
buckets require `--force`; without it, `s3ctl` lists the bucket contents and
refuses to delete the bucket if objects, object versions, or delete markers are
present. Use `--dry-run` to preview the target.
```sh
s3ctl --bucket app-data --delete --dry-run
s3ctl --bucket app-data --delete
s3ctl --bucket app-data --delete --force --timeout 30m
```

Without `--force`, the S3 provider only lists object versions, delete markers,
and current objects to confirm the bucket is empty before deleting it. With
`--force`, it deletes object versions and delete markers when the endpoint
supports version listing, deletes any remaining current objects, and finally
deletes the bucket.
The S3 principal running the delete needs the matching list, object delete,
object version delete, and bucket delete permissions.
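Those delete-path permissions can be sketched as an AWS-style policy for a single bucket. This is an illustration only, again assuming a bucket named `app-data`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:ListBucketVersions", "s3:DeleteBucket"],
      "Resource": "arn:aws:s3:::app-data"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::app-data/*"
    }
  ]
}
```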
JSON config can also drive this mode:
```json
{
  "bucket": "app-data",
  "delete_bucket": true
}
```

The shorter `"delete": true` config key is accepted as an alias for
`"delete_bucket": true`.
Keep `"force": true` out of shared default configs unless every run using that
config should be allowed to remove bucket contents before deleting buckets.
Use `--timeout` or `"timeout": "30m"` for large buckets or slower
object-storage endpoints. The default timeout is `10m`.
## OVHcloud Notes

Use `--provider ovh` to create OVHcloud Object Storage through the Public Cloud
API. OVHcloud calls buckets "containers"; `s3ctl` keeps the CLI wording as
bucket because the resulting credentials are S3-compatible.
The OVHcloud provider creates one Public Cloud user and one S3 credential pair
per bucket, creates the container in `--region`, attaches the user to that
container with the matching OVHcloud container profile (`readWrite` by default),
and imports an OVHcloud S3 user policy scoped to that bucket. It does not apply
S3 bucket policy documents; access is controlled through OVHcloud container
profiles and S3 user policies. The replication policy profile uses OVHcloud's
native admin container profile plus an imported S3 user policy that keeps
access scoped to the bucket and denies bucket-administration writes.
The generated OVHcloud user policy denies `s3:ListAllMyBuckets`, so a bucket key
cannot enumerate every bucket in the project. Use `mc ls alias/bucket-name` to
list objects in the bucket. Bare `mc ls alias` uses the S3 account-level bucket
listing API, which OVHcloud documents as returning all buckets or being denied,
rather than a reliable single-bucket filtered result.
JSON output reports this OVHcloud container/S3 user policy as
`scoped_access_policy_applied`. `bucket_policy_applied` is only emitted when an
S3 bucket policy document was actually applied.
For `readOnly` and `readWrite`, `s3ctl` also adds explicit deny statements for
unsupported operations on the owned bucket. OVHcloud currently falls back to the
bucket owner's ACL when a user policy has no matching allow or deny, so explicit
denies are required for owner-scoped users.
Required OVHcloud settings:
- `provider`: `ovh`
- `ovh_service_name`: the Public Cloud project ID/service name
- one OVHcloud auth mode: OAuth2 service account credentials, an access token, classic OVH API application credentials, or standard go-ovh client discovery such as `ovh.conf`
- `region`: an OVHcloud Public Cloud/Object Storage region such as `UK`, `GRA`, `BHS`, `SBG`, or `EU-WEST-PAR`. Use the uppercase region returned by OVHcloud's Public Cloud API. `s3ctl` also accepts lowercase S3 endpoint regions such as `uk` and normalizes them for OVHcloud API calls.
Optional OVHcloud settings:
- `ovh_api_endpoint`: endpoint name such as `ovh-eu`, `ovh-ca`, `ovh-us`, or a custom API URL
- `ovh_client_id` and `ovh_client_secret`: OAuth2 service account credentials
- `ovh_access_token`: short-lived OVHcloud access token
- `ovh_application_key`, `ovh_application_secret`, and `ovh_consumer_key`: classic OVH API application credentials
- `ovh_s3_endpoint`: override the returned S3 endpoint when the default `https://s3.<region>.io.cloud.ovh.net` form is not right for your project
- `ovh_user_role`: defaults to `objectstore_operator`
- `ovh_storage_policy_role`: one of `admin`, `deny`, `readOnly`, `readWrite`, or `replication`. Use `replication` only for buckets that act as replication targets; it allows bucket versioning/configuration reads and replication target object actions supported by OVHcloud while remaining scoped to the bucket.
- `ovh_encrypt_data`: set to `true` to enable OVHcloud server-side encryption with OVH-managed keys (AES256/SSE-OMK). When explicitly set to `false`, `s3ctl` requests OVHcloud `plaintext` container storage.
- `ovh_tags`: optional tags to apply to new OVHcloud containers. `s3ctl` does not add tags by default. Use JSON config such as `"ovh_tags": {"environment": "prod", "owner": "platform"}`, or repeat `--ovh-tag environment=prod --ovh-tag owner=platform`.
- `ovh_rotate_credentials`: set to `true` to rotate S3 credentials for the existing OVHcloud container owner instead of creating a new container. Keep it out of the normal provisioning config unless every run should be a rotation.
- `ovh_repair_policies`: set to `true` to reapply the OVHcloud container profile and S3 user policy for existing bucket owners without issuing new credentials.
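The documented region handling and default endpoint form can be sketched in Python. This is an illustration of the normalization behavior described above, not s3ctl's code:

```python
def normalize_region(region: str) -> str:
    """Uppercase lowercase S3 endpoint regions (e.g. "uk") for OVHcloud API calls."""
    return region.upper()


def default_s3_endpoint(region: str) -> str:
    """Build the documented default form https://s3.<region>.io.cloud.ovh.net."""
    return f"https://s3.{region.lower()}.io.cloud.ovh.net"


# The OVHcloud API wants "UK"; the S3 endpoint hostname wants "uk".
api_region = normalize_region("uk")
endpoint = default_s3_endpoint(api_region)
```

Setting `ovh_s3_endpoint` simply replaces the result of the second function when the default hostname form does not match your project.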
```mermaid
flowchart TD
    admin["OVHcloud account or IAM admin"] --> oauth["Create OAuth2 service account"]
    admin --> iam["Grant IAM policy on the Public Cloud project"]
    oauth --> config["Add client ID and secret to s3ctl config"]
    project["Public Cloud project ID"] --> config
    iam --> access["Service account can manage Object Storage resources"]
    access --> run["s3ctl --provider ovh --bucket app-data"]
    config --> run
    run --> user["Create bucket-dedicated Public Cloud user"]
    run --> bucket["Create Object Storage container in region"]
    run --> userpolicy["Import bucket-scoped S3 user policy"]
    run --> keys["Create S3 access key and secret"]
    user --> policy["Attach container policy role"]
    bucket --> policy
    policy --> userpolicy
    userpolicy --> keys
    keys --> result["Return endpoint, region, and credentials"]
```
Create the OAuth2 service account first. The official `ovhcloud` CLI is the
cleanest route:
Install the CLI from OVHcloud's official guide:
https://help.ovhcloud.com/csm/en-cli-getting-started?id=kb_article_view&sysparm_article=KB0072704
```sh
brew install --cask ovh/tap/ovhcloud-cli
```

Without Homebrew:

```sh
curl -fsSL https://raw.githubusercontent.com/ovh/ovhcloud-cli/main/install.sh | sh
```

Authenticate it with your OVHcloud account:
```sh
ovhcloud login
```

Then create the service account credentials for s3ctl:
```sh
ovhcloud account api oauth2 client create \
  --name "s3ctl" \
  --description "s3ctl bucket provisioning" \
  --flow "CLIENT_CREDENTIALS"
```

OVHcloud returns a `clientId` and `clientSecret`; use those as `ovh_client_id`
and `ovh_client_secret` in the s3ctl config.
You can also create the OAuth2 client from the OVHcloud API console. Open the
console for your account region, go to `POST /me/api/oauth2/client`, and submit
this body:
- EU: https://eu.api.ovh.com/console/?branch=v1&section=%2Fme
- CA: https://ca.api.ovh.com/console/?branch=v1&section=%2Fme
- US: https://api.us.ovhcloud.com/console/?branch=v1&section=%2Fme
```json
{
  "callbackUrls": [],
  "flow": "CLIENT_CREDENTIALS",
  "name": "s3ctl",
  "description": "s3ctl bucket provisioning"
}
```

Next, grant that service account access to the Public Cloud project. The service account cannot grant access to itself; use the OVHcloud account/admin user or an existing identity with IAM administration rights.
In OVHcloud Manager:
- Open Identity, Security & Operations.
- Open Policies.
- Create a policy named `s3ctl-object-storage`.
- Under Identities, select the `s3ctl` service account.
- Under Product types, select Public Cloud Project.
- Under Resources, select the project long ID shown under the project name, for example `51ab2732562648349de40f72ac51c1c8`. Use this same value as `ovh_service_name`; do not use the display name.
- For the first smoke test, authorise all actions on that selected project resource. After confirming it works, tighten the policy to the actions below.
Least-privilege actions for s3ctl:
- `publicCloudProject:apiovh:get`
- `publicCloudProject:apiovh:user/create`
- `publicCloudProject:apiovh:user/delete`
- `publicCloudProject:apiovh:user/get`
- `publicCloudProject:apiovh:user/policy/create`
- `publicCloudProject:apiovh:user/s3Credentials/create`
- `publicCloudProject:apiovh:user/s3Credentials/delete`
- `publicCloudProject:apiovh:user/s3Credentials/get`
- `publicCloudProject:apiovh:region/storage/create`
- `publicCloudProject:apiovh:region/storage/delete`
- `publicCloudProject:apiovh:region/storage/edit`
- `publicCloudProject:apiovh:region/storage/get`
- `publicCloudProject:apiovh:region/storage/policy/create`
The policy body in `examples/ovh-iam-policy.json` is a starting point for the
API route, `POST /iam/policy`. Get the service account identity URN from
`GET /me/api/oauth2/client/{clientId}`. OVHcloud documents the format as
`urn:v1:<eu|ca>:identity:credential:<account-id>/oauth2-<clientId>`. Get the
project resource URN from `GET /iam/resource` by selecting the
`publicCloudProject` resource matching your Public Cloud project ID.
Verify the policy before running `s3ctl`. With the same OAuth2 credentials,
`GET /cloud/project` should list the project ID:
```sh
token="$(curl -fsS \
  -d grant_type=client_credentials \
  --data-urlencode "client_id=$OVH_CLIENT_ID" \
  --data-urlencode "client_secret=$OVH_CLIENT_SECRET" \
  -d scope=all \
  https://www.ovh.com/auth/oauth2/token | jq -r .access_token)"
curl -fsS -H "Authorization: Bearer $token" \
  https://eu.api.ovh.com/1.0/cloud/project | jq .
```

Expected output should include the Public Cloud project ID:
```json
[
  "51ab2732562648349de40f72ac51c1c8"
]
```

If OVHcloud returns `This service does not exist` while the project ID is
correct, the service account usually cannot see the project yet. Recheck the IAM
policy identity, resource, and actions.
Use `--ovh-rotate-credentials` or `"ovh_rotate_credentials": true` when a bucket
already exists and you only want a fresh S3 access key and secret:

```sh
s3ctl --provider ovh --bucket app-data --ovh-rotate-credentials --output json
```

If using JSON config for a rotation run:
```json
{
  "provider": "ovh",
  "ovh_service_name": "PUBLIC_CLOUD_PROJECT_ID",
  "ovh_client_id": "CLIENT_ID",
  "ovh_client_secret": "CLIENT_SECRET",
  "region": "UK",
  "ovh_rotate_credentials": true,
  "output": "json"
}
```

Rotation looks up the existing container by bucket name, reads its `ownerId`,
reapplies the container profile and scoped S3 user policy, creates a new S3
credential pair for that OVH Public Cloud user, then deletes the previous S3
credentials for that user. The new secret is only returned once, so store the
command output securely. If an old key cannot be deleted after the new key is
created, s3ctl still prints the new credentials and includes a warning so the
stale key can be removed manually.
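The rotation ordering described above, creating the new key before old keys are removed and warning instead of failing on cleanup, can be sketched as follows. The injected callables stand in for the OVHcloud API calls and are hypothetical:

```python
def rotate_keys(create_key, list_keys, delete_key, warn):
    """Create a replacement key before deleting old ones; warn on cleanup failure."""
    old_ids = list(list_keys())            # snapshot existing key IDs first
    new_key = create_key()                 # the new secret is returned only once
    for key_id in old_ids:
        try:
            delete_key(key_id)
        except Exception as exc:           # keep the new key, surface the stale one
            warn(f"could not delete old key {key_id}: {exc}")
    return new_key


warnings = []
new = rotate_keys(
    create_key=lambda: {"access": "NEWKEY", "secret": "NEWSECRET"},
    list_keys=lambda: ["OLDKEY"],
    delete_key=lambda key_id: None,
    warn=warnings.append,
)
```

The key design point is that deletion failures never discard the freshly issued credentials; they only produce warnings for manual cleanup.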
Use `--ovh-repair-policies` or `"ovh_repair_policies": true` when buckets and
keys already exist and you only need to reapply the scoped access policies:
```sh
s3ctl \
  --provider ovh \
  --bucket netspeedy-archives \
  --ovh-repair-policies \
  --output json
```

You can pass multiple `--bucket` values or a batch file to repair several
bucket users in one run. The command finds each bucket's `ownerId`, verifies the
owner still looks bucket-dedicated, reapplies the OVHcloud container profile,
and imports a generated S3 user policy for that bucket. It does not create,
delete, or rotate S3 access keys.
To widen a single bucket for replication target access without changing other
buckets, repair only that bucket with the replication profile:
```sh
s3ctl \
  --provider ovh \
  --bucket netspeedy-archives \
  --ovh-storage-policy-role replication \
  --ovh-repair-policies \
  --output json
```

For already exposed credentials, prefer rotation after policy repair so old keys that may have been copied elsewhere are removed:
```sh
s3ctl \
  --provider ovh \
  --bucket netspeedy-archives \
  --ovh-rotate-credentials \
  --output json
```

OVHcloud buckets are containers, but the delete command still uses the bucket name:
```sh
s3ctl --provider ovh --bucket app-data --delete
```

For OVHcloud deletes, `s3ctl` looks up the container owner, creates a temporary
S3 credential for that OVH Public Cloud user, and checks whether the container
is empty through the S3-compatible API. Empty containers are deleted without
`--force`. Non-empty containers require `--force`, which allows `s3ctl` to empty
the container through the S3-compatible API before deleting it through the
OVHcloud Public Cloud API. After the container is removed, s3ctl deletes the
user's S3 credentials and the OVH Public Cloud user. If the container is removed
but a final credential/user cleanup call fails, the command prints a warning so
the stale identity can be removed manually.
For safety, OVHcloud delete, credential rotation, and policy repair only
continue when the container owner looks bucket-dedicated: the OVH Public Cloud
user description or username must match the bucket name, or the legacy
description `s3ctl bucket <bucket>`. This prevents managing credentials or
policies on a shared manual OVH user.
The application key, application secret, and consumer key flow is still
supported as OVHcloud's classic API authentication path and can be used directly
with s3ctl as well.
For classic OVH API application credentials, use OVHcloud's token creation
page. These links pre-fill the API rights s3ctl needs for Public Cloud bucket
provisioning, but they do not create OAuth2 service account credentials:
- EU: https://eu.api.ovh.com/createToken/?GET=%2Fcloud%2Fproject%2F%2A&POST=%2Fcloud%2Fproject%2F%2A&DELETE=%2Fcloud%2Fproject%2F%2A
- CA: https://ca.api.ovh.com/createToken/?GET=%2Fcloud%2Fproject%2F%2A&POST=%2Fcloud%2Fproject%2F%2A&DELETE=%2Fcloud%2Fproject%2F%2A
- US: https://api.us.ovhcloud.com/createToken/?GET=%2Fcloud%2Fproject%2F%2A&POST=%2Fcloud%2Fproject%2F%2A&DELETE=%2Fcloud%2Fproject%2F%2A
After creating the token, paste the returned application key, application
secret, and consumer key into `ovh_application_key`, `ovh_application_secret`,
and `ovh_consumer_key`. To create `ovh_client_id` and `ovh_client_secret`,
use `POST /me/api/oauth2/client` instead.
## Container

Build locally:
```sh
make docker-build
docker run --rm s3ctl:dev --help
```

Use the published image:
```sh
docker run --rm ghcr.io/soakes/s3ctl:latest --help
```

Run against the bundled examples from the host:
```sh
docker run --rm \
  -v "$PWD/examples:/examples:ro" \
  ghcr.io/soakes/s3ctl:latest \
  --config /examples/s3ctl.json \
  --dry-run \
  --output json
```

## Development

Common targets:
```sh
make lint-install
make fmt
make lint
make vet
make test
make build
make refresh-go-toolchain
make build-release
make package-deb BINARY_PATH=dist/s3ctl-linux-amd64 DEB_ARCH=amd64
```

Recommended Go quality workflow:
```sh
make lint-install
make fmt
make lint
make vet
make test
make build
```

`gofmt` is the baseline formatter. The pinned golangci-lint configuration adds gofumpt, goimports, staticcheck, errcheck, and revive.
Use the website targets when changing the release hub so the local preview, metadata fallback, and production build stay aligned.
Dependency updates are managed by Dependabot. Related AWS SDK for Go v2 module
updates are grouped into one pull request so shared `go.mod` and `go.sum`
changes do not create a queue of conflicting PRs. The Dependabot auto-merge
workflow runs after Build and Validate succeeds and on a daily maintenance
schedule; when a Dependabot PR is conflicted or missing validation for its
current head revision, it comments `@dependabot rebase` once for that head and
waits for the refreshed branch to pass validation before merging.
## Release Process

`make build-release` produces release archives in `dist/release/` for:

- `linux/amd64`
- `linux/arm64`
- `linux/arm/v7`
- `darwin/amd64`
- `darwin/arm64`
The Linux binaries are built with `CGO_ENABLED=0`, so releases are architecture-specific rather than distro-specific and should run across most mainstream distributions for the same CPU family.
Stable releases are published only after the project passes validation for formatting, linting, vetting, tests, build output, packaging, website assets, and CLI smoke checks.
Release candidates use tags such as `v1.2.3-rc.1` while a version is being
validated. Production installs should use the latest stable release unless you
are intentionally testing a candidate build.
Stable releases publish:
- Linux and macOS archives for `amd64`, `arm64`, and Linux `arm/v7`
- Debian packages for `amd64`, `arm64`, and `armhf`
- a `SHA256SUMS` checksum file
- GHCR images for the stable version, `latest`, and semver convenience tags
- the GitHub Pages release hub with current installer and asset metadata
- signed APT repository metadata for the stable channel