# Deploy the aggregated API server (in-cluster)

This guide deploys the aggregated API server for:

- API group: `aggregation.coder.com`
- Version: `v1alpha1`
- Resources: `coderworkspaces`, `codertemplates`
## 1) Create namespace and RBAC

```shell
kubectl create namespace coder-system
kubectl apply -f config/rbac/
```
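Aggregated API servers typically rely on delegated authentication and authorization from the kube-apiserver. As an illustration only (the authoritative rules live in `config/rbac/`; the ServiceAccount name is an assumption), that RBAC commonly includes a binding like:

```yaml
# Illustrative sketch; the real manifests live in config/rbac/.
# The ServiceAccount name "coder-k8s" is an assumption.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: coder-k8s:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator   # allows TokenReview/SubjectAccessReview delegation
subjects:
  - kind: ServiceAccount
    name: coder-k8s
    namespace: coder-system
```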
## 2) Apply service and APIService manifests

```shell
kubectl apply -f deploy/apiserver-service.yaml
kubectl apply -f deploy/apiserver-apiservice.yaml
```
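For orientation, the APIService registration in `deploy/apiserver-apiservice.yaml` is expected to look roughly like the following sketch. The group and version match this guide; the Service name, port, and priority values are assumptions:

```yaml
# Sketch of an APIService registration; the real manifest is
# deploy/apiserver-apiservice.yaml. Service name/port/priorities are assumptions.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.aggregation.coder.com   # <version>.<group>
spec:
  group: aggregation.coder.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  insecureSkipTLSVerify: true            # dev only; see the TLS note
  service:
    name: coder-k8s                      # assumed Service name
    namespace: coder-system
    port: 443
```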
## 3) Choose a deployment model

### Option A: all-in-one mode (recommended for most dev/test setups)

`deploy/deployment.yaml` defaults to `--app=all`, which includes the aggregated API server.

```shell
kubectl apply -f deploy/deployment.yaml
```

In this mode, backend Coder client configuration is discovered dynamically from eligible `CoderControlPlane` resources.
### Option B: standalone aggregated API mode (`--app=aggregated-apiserver`)

Use this when you want a split deployment and explicit backend configuration.

- Apply the deployment manifest:

  ```shell
  kubectl apply -f deploy/deployment.yaml
  ```

- Configure the required args with a strategic merge patch (strategic merge matches containers by name, so only `args` is replaced):

  ```shell
  CODER_URL="https://coder.example.com"
  CODER_SESSION_TOKEN="replace-me"
  CODER_NAMESPACE="coder-system"

  kubectl -n coder-system patch deployment coder-k8s --type='strategic' -p "{
    \"spec\": {\"template\": {\"spec\": {\"containers\": [{
      \"name\": \"coder-k8s\",
      \"args\": [
        \"--app=aggregated-apiserver\",
        \"--coder-url=${CODER_URL}\",
        \"--coder-session-token=${CODER_SESSION_TOKEN}\",
        \"--coder-namespace=${CODER_NAMESPACE}\"
      ]
    }]}}}
  }"
  ```
- Update the probes to HTTPS on port `6443` for standalone mode:
```shell
kubectl -n coder-system patch deployment coder-k8s --type='strategic' -p '{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "coder-k8s",
            "livenessProbe": {
              "httpGet": {"scheme": "HTTPS", "path": "/healthz", "port": 6443}
            },
            "readinessProbe": {
              "httpGet": {"scheme": "HTTPS", "path": "/readyz", "port": 6443}
            }
          }
        ]
      }
    }
  }
}'
```
## 4) Verify

```shell
kubectl rollout status deployment/coder-k8s -n coder-system
kubectl get apiservice v1alpha1.aggregation.coder.com
kubectl get coderworkspaces.aggregation.coder.com -A
kubectl get codertemplates.aggregation.coder.com -A
kubectl logs -n coder-system deploy/coder-k8s
```
## Server-Side Apply (SSA) behavior

coder-k8s now includes a compatibility fallback for SSA create-on-update requests
(for example, `kubectl apply --server-side` when the target resource does not exist yet).

- For missing `coderworkspaces`/`codertemplates`, the aggregated API server's `Update` path can delegate to `Create` when `forceAllowCreate=true`.
- This is intentionally best-effort: Coder resources do not currently provide a first-class metadata store for Kubernetes `metadata.managedFields`, so SSA field-owner conflict semantics are not durable.
Planned follow-up options (in order of preference):

- Preferred: add first-class metadata support on template/workspace resources in Coder + `codersdk`, and round-trip Kubernetes-managed metadata there.
- Persist Kubernetes-only metadata in a shadow Kubernetes resource (ConfigMap/CRD) managed by the aggregated API server.
- Keep the compatibility fallback and continue documenting the limitations.
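The delegation rule above can be sketched as follows. This is a minimal Python simulation of the described behavior, not the coder-k8s implementation (which is Go); the names `update_or_create` and `NotFoundError` are hypothetical:

```python
# Minimal sketch of the SSA create-on-update fallback described above.
# All names here are hypothetical illustrations, not coder-k8s APIs.
class NotFoundError(Exception):
    """Raised when the target workspace/template does not exist yet."""

def update_or_create(store, name, obj, force_allow_create):
    """Update path that delegates to Create for missing objects.

    Mirrors the documented rule: Update falls back to Create only when
    forceAllowCreate=true; otherwise the not-found error propagates.
    """
    if name in store:
        store[name] = obj
        return "updated"
    if force_allow_create:
        store[name] = obj        # delegate to the Create path
        return "created"
    raise NotFoundError(name)
```

For example, a server-side apply of a not-yet-existing template arrives as an Update with `forceAllowCreate=true` and results in a create rather than a not-found error.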
## Template build wait tuning

When updating `CoderTemplate.spec.files`, the aggregated API server now waits for
Coder to finish building the new template version before promoting it active.
The wait behavior is configurable via environment variables on the
coder-k8s deployment:

- `CODER_K8S_TEMPLATE_BUILD_WAIT_TIMEOUT` (default: `25m`)
- `CODER_K8S_TEMPLATE_BUILD_BACKOFF_AFTER` (default: `2m`)
- `CODER_K8S_TEMPLATE_BUILD_INITIAL_POLL_INTERVAL` (default: `2s`)
- `CODER_K8S_TEMPLATE_BUILD_MAX_POLL_INTERVAL` (default: `10s`)
- The aggregated API request timeout defaults to `30m`; keep it at or above the build wait timeout.
- `CODER_K8S_TEMPLATE_BUILD_WAIT_TIMEOUT` values above `30m` are rejected, because they cannot exceed the API request timeout.
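These knobs are plain container environment variables; a sketch of how they might be set on the coder-k8s container in `deploy/deployment.yaml` (container name assumed, values shown are the documented defaults):

```yaml
# Sketch only: env vars on the coder-k8s container in deploy/deployment.yaml.
containers:
  - name: coder-k8s
    env:
      - name: CODER_K8S_TEMPLATE_BUILD_WAIT_TIMEOUT
        value: "25m"
      - name: CODER_K8S_TEMPLATE_BUILD_BACKOFF_AFTER
        value: "2m"
      - name: CODER_K8S_TEMPLATE_BUILD_INITIAL_POLL_INTERVAL
        value: "2s"
      - name: CODER_K8S_TEMPLATE_BUILD_MAX_POLL_INTERVAL
        value: "10s"
```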
Behavior:

- Polls at the initial interval for the first 2 minutes.
- After that, the poll interval doubles up to the max poll interval.
- Fails if the version build ends in `failed`/`canceled` or the total wait timeout is exceeded.
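With the defaults above, the resulting poll schedule can be simulated as follows. This is a sketch of the documented timing behavior only, not the coder-k8s implementation:

```python
# Simulates the documented poll schedule: a fixed initial interval until
# backoff_after has elapsed, then doubling capped at max_interval,
# stopping once the next poll would exceed the overall timeout.
# Defaults mirror the documented values: 2s initial, 2m backoff-after,
# 10s max interval, 25m wait timeout (all in seconds).
def poll_intervals(initial=2.0, backoff_after=120.0,
                   max_interval=10.0, timeout=1500.0):
    intervals, elapsed, current = [], 0.0, initial
    while elapsed + current <= timeout:
        intervals.append(current)
        elapsed += current
        if elapsed >= backoff_after:
            current = min(current * 2, max_interval)
    return intervals
```

With the defaults this yields 60 polls at 2s (the first 2 minutes), then 4s, 8s, and 10s thereafter until the 25m timeout.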
## TLS note

`deploy/apiserver-apiservice.yaml` uses `insecureSkipTLSVerify: true` for development convenience.
Use proper CA-backed TLS wiring for production environments.
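For production, the APIService should instead trust the aggregated server's serving certificate via a CA bundle. The `caBundle` field is part of the standard `apiregistration.k8s.io/v1` API; the value below is a placeholder:

```yaml
# Production-style APIService TLS wiring (sketch).
spec:
  insecureSkipTLSVerify: false
  # Base64-encoded PEM CA bundle that signed the serving certificate.
  caBundle: <base64-encoded-CA-PEM>   # placeholder, do not copy literally
```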