# Deploy in high availability
## Overview
By default, vCluster Standalone installs on a single control plane node. For production deployments, run multiple control plane nodes in high availability (HA) to improve resilience. Each additional control plane node joins the cluster one at a time, starting after the initial node is running.
## Predeployment configuration options
### Backing store must be embedded etcd or external database
HA requires either embedded etcd or an external database as the backing store. This must be configured on the initial node before adding additional control plane nodes.
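A minimal sketch of the two backing store options in `vcluster.yaml`; the external database connection string is an example value, not a real endpoint:

```yaml
controlPlane:
  backingStore:
    # Option 1: embedded etcd, replicated across control plane nodes
    etcd:
      embedded:
        enabled: true
    # Option 2 (alternative, do not enable both): external database.
    # The dataSource below is an example value -- adjust for your environment.
    # database:
    #   external:
    #     enabled: true
    #     dataSource: "mysql://user:password@tcp(mysql.example.com:3306)/vcluster"
```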
### Control plane node roles
Decide whether each control plane node will also act as a worker node. Once a node joins the cluster, its roles cannot be changed.
### Worker nodes
With vCluster Standalone, worker nodes can only be private nodes. Since there is no Control Plane Cluster, there is no concept of Shared Nodes.
## Prerequisites
- Access to nodes that satisfy the node requirements
## Install initial control plane node
You can install the initial node yourself (self-managed) or provision it through vCluster Platform (platform-managed, described below).

### Self-managed
Perform all steps on the initial control plane node as root.
Switch to root:

```bash
sudo su -
```

Create the vCluster config file with HA enabled:

```bash
mkdir -p /etc/vcluster
cat <<EOF > /etc/vcluster/vcluster.yaml
controlPlane:
  distro:
    k8s:
      version: v1.35.0
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA (or use external DB)
EOF
```

Run the installation script on the control plane node:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.32.0/install-standalone.sh | sh -s -- --vcluster-name standalone
```

Check that the control plane node is ready:

```bash
kubectl get nodes
```

Expected output:

```
NAME               STATUS   ROLES                  AGE   VERSION
ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1
```

Verify that cluster components are running:

```bash
kubectl get pods -A
```

Pods should include:
- Flannel: CNI for container networking
- CoreDNS: DNS service for the cluster
- KubeProxy: Network traffic routing and load balancing
- Konnectivity: Secure control plane to worker node communication
- Local Path Provisioner: Dynamic storage provisioning
#### Available flags to use in the install script

The following flags can be passed to the install script.
| Flag | Description |
|---|---|
| `--vcluster-name` | Name of the vCluster instance |
| `--vcluster-version` | Specific vCluster version to install |
| `--config` | Path to the `vcluster.yaml` configuration file |
| `--binary` | Path to an existing vCluster binary (use with `--skip-download`) |
| `--skip-download` | Skip downloading the vCluster binary (use an existing one) |
| `--skip-wait` | Exit without waiting for vCluster to be ready |
| `--extra-env` | Additional environment variables for vCluster |
| `--join-token` | Token for joining additional nodes to the cluster |
| `--join-endpoint` | Endpoint address for joining additional nodes |
| `--vcluster-kubernetes-bundle` | Path to an air-gapped Kubernetes bundle |
| `--reset-only` | Uninstall and reset the vCluster installation without reinstalling |
| `--fips` | Enable FIPS-compliant mode |
| `--platform-access-key` | Access key for vCluster Platform integration |
| `--platform-host` | vCluster Platform host URL |
| `--platform-insecure` | Skip TLS verification for the Platform connection |
| `--platform-instance-name` | Instance name in vCluster Platform |
| `--platform-project` | Project name in vCluster Platform |
#### Access your cluster
After installation, vCluster automatically configures the kubeconfig on the control plane node and sets the kubectl context to your new vCluster Standalone instance.
To access the cluster from other machines, copy the kubeconfig from `/var/lib/vcluster/kubeconfig.yaml` on the control plane node, or use the vCluster CLI to generate access credentials.
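For example, from a workstation with SSH access to the control plane node (a sketch; the hostname `cp-node-1` and the local kubeconfig path are assumptions):

```shell
# Copy the kubeconfig from the control plane node (replace cp-node-1 with your host)
scp root@cp-node-1:/var/lib/vcluster/kubeconfig.yaml ~/.kube/vcluster-standalone.yaml

# Point kubectl at the copied kubeconfig for this shell session
export KUBECONFIG=~/.kube/vcluster-standalone.yaml
kubectl get nodes
```

Note that the server address in the copied kubeconfig must be reachable from your workstation; adjust it if the node is only reachable via a public IP or load balancer.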
### Platform-managed

When managing a Standalone HA cluster through vCluster Platform, the initial control plane node is provisioned through the Platform using Auto Nodes. The Platform coordinates configuration updates and version upgrades as rolling operations, eliminating the need for manual SSH access.
Use vCluster Platform to:
- Add a Node Provider.
- Add the vCluster configuration (example below).
- Provision the cluster from the platform UI.
```yaml
controlPlane:
  standalone:
    enabled: true
    autoNodes:
      provider: aws # Node provider you want to use
      quantity: 3   # Number of nodes
  distro:
    k8s:
      image:
        tag: v1.35.0 # Kubernetes version you want to use
  backingStore:
    etcd:
      embedded:
        enabled: true # Required for HA (or use external DB)
# Worker nodes
privateNodes:
  enabled: true
  autoNodes: # (optional) Add worker nodes with Auto Nodes
    - provider: aws
      dynamic:
        - name: aws-pool-1
```
After provisioning completes, vCluster Platform manages the control plane node lifecycle. Worker node lifecycle also remains managed through the platform UI when Auto Nodes are used.
#### Access your cluster
To access a Standalone cluster managed by vCluster Platform, open the vCluster in the Platform UI and click Connect.
Alternatively, use the `vcluster platform connect vcluster` command.
## Add additional control plane nodes
After the initial control plane node is running, additional nodes join the cluster using a bootstrap token and the install script.
### Create a join token
Generate a token on the initial control plane node. A single token can be used for multiple nodes, or you can create one per node.
```bash
/var/lib/vcluster/bin/vcluster-cli token create --control-plane --expires=1h
```
The command returns the join endpoint and token:

```
Join endpoint: https://<vcluster-endpoint>
Join token: <token>
```

By default, the token expires after 1 hour. The token is stored as a Secret prefixed with `bootstrap-token-` in the `kube-system` namespace.
### Join each control plane node
On each additional control plane node, run the install script with the `--join-token` and `--join-endpoint` flags:

```bash
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.32.0/install-standalone.sh | sh -s -- \
  --vcluster-name standalone \
  --join-token your-token \
  --join-endpoint https://your-vcluster-endpoint
```
The node automatically downloads the necessary binaries and configuration, and joins as an additional control plane node.
## Add worker nodes
After the vCluster control plane is up and running, add dedicated worker nodes.
The API server endpoint must be reachable from the worker nodes. Configure `controlPlane.endpoint` and `controlPlane.proxy.extraSANs` in your vCluster configuration to expose the API server.
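A minimal sketch of that configuration; the address `203.0.113.10:8443` is an example value for whatever IP or DNS name your workers use to reach the control plane:

```yaml
controlPlane:
  # Address workers use to reach the API server (example value)
  endpoint: "203.0.113.10:8443"
  proxy:
    extraSANs:
      # Extra SAN so the serving certificate is valid for this address (example value)
      - "203.0.113.10"
```

If workers connect through a load balancer or DNS name instead of a node IP, use that name in both fields.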