Version: v0.32

Deploy in high availability

Supported Configurations
vCluster Standalone runs the control plane as a binary directly on the host. When you scale out with additional worker nodes, they join as private nodes.

Overview

By default, vCluster Standalone installs on a single control plane node. For production deployments, run multiple control plane nodes in high availability (HA) to improve resilience. Each additional control plane node joins the cluster one at a time, starting after the initial node is running.

Predeployment configuration options

Backing store must be embedded etcd or external database

HA requires either embedded etcd or an external database as the backing store. This must be configured on the initial node before adding additional control plane nodes.
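For example, the steps below use embedded etcd; an external database would instead be configured under `controlPlane.backingStore.database.external`. A minimal sketch, assuming a reachable MySQL instance (the connection string is a placeholder, not a working example):

```yaml
# Hypothetical vcluster.yaml fragment: external database as the backing store
controlPlane:
  backingStore:
    database:
      external:
        enabled: true
        # Placeholder connection string; adjust driver, credentials, and host
        dataSource: mysql://username:password@tcp(database.example.com:3306)/vcluster
```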

Control plane node roles

Decide whether each control plane node should also act as a worker node. Once a node joins the cluster, its roles cannot be changed.

Worker nodes

With vCluster Standalone, worker nodes can only be private nodes. Since there is no Control Plane Cluster, there is no concept of Shared Nodes.

Prerequisites

Install initial control plane node


Perform all steps on the initial control plane node as root.

  1. Switch to root, then create a vcluster.yaml with HA enabled.

    Switch to root
    sudo su -
    Create vCluster config file with HA enabled
    mkdir -p /etc/vcluster
    cat <<EOF > /etc/vcluster/vcluster.yaml
    controlPlane:
      distro:
        k8s:
          version: v1.35.0
      backingStore:
        etcd:
          embedded:
            enabled: true # Required for HA (or use external DB)
    EOF
  2. Run the installation script on the control plane node:

    Install vCluster Standalone on control plane node
    curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.32.0/install-standalone.sh | sh -s -- --vcluster-name standalone
  3. Check that the control plane node is ready by running these commands:

    Check node status
    kubectl get nodes

    Expected output:

    NAME               STATUS   ROLES                  AGE   VERSION
    ip-192-168-3-131   Ready    control-plane,master   11m   v1.32.1
    Verify cluster components are running
    kubectl get pods -A

    Pods should include:

    • Flannel: CNI for container networking
    • CoreDNS: DNS service for the cluster
    • KubeProxy: Network traffic routing and load balancing
    • Konnectivity: Secure control plane to worker node communication
    • Local Path Provisioner: Dynamic storage provisioning

Available flags to use in the install script

The following flags can be passed to the install script after `sh -s --`:

| Flag | Description |
| --- | --- |
| `--vcluster-name` | Name of the vCluster instance |
| `--vcluster-version` | Specific vCluster version to install |
| `--config` | Path to the vcluster.yaml configuration file |
| `--binary` | Path to an existing vCluster binary (use with `--skip-download`) |
| `--skip-download` | Skip downloading vCluster binary (use existing) |
| `--skip-wait` | Exit without waiting for vCluster to be ready |
| `--extra-env` | Additional environment variables for vCluster |
| `--join-token` | Token for joining additional nodes to the cluster |
| `--join-endpoint` | Endpoint address for joining additional nodes |
| `--vcluster-kubernetes-bundle` | Path to an air-gapped Kubernetes bundle |
| `--reset-only` | Uninstall and reset the vCluster installation without reinstalling |
| `--fips` | Enable FIPS-compliant mode |
| `--platform-access-key` | Access key for vCluster Platform integration |
| `--platform-host` | vCluster Platform host URL |
| `--platform-insecure` | Skip TLS verification for the Platform connection |
| `--platform-instance-name` | Instance name in vCluster Platform |
| `--platform-project` | Project name in vCluster Platform |

Access your cluster

After installation, vCluster automatically configures the kubeconfig on the control plane node and sets the kubectl context to your new vCluster Standalone instance.

To access the cluster from other machines, copy the kubeconfig from /var/lib/vcluster/kubeconfig.yaml on the control plane node or use the vCluster CLI to generate access credentials.
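For example, a sketch of remote access from a workstation, assuming the control plane node is reachable over SSH as `cp-node-1` (a placeholder hostname):

```shell
# Copy the kubeconfig from the control plane node (placeholder hostname)
scp root@cp-node-1:/var/lib/vcluster/kubeconfig.yaml ~/.kube/vcluster-standalone.yaml

# Point kubectl at the copied kubeconfig
kubectl --kubeconfig ~/.kube/vcluster-standalone.yaml get nodes
```

If the copied kubeconfig references a private address, update its server field to an endpoint reachable from your workstation.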

Add additional control plane nodes

After the initial control plane node is running, additional nodes join the cluster using a bootstrap token and the install script.

Create a join token

Generate a token on the initial control plane node. A single token can be used for multiple nodes, or you can create one per node.

Create a token for control plane nodes
/var/lib/vcluster/bin/vcluster-cli token create --control-plane --expires=1h

The command returns the join endpoint and token:

Example output
Join endpoint: https://<vcluster-endpoint>
Join token: <token>

By default the token expires after 1 hour. The token is stored as a Secret prefixed with bootstrap-token- in the kube-system namespace.
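When automating joins across many nodes, the endpoint and token can be parsed out of the `token create` output. A minimal sketch, assuming the two-line output format shown above (the values below are placeholders, not real credentials):

```shell
# Placeholder output in the "Join endpoint: ... / Join token: ..." format
out="Join endpoint: https://10.0.0.5:8443
Join token: abcdef.0123456789abcdef"

# Split each line on the first ": " and keep the value
endpoint=$(printf '%s\n' "$out" | awk -F': ' '/^Join endpoint/ {print $2}')
token=$(printf '%s\n' "$out" | awk -F': ' '/^Join token/ {print $2}')

echo "$endpoint"   # https://10.0.0.5:8443
echo "$token"      # abcdef.0123456789abcdef
```

The extracted values can then be passed to the install script as `--join-token "$token" --join-endpoint "$endpoint"`.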

Join each control plane node

On each additional control plane node, run the install script with the --join-token and --join-endpoint flags:

Replace the placeholder values below with your own:
Join an additional control plane node
curl -sfL https://github.com/loft-sh/vcluster/releases/download/v0.32.0/install-standalone.sh | sh -s -- \
--vcluster-name standalone \
--join-token your-token \
--join-endpoint https://your-vcluster-endpoint

The node automatically downloads the necessary binaries and configuration, and joins as an additional control plane node.
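After each join completes, the new member should appear in the node list. A quick check, assuming the standard `node-role.kubernetes.io/control-plane` label is applied:

```shell
# List only control plane nodes; each joined node should appear as Ready
kubectl get nodes -l node-role.kubernetes.io/control-plane
```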

Add worker nodes

After the vCluster control plane is up and running, add dedicated worker nodes.

The API server endpoint must be reachable from the worker nodes. Configure controlPlane.endpoint and controlPlane.proxy.extraSANs in your vCluster configuration to expose the API server.
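A sketch of that configuration, assuming workers reach the control plane through a stable address such as `vcluster.example.com` (both the hostname and port are placeholders):

```yaml
# Hypothetical vcluster.yaml fragment: expose the API server to worker nodes
controlPlane:
  endpoint: vcluster.example.com:8443
  proxy:
    extraSANs:
      # Extra subject alternative name so the serving certificate
      # is valid for the external address
      - vcluster.example.com
```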