Install K8ssandra on AKS
Azure Kubernetes Service or “AKS” is a managed Kubernetes service that makes it easy for you to run Kubernetes on Azure. AKS offers serverless Kubernetes, an integrated continuous integration and continuous delivery (CI/CD) experience, and enterprise-grade security and governance.
Tip
Follow-up topics cover role-based considerations for developers and site reliability engineers (SREs).
Requirements
- Azure Blob Storage Container - Backups for K8ssandra are stored in an Azure Blob Storage container. Follow Quickstart: Upload, download, and list blobs with the Azure portal to create the container. An Azure CLI alternative is sketched after this list.
- Azure AKS Cluster - Follow the steps in Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster.
- kubectl installed and configured to access the cluster.
- (Optional) Helm installed and configured to access the cluster.
- (Optional) Kustomize installed, for use together with kubectl.
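If you prefer to script these prerequisites rather than use the Azure portal, the Azure CLI can create equivalent resources. The sketch below uses hypothetical names (my-rg, my-aks, mystorageacct, k8ssandra-backups); substitute your own names, region, and node count.
# Create a resource group and an AKS cluster, then fetch credentials for kubectl
az group create --name my-rg --location eastus
az aks create --resource-group my-rg --name my-aks --node-count 3 --generate-ssh-keys
az aks get-credentials --resource-group my-rg --name my-aks
# Create a storage account (name must be globally unique) and the blob container for backups
az storage account create --resource-group my-rg --name mystorageacct
az storage container create --account-name mystorageacct --name k8ssandra-backups --auth-mode login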
Infrastructure Considerations
Apache Cassandra can be deployed across a variety of hardware profiles. Consider the hardware requirements of Apache Cassandra when defining AKS node pools. Decide whether you plan to run a single Cassandra instance per node or multiple instances per node. If you run multiple instances per node, ensure that the node has enough resources to support them all. For example, to run three Cassandra instances per node, the node must have enough CPU, memory, and storage for three instances.
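As a rough, illustrative sketch, a dedicated AKS node pool for Cassandra can be added with a VM size chosen to match the per-node resource plan described above; the resource group, cluster, and pool names, the node count, and the Standard_D8s_v3 size are all placeholder assumptions.
# Add a dedicated node pool for Cassandra workloads (names, count, and VM size are illustrative)
az aks nodepool add --resource-group my-rg --cluster-name my-aks --name cassandra --node-count 3 --node-vm-size Standard_D8s_v3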
Dependencies
- Cert Manager - Cert Manager provides a common API for management of Transport Layer Security (TLS) certificates. Cert Manager’s interfaces communicate with a variety of Certificate Authority backends, including self-signed, Venafi, and Vault. K8ssandra Operator uses this API for certificates used by the various K8ssandra components.
Installation Type
K8ssandra Operator may be deployed in one of two modes. Control-Plane mode is the default method of installation. A Control-Plane instance of K8ssandra Operator watches for the creation of, and changes to, K8ssandraCluster custom resources. When Control-Plane mode is active, Cassandra resources may be created within the local Kubernetes cluster and/or remote Kubernetes clusters (in the case of multi-region deployments). When using K8ssandra Operator you must have only one instance running in Control-Plane mode. Kubernetes clusters acting as remote regions for Cassandra deployments should run in Data-Plane mode. In Data-Plane mode, K8ssandra Operator does not directly reconcile K8ssandraCluster resources.
For example, consider the following scenarios:
- Single-region, shared cluster - K8ssandra Operator is deployed in Control-Plane mode on the cluster. The operator can manage the creation of K8ssandraCluster objects and reconcile them locally into Cassandra clusters and nodes.
- Single-region, dedicated cluster roles - K8ssandra Operator is deployed in Control-Plane mode on a small, dedicated K8s cluster. A separate cluster running in Data-Plane mode contains larger instances in which to run Cassandra. This creates a separation between management and production roles for each cluster.
- Multiple-region, shared cluster - One region is configured with K8ssandra Operator running in Control-Plane mode. This cluster runs both the management operators and Cassandra nodes. Configure additional regions in Data-Plane mode to focus solely on Cassandra nodes and services (a sketch of how remote regions are referenced follows this list).
- Multiple-region, dedicated clusters - This scenario is similar to a shared cluster with the exception of a dedicated management cluster running in one of the regions. This follows the same pattern as the single-region, dedicated cluster roles scenario.
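To illustrate the multi-region scenarios, the sketch below shows how a K8ssandraCluster created in the Control-Plane cluster can reference each region by Kubernetes context name through the k8sContext field of a datacenter. The context names dc1-context and dc2-context are hypothetical and must first be registered with the operator (for example, via ClientConfig resources); storage and other required settings are omitted for brevity.
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: multi-region-demo
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        k8sContext: dc1-context   # Control-Plane (or local) cluster
        size: 3
      - metadata:
          name: dc2
        k8sContext: dc2-context   # remote Data-Plane cluster
        size: 3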
Installation Method
K8ssandra Operator is available to install with either Helm or Kustomize. Each tool has its own pros and cons, which are outside the scope of this document. Follow the steps in the section that applies to your environment.
Helm
Helm is sometimes described as a package manager for Kubernetes. A specific installation of a Helm chart is called a release, and the same chart may be installed multiple times within one Kubernetes cluster as separate releases. This guide covers a single installation.
Configure Helm Repositories
Helm packages are called charts and reside within repositories. Add the following repositories to your Helm installation for both K8ssandra and its dependency Cert Manager.
# Add the Helm repositories for K8ssandra and Cert Manager
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo add jetstack https://charts.jetstack.io
# Update the local index of Helm charts
helm repo update
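Optionally, confirm that the repositories were added and the index refreshed by searching for the chart; the versions listed will vary over time.
# List K8ssandra charts available from the newly added repository
helm search repo k8ssandra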
Install Cert Manager
Cert Manager provides a Helm chart for easy installation in Helm-based environments. Install Cert Manager with:
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager --create-namespace --set installCRDs=true
Output:
NAME: cert-manager
LAST DEPLOYED: Mon Jan 31 12:29:43 2022
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.7.0 has been deployed successfully!
In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
More information on the different types of issuers and how to configure them
can be found in our documentation:
https://cert-manager.io/docs/configuration/
For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:
https://cert-manager.io/docs/usage/ingress/
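Before continuing, you can verify that Cert Manager is running; pod names and ages will differ in your cluster.
# The controller, cainjector, and webhook pods should all report Running
kubectl get pods -n cert-manager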
Deploy K8ssandra Operator
You can deploy K8ssandra Operator for namespace-scoped operations (the default), or cluster-scoped operations.
- Deploying a namespace-scoped K8ssandra Operator means its operations – watching for resources to deploy in Kubernetes – are specific only to the identified namespace within a cluster.
- Deploying a cluster-scoped operator means its operations – again, watching for resources to deploy in Kubernetes – are global to all namespaces in the cluster.
Control-Plane Mode:
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator --create-namespace
Data-Plane Mode:
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator \
--set global.clusterScoped=true --set controlPlane=false --create-namespace
Output:
NAME: k8ssandra-operator
LAST DEPLOYED: Mon Jan 31 12:30:40 2022
NAMESPACE: k8ssandra-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
Tip
Optionally, you can use --set global.clusterScoped=true to install K8ssandra Operator cluster-scoped. Example:
helm install k8ssandra-operator k8ssandra/k8ssandra-operator -n k8ssandra-operator \
--set global.clusterScoped=true --create-namespace
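Whichever mode you choose, you can optionally confirm the release and its pods before moving on; the namespace and release name match those used in the commands above.
# Show the Helm release and the pods it created
helm list -n k8ssandra-operator
kubectl get pods -n k8ssandra-operator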
Kustomize
Install Cert Manager
In addition to Helm, Cert Manager also provides installation support via kubectl. Install Cert Manager with:
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.9.1/cert-manager.yaml
Install K8ssandra Operator
Replace X.X.X with the appropriate release you want to install:
Control-Plane Mode:
kustomize build "github.com/k8ssandra/k8ssandra-operator/config/deployments/control-plane?ref=vX.X.X" | kubectl apply --server-side -f -
Data-Plane Mode:
kustomize build "github.com/k8ssandra/k8ssandra-operator/config/deployments/data-plane?ref=vX.X.X" | kubectl apply --server-side -f -
Verify that the following CRDs are installed (a command for checking them is shown after the list):
certificaterequests.cert-manager.io
certificates.cert-manager.io
challenges.acme.cert-manager.io
clientconfigs.config.k8ssandra.io
clusterissuers.cert-manager.io
issuers.cert-manager.io
k8ssandraclusters.k8ssandra.io
orders.acme.cert-manager.io
reapers.reaper.k8ssandra.io
replicatedsecrets.replication.k8ssandra.io
stargates.stargate.k8ssandra.io
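One way to check for these CRDs is to list them and filter on the relevant API groups; the exact set may grow in newer releases.
# List installed CRDs from Cert Manager and K8ssandra
kubectl get crds | grep -e 'cert-manager.io' -e 'k8ssandra.io'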
Check that there are two deployments. A sample result is:
kubectl -n k8ssandra-operator get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cass-operator-controller-manager 1/1 1 1 77s
k8ssandra-operator 1/1 1 1 77s
Verify that the K8SSANDRA_CONTROL_PLANE environment variable is set to true:
kubectl -n k8ssandra-operator get deployment k8ssandra-operator -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="K8SSANDRA_CONTROL_PLANE")].value}'
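For a Control-Plane installation the command prints true; a Data-Plane installation prints false instead.
Output:
true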
Next Steps
Installation is complete. You can proceed with creating a K8ssandraCluster custom resource instance. See the Quickstart Samples for a collection of K8ssandraCluster resources to use as a starting point for your own deployment. Once a K8ssandraCluster is up and running, consider following either the developer or the site reliability engineer quickstarts.
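As a starting point, a minimal single-datacenter K8ssandraCluster might look like the sketch below; the cluster name, namespace, Cassandra version, datacenter size, storage request, and the AKS managed-csi storage class are illustrative assumptions to adjust for your environment.
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo
  namespace: k8ssandra-operator
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: managed-csi
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1024Gi
Save the manifest (for example, as k8ssandracluster.yaml), apply it with kubectl apply -f k8ssandracluster.yaml, and then watch the Cassandra pods come up in the k8ssandra-operator namespace.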