K8ssandra release notes

Release notes for the open-source K8ssandra community project.

K8ssandra provides a production-ready platform for running Apache Cassandra® on Kubernetes. This includes automation for operational tasks such as repairs, backups and restores, and monitoring. K8ssandra also deploys Stargate, an open-source data gateway that lets you interact programmatically with your Kubernetes-hosted Cassandra resources via a well-defined API.

Latest release: K8ssandra 1.3.1

Release date: 27-August-2021

Prerequisites

  • A Kubernetes environment from v1.17 (minimum supported) up to v1.21.1 (current tested upper bound), either local or on a supported cloud provider
  • Helm v3. K8ssandra 1.3.x has been tested with Helm v3.5.3. Avoid Helm v3.6.0 and v3.6.1, which are affected by a known CVE and a subsequent regression; testing will resume with Helm v3.6.2 and later. Related: see GH issue 1103.
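
If you have not yet added the K8ssandra Helm repository, the following commands check your Helm client version and add the repository used throughout the K8ssandra install guides (shown as a convenience; skip them if the repository is already configured):

helm version --short
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update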

Supported Kubernetes environments

K8ssandra runs on local Kubernetes environments and on the supported cloud providers covered by the install guides listed under Next steps below: Amazon EKS, DigitalOcean DOKS, Google GKE, and Microsoft Azure AKS.

K8ssandra deployed components

The K8ssandra Helm chart deploys the following components. Some are optional and, depending on your configuration, may not be deployed:

  • Apache Cassandra - the deployed version depends on the configured cassandra.version setting:
    • 4.0.0 (default)
    • 3.11.11
    • 3.11.10
    • 3.11.9
    • 3.11.8
    • 3.11.7
  • DataStax Kubernetes Operator for Apache Cassandra (cass-operator) 1.7.1
  • Management API for Apache Cassandra (MAAC) 0.1.27
  • Stargate 1.0.29
  • Metric Collector for Apache Cassandra (MCAC) 0.2.0
  • kube-prometheus-stack 12.11.3 chart
  • Medusa for Apache Cassandra 0.11.0
  • medusa-operator 0.3.3
  • Reaper for Apache Cassandra 2.2.5
  • reaper-operator 2.3.0

Upgrade notices

Before upgrading to the latest K8ssandra release, be sure to read the sections below.

Upgrading to K8ssandra 1.3.0 and Cassandra 4.0

When you upgrade from a prior K8ssandra release to 1.3.0 or 1.3.1, the default cassandra.version is 4.0.0. When upgrading to K8ssandra 1.3.0 with Cassandra 4.0.0, the changed num_tokens default in Cassandra 4.0 may prevent Cassandra from starting. (K8ssandra 1.3.1 includes a fix that preserves num_tokens during upgrades; see the 1.3.1 bug fixes below.) This section explains how 1.3.0 users can avoid the issue.

First, some background. The default value for num_tokens in Cassandra 3.11 is 256; in Cassandra 4.0 it is 16. All nodes are assigned the same number of tokens based on the num_tokens setting, and a larger num_tokens value divides the ring into a greater number of smaller ranges.

As noted in issue 1029 for this K8ssandra 1.3.0 upgrade scenario, the Cassandra logs include the following error and Cassandra does not start:

org.apache.cassandra.exceptions.ConfigurationException: Cannot change the number of tokens from 256 to 16
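
If the upgrade has already been applied and the Cassandra pod is failing to start, you can confirm this is the cause by searching the Cassandra system log. The pod name below comes from the example deployment later in this section, and server-system-logger is the log-tailing sidecar container that cass-operator runs alongside Cassandra; adjust both names if your deployment differs:

kubectl logs k8ssandra-dc1-default-sts-0 -c server-system-logger | grep "Cannot change the number of tokens"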

To avoid the issue, first check the num_tokens settings in your K8ssandra-deployed Cassandra 3.11 database. Let’s walk through an example.

Initial installation

This example starts by installing K8ssandra 1.3.0 with Cassandra 3.11.11 and its default num_tokens value (256).

helm install k8ssandra -f k8ssandra.values.yml k8ssandra/k8ssandra

k8ssandra.values.yml:

cassandra:
  version: "3.11.11"
  datacenters:
    - 
      name: dc1
      size: 1

Resulting deployment:

helm list

Output:

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
k8ssandra       default         1               2021-07-27 18:58:03.831212 -0400 EDT    deployed        k8ssandra-1.3.0 
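
Optionally, confirm the values applied to the release with Helm's built-in command (standard Helm, shown here as an extra check):

helm get values k8ssandra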

After about 10 minutes, verify the Cassandra Operator status:

kubectl describe cassdc dc1 | grep "Cassandra Operator Progress:"

Output:

  Cassandra Operator Progress:  Ready

Verify the server version:

kubectl describe cassdc dc1 | grep "Server Version:" 

Output:

  Server Version:  3.11.11

Verify the num_tokens, in this case defined by default value:

kubectl describe cassdc dc1 | grep "num_tokens:"

Output:

      num_tokens:                         256

Upgrades

Upgrade to Cassandra 4.0.0 and explicitly set num_tokens to match the previous default (256).

helm upgrade k8ssandra -f k8ssandra-upgrade.values.yml k8ssandra/k8ssandra

k8ssandra-upgrade.values.yml:

cassandra:
  version: "4.0.0"
  datacenters:
    - 
      name: dc1
      size: 1
      num_tokens: 256
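
While the upgrade rolls through the cluster, you can optionally watch the Cassandra pods restart. The label selector below is one that cass-operator applies to Cassandra pods, and the cluster name k8ssandra assumes the default (the Helm release name used in this example):

kubectl get pods -l cassandra.datastax.com/cluster=k8ssandra --watch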

Resulting deployment:

After about 10 minutes, verify the Cassandra Operator status:

kubectl describe cassdc dc1 | grep "Cassandra Operator Progress:"                              

Output:

  Cassandra Operator Progress:  Ready

Check the pods:

kubectl get pods

Output:

NAME                                                READY   STATUS      RESTARTS   AGE
k8ssandra-cass-operator-5bbf6584b9-pgldj            1/1     Running     0          9m23s
k8ssandra-crd-upgrader-job-k8ssandra-skrjh          0/1     Completed   0          2m6s
k8ssandra-dc1-default-sts-0                         2/2     Running     0          106s
k8ssandra-grafana-679b4bbd74-ngrz2                  2/2     Running     0          9m23s
k8ssandra-kube-prometheus-operator-85695ffb-qhqc7   1/1     Running     0          9m23s
k8ssandra-reaper-694cf49b96-jgnvg                   0/1     Running     1          6m12s
k8ssandra-reaper-operator-5c6cc55869-tdhjw          1/1     Running     0          9m23s
prometheus-k8ssandra-kube-prometheus-prometheus-0   2/2     Running     1          9m14s

Verify the server version:

kubectl describe cassdc dc1 | grep "Server Version:" 

Output:

  Server Version:  4.0.0

Verify num_tokens:

kubectl describe cassdc dc1 | grep "num_tokens:"

Output:

      num_tokens:                         256

Because the num_tokens value explicitly set for Cassandra 4.0 matches the previous 3.11 default (256), Cassandra starts successfully.
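
As an additional, optional check that is not part of the documented procedure, you can confirm the running version from inside the Cassandra container. The pod name comes from the pod listing above, and cassandra is the container name used in cass-operator-managed pods:

kubectl exec k8ssandra-dc1-default-sts-0 -c cassandra -- nodetool version

This should report ReleaseVersion: 4.0.0.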

Upgrading from K8ssandra 1.0.0 to 1.3.1

Upgrading directly from K8ssandra 1.0.0 to 1.3.1 causes a StatefulSet update due to issues #533 and #613. A StatefulSet update has the effect of a rolling restart. Because of issue #411, this could require you to manually restart all Stargate nodes after the Cassandra cluster is back online. This behavior also impacts in-place restore operations of Medusa backups; see issue #611.

To manually restart Stargate nodes:

  1. Get the Stargate Deployment object in your Kubernetes environment:
    kubectl get deployment | grep stargate

  2. Scale it down:
    kubectl scale deployment <stargate-deployment> --replicas 0

  3. Run the following command and wait until it reports 0/0 ready replicas. This should happen within a couple of seconds.
    kubectl get deployment <stargate-deployment>

  4. Scale it back up:
    kubectl scale deployment <stargate-deployment> --replicas 1
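
If you prefer to script the four steps above, the following sketch loops over any Deployment whose name contains stargate, scales it to zero, waits for the rollout to settle, and scales it back to one replica. It assumes a single-replica Stargate and the default deployment naming used by the chart:

# Sketch: scale each Stargate deployment down to 0, then back up to 1.
for d in $(kubectl get deployment -o name | grep stargate); do
  kubectl scale "$d" --replicas=0
  kubectl rollout status "$d" --timeout=120s
  kubectl scale "$d" --replicas=1
  kubectl rollout status "$d" --timeout=300s
done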
    

K8ssandra 1.3.1 revisions

Release date: 27-August-2021

The following sections summarize and link to key revisions in K8ssandra 1.3.1. For the latest, refer to the CHANGELOG.

Changes

  • Add support for Apache Cassandra 3.11.11 #1063

Bug fixes

  • Correct Reaper image registry and JVM typos #1018
  • Correct duplicate roles and rolebindings in reaper-operator #1012
  • Preserve num_tokens when upgrading to Cassandra 4.0.0 to prevent startup failures #1029

K8ssandra 1.3.0 revisions

Release date: 27-July-2021

The following sections summarize and link to key revisions in K8ssandra 1.3.0. For the latest, refer to the CHANGELOG.

Changes

  • Support for the General Availability (GA) official release of Apache Cassandra 4.0.0.
  • Upgrade to reaper-operator 0.3.3 and Reaper 2.3.0.
  • Upgrade from Stargate 1.0.18 to 1.0.29.
  • Upgrade from Medusa 0.10.1 to 0.11.0.
  • Upgrade from Reaper 2.2.2 to 2.2.5.
  • Integrate Fossa component/license scanning, #812.
  • Upgrade medusa-operator to v0.3.3, #905.

New features

  • Upgrade the Management API from 0.1.26 to 0.1.27 to provide support for Cassandra 4.0.0 (GA), and make Cassandra 4.0.0 the default release, #949.
  • Make affinity configurable for Stargate, #617.
  • Make affinity configurable for Reaper, #847.
  • Experimental support for custom init containers, #952.

Enhancements

  • Allow configuring the namespace of service monitors, #844.
  • Detect IEC-formatted Cassandra heap.size and heap.newGenSize values and return an error identifying the issue, #29. Also see: Add validation check for Cassandra heap size properties, #701. (See the example after this list.)
  • Add support for private registries, #420.
  • Add support for Medusa backups on Azure, #685.
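
To illustrate the heap-size validation noted above, the snippet below contrasts accepted JVM-style suffixes with rejected IEC suffixes. Placing these properties under cassandra.heap reflects the chart's values layout; the exact error message is not reproduced here:

cassandra:
  heap:
    size: 800M          # JVM-style suffix: accepted
    newGenSize: 200M    # JVM-style suffix: accepted
    # size: 800Mi       # IEC suffix: rejected with a validation error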

Bug fixes

  • Fix property name in scaling docs, #853.
  • Replace disallowed characters in generated secret names, #870.
  • Stargate metrics don’t show up in the dashboards, #412.

K8ssandra 1.2.0 revisions

Release date: 02-June-2021

The following sections briefly summarize and link to key developments in K8ssandra 1.2.0. For the latest list, refer to the CHANGELOG.

Changes

  • Upgrade to Cassandra 4.0-rc1 #726.

New features

  • Make tolerations configurable #673, #698.

Enhancements

  • Add the ability to attach additional Persistent Volumes (PVs) for Medusa backups #560.
  • Reduce initial delay of Stargate readiness probe #654.
  • Update cass-operator to v1.7.0 #693.
  • Make allocate_tokens_for_replication_factor configurable #741.

Bug fixes

  • Token allocations are random when using 4.0 and lead to collisions #732. Related enhancement: Make allocate_tokens_for_replication_factor configurable #741.
  • Upgrade to Medusa 0.10.1 fixing failed backups after a restore #678.

Doc updates

  • Reference topics for the K8ssandra deployed Helm charts have been updated with the latest descriptions.

K8ssandra 1.1.0 revisions

Each of the following sections presents a subset of key developments in K8ssandra 1.1.0. For the complete list, see the CHANGELOG.

Changes

  • Shut down cluster by default with in-place restores of Medusa backups #611.
  • Update Management API image locations #637.

Enhancements

  • Add option to disable Cassandra logging sidecar #576.
  • Developer documentation #239.
  • Add support for additionalSeeds in the CassandraDatacenter #547.

Bug fixes

  • Scale up fails if a restore was performed in the past #616.

Contributions

Your feedback and contributions are welcome! To contribute, fork the K8ssandra repo and submit Pull Requests (PRs) for review.

To submit documentation comments or edits, see Contribution guidelines.

Next steps

Read the K8ssandra FAQs - for starters, how to pronounce “K8ssandra.”

If you’re impatient, jump right in with the K8ssandra install steps for these platforms:

  • Local
  • Amazon Elastic Kubernetes Service (EKS)
  • DigitalOcean Kubernetes (DOKS)
  • Google Kubernetes Engine (GKE)
  • Microsoft Azure Kubernetes Service (AKS)