
funkypenguin/cockroachdb

Chart version: 2.1.13
API version: v1
App version: 19.1.3
Chart type: application
Status: Active
License: Unknown
Downloads: 349
Home: https://funkypenguin.github.io/helm-charts

CockroachDB is a scalable, survivable, strongly-consistent SQL database.

Set me up:

helm repo add center https://repo.chartcenter.io

Install chart:

helm install cockroachdb center/funkypenguin/cockroachdb

CockroachDB Helm Chart

Documentation

Below is a brief overview of operating the CockroachDB Helm Chart and some specific implementation details. For additional information, please see https://www.cockroachlabs.com/docs/v19.1/orchestrate-cockroachdb-with-kubernetes-insecure.html

Prerequisites Details

StatefulSet Details

StatefulSet Caveats

Chart Details

This chart will do the following:

  • Set up a dynamically scalable CockroachDB cluster using a Kubernetes StatefulSet

Installing the Chart

To install the chart with the release name my-release:

helm install --name my-release stable/cockroachdb

Note that for a production cluster, you will very likely want to modify the Storage and StorageClass parameters. This chart defaults to 100 GiB of disk space per pod, but you may want more or less depending on your use case, and the default persistent volume StorageClass in your environment may not be what you want for a database (e.g. on GCE and Azure the default is not SSD).
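As a sketch, these defaults can be overridden at install time; the values below are illustrative, not recommendations:

```shell
# Illustrative overrides: 500 GiB per pod on a hypothetical "ssd" StorageClass
helm install --name my-release stable/cockroachdb \
  --set Storage=500Gi \
  --set StorageClass=ssd
```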

If you are running in secure mode (with the configuration parameter Secure.Enabled set to true), you will have to manually approve the cluster’s security certificates as the pods are created. You can see the pending certificate-signing requests by running kubectl get csr, and approve them by running kubectl certificate approve <csr-name>. You’ll have to approve one certificate for each node (e.g. default.node.eerie-horse-cockroachdb-0) and one client certificate for the job that initializes the cluster (e.g. default.client.root).
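The approval flow might look like the following sketch; the CSR names are illustrative and will carry your release name and namespace:

```shell
# List pending certificate-signing requests
kubectl get csr

# Approve one CSR per node, plus the client CSR for the init job
# (names below are illustrative for a release named "my-release")
kubectl certificate approve default.node.my-release-cockroachdb-0
kubectl certificate approve default.node.my-release-cockroachdb-1
kubectl certificate approve default.node.my-release-cockroachdb-2
kubectl certificate approve default.client.root
```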

Confirm that three pods are running successfully and init has completed:

kubectl get pods
NAME                                READY     STATUS      RESTARTS   AGE
my-release-cockroachdb-0            1/1       Running     0          1m
my-release-cockroachdb-1            1/1       Running     0          1m
my-release-cockroachdb-2            1/1       Running     0          1m
my-release-cockroachdb-init-k6jcr   0/1       Completed   0          1m

Confirm that persistent volumes are created and claimed for each pod:

kubectl get persistentvolumes
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                      STORAGECLASS   REASON    AGE
pvc-64878ebf-f3f0-11e8-ab5b-42010a8e0035   100Gi      RWO            Delete           Bound     default/datadir-my-release-cockroachdb-0   standard                 51s
pvc-64945b4f-f3f0-11e8-ab5b-42010a8e0035   100Gi      RWO            Delete           Bound     default/datadir-my-release-cockroachdb-1   standard                 51s
pvc-649d920d-f3f0-11e8-ab5b-42010a8e0035   100Gi      RWO            Delete           Bound     default/datadir-my-release-cockroachdb-2   standard                 51s

Upgrading

From 2.0.0 on

Launch a temporary interactive pod and start the built-in SQL client:

kubectl run cockroachdb -it \
--image=cockroachdb/cockroach \
--rm \
--restart=Never \
-- sql \
--insecure \
--host=my-release-cockroachdb-public

Set the cluster.preserve_downgrade_option cluster setting, where $current_version is the version of CockroachDB currently running (e.g. 2.1):

> SET CLUSTER SETTING cluster.preserve_downgrade_option = '$current_version';

Exit the shell and delete the temporary pod:

> \q

Kick off the upgrade process by changing to the new Docker image, where $new_version is the version being upgraded to:

kubectl delete job my-release-cockroachdb-init
helm upgrade \
my-release \
stable/cockroachdb \
--set ImageTag=$new_version \
--reuse-values

Monitor the cluster’s pods until all have been successfully restarted:

kubectl get pods
NAME                                READY     STATUS              RESTARTS   AGE
my-release-cockroachdb-0            1/1       Running             0          2m
my-release-cockroachdb-1            1/1       Running             0          3m
my-release-cockroachdb-2            1/1       Running             0          3m
my-release-cockroachdb-3            0/1       ContainerCreating   0          25s
my-release-cockroachdb-init-nwjkh   0/1       ContainerCreating   0          6s
kubectl get pods \
-o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
my-release-cockroachdb-0    cockroachdb/cockroach:v19.1.1
my-release-cockroachdb-1    cockroachdb/cockroach:v19.1.1
my-release-cockroachdb-2    cockroachdb/cockroach:v19.1.1
my-release-cockroachdb-3    cockroachdb/cockroach:v19.1.1

Resume normal operations. Once you are satisfied with the cluster’s post-upgrade stability and performance, finalize the upgrade by running the following:

kubectl run cockroachdb -it \
--image=cockroachdb/cockroach \
--rm \
--restart=Never \
-- sql \
--insecure \
--host=my-release-cockroachdb-public
> RESET CLUSTER SETTING cluster.preserve_downgrade_option;
\q

To 2.0.0

Before version 2.0.0, this chart set no explicit selector on the StatefulSet. On Kubernetes versions that lock in the selector labels, this makes upgrading from any pre-2.0.0 chart version impossible without deleting the StatefulSet. Fortunately, there is a way to do this without deleting all the resources the StatefulSet manages. Use the workaround below to upgrade from chart versions prior to 2.0.0. The following example assumes that the release name is crdb:

$ kubectl delete statefulset crdb-cockroachdb --cascade=false

Verify that no pods are deleted, then upgrade as normal. A new StatefulSet will be created, taking over management of the existing pods and upgrading them if needed.

For more information about the upgrading bug see https://github.com/helm/charts/issues/7680.

Configuration

The following table lists the configurable parameters of the CockroachDB chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| Name | Chart name | cockroachdb |
| Image | Container image name | cockroachdb/cockroach |
| ImageTag | Container image tag | v19.1.3 |
| ImagePullPolicy | Container pull policy | Always |
| Replicas | Kubernetes StatefulSet replicas | 3 |
| MaxUnavailable | Kubernetes PodDisruptionBudget parameter | 1 |
| Component | Kubernetes selector key | cockroachdb |
| ExternalGrpcPort | CockroachDB primary serving port | 26257 |
| ExternalGrpcName | CockroachDB primary serving port name | grpc |
| InternalGrpcPort | CockroachDB inter-node port | 26257 |
| InternalGrpcName | CockroachDB inter-node port name | grpc |
| InternalHttpPort | CockroachDB HTTP port | 8080 |
| ExternalHttpPort | CockroachDB HTTP port on the service | 8080 |
| HttpName | Name given to the HTTP service port | http |
| Resources | Resource requests and limits | {} |
| InitPodResources | Resource requests and limits for the short-lived init pod | {} |
| Storage | Persistent volume size | 100Gi |
| StorageClass | Persistent volume class | null |
| CacheSize | Size of CockroachDB’s in-memory cache | 25% |
| MaxSQLMemory | Maximum memory to use for processing SQL queries | 25% |
| ClusterDomain | Cluster’s default DNS domain | cluster.local |
| NetworkPolicy.Enabled | Enable NetworkPolicy | false |
| NetworkPolicy.AllowExternal | Don’t require the client label for connections | true |
| Service.Type | Public service type | ClusterIP |
| Service.Annotations | Annotations to apply to the service | {} |
| PodManagementPolicy | OrderedReady or Parallel pod creation/deletion order | Parallel |
| UpdateStrategy.type | Allows setting the RollingUpdate strategy | RollingUpdate |
| NodeSelector | Node labels for pod assignment | {} |
| Tolerations | List of node taints to tolerate | {} |
| Secure.Enabled | Whether to run securely using TLS certificates | false |
| Secure.RequestCertsImage | Image to use for requesting TLS certificates | cockroachdb/cockroach-k8s-request-cert |
| Secure.RequestCertsImageTag | Image tag to use for requesting TLS certificates | 0.4 |
| Secure.ServiceAccount.Create | Whether to create a new RBAC service account | true |
| Secure.ServiceAccount.Name | Name of the RBAC service account to use | "" |
| JoinExisting | List of already-existing CockroachDB instances | [] |
| Locality | Locality attribute for this deployment | "" |
| ExtraArgs | Additional command-line arguments | [] |
| ExtraSecretMounts | Additional secrets to mount at cluster members | [] |
| ExtraEnvArgs | Extra environment variables | [] |
| ExtraAnnotations | Extra annotations for the pods | [] |
| ExtraInitAnnotations | Extra annotations for the init pod | [] |

Specify each parameter using the --set key=value[,key=value] argument to helm install.
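For example (parameter values here are illustrative):

```shell
# Illustrative: a five-node cluster with 200 GiB of disk per pod
helm install --name my-release stable/cockroachdb \
  --set Replicas=5,Storage=200Gi
```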

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

helm install --name my-release -f values.yaml stable/cockroachdb

Tip: You can use the default values.yaml

Deep dive

Connecting to the CockroachDB cluster

Once you’ve created the cluster, you can start talking to it by connecting to its “public” service. CockroachDB is PostgreSQL wire protocol compatible, so there’s a wide variety of supported clients. For the sake of example, we’ll open a SQL shell using CockroachDB’s built-in client and play around with it a bit (you will likely need to replace “my-release-cockroachdb-public” with the name of the “-public” service created by your installed chart):

$ kubectl run -it --rm cockroach-client \
    --image=cockroachdb/cockroach \
    --restart=Never \
    --command -- ./cockroach sql --insecure --host my-release-cockroachdb-public
Waiting for pod default/cockroach-client to be running, status is Pending,
pod ready: false
If you don't see a command prompt, try pressing enter.
root@my-release-cockroachdb-public:26257> SHOW DATABASES;
+--------------------+
|      Database      |
+--------------------+
| information_schema |
| pg_catalog         |
| system             |
+--------------------+
(3 rows)
root@my-release-cockroachdb-public:26257> CREATE DATABASE bank;
CREATE DATABASE
root@my-release-cockroachdb-public:26257> CREATE TABLE bank.accounts (id INT
PRIMARY KEY, balance DECIMAL);
CREATE TABLE
root@my-release-cockroachdb-public:26257> INSERT INTO bank.accounts VALUES
(1234, 10000.50);
INSERT 1
root@my-release-cockroachdb-public:26257> SELECT * FROM bank.accounts;
+------+---------+
|  id  | balance |
+------+---------+
| 1234 | 10000.5 |
+------+---------+
(1 row)
root@my-release-cockroachdb-public:26257> \q
Waiting for pod default/cockroach-client to terminate, status is Running
pod "cockroach-client" deleted

If you are running in secure mode, you will have to provide a client certificate to the cluster in order to authenticate, so the above command will not work. See here for an example of how to set up an interactive SQL shell against a secure cluster or here for an example application connecting to a secure cluster.

Cluster health

Because our pod spec includes regular health checks of the CockroachDB processes, simply running kubectl get pods and looking at the STATUS column is sufficient to determine the health of each instance in the cluster.

If you want more detailed information about the cluster, the best place to look is the admin UI.

Accessing the admin UI

If you want to see information about how the cluster is doing, you can try pulling up the CockroachDB admin UI by port-forwarding from your local machine to one of the pods (replacing “my-release-cockroachdb-0” with one of your pods’ names):

kubectl port-forward my-release-cockroachdb-0 8080

Once you’ve done that, you should be able to access the admin UI by visiting http://localhost:8080/ in your web browser.
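If local port 8080 is already taken, kubectl’s local:remote form lets you pick another local port (10080 here is arbitrary):

```shell
# Forward local port 10080 to the pod's HTTP port 8080
kubectl port-forward my-release-cockroachdb-0 10080:8080
```

The admin UI is then available at http://localhost:10080/ instead.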

Failover

If any CockroachDB member fails, it is restarted or recreated automatically by the Kubernetes infrastructure and rejoins the cluster automatically when it comes back up. You can test this scenario by killing any of the pods:

$ kubectl delete pod my-release-cockroachdb-1
$ kubectl get pods -l "component=my-release-cockroachdb"
NAME                      READY     STATUS        RESTARTS   AGE
my-release-cockroachdb-0  1/1       Running       0          5m
my-release-cockroachdb-2  1/1       Running       0          5m

After a while:

$ kubectl get pods -l "component=my-release-cockroachdb"
NAME                      READY     STATUS        RESTARTS   AGE
my-release-cockroachdb-0  1/1       Running       0          5m
my-release-cockroachdb-1  1/1       Running       0          20s
my-release-cockroachdb-2  1/1       Running       0          5m

You can check the state of rejoining in the new pod’s logs:

$ kubectl logs my-release-cockroachdb-1
[...]
I161028 19:32:09.754026 1 server/node.go:586  [n1] node connected via gossip and
verified as part of cluster {"35ecbc27-3f67-4e7d-9b8f-27c31aae17d6"}
[...]
cockroachdb-0.my-release-cockroachdb.default.svc.cluster.local:26257
build:      beta-20161027-55-gd2d3c7f @ 2016/10/28 19:27:25 (go1.7.3)
admin:      http://0.0.0.0:8080
sql:
postgresql://root@my-release-cockroachdb-1.my-release-cockroachdb.default.svc.cluster.local:26257?sslmode=disable
logs:       cockroach-data/logs
store[0]:   path=cockroach-data
status:     restarted pre-existing node
clusterID:  {35ecbc27-3f67-4e7d-9b8f-27c31aae17d6}
nodeID:     2
[...]

NetworkPolicy

To enable network policy for CockroachDB, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set NetworkPolicy.Enabled to true.

For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for all pods in the namespace:

kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}"

For more precise policy, set NetworkPolicy.AllowExternal to false. This will allow only pods carrying the generated client label to connect to CockroachDB. This label is displayed in the output of a successful install.
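As a sketch, a client pod can then connect by carrying that label; the label key below is an assumption based on the chart’s naming convention, so check your install output for the actual generated label:

```shell
# Assumed client label "my-release-cockroachdb-client=true"; verify against
# the label printed by a successful install.
kubectl run -it --rm cockroach-client \
  --image=cockroachdb/cockroach \
  --restart=Never \
  --labels="my-release-cockroachdb-client=true" \
  --command -- ./cockroach sql --insecure --host my-release-cockroachdb-public
```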

Scaling

Scaling should typically be managed via the helm upgrade command, but StatefulSets don’t yet work with helm upgrade. Until they do, if you want to change the number of replicas, you can use kubectl scale as shown below:

kubectl scale statefulset my-release-cockroachdb --replicas=4

Note that if you are running in secure mode and increase the size of your cluster, you will also have to approve the certificate-signing request of each new node (using kubectl get csr and kubectl certificate approve).
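Putting both steps together, a secure-mode scale-up might look like this sketch (the CSR name is illustrative and will carry your release name and namespace):

```shell
# Add a fourth node, then approve its certificate-signing request
kubectl scale statefulset my-release-cockroachdb --replicas=4
kubectl get csr
kubectl certificate approve default.node.my-release-cockroachdb-3
```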