Helm & Kubernetes

This guide explains how to deploy Rafiki using Helm charts on a Kubernetes cluster. Helm is a package manager for Kubernetes that allows you to define, install, and upgrade complex Kubernetes applications through Helm charts.

Rafiki uses the following key components:

  • Tigerbeetle: High-performance accounting database used for financial transaction processing and ledger management
  • PostgreSQL: Used for storing application data and metadata
  • Redis: Used for caching and messaging between components

Before you begin, ensure you have the following:

  • A running Kubernetes cluster
  • kubectl installed and configured to communicate with your cluster
  • Helm v3 installed

Add the official Interledger Helm repository which contains the Rafiki charts:

Terminal window
helm repo add interledger https://interledger.github.io/charts
helm repo update

Create a values.yaml file to customize your Rafiki deployment.
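The configurable keys depend on the chart version; you can list them with helm show values interledger/rafiki. The snippet below is a minimal sketch, and the key names (for the bundled PostgreSQL and Redis subcharts) are assumptions to verify against that output:

# values.yaml (sketch — key names are assumptions; verify with `helm show values interledger/rafiki`)
postgresql:
  auth:
    username: rafiki
    password: change-me   # replace with a strong, unique password
    database: rafiki
redis:
  auth:
    enabled: true
    password: change-me   # replace with a strong, unique password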

Install Rafiki using the following command:

Terminal window
helm install rafiki interledger/rafiki -f values.yaml

This will deploy all Rafiki components to your Kubernetes cluster with the configurations specified in your values.yaml file.

If you want to install to a specific namespace:

Terminal window
kubectl create namespace rafiki
helm install rafiki interledger/rafiki -f values.yaml -n rafiki
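Alternatively, Helm can create the namespace for you in a single step:

Terminal window
helm install rafiki interledger/rafiki -f values.yaml -n rafiki --create-namespace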

Check the status of your deployment with the following commands:

Terminal window
# Check all resources deployed by Helm
helm status rafiki
# Check the running pods
kubectl get pods
# Check the deployed services
kubectl get services

To expose Rafiki services outside the cluster, use the NGINX Ingress Controller.

If you don’t already have the NGINX Ingress Controller installed, you can install it using Helm:

Terminal window
# Add the ingress-nginx repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Install the ingress-nginx controller into its own namespace
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.publishService.enabled=true

Wait for the Load Balancer to be provisioned:

Terminal window
kubectl get services -n ingress-nginx -w nginx-ingress-ingress-nginx-controller

Once the Load Balancer has an external IP or hostname assigned, create DNS records:

  • auth.example.com pointing to the Load Balancer IP/hostname
  • backend.example.com pointing to the Load Balancer IP/hostname
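
Next, enable ingress for the auth and backend services in your values.yaml. The snippet below is a sketch; the ingress key names are assumptions to verify against the chart's default values:

# values.yaml (sketch — ingress key names are assumptions)
auth:
  ingress:
    enabled: true
    className: nginx
    host: auth.example.com
backend:
  ingress:
    enabled: true
    className: nginx
    host: backend.example.com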

Apply your updated configuration:

Terminal window
helm upgrade rafiki interledger/rafiki -f values.yaml

Check if your ingress resources were created correctly:

Terminal window
kubectl get ingress

You should find entries for the auth server and backend API ingress resources.

If you don’t want to use ingress to access Rafiki services, you can use port forwarding to directly access the services:

Service        Port-Forward Command
Auth Server    kubectl port-forward svc/rafiki-auth-server 3000:3000
Backend API    kubectl port-forward svc/rafiki-backend-api 3001:3001
Admin UI       kubectl port-forward svc/rafiki-backend-api 3001:3001
PostgreSQL     kubectl port-forward svc/rafiki-postgresql 5432:5432
Redis          kubectl port-forward svc/rafiki-redis-master 6379:6379

To upgrade your Rafiki deployment to a newer version:

Terminal window
# Update the Helm repository
helm repo update
# Upgrade Rafiki
helm upgrade rafiki interledger/rafiki -f values.yaml

To uninstall Rafiki from your cluster:

Terminal window
helm uninstall rafiki

Note that this won’t delete the Persistent Volume Claims (PVCs) created by the PostgreSQL and Redis deployments. If you want to delete them as well:

Terminal window
kubectl delete pvc -l app.kubernetes.io/instance=rafiki

If a component isn’t working correctly, start by inspecting its pods, logs, and related resources:

Terminal window
# List pods and their status
kubectl get pods
# Check logs for a specific pod
kubectl logs pod/rafiki-auth-server-0
# Get details about a pod
kubectl describe pod/rafiki-auth-server-0
# Check services and their endpoints
kubectl get services
# Check Persistent Volume Claims
kubectl get pvc
# Check ingress resources
kubectl get ingress

PostgreSQL issues

  1. Check if PostgreSQL pods are running:
    kubectl get pods -l app.kubernetes.io/name=postgresql
  2. Check PostgreSQL logs:
    kubectl logs pod/rafiki-postgresql-0
  3. Verify that the database passwords match those in your values.yaml

Tigerbeetle issues

  1. Check Tigerbeetle logs:
    kubectl logs pod/tigerbeetle-0
  2. Ensure that the PVC for Tigerbeetle has been created correctly:
    kubectl get pvc -l app.kubernetes.io/name=tigerbeetle
  3. Verify that the cluster ID is consistent across all components

Ingress issues

  1. Verify NGINX Ingress Controller is running:
    kubectl get pods -n ingress-nginx
  2. Check if your DNS records are correctly pointing to the ingress controller’s external IP
  3. Check the ingress resource:
    kubectl get ingress
  4. Check ingress controller logs:
    kubectl logs -n ingress-nginx deploy/nginx-ingress-ingress-nginx-controller
  5. Verify that TLS secrets exist if HTTPS is enabled:
    kubectl get secrets

TLS certificate issues

  1. If using cert-manager, check if certificates are properly issued:
    kubectl get certificates
  2. Check certificate status:
    kubectl describe certificate [certificate-name]
  3. Check cert-manager logs:
    kubectl logs -n cert-manager deploy/cert-manager

Service health issues

  1. Check if services are running:
    kubectl get services
  2. Verify pod health:
    kubectl describe pod [pod-name]
  3. Check for resource constraints:
    kubectl top pods

Connectivity issues

  1. Ensure all required services are running:
    kubectl get services
  2. Verify service endpoints:
    kubectl get endpoints
  3. Test connectivity between pods using temporary debugging pods:
    kubectl run -it --rm debug --image=busybox -- sh
    # Inside the pod
    wget -q -O- http://rafiki-auth-server:3000/health

When deploying Rafiki in production, consider the following security practices:

  • Use secure passwords: Replace all default passwords with strong, unique passwords
  • Enable TLS: Use HTTPS for all external communications
  • Implement network policies: Use Kubernetes network policies to restrict traffic between pods (see the sketch after this list)
  • Use RBAC: Use Kubernetes Role-Based Access Control to limit access to your cluster
  • Use secrets management: Consider a dedicated secrets management solution, such as HashiCorp Vault, Sealed Secrets, or External Secrets Operator, instead of plain Kubernetes Secrets
  • Perform regular updates: Keep your Rafiki deployment updated
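
As an illustration of the network policy item above, here is a minimal sketch that restricts PostgreSQL to connections from pods in the rafiki release. The label selectors are assumptions; check the actual labels with kubectl get pods --show-labels before applying:

Terminal window
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: postgresql-allow-rafiki-only
spec:
  # Select the PostgreSQL pods (label is an assumption)
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgresql
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allow only pods belonging to the rafiki release (label is an assumption)
        - podSelector:
            matchLabels:
              app.kubernetes.io/instance: rafiki
      ports:
        - protocol: TCP
          port: 5432
EOF

Note that network policies are only enforced if your cluster’s CNI plugin supports them (for example, Calico or Cilium).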

To create a backup of your PostgreSQL database:

Terminal window
# Forward the PostgreSQL port to your local machine (keep this running in a separate terminal)
kubectl port-forward svc/rafiki-postgresql 5432:5432
# Use pg_dump to create a backup
pg_dump -h localhost -U rafiki -d rafiki > rafiki_pg_backup.sql

Tigerbeetle is designed to be fault-tolerant with its replication mechanism. However, to create a backup of Tigerbeetle data, you can use the following approach:

Terminal window
# Export the current Tigerbeetle PVC definition for reference
kubectl get pvc tigerbeetle-data-tigerbeetle-0 -o yaml > tigerbeetle-pvc.yaml
# Create a volume snapshot (requires a CSI driver with snapshot support)
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: tigerbeetle-snapshot
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass # use a snapshot class available in your cluster
  source:
    persistentVolumeClaimName: tigerbeetle-data-tigerbeetle-0
EOF
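
Verify that the snapshot has been created and is ready to use (the READYTOUSE column should show true):

Terminal window
kubectl get volumesnapshot tigerbeetle-snapshot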

To restore from a PostgreSQL backup:

Terminal window
# Forward the PostgreSQL port to your local machine (keep this running in a separate terminal)
kubectl port-forward svc/rafiki-postgresql 5432:5432
# Use psql to restore from backup
psql -h localhost -U rafiki -d rafiki < rafiki_pg_backup.sql

To restore Tigerbeetle from a snapshot:

Terminal window
# Create a new PVC from the snapshot
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tigerbeetle-data-restored
spec:
  dataSource:
    name: tigerbeetle-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
# Update the Tigerbeetle StatefulSet to use the restored PVC
kubectl patch statefulset tigerbeetle -p '{"spec":{"template":{"spec":{"volumes":[{"name":"data","persistentVolumeClaim":{"claimName":"tigerbeetle-data-restored"}}]}}}}'
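
The patch triggers a rolling restart of the StatefulSet; you can watch it complete with:

Terminal window
kubectl rollout status statefulset/tigerbeetle

This assumes the chart mounts the Tigerbeetle data volume as a plain volume in the pod template; if it is defined through volumeClaimTemplates, inspect the StatefulSet spec before patching.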