“Why do I need two clusters to handle disaster recovery? I thought Kubernetes had a distributed architecture?”
Etcd: It is the brain of your cluster and is not latency tolerant. Running it across geographically separated environments is problematic.
Networking: Cluster nodes need to be able to talk to each other directly and securely.
Latency: High latency is unacceptable for enterprise applications. If a microservice-based application spans multiple environments, you might end up with sub-optimal performance.
Etcd: Etcd is the default datastore for Kubernetes, but it’s not the only option. MicroK8s runs Dqlite by default. Dqlite is latency tolerant, allowing you to run master nodes that are far apart without breaking your cluster.
Networking: Netmaker is easy to integrate with Kubernetes and creates flat, secure networks over WireGuard for nodes to talk over.
Latency: Netmaker is one of the fastest virtual networking platforms available because it uses Kernel WireGuard, creating a negligible decrease in network performance (unlike options such as OpenVPN). In addition, we can use Kubernetes’ built-in placement policies to group applications together onto nodes in the same data center, eliminating the cross-cloud latency issue.
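The "placement policies" mentioned above can be as simple as a standard Kubernetes `nodeSelector`. A minimal sketch, assuming a `location=onprem` node label (a label of exactly this form is applied later in this walkthrough); the deployment name and image are illustrative:

```yaml
# Illustrative Deployment fragment: schedule all replicas onto nodes
# labeled location=onprem, so the pods stay within one data center
# and never pay cross-cloud latency to reach each other.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      nodeSelector:
        location: onprem  # node label applied later in this guide
      containers:
      - name: my-app
        image: nginx:1.21 # placeholder image
```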
Data Center: **2 Nodes** (datacenter1, datacenter2)
DigitalOcean (region 1): **1 Node** (do1)
DigitalOcean (region 2): **1 Node** (do2)
apt install wireguard wireguard-tools
ssh root@do1
snap install microk8s --classic
microk8s enable dns ingress storage
microk8s kubectl create namespace cert-manager
microk8s kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.2/cert-manager.yaml
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: EMAIL_ADDRESS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: public
microk8s kubectl apply -f clusterissuer.yaml
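The Netmaker template applied below wires up its own ingress, but for reference, this is how any Ingress consumes the issuer: cert-manager watches for the `cert-manager.io/cluster-issuer` annotation and provisions a certificate into the named secret. A sketch; the hostname, service, and secret names are hypothetical:

```yaml
# Illustrative Ingress: the annotation asks cert-manager to obtain a
# Let's Encrypt certificate via the letsencrypt-prod ClusterIssuer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: public
  rules:
  - host: app.example.com      # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc  # hypothetical backend service
            port:
              number: 80
  tls:
  - hosts:
    - app.example.com
    secretName: example-tls    # cert-manager stores the cert here
```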
wget https://raw.githubusercontent.com/gravitl/netmaker/develop/kube/netmaker-template.yaml
sed -i 's/NETMAKER_BASE_DOMAIN/<your base domain>/g' netmaker-template.yaml
microk8s kubectl create ns nm
microk8s kubectl config set-context --current --namespace=nm
microk8s kubectl apply -f netmaker-template.yaml -n nm
microk8s kubectl get ingress nm-ui-ingress-nginx
hostnamectl set-hostname do1.microk8s
wget https://github.com/gravitl/netmaker/releases/download/v0.5.11/netclient && chmod +x netclient
./netclient join -t <YOUR_TOKEN> --dns off --daemon off --name $(hostname | sed -e s/.microk8s//)
wg show
#example output
#interface: nm-microk8s
# public key: AQViVk8J7JZkjlzsV/xFZKqmrQfNGkUygnJ/lU=
# private key: (hidden)
# listening port: 51821
wget https://raw.githubusercontent.com/gravitl/netmaker/develop/kube/netclient-template.yaml
sed -i 's/ACCESS_TOKEN_VALUE/<your access token value>/g' netclient-template.yaml
microk8s kubectl apply -f netclient-template.yaml
root@do1:~# microk8s kubectl logs netclient-<id>
2021/07/13 17:11:16 attempting to join microk8s at grpc.nm.k8s.gravitl.com:443
2021/07/13 17:11:16 node created on remote server...updating configs
2021/07/13 17:11:16 retrieving remote peers
2021/07/13 17:11:16 starting wireguard
2021/07/13 17:11:16 joined microk8s
Checking into server at grpc.nm.k8s.gravitl.com:443
Checking to see if public addresses have changed
Local Address has changed from to 210.97.150.30
Updating address
2021/07/13 17:11:16 using SSL
Authenticating with GRPC Server
Authenticated
Checking In.
Checked in.
hostnamectl set-hostname <nodename>.microk8s
wget https://github.com/gravitl/netmaker/releases/download/v0.5.11/netclient && chmod +x netclient
./netclient join -t <YOUR_TOKEN> --daemon off --dns off --name $(hostname | sed -e s/.microk8s//)
root@datacenter2:~# wg show
interface: nm-microk8s
public key: 2xUDmCohypHcCD5dZukhhA8r6BGWN879J8vIhrcwSHg=
private key: (hidden)
listening port: 51821
peer: lrZkcSzWdgasgegaimEYnrr5CgopcEAIP8m3Q1M7+hiM=
endpoint: 192.168.88.151:51821
allowed ips: 10.101.0.3/32
latest handshake: 41 seconds ago
transfer: 736 B received, 2.53 KiB sent
persistent keepalive: every 20 seconds
peer: IUobp84wipq44aFGP0SLuRhdSsDWvcxvBFefeRCE=
endpoint: 210.97.150.30:51821
allowed ips: 10.101.0.1/32
latest handshake: 57 seconds ago
transfer: 128.45 MiB received, 9.03 MiB sent
persistent keepalive: every 20 seconds
root@do1:~# microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 209.97.147.27:25000/14e3a77f1584cb42323f39ce8ece0852/be5e4c7be0c6
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 210.97.150.27:25000/14e3a77f1584bc42323f39ce8ece0852/be5e4c7eb0c
microk8s join 10.17.0.5:25000/14e3a77f1584bc42323f39ce8ece0852/be5e4c7eb0c6
microk8s join 10.108.0.2:25000/14e3a77f1584bc42323f39ce8ece0852/be5e4c7eb0c6
microk8s join 10.101.0.1:25000/14e3a77f1584bc42323f39ce8ece0852/be5e4c7eb0c6
root@do1:~/kube# microk8s kubectl get nodes -o wide
NAME STATUS VERSION INTERNAL-IP EXTERNAL-IP
do2.microk8s Ready v1.21.1-3+ba 10.101.0.2 <none>
datacenter1.microk8s Ready v1.21.1-3+ba 10.101.0.3 <none>
do1.microk8s Ready v1.21.1-3+ba 10.101.0.1 <none>
datacenter2.microk8s Ready v1.21.1-3+ba 10.101.0.4 <none>
microk8s kubectl label nodes do1.microk8s do2.microk8s location=cloud
microk8s kubectl label nodes datacenter1.microk8s datacenter2.microk8s location=onprem
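Grouping onto one site is one use of these labels; the other DR-relevant pattern is spreading replicas across both environments so that either site can fail. A sketch using Kubernetes topology spread constraints against the same `location` label (the pod label selector here is illustrative):

```yaml
# Illustrative pod template fragment: spread replicas evenly across
# the two location values (cloud and onprem), tolerating a skew of 1,
# so losing either environment leaves roughly half the replicas up.
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: location             # the node label applied above
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: nginx                    # hypothetical pod label
```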
wget https://raw.githubusercontent.com/gravitl/netmaker/develop/kube/example/nginx-example.yaml
#BASE_DOMAIN should be your wildcard, ex: app.example.com
#template will add a subdomain, ex: nginx.app.example.com
sed -i 's/BASE_DOMAIN/<your base domain>/g' nginx-example.yaml
microk8s kubectl apply -f nginx-example.yaml
root@do1:~# k get po -o wide | grep nginx
nginx-deployment-cb796dbc7-h72s8 1/1 Running 0 2m53s 10.1.99.68 datacenter1.microk8s <none> <none>
nginx-deployment-cb796dbc7-p5bhr 1/1 Running 0 2m53s 10.1.99.67 datacenter1.microk8s <none> <none>
nginx-deployment-cb796dbc7-pxpvw 1/1 Running 0 2m53s 10.1.247.3 datacenter2.microk8s <none> <none>
nginx-deployment-cb796dbc7-7vbwz 1/1 Running 0 2m53s 10.1.247.4 datacenter2.microk8s <none> <none>
nginx-deployment-cb796dbc7-x862w 1/1 Running 0 2m53s 10.1.247.5 datacenter2.microk8s <none> <none>
root@datacenter2:~# microk8s stop
root@datacenter1:~# microk8s stop
root@do2:~# k get nodes
NAME STATUS ROLES AGE VERSION
do2.microk8s Ready <none> 77m v1.21.1-3+ba118484dd39df
do1.microk8s Ready <none> 106m v1.21.1-3+ba118484dd39df
datacenter1.microk8s NotReady <none> 62m v1.21.1-3+ba118484dd39df
datacenter2.microk8s NotReady <none> 40m v1.21.1-3+ba118484dd39df
Scenarios such as DR historically required a multi-cluster deployment
A multi-cluster model is not absolutely necessary
You can enable multi-cluster patterns with a single cluster
Enabling these patterns requires tools like MicroK8s and Netmaker