Automates the provisioning of Highly Available Kubernetes clusters
Built on a state-sync model for dry-runs and automatic idempotency
Ability to generate Terraform (see the example below)
Supports zero-config managed Kubernetes add-ons
Command line autocompletion
YAML Manifest Based API Configuration
Templating and dry-run modes for creating Manifests
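For example, instead of applying changes directly, kops can emit a Terraform configuration that you review and apply yourself. A minimal sketch, where the cluster name, zone, and state bucket are placeholders:

# generate Terraform instead of creating resources directly
kops create cluster \
  --name=k8s-cluster.example.com \
  --zones=us-west-2a \
  --state=s3://my-state-store \
  --target=terraform \
  --out=./terraform-out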
kops create ig --name=k8s-cluster.example.com node-example \
--role node --subnet my-subnet-name
kops edit cluster k8s.cluster.site --state=s3://my-state-store
kops edit instancegroup --name k8s-cluster.example.com nodes --state=s3://my-state-store
kops update cluster k8s-cluster.example.com --yes --state=s3://my-state-store
kops get clusters
kops get lists one or many resources from the registry (state store), such as clusters, instance groups, and secrets.
kops get k8s-cluster.example.com -o yaml
Naturally, you can retrieve a resource with or without YAML output.
kops get secrets admin -oplaintext
kops delete cluster --name=k8s.cluster.site --yes
kops delete instance ip-xx.xx.xx.xx.ec2.internal --yes  # delete an instance (node) from the active cluster
kops rolling-update cluster        # preview a rolling update
kops rolling-update cluster --yes  # apply the rolling update
curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x ./kops
sudo mv ./kops /usr/local/bin/
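A quick sanity check that the binary is on the PATH:

kops version   # prints the installed kops release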
pip install awscli
$ aws configure
AWS Access Key ID [None]: XXXXXXX
AWS Secret Access Key [None]: XXXXXXX
Default region name [None]: us-west-2
Default output format [None]: json
Kops requires the following IAM permissions to work properly:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
Create an IAM group and a user named kops with the required permissions:
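Following the kops getting-started guide, create the group, attach the five managed policies listed above, then create the user and its access keys:

aws iam create-group --group-name kops

aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops   # note the returned AccessKeyId and SecretAccessKey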
aws iam list-users   # confirm the kops user was created
aws configure --profile kops
AWS Access Key ID [None]: XXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXX
Default region name [None]: us-west-2
Default output format [None]: json
After that, we can confirm the new profile with cat ~/.aws/config:
[default]
region = us-east-2
[profile kops]
region = us-west-2
output = json
export AWS_PROFILE=kops
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
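kops reads credentials from these environment variables. To confirm the exported keys belong to the kops user:

aws sts get-caller-identity   # the Arn field should end in user/kops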
Verify that the NS records for the cluster's DNS zone resolve correctly:
❯ dig +short NS kubernetes.filipemotta.me
ns-1293.awsdns-33.org.
ns-2009.awsdns-59.co.uk.
ns-325.awsdns-40.com.
ns-944.awsdns-54.net.
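If the hosted zone did not exist yet, it could be created in Route53 as sketched below; the caller reference only needs to be a unique string, and the returned NS records must be delegated from the parent domain:

aws route53 create-hosted-zone \
  --name kubernetes.filipemotta.me \
  --caller-reference "kops-$(date +%s)"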
aws s3 mb s3://clusters.kubernetes.filipemotta.me
S3 Versioning
It's strongly recommended to version your S3 bucket in case you ever need to revert or recover a previous state store:
aws s3api put-bucket-versioning --bucket clusters.kubernetes.filipemotta.me --versioning-configuration Status=Enabled
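To confirm versioning took effect:

aws s3api get-bucket-versioning --bucket clusters.kubernetes.filipemotta.me   # should report Status: Enabled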
export KOPS_STATE_STORE=s3://clusters.kubernetes.filipemotta.me
aws ec2 describe-availability-zones --region us-west-2 --output text
AVAILABILITYZONES us-west-2 available usw2-az2 us-west-2a
AVAILABILITYZONES us-west-2 available usw2-az1 us-west-2b
AVAILABILITYZONES us-west-2 available usw2-az3 us-west-2c
AVAILABILITYZONES us-west-2 available usw2-az4 us-west-2d
kops create cluster --networking calico \
  --node-count 3 --master-count 3 \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --master-zones us-west-2a,us-west-2b,us-west-2c \
  kubernetes.filipemotta.me
kops edit cluster kubernetes.filipemotta.me
etcdClusters:
- cpuRequest: 200m
  etcdMembers:
  - encryptedVolume: true
    instanceGroup: master-us-west-2a
    name: a
  - encryptedVolume: true
    instanceGroup: master-us-west-2b
    name: b
...
kubernetesApiAccess:
- 0.0.0.0/0
...
networkCIDR: 172.20.0.0/16
networking:
  calico: {}
nonMasqueradeCIDR: 100.64.0.0/10
sshAccess:
- 0.0.0.0/0
subnets:
- cidr: 172.20.32.0/19
  name: us-west-2a
  type: Public
  zone: us-west-2a
...
topology:
  dns:
    type: Public
  masters: public
  nodes: public
kind: InstanceGroup
...
spec:
  machineType: t3.medium
  maxSize: 1
  minSize: 1
  ...
  role: Node
  subnets:
  - us-west-2b
kops update cluster --name kubernetes.filipemotta.me --yes --admin
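Once the update has been applied, the cluster can be validated; kops waits until the control plane and nodes report healthy:

kops validate cluster --wait 10m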
kops edit ig --name=kubernetes.filipemotta.me nodes-us-west-2a
...
spec:
machineType: t3.medium
maxSize: 2
minSize: 2
...
❯ kops update cluster --name kubernetes.filipemotta.me --yes --admin
❯ kops rolling-update cluster --yes
kops create ig --name=kubernetes.filipemotta.me new-apiserver --dry-run -o yaml > api-server.yaml
❯ cat api-server.yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  ...
  name: new-apiserver
spec:
  machineType: t3.micro
  maxSize: 1
  minSize: 1
  ...
  role: APIServer
  subnets:
  - us-west-2a
  - us-west-2b
❯ kops create -f api-server.yaml
❯ kops update cluster --name kubernetes.filipemotta.me --yes
❯ kops rolling-update cluster --yes
❯ kops get instancegroups   # list the cluster's instance groups
❯ kops delete ig --name=kubernetes.filipemotta.me nodes-us-west-2b
Do you really want to delete instance group "nodes-us-west-2b"? This action cannot be undone. (y/N)
y
InstanceGroup "nodes-us-west-2b" found for deletion
I0716 15:15:48.767476 21651 delete.go:54] Deleting "nodes-us-west-2b"
❯ kubectl get nodes
NAME                                          STATUS   ROLES                             AGE     VERSION
ip-172-20-101-63.us-west-2.compute.internal   Ready    node                              68m     v1.21.2
ip-172-20-108-27.us-west-2.compute.internal   Ready    api-server,control-plane,master   2m54s   v1.21.2
ip-172-20-56-77.us-west-2.compute.internal    Ready    node                              68m     v1.21.2
ip-172-20-59-46.us-west-2.compute.internal    Ready    node                              46m     v1.21.2
ip-172-20-60-40.us-west-2.compute.internal    Ready    api-server,control-plane,master   17m     v1.21.2
ip-172-20-91-25.us-west-2.compute.internal    Ready    api-server,control-plane,master   9m56s   v1.21.2
ip-172-20-93-4.us-west-2.compute.internal     Ready    api-server                        11m     v1.21.2
kubectl describe ingress -n ingress-nginx
Name:             ingress-host
Namespace:        ingress-nginx
Address:          a205622a4b5f24923fc8516-615762329.us-west-2.elb.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                               Path  Backends
  ----                               ----  --------
  grafana.kubernetes.filipemotta.me
                                     /     grafana:3000 (100.111.236.136:3000)
Annotations:                         <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    15s (x3 over 10m)  nginx-ingress-controller  Scheduled for sync
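For reference, the describe output above corresponds roughly to a manifest like the following. This is a sketch reconstructed from the output; the apiVersion and pathType values are assumptions for a v1.21 cluster:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host
  namespace: ingress-nginx
spec:
  rules:
  - host: grafana.kubernetes.filipemotta.me
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000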