
Tuesday, November 26, 2024

Harvester Setup and Configuration

Harvester is an open-source hyperconverged infrastructure (HCI) software that provides a powerful and easy-to-use platform for deploying and managing virtual machines (VMs). Built on Kubernetes, it simplifies the process of setting up and maintaining a virtualized environment. 

The following steps will guide you through setting up Harvester.

Download the Harvester ISO from the website.

Make a bootable USB drive from the ISO with either of the following tools:

  • https://etcher.balena.io/
  • https://rufus.ie/en/

Boot the machine from the USB drive to launch the Harvester installer, then follow these steps to complete the installation:

  1. Cluster Creation:
    • Select "Create a new Harvester Cluster"
  2. Disk Selection:
    • Use the right arrow key to navigate and choose a disk for Harvester's system.
    • Select a separate disk dedicated to storing virtual machine data.
  3. Host Configuration:
    • Enter a hostname for your Harvester node.
  4. Network Setup:
    • Use the right arrow key to select your network interface card (NIC).
    • Choose between DHCP or static IP configuration.
      • If using Static, provide the necessary network details (IP address, subnet mask, gateway).
    • Configure DNS server addresses.
  5. VIP Configuration:
    • Use the right arrow key to navigate, and choose between DHCP or a static IP for the Virtual IP (VIP) address.
      • If using Static, enter the desired VIP.
  6. Cluster Token:
    • Set a cluster token. This is crucial for adding more nodes to your cluster later.
  7. Password and SSH:
    • Set a strong password for accessing the node (default SSH user is 'rancher').
  8. NTP Servers:
    • Configure NTP servers (defaults to 0.suse.pool.ntp.org) to ensure time synchronization across all nodes. Use commas to separate multiple server addresses.
  9. Optional Configurations:
    • HTTP Proxy: If needed, provide the proxy URL.
    • SSH Keys: Import SSH keys by providing their HTTP URL (e.g., GitHub public keys).
    • Harvester Configuration: If you have a specific configuration file, enter its HTTP URL.
  10. Review and Install:
    • Review all the settings you've configured.
    • Confirm to start the installation process. This might take a few minutes.
  11. Access Harvester:
    • After the node restarts, the Harvester console will show the management URL and node status.
    • Access the web interface using the provided URL (defaults to https://your-virtual-ip).
    • Use F12 to switch to the shell if needed, and type exit to return to the console.
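
Once the node is back up, you can also verify it over SSH from a workstation. This is a minimal check, assuming the default 'rancher' user and that kubectl and the embedded RKE2 kubeconfig are present at their usual paths on the node:

ssh rancher@<node-ip>
sudo kubectl get nodes --kubeconfig /etc/rancher/rke2/rke2.yaml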

The latest steps can be found at https://github.com/harvester/harvester

Monday, April 10, 2023

Kubernetes(k8s) Sample Commands - 02

The following are a few kubectl commands for managing Kubernetes clusters:

  • kubectl get nodes -o=jsonpath='{XX}'
    • This command retrieves information about the nodes in the cluster using the jsonpath output format. Replace {XX} with the desired path (concrete examples follow this list).
  • kubectl get nodes -o=custom-columns=<Column name>
    • This command retrieves information about the nodes in the cluster using the custom-columns output format. Replace <Column name> with one or more HEADER:.json.path pairs.
  • --sort-by=
    • This option is used to sort the output based on a specified field.
  • kubectl get node node01 -o json > /opt/outputs/node01.json
    • This command retrieves information about a specific node and saves it as a JSON file.
  • kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.osImage}' > /opt/outputs/nodes_os.txt
    • This command retrieves the OS image of all the nodes in the cluster and saves it in a text file.
  • kubectl config view --kubeconfig=my-kube-config -o jsonpath="{.users[*].name}" > /opt/outputs/users.txt
    • This command retrieves the names of all users in the kubeconfig file and saves it in a text file.
  • kubectl get pv --sort-by=.spec.capacity.storage > /opt/outputs/storage-capacity-sorted.txt
    • This command retrieves the capacity of all persistent volumes and sorts the output by storage capacity.
  • kubectl config view --kubeconfig=my-kube-config -o jsonpath="{.contexts[?(@.context.user=='aws-user')].name}" > /opt/outputs/aws-context-name
    • This command retrieves the context name for a specific user in the kubeconfig file.
  • kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service
    • This command creates a pod named test-nslookup and runs a DNS lookup on nginx-resolver-service.
  • kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service > /root/CKA/nginx.svc
    • This command creates a pod named test-nslookup and redirects the output of the DNS lookup to a file.
  • kubectl get nodes -o json | jq -c 'paths' | grep type
    • This command lists the paths of all fields in the node objects and filters for those containing the word "type".
  • kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml
    • This command creates a deployment named nginx with 4 replicas and saves the deployment manifest as a YAML file. The --dry-run=client flag is used to simulate the deployment without actually creating it.
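
As concrete illustrations of the jsonpath and custom-columns formats above, the commands below list node names and OS images; the column headers NODE and OS are arbitrary labels chosen for this example:

kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'
kubectl get nodes -o=custom-columns=NODE:.metadata.name,OS:.status.nodeInfo.osImage --sort-by=.metadata.name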

Monday, August 8, 2022

PodMan

Podman is a container engine that allows you to create, run, and manage containers on a Linux host. It is similar to other container runtimes such as Docker, rkt (Rocket), Drawbridge, and LXC. Podman's command-line interface is similar to Docker's, making it easy to switch from Docker to Podman.

If you're new to Podman, here are some basic commands that will help you get started:


  • podman login -u username -p password registry.access.redhat.com: Log in to a container registry.
  • podman pull <image-name>: Download a container image.
  • podman ps -a: List all containers, both running and stopped.
  • podman search <image-name>: Search for a container image.
  • podman images: List all container images.
  • podman run <image-name> echo 'Hello world!': Run a container with a specific image and command.
  • podman run -d -p 8080 httpd: Run a container with an image in the background and map port 8080.
  • podman port -l: Display the port details of the last used container.
  • podman run -it ubi8/ubi:8.3 /bin/bash: Run a container and enter into its bash shell.
  • podman run --name mysql-custom -e MYSQL_USER=Ruser -e MYSQL_PASSWORD=PASS -e MYSQL_ROOT_PASSWORD=PASS -d mysql: Run a container with a custom name and environment variables.
  • podman ps --format "{{.ID}} {{.Image}} {{.Names}}": List containers with custom output formatting.
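
To tie these together, here is a minimal session that pulls an image, runs it detached with a published port, checks the mapping, and cleans up; the image and container names are only examples:

podman pull httpd
podman run -d -p 8080 --name web httpd
podman port web
podman stop web && podman rm web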


In Podman, you can create both root and rootless containers. Root containers run with elevated privileges, while rootless containers run without elevated privileges and are isolated from the host system.


Here are some commands to create and manage root and rootless containers using Podman:


  • sudo podman run --rm --name asroot -ti httpd /bin/bash: Run a container as root.
  • podman run --rm --name asuser -ti httpd /bin/bash: Run a container as a regular user.
  • podman run --name my-httpd-container httpd: Run a container with a custom name.
  • podman exec my-httpd-container cat /etc/hostname: Run a command inside a running container.
  • podman stop my-httpd-container: Stop a running container.
  • podman kill -s SIGKILL my-httpd-container: Send a custom kill signal to a running container.
  • podman restart my-httpd-container: Restart a container that has been stopped.
  • podman rm my-httpd-container: Remove a container.
  • podman rm -a: Remove all containers.
  • podman stop -a: Stop all running containers.
  • podman exec mysql /bin/bash -c 'mysql -uuser1 -pmypa55 -e "select * from items.Projects;"': Run a command inside a running container.
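
One quick way to see rootless isolation in action is to compare the user a process runs as inside the container with the user it maps to on the host; a sketch, assuming a Podman version whose top command supports the user and huser descriptors:

podman run -d --name asuser httpd
podman top asuser user huser
podman rm -f asuser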

 

Sharing a local directory with a container is a common task in containerization. Podman makes this process simple by allowing you to mount a local directory to a container using the -v option.

Create a local directory with the proper SELinux context:

mkdir /home/student/dbfiles
podman unshare chown -R 27:27 /home/student/dbfiles
sudo semanage fcontext -a -t container_file_t '/home/student/dbfiles(/.*)?'
sudo restorecon -Rv /home/student/dbfiles
ls -ldZ /home/student/dbfiles
Then mount the path with -v location_in_local:location_in_container:
podman run -v /home/student/dbfiles:/var/lib/mysql rhmap47/mysql
podman unshare chown 27:27 /home/student/local/mysql
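
To confirm the volume is attached as expected, you can inspect the container's mounts; a sketch using Go-template formatting against the most recently created container (-l):

podman inspect -l --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}'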

 

Port management

Port management is an important aspect of containerization, and Podman provides a simple way to manage ports for containers. You can use the -p option to map ports between the container and the host system.

Here's an explanation of the commands used in port management with Podman:


  • podman run -d --name apache1 -p 8080:8080 httpd: Run a container with the httpd image, map port 8080 on the host system to port 8080 in the container, and name the container apache1.
  • podman run -d --name apache2 -p 127.0.0.1:8081:8080 httpd: Run a container with the httpd image, map port 8081 on the localhost interface of the host system to port 8080 in the container, and name the container apache2.
  • podman run -d --name apache3 -p 127.0.0.1::8080 httpd: Run a container with the httpd image, map a random port on the localhost interface of the host system to port 8080 in the container, and name the container apache3.


podman port apache3: Display the port details of the apache3 container.

In the first command, the -p option is used to map port 8080 on the host system to port 8080 in the container. This means that if you access port 8080 on the host system, you will be accessing the container's port 8080.

In the second command, the -p option is used to map port 8081 on the localhost interface of the host system to port 8080 in the container. This means that if you access port 8081 on the localhost interface of the host system, you will be accessing the container's port 8080.

In the third command, the -p option is used to map a random port on the localhost interface of the host system to port 8080 in the container. This means that a random port on the host system will be mapped to the container's port 8080.

The podman port command displays the port details of a container, including the mapping between the container's ports and the host system's ports.

By using these commands, you can easily manage ports for containers in Podman.
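
A quick way to exercise these mappings is to resolve the published address with podman port and request it with curl; a sketch, assuming podman port prints the host address:port pair (as Docker's equivalent does) and that the container answers HTTP on its port 8080:

curl "$(podman port apache3 8080)"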


Podman Image Management

Podman is available on a RHEL host with the following entry in the /etc/containers/registries.conf file:

[registries.search] 
registries = ["registry.redhat.io","quay.io"]
  • podman save [-o FILE_NAME] IMAGE_NAME[:TAG]: Save an image to a file. You can use the -o option to specify the output file name. For example, podman save -o mysql.tar quay.io/mysql:latest saves the quay.io/mysql:latest image to a file named mysql.tar.
  • podman load [-i FILE_NAME]: Load an image from a file. You can use the -i option to specify the input file name. For example, podman load -i mysql.tar loads the mysql.tar file and creates an image.
  • podman rmi [OPTIONS] IMAGE [IMAGE...]: Remove one or more images. You can use the -a option to remove all images. For example, podman rmi -a removes all images.
  • podman commit [OPTIONS] CONTAINER [REPOSITORY[:PORT]/]IMAGE_NAME[:TAG]: Create a new image from a container. You can use the -a option to specify the author name. For example, podman commit -a 'Your Name' httpd httpd-new creates a new image named httpd-new from the httpd container with author name Your Name.
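
A common use of save and load is moving an image between hosts without a registry; a minimal round trip, with the image name only as an example:

podman save -o httpd.tar registry.access.redhat.com/ubi8/httpd-24
podman rmi registry.access.redhat.com/ubi8/httpd-24
podman load -i httpd.tar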


Here's an explanation of a few more Podman commands:

  • podman diff container-name: This command shows the differences between the container's current state and its original state at the time of its creation. The diff subcommand tags any added file with an A, any changed ones with a C, and any deleted file with a D. This is useful for troubleshooting issues or for auditing the changes made to a container.
  • podman tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]: This command is used to tag an image with a new name or repository. You can use the [REGISTRYHOST/][USERNAME/] part to specify the registry where you want to tag the image. For example, podman tag mysql-custom devops/mysql tags the mysql-custom image with the name devops/mysql.
  • podman rmi devops/mysql:snapshot: This command removes an image with the specified name and tag. For example, podman rmi devops/mysql:snapshot removes the devops/mysql image with the snapshot tag.
  • podman push [OPTIONS] IMAGE [DESTINATION]: This command pushes an image to a specified destination, such as a container registry. You can use the [DESTINATION] part to specify the registry where you want to push the image. For example, podman push quay.io/bitnami/nginx pushes the quay.io/bitnami/nginx image to the specified registry.
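
Putting commit, tag, and push together, a typical flow snapshots a running container, tags the result for a registry account, and pushes it; the registry path and names here are placeholders:

podman commit -a 'Your Name' my-httpd-container httpd-custom
podman tag httpd-custom quay.io/youruser/httpd-custom:v1
podman push quay.io/youruser/httpd-custom:v1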








Friday, August 5, 2022

Kubernetes Components

 Kubernetes, also known as K8s, is a popular container orchestration tool that automates the deployment, scaling, and management of containerized applications. The Kubernetes environment is made up of several core components that work together to provide a scalable and robust container management system. While there are other optional components available, these core components are essential to the Kubernetes environment.


 

  • Kubernetes API Server: The Kubernetes API server acts as the primary management hub for the Kubernetes cluster. It exposes the Kubernetes API, which is used by other components to interact with the cluster. The API server validates and processes API requests, and updates the cluster state accordingly.
  • etcd: etcd is a distributed key-value store that stores the configuration data and state of the Kubernetes cluster. It provides a reliable and consistent data store that is used by the Kubernetes API server and other components to store and retrieve data.
  • kubelet: The kubelet is responsible for managing and monitoring individual nodes (worker machines) in the Kubernetes cluster. It communicates with the Kubernetes API server to ensure that the containers running on a node are healthy and running as intended.
  • kube-proxy: The kube-proxy is responsible for managing network communication within the Kubernetes cluster. It sets up and maintains network routes and load balancing for Kubernetes services running on the cluster.
  • Kubernetes Scheduler: The Kubernetes scheduler is responsible for scheduling workloads (containers) onto worker nodes in the cluster. It considers factors such as resource availability, workload constraints, and affinity rules to make optimal scheduling decisions.

 

Data Plane: the worker nodes, where the pods or containers with the workload run.
Control Plane: the master node, where the K8s control components run.

Following are the components of the Control Plane:
  • API Server
    • The apiserver service acts as the connection between all the components in the Control Plane and the Data Plane
    • Orchestrates all operations in the cluster
    • Exposes the K8s API, which end users use for operations and monitoring
    • Collects data from the kubelets for monitoring
    • Authenticates, validates, and retrieves data
    • Serves data or performs the requested operations on it
    • Passes data to the kubelets to perform operations on the worker nodes
  • etcd
    • The etcd service is mainly used for storage of all cluster details. etcd is a distributed key-value data store.
    • Stores data including, but not limited to:
      • Registry
      • Nodes
      • Pods
      • Config
      • Secrets
      • Accounts
      • Roles
      • other components as well
  • Kube Scheduler
    • Identifies the right worker node on which a container can be deployed and reports it back to the API server; the kubelet then gets the data from the API server and deploys the container.
    • Keeps monitoring the API server for new operations
    • Identifies the right worker node for each requested operation and gives it back to the API server
    • Filters nodes
    • Ranks nodes based on:
      • Resource requirements, and resources left after container placement
      • Taints and tolerations
      • Node selectors/affinity
      • Labels and selectors
      • Resource limits
      • Manual scheduling
      • Daemon sets
      • Multiple schedulers
      • Scheduler events
  • Kube Controller Manager
    • Watches status
    • Remediates situations
    • Monitors the state of the system and tries to bring it back to the desired state

Following are the components of the Data Plane:
  • Kubectl
    • The command-line client used to connect to the API server (it can run from any machine, not just the nodes)
  • Kubelet
    • An agent that runs on each worker node
    • Listens to the kube API server and performs the requested operations
    • Reports data back to the kube API server for monitoring of the operations
  • Kube-proxy
    • Enables communication between services on the worker nodes
    • Pod network
      • By default, all pods can connect to each other
    • Creates iptables rules to allow communication between pods and services
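
On a kubeadm-built cluster you can see most of these control plane components running as pods in the kube-system namespace; the exact pod names vary with the node name and setup:

kubectl get pods -n kube-system
kubectl get pods -n kube-system -o wide | grep -E 'apiserver|etcd|scheduler|controller|proxy'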





Friday, January 28, 2022

Kubernetes(k8s) with Containerd Using Ansible Over Ubuntu Machines

Kubernetes is a popular container orchestration system that provides a powerful platform for managing containerized applications. Containerd is a lightweight yet powerful container runtime that provides the underlying infrastructure for many Kubernetes deployments. In this post, we will see how to set up Kubernetes with containerd using Ansible on Ubuntu machines.

Environment

  • Ubuntu VMs running on VMware
  • K8s with the containerd runtime

User Creation

  • Asks for the username of the account to be created
  • Creates the user
  • Adds a dedicated sudoers entry
  • Sets up passwordless sudo for the user
  • Copies the local user's SSH key to the server for passwordless authentication
  • Prints the details
  • Updates the system
  • Includes steps for package cleanup as well

- hosts: all
  become: yes

  vars_prompt:
    - name: "new_user"
      prompt: "Name of the account to be created on the remote servers"
      private: no

  tasks:
    - name: Creating the user {{ new_user }}.
      user:
        name: "{{ new_user }}"
        createhome: yes
        shell: /bin/bash
        append: yes
        state: present

    - name: Create a dedicated sudo entry file for the user.
      file:
        path: "/etc/sudoers.d/{{ new_user }}"
        state: touch
        mode: '0600'

    - name: Setting up sudo without password for user {{ new_user }}.
      lineinfile:
        dest: "/etc/sudoers.d/{{ new_user }}"
        line: '{{ new_user }} ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: Set the authorized key for {{ new_user }}, copying it from the current user.
      authorized_key:
        user: "{{ new_user }}"
        state: present
        key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

    - name: Print the created user.
      shell: id "{{ new_user }}"
      register: new_user_created

    - debug:
        msg: "{{ new_user_created.stdout_lines[0] }}"

    - name: Remove Docker packages
      apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
        state: absent
        purge: yes

    - name: Remove Docker directories
      file:
        path: "{{ item }}"
        state: absent
      with_items:
        - /etc/docker
        - /var/lib/docker
        - /var/run/docker.sock

    - name: Remove containerd packages
      apt:
        name: containerd
        state: absent
        purge: yes

    - name: Remove containerd directories
      file:
        path: "{{ item }}"
        state: absent
      with_items:
        - /etc/containerd
        - /var/lib/containerd

    - name: Update cache & full system update
      apt:
        update_cache: true
        cache_valid_time: 3600
        force_apt_get: true
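
To run the playbook, point ansible-playbook at your inventory; the inventory and playbook file names below are only examples:

ansible-playbook -i hosts user-creation.yml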


Package Installation in Master and Worker Nodes

  • Copies the local hosts file to all the servers for name resolution
  • Updates the hostnames of the machines based on the names in the hosts file
  • Turns swap off temporarily
  • Turns swap off in fstab
  • Creates an empty file for the containerd modules
  • Configures the modules for containerd
  • Creates an empty file for the Kubernetes sysctl params
  • Configures the sysctl params for Kubernetes
  • Applies the sysctl params without a reboot
  • Installs the prerequisites for Kubernetes
  • Adds Docker's official GPG key
  • Adds the Docker repository
  • Installs containerd
  • Creates the containerd configuration directory
  • Enables and starts the containerd service
  • Adds Google's official GPG key
  • Adds the Kubernetes repository
  • Installs the Kubernetes cluster packages
  • Enables the kubelet service persistently
  • Reboots all the Kubernetes nodes

- hosts: "master, workers"
remote_user: ansible
become: yes
become_method: sudo
become_user: root
gather_facts: yes
connection: ssh
tasks:
- name: Copying the host file
copy:
src: /etc/hosts
dest: /etc/hosts
owner: root
group: root

- name: "Updating hostnames"
hostname:
name: "{{ new_hostname }}"

- name: Make the Swap inactive
command: swapoff -a

- name: Remove Swap entry from /etc/fstab.
lineinfile:
dest: /etc/fstab
regexp: swap
state: absent

- name: Create a empty file for containerd module.
copy:
content: ""
dest: /etc/modules-load.d/containerd.conf
force: no

- name: Configure module for containerd.
blockinfile:
path: /etc/modules-load.d/containerd.conf
block: |
overlay
br_netfilter

- name: Create a empty file for kubernetes sysctl params.
copy:
content: ""
dest: /etc/sysctl.d/99-kubernetes-cri.conf
force: no

- name: Configure sysctl params for Kubernetes.
lineinfile:
path: /etc/sysctl.d/99-kubernetes-cri.conf
line: "{{ item }}"
with_items:
- 'net.bridge.bridge-nf-call-iptables = 1'
- 'net.ipv4.ip_forward = 1'
- 'net.bridge.bridge-nf-call-ip6tables = 1'

- name: Apply sysctl params without reboot.
command: sysctl --system

- name: Installing Prerequisites for Kubernetes
apt:
name:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- vim
- software-properties-common
state: present

- name: Add Docker’s official GPG key
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present
- name: Add Docker Repository
apt_repository:
repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
state: present
filename: docker
update_cache: yes

- name: "Update cache & Full system update"
apt:
update_cache: true
upgrade: dist
cache_valid_time: 3600
force_apt_get: true

- name: Install containerd.
apt:
name:
- containerd.io
state: present

- name: Configure containerd.
file:
path: /etc/containerd
state: directory

- name: Enable containerd service, and start it.
systemd:
name: containerd
state: restarted
enabled: yes
daemon-reload: yes

- name: Add Google official GPG key
apt_key:
url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
state: present

- name: Add Kubernetes Repository
apt_repository:
repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
state: present
filename: kubernetes
mode: 0600

- name: "Update cache & Full system update"
apt:
update_cache: true
upgrade: dist
cache_valid_time: 3600
force_apt_get: true

- name: Installing Kubernetes Cluster Packages.
apt:
name:
- kubeadm
- kubectl
- kubelet
state: present

- name: Enable service kubelet, and enable persistently
service:
name: kubelet
enabled: yes

- name: Reboot all the kubernetes nodes.
reboot:
msg: "Reboot initiated by Ansible"
connect_timeout: 5
reboot_timeout: 3600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
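
Before moving on to the master configuration, it can help to confirm containerd is active on every node; a quick ad-hoc check from the Ansible control node, assuming the same inventory groups as above:

ansible 'master,workers' -b -m shell -a 'systemctl is-active containerd'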



Master Configuration

  • Pulls all the needed images
  • Resets kubeadm if it is already configured
  • Initializes the K8s cluster
  • Creates a directory for the kube config file on the master
  • Creates a local kube config file on the master
  • Copies the kube config file to the local Ansible server
  • Generates the join token for the workers and stores it
  • Copies the token to the master's tmp directory
  • Copies the token to the local Ansible tmp directory
  • Initializes the pod network with flannel
  • Copies the output to a file on the master
  • Copies the output to the local Ansible server


- hosts: master
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Pulling the images required for setting up a Kubernetes cluster
      shell: kubeadm config images pull

    - name: Resetting kubeadm
      shell: kubeadm reset -f
      register: output

    - name: Initializing the Kubernetes cluster
      shell: kubeadm init --apiserver-advertise-address=$(ip a | grep ens160 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/') --pod-network-cidr 10.244.0.0/16 --v=5
      register: myshell_output

    - debug: msg="{{ myshell_output.stdout }}"

    - name: Create .kube in the home directory of the master server
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to the user's kube config on the master server
      copy:
        src: /etc/kubernetes/admin.conf
        dest: $HOME/.kube/config
        remote_src: yes

    - name: Copy admin.conf to the kube config on the local Ansible server
      become: yes
      become_method: sudo
      become_user: root
      fetch:
        src: /etc/kubernetes/admin.conf
        dest: /Users/rahulraj/.kube/config
        flat: yes

    - name: Get the token for joining the nodes with the Kubernetes master.
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - debug:
        msg: "{{ kubernetes_join_command.stdout_lines }}"

    - name: Copy the K8s join command to a file on the master
      copy:
        content: "{{ kubernetes_join_command.stdout_lines[0] }}"
        dest: "/tmp/kubernetes_join_command"

    - name: Copy the join command from the master to the local Ansible server
      fetch:
        src: "/tmp/kubernetes_join_command"
        dest: "/tmp/kubernetes_join_command"
        flat: yes

    - name: Install the pod network
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      register: myshell_output

    - name: Copy the output to a file on the master
      copy:
        content: "{{ myshell_output.stdout }}"
        dest: "/tmp/pod_network_setup.txt"

    - name: Copy the network output from the master to the local Ansible server
      fetch:
        src: "/tmp/pod_network_setup.txt"
        dest: "/tmp/pod_network_setup.txt"
        flat: yes


Worker Configuration

  • Copies the token from the local Ansible file to the worker nodes
  • Resets kubeadm
  • Joins the worker nodes to the master by running the join command

- hosts: workers
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Copy the token to the worker nodes.
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Resetting kubeadm
      shell: kubeadm reset -f
      register: output

    - name: Join the worker nodes with the master.
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not

    - debug:
        msg: "{{ joined_or_not.stdout }}"


K8s should be up with the worker nodes now. 
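
A quick way to confirm this from the Ansible control node, which now holds the fetched kubeconfig, is:

kubectl get nodes -o wide
kubectl get pods -n kube-system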


Friday, January 21, 2022

Setting up MetalLB Load Balancer with Kubernetes k8s.

When deploying Kubernetes in a local development environment, if we need to publish services through load-balancer services, MetalLB is one of the easiest solutions we can use. All we need is a range of IPs from our network that MetalLB can use.

Following are the k8s configurations that need to be applied on the cluster. 

Below is the ConfigMap that defines the IPs which can be used for the load balancers:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.16.2.80-172.16.2.90



Below is the Ansible playbook I used to deploy the MetalLB load balancer on the k8s cluster.
  • Initializes MetalLB on the cluster
  • Copies the MetalLB configuration to the master
  • Applies the configuration on the master with kubectl


- hosts: master
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Initializing the MetalLB cluster
      shell: kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
      register: myshell_output

    - name: Copying the MetalLB config file
      copy:
        src: /Users/rahulraj/workspace/vmware-ansible/k8s/playbook/metallb-config.yml
        dest: $HOME/metallb-config.yml

    - name: Configuring the MetalLB cluster
      shell: kubectl apply -f $HOME/metallb-config.yml
      register: myshell_output



To test it, we deploy a sample Nginx and expose it through a LoadBalancer-type service.

kubectl create deployment nginx-deployments --image=nginx --replicas=3 --port=80
kubectl expose deployment nginx-deployments --port=80 --target-port=80 --type=LoadBalancer



The output should look like the following:

 kubectl get svc
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes          ClusterIP      10.96.0.1        <none>        443/TCP        25h
nginx-deployments   LoadBalancer   10.100.137.154   172.16.2.80   80:30973/TCP   13h
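
You can confirm the load balancer is answering by requesting the external IP that MetalLB assigned:

curl http://172.16.2.80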











Thursday, January 20, 2022

Kubernetes(k8s) With Ansible Over Ubuntu Machines with Docker

Kubernetes (k8s) is a popular container orchestration system that provides a powerful platform for managing containerized applications. Docker is a lightweight yet powerful container runtime that provides the underlying infrastructure for many Kubernetes deployments. In this post, we will see how to set up Kubernetes with Docker using Ansible on Ubuntu machines.

Environment

  • Ubuntu VMs running on VMware
  • K8s with the Docker runtime

** Important notice: the following setup has not worked since release 1.27. Please use the containerd-based deployment instead:

https://www.adminz.in/2022/01/kubernetes-with-containerd-using-ansible.html

User Creation

  • Asks for the username of the account to be created
  • Creates the user
  • Adds a dedicated sudoers entry
  • Sets up passwordless sudo for the user
  • Copies the local user's SSH key to the server for passwordless authentication
  • Prints the details
  • Updates the system

- hosts: all
  become: yes

  vars_prompt:
    - name: "new_user"
      prompt: "Name of the account to be created on the remote servers"
      private: no

  tasks:
    - name: Creating the user {{ new_user }}.
      user:
        name: "{{ new_user }}"
        createhome: yes
        shell: /bin/bash
        append: yes
        state: present

    - name: Create a dedicated sudo entry file for the user.
      file:
        path: "/etc/sudoers.d/{{ new_user }}"
        state: touch
        mode: '0600'

    - name: Setting up sudo without password for user {{ new_user }}.
      lineinfile:
        dest: "/etc/sudoers.d/{{ new_user }}"
        line: '{{ new_user }} ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: Set the authorized key for {{ new_user }}, copying it from the current user.
      authorized_key:
        user: "{{ new_user }}"
        state: present
        key: "{{ lookup('file', lookup('env','HOME') + '/.ssh/id_rsa.pub') }}"

    - name: Print the created user.
      shell: id "{{ new_user }}"
      register: new_user_created

    - debug:
        msg: "{{ new_user_created.stdout_lines[0] }}"

    - name: Update cache & full system update
      apt:
        update_cache: true
        upgrade: dist
        cache_valid_time: 3600
        force_apt_get: true


Package Installation in Master and Worker Nodes

  • Copies the local hosts file to all the servers for name resolution
  • Updates the hostnames of the machines based on the names in the hosts file
  • Turns swap off temporarily
  • Turns swap off in fstab
  • Installs the Kubernetes prerequisite packages
  • Adds the Docker package keys
  • Adds the Docker repository
  • Installs the Docker packages
  • Enables the Docker service
  • Adds the Google repository keys
  • Creates a directory for the Docker daemon file
  • Creates the Docker daemon file with the overlay details
  • Restarts the Docker service
  • Installs the Kubernetes packages
  • Enables the kubelet service
  • Reboots the servers

- hosts: "master, workers"
remote_user: ansible
become: yes
become_method: sudo
become_user: root
gather_facts: yes
connection: ssh
tasks:
- name: Copying the host file
copy:
src: /etc/hosts
dest: /etc/hosts
owner: root
group: root

- name: "Updating hostnames"
hostname:
name: "{{ new_hostname }}"

- name: Make the Swap inactive
command: swapoff -a

- name: Remove Swap entry from /etc/fstab.
lineinfile:
dest: /etc/fstab
regexp: swap
state: absent

- name: Installing Prerequisites for Kubernetes
apt:
name:
- apt-transport-https
- ca-certificates
- curl
- gnupg-agent
- vim
- software-properties-common
state: present

- name: Add Docker’s official GPG key
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present

- name: Add Docker Repository
apt_repository:
repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable
state: present
filename: docker
mode: 0600

- name: Install Docker Engine.
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
state: present

- name: Enable service docker, and enable persistently
service:
name: docker
enabled: yes

- name: Add Google official GPG key
apt_key:
url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
state: present


- name: Creates directory
file:
path: /etc/docker/
state: directory

- name: Creating a file with content
copy:
dest: "/etc/docker/daemon.json"
content: |
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}

- name: restart docker
service:
name: docker
state: restarted
enabled: yes

- name: Add Kubernetes Repository
apt_repository:
repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
state: present
filename: kubernetes
mode: 0600

- name: Installing Kubernetes Cluster Packages.
apt:
name:
- kubeadm
- kubectl
- kubelet
state: present

- name: Enable service kubelet, and enable persistently
service:
name: kubelet
enabled: yes

- name: Reboot all the kubernetes nodes.
reboot:
msg: "Reboot initiated by Ansible"
connect_timeout: 5
reboot_timeout: 3600
pre_reboot_delay: 0
post_reboot_delay: 30
test_command: whoami
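
Since kubelet and Docker must agree on the cgroup driver, it is worth confirming the daemon picked up the systemd setting after the reboot; a quick check on any node:

docker info --format '{{.CgroupDriver}}'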





Master Configuration

  • Pulls all the needed images
  • Resets kubeadm if it is already configured
  • Initializes the K8s cluster
  • Creates a directory for the kube config file on the master
  • Creates a local kube config file on the master
  • Copies the kube config file to the local Ansible server
  • Generates the join token for the workers and stores it
  • Copies the token to the master's tmp directory
  • Copies the token to the local Ansible tmp directory
  • Initializes the pod network with flannel
  • Copies the output to a file on the master
  • Copies the output to the local Ansible server


- hosts: master
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Pulling the images required for setting up a Kubernetes cluster
      shell: kubeadm config images pull

    - name: Resetting kubeadm
      shell: kubeadm reset -f
      register: output

    - name: Initializing the Kubernetes cluster
      shell: kubeadm init --apiserver-advertise-address=$(ip a | grep ens160 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/') --pod-network-cidr 10.244.0.0/16 --v=5
      register: myshell_output

    - debug: msg="{{ myshell_output.stdout }}"

    - name: Create .kube in the home directory of the master server
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to the user's kube config on the master server
      copy:
        src: /etc/kubernetes/admin.conf
        dest: $HOME/.kube/config
        remote_src: yes

    - name: Copy admin.conf to the kube config on the local Ansible server
      become: yes
      become_method: sudo
      become_user: root
      fetch:
        src: /etc/kubernetes/admin.conf
        dest: /Users/rahulraj/.kube/config
        flat: yes

    - name: Get the token for joining the nodes with the Kubernetes master.
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - debug:
        msg: "{{ kubernetes_join_command.stdout_lines }}"

    - name: Copy the K8s join command to a file on the master
      copy:
        content: "{{ kubernetes_join_command.stdout_lines[0] }}"
        dest: "/tmp/kubernetes_join_command"

    - name: Copy the join command from the master to the local Ansible server
      fetch:
        src: "/tmp/kubernetes_join_command"
        dest: "/tmp/kubernetes_join_command"
        flat: yes

    - name: Install the pod network
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      register: myshell_output

    - name: Copy the output to a file on the master
      copy:
        content: "{{ myshell_output.stdout }}"
        dest: "/tmp/pod_network_setup.txt"

    - name: Copy the network output from the master to the local Ansible server
      fetch:
        src: "/tmp/pod_network_setup.txt"
        dest: "/tmp/pod_network_setup.txt"
        flat: yes


Worker Configuration

  • Copies the token from the local Ansible file to the worker nodes
  • Resets kubeadm
  • Joins the worker nodes to the master by running the join command

- hosts: workers
  remote_user: ansible
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Copy the token to the worker nodes.
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Resetting kubeadm
      shell: kubeadm reset -f
      register: output

    - name: Join the worker nodes with the master.
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not

    - debug:
        msg: "{{ joined_or_not.stdout }}"


K8s should be up with the worker nodes now.