How to Set Up Ingress in Kubernetes on Bare Metal?

To set up Ingress on bare-metal Kubernetes, we need a load balancer in front of the cluster that distributes requests across all the worker nodes. The Ingress Controller runs on those nodes as a DaemonSet, watches the Ingress rules, and forwards traffic to the Services behind our Deployments/pods.

Every client machine from which we browse the website must have the following entry in its /etc/hosts file:

/etc/hosts
192.168.225.31 test.me

Here 192.168.225.31 is the address of our HAProxy instance, which load balances requests for http://test.me across the worker nodes.

HAProxy configuration

  • /etc/hosts entries for the worker nodes on the HAProxy host

#cat /etc/hosts

192.168.225.188 kube-worker1
192.168.225.32 kube-worker2

  • HAProxy configuration file

#vi /etc/haproxy/haproxy.cfg

global
    log /dev/log local0 warning
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

    stats socket /var/lib/haproxy/stats

defaults
    log global
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend testme
    bind 192.168.225.31:80
    mode tcp
    option tcplog
    default_backend testme

backend testme
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server testme-1 192.168.225.188:80 check # Replace the IP address with your own.
    server testme-2 192.168.225.32:80 check # Replace the IP address with your own.
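
Once the frontend and backend are defined, validate the configuration and reload HAProxy. The commands below assume HAProxy is managed by systemd:

#haproxy -c -f /etc/haproxy/haproxy.cfg
#systemctl restart haproxy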

Kubernetes Manifests for Deployment, DaemonSet and Ingress

  • Clone the NGINX Ingress Controller repo

#git clone https://github.com/nginxinc/kubernetes-ingress.git --branch v3.2.1
#cd kubernetes-ingress/deployments

  • Configure RBAC

Create a namespace and a service account for NGINX Ingress Controller:
#kubectl apply -f common/ns-and-sa.yaml

Create a cluster role and cluster role binding for the service account:
#kubectl apply -f rbac/rbac.yaml

(App Protect only) Create the App Protect role and role binding:
#kubectl apply -f rbac/ap-rbac.yaml

(App Protect DoS only) Create the App Protect DoS role and role binding:
#kubectl apply -f rbac/apdos-rbac.yaml
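
To verify that the RBAC objects were created, list them. The namespace is nginx-ingress; the service account, cluster role and cluster role binding are also named nginx-ingress in the repo's manifests (adjust if you changed them):

#kubectl get namespace nginx-ingress
#kubectl get serviceaccount nginx-ingress --namespace=nginx-ingress
#kubectl get clusterrole nginx-ingress
#kubectl get clusterrolebinding nginx-ingress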

  • Create Common Resources

Create a secret with a TLS certificate and a key for the default server in NGINX (the command below assumes you are in the kubernetes-ingress/deployments directory):
#kubectl apply -f ../examples/shared-examples/default-server-secret/default-server-secret.yaml

Create a config map for customizing NGINX configuration:
#kubectl apply -f common/nginx-config.yaml

Create an IngressClass resource:
#kubectl apply -f common/ingress-class.yaml
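
To confirm the common resources, list the config map and the IngressClass. The names nginx-config and nginx are the defaults used by these manifests; verify them in your copies:

#kubectl get configmap nginx-config --namespace=nginx-ingress
#kubectl get ingressclass nginx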

  • Create Custom Resources

Create custom resource definitions for VirtualServer and VirtualServerRoute, TransportServer and Policy resources:
#kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
#kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
#kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
#kubectl apply -f common/crds/k8s.nginx.org_policies.yaml

If you would like to use the TCP and UDP load balancing features, create a custom resource definition for the GlobalConfiguration resource:
#kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml

If you would like to use the App Protect WAF module, you will need to create custom resource definitions for APPolicy, APLogConf and APUserSig:
#kubectl apply -f common/crds/appprotect.f5.com_aplogconfs.yaml
#kubectl apply -f common/crds/appprotect.f5.com_appolicies.yaml
#kubectl apply -f common/crds/appprotect.f5.com_apusersigs.yaml

If you would like to use the App Protect DoS module, you will need to create custom resource definitions for APDosPolicy, APDosLogConf and DosProtectedResource:
#kubectl apply -f common/crds/appprotectdos.f5.com_apdoslogconfs.yaml
#kubectl apply -f common/crds/appprotectdos.f5.com_apdospolicy.yaml
#kubectl apply -f common/crds/appprotectdos.f5.com_dosprotectedresources.yaml
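
A quick way to confirm that the custom resource definitions were registered is to filter the CRD list by their API groups:

#kubectl get crds | grep -E 'k8s.nginx.org|f5.com'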

  • (App Protect DoS only) Run the Arbitrator by using a Deployment and a Service

#kubectl apply -f deployment/appprotect-dos-arb.yaml
#kubectl apply -f service/appprotect-dos-arb-svc.yaml
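
Check that the Arbitrator is up. This assumes the Deployment and Service keep the appprotect-dos-arb name used in the manifest file names (the pod listing later in this post confirms the Deployment name):

#kubectl get deployment,service --namespace=nginx-ingress | grep appprotect-dos-arb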

  • Running NGINX Ingress Controller

When you run the Ingress Controller by using a DaemonSet, Kubernetes will create an Ingress Controller pod on every node of the cluster.

#kubectl apply -f daemon-set/nginx-ingress.yaml
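
Note that HAProxy forwards traffic to the worker nodes on port 80, so the DaemonSet must publish the controller ports on each node. The upstream daemon-set/nginx-ingress.yaml does this with hostPort entries similar to the excerpt below; verify it in your copy of the manifest:

        ports:
        - name: http
          containerPort: 80
          hostPort: 80
        - name: https
          containerPort: 443
          hostPort: 443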

Check whether the NGINX Ingress Controller pods are ready:

#kubectl get pods --namespace=nginx-ingress

  • Deploy the NGINX application with a Deployment of 3 replicas
#vi deployment.yaml
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

#kubectl apply -f deployment.yaml
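
Verify that all three replicas are running, selecting pods by the app=nginx label from the manifest:

#kubectl get deployment nginx-deployment
#kubectl get pods -l app=nginx -o wide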

  • Expose the Deployment with a ClusterIP Service

#kubectl expose deployment nginx-deployment --port 80

[root@kube-master1 pods-creation-yml]# kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP   22h
nginx-deployment   ClusterIP   10.105.173.136   <none>        80/TCP    80m
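
Before wiring up the Ingress, you can check the Service from inside the cluster with a throwaway pod. This assumes the Service lives in the default namespace, as the output above suggests; svc-test is just a temporary pod name:

#kubectl run svc-test --rm -it --restart=Never --image=busybox -- wget -qO- http://nginx-deployment.default.svc.cluster.local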

  • Ingress rule
#vi ingress-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-deployment
spec:
  ingressClassName: nginx
  rules:
  - host: "test.me"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: nginx-deployment
            port:
              number: 80
 

#kubectl apply -f ingress-rule.yaml
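
Confirm that the Ingress rule was created for the test.me host and points at the nginx-deployment Service:

#kubectl get ingress ingress-nginx-deployment
#kubectl describe ingress ingress-nginx-deployment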

[root@kube-master1 pods-creation-yml]# kubectl get pods --namespace=nginx-ingress -o wide
NAME                                 READY   STATUS    RESTARTS   AGE   IP          NODE           NOMINATED NODE   READINESS GATES
appprotect-dos-arb-977686c99-t6krg   1/1     Running   0          90m   10.39.0.1   kube-worker1   <none>           <none>
nginx-ingress-52v7x                  1/1     Running   0          89m   10.42.0.1   kube-worker2   <none>           <none>
nginx-ingress-pdc5q                  1/1     Running   0          89m   10.39.0.2   kube-worker1   <none>           <none>

Here one Ingress Controller pod is running on each worker node, as expected for a DaemonSet.

  • Test the test.me website

#curl http://test.me
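
If everything is wired correctly, the request should return the default NGINX welcome page from one of the three replicas. You can also bypass the /etc/hosts entry and send the Host header explicitly to the HAProxy address:

#curl -H 'Host: test.me' http://192.168.225.31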