Configure Ingress on Kubernetes using Azure Container Service

In my blog post about running a .NET Core 2 application in Kubernetes, I create a Service to expose the .NET Core 2 Web API, so the Service gets a public IP address. The configuration looks like this:

apiVersion: v1
kind: Service
metadata:
  name: myapiservice
spec:
  ports:
    - port: 80
  selector:
    app: mywebapi
  type: LoadBalancer

Schematically it looks like this:
[schematic: one Service of type LoadBalancer in front of 3 Pods]
We have 3 Pods that contain the same Docker container image. We expose the Web API hosted by the container to the internet. We do this by configuring a Kubernetes Service of type LoadBalancer.

Now we want to create another Web API and expose it to the internet as well.
We need to deploy the same pieces as before: 3 Pods and a Service of type LoadBalancer. Schematically, it looks like this:
[schematic: two LoadBalancer Services, each in front of its own set of 3 Pods]

This does not scale well. With 2 Web APIs/Services it can be done, but what if we have 25 Web APIs or even more? By default, the maximum number of public IP addresses you can use in Azure is 20. Another disadvantage is that consumers of the services need to know multiple IP addresses to call the Web APIs.
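If you want to see how close you are to that limit, the Azure CLI can show the current network resource usage per region, including public IP addresses (a sketch, assuming the westeurope region; adjust the location to your own):

az network list-usages --location westeurope --output table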
Ingress solves these problems, and brings some extra handy functionality as well.

Ingress to the rescue

With Ingress, the Services that belong to the Pods in which your containers are running are no longer of type LoadBalancer, but of type NodePort. So they don't have a public IP address anymore. You deploy the following next to the Pods and Services:

Ingress: This contains the mapping between URL paths and Services, plus some configuration related to the Ingress Controller.
Ingress Controller: A Pod that runs the Ingress controller and nginx (I'm using nginx in my sample; other load balancers are supported by Kubernetes as well).
Nginx has a configuration file that describes how to load balance and how to route the traffic. This configuration file is largely generated based on the Ingress.
Ingress Service: The Ingress Controller needs a public IP address. The Ingress Service takes care of this. We now have only 1 public IP address for all our Web APIs.

[schematic: Ingress, Ingress Controller and Ingress Service in front of the NodePort Services and Pods]

Configure Ingress step by step

I assume you have configured your Kubernetes cluster so that the ServiceAccount has access to your private Docker registry, in this case Azure Container Registry. You can read how to do this in this blog post.

1. Deploy your Pods

This is exactly the same Deployment as in the blog post mentioned above. Note which Docker container image version you are using.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mywebapi-deployment
spec:
  replicas: 3  
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1 
  template:
    metadata:
      labels:
        app: mywebapi
    spec:
      containers:
      - name: mywebapi
        image: pnk8sdemocrwe.azurecr.io/myservice/mywebapi:1
        ports:
        - containerPort: 80
        imagePullPolicy: Always
kubectl apply -f DeployPodMyWebApi.yaml
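Before moving on, you can verify that the three replicas are running by filtering on the app label used in the Deployment (a sketch):

kubectl get pods -l app=mywebapi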

2. Deploy a Service for your Pods

This time we deploy the Service with type NodePort instead of LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: myapiservice
spec:
  ports:
    - port: 80
  selector:
    app: mywebapi
  type: NodePort
kubectl apply -f DeployServiceMyApiService.yaml
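Because the Service is of type NodePort, it does not get a public IP address; Kubernetes assigns it a port on every node instead. You can see the assigned node port like this (a sketch):

kubectl get service myapiservice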

3. Deploy the Ingress Controller

Note that the Ingress Controller is deployed in the kube-system namespace. At this moment nginx-ingress-controller:0.9.0-beta.15 is the latest version of the Docker Image. You can check the latest version here: https://github.com/kubernetes/ingress-nginx/releases

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      # hostNetwork: true
      # Check latest version here: https://github.com/kubernetes/ingress-nginx/releases
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
kubectl apply -f DeployIngressController.yaml
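To check that the controller Pod started correctly in the kube-system namespace, you can filter on the k8s-app label used above (a sketch):

kubectl get pods -n kube-system -l k8s-app=nginx-ingress-controller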

4. Deploy the Ingress Service

Until now we don't have a public IP address, and we only need one. This Service takes care of it. It is already configured to use ports 80 and 443. This Service is also deployed in the kube-system namespace.

apiVersion: v1
kind: Service
metadata:
  name: ingressservice
  namespace: kube-system
spec:
  ports:
    - port: 80
      name: http
    - port: 443
      name: https
  selector:
    k8s-app: nginx-ingress-controller
  type: LoadBalancer
kubectl apply -f DeployServiceIngressService.yaml
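It can take a few minutes before Azure assigns the public IP address to this Service. You can watch for it like this (a sketch):

kubectl get service ingressservice -n kube-system --watch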

5. Deploy Ingress

Ingress is another kind we can deploy to Kubernetes. For now only HTTP traffic is configured. The serviceName myapiservice refers to the name of the Service deployed in step 2. There are two annotations. The first annotation is optional and tells which Ingress Controller we want to use, so you can deploy multiple Ingress Controllers. The second annotation tells nginx to rewrite the URL: the service is called on http://mymicroservices.xpirit.nl/mywebapi, and this routes to the mywebapi .NET Core 2 Web API. If you leave out this annotation, the Web API itself is called like http://192.168.0.1/mywebapi, but you actually want it to be called like http://192.168.0.1. So the path, which is only meant for routing the traffic, is removed when the call is forwarded to your Pod.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress  
  annotations:    
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: mymicroservices.xpirit.nl
    http:
      paths:
      - path: /mywebapi
        backend:
          serviceName: myapiservice
          servicePort: 80
kubectl apply -f DeployIngress.yaml
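If the DNS record for mymicroservices.xpirit.nl does not point to the public IP address yet, you can still test the host-based rule by sending the Host header yourself (a sketch; replace <public-ip> with the external IP of the Ingress Service):

curl -v -H "Host: mymicroservices.xpirit.nl" http://<public-ip>/mywebapi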

In case you don't have a domain name for your service and just want to access your services by IP address, you can remove the host. The configuration then looks like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress  
  annotations:    
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /mywebapi
        backend:
          serviceName: myapiservice
          servicePort: 80

When you have more services, just extend the routing:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress  
  annotations:    
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /mywebapi
        backend:
          serviceName: myapiservice
          servicePort: 80
      - path: /myotherwebapi
        backend:
          serviceName: myotherapiservice
          servicePort: 80

You can now access your services via the paths you configured in the Ingress, either on the host name or on the public IP address exposed by the Ingress Service.
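A quick smoke test from outside the cluster could look like this (a sketch; replace <public-ip> with the external IP of the Ingress Service):

curl -v http://<public-ip>/mywebapi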

6. Default backend

At the end of the deployment file of the Ingress Controller a default backend is configured. We need to deploy this as well, and this Pod also needs a Service. It returns a 404 when the Ingress Controller cannot successfully route a request according to the mapping rules. Both are deployed in the kube-system namespace.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
kubectl apply -f DeployDefaultBackend.yaml
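To verify the default backend, request a path that is not configured in the Ingress; the default backend should answer it with a 404 (a sketch; replace <public-ip> with the external IP of the Ingress Service):

curl -v http://<public-ip>/doesnotexist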

In case of problems

The configuration must be exactly right. When you make a mistake, you will only see the response of the default backend Service. In case of problems, try to verify the following points:
– Check your labels and the corresponding selectors.
– Check the ports of the Pods.
– Check if you can access your service from within the Kubernetes cluster.
Begin at Pod level, then Service level and then Ingress level (a Service-level sketch follows the Pod-level example below).
Pod level as an example:
1. Get the name of any Pod:
kubectl get pods
2. Get a prompt into the selected Pod:
kubectl exec -it mypodname bash
3. Curl to the IP address of the Pod where your Web API is running:
curl -vvv -L -k https://13.81.52.80
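Service level can be checked in a similar way from within a Pod, using the internal DNS name of the Service (a sketch, assuming the Service myapiservice runs in the default namespace):
curl -v http://myapiservice.default.svc.cluster.local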

– Check the nginx.conf file. You can read it by:
1. Get the name of the Pod where the Ingress Controller is running:
kubectl get pods -n kube-system
2. Get a prompt into the Pod:
kubectl exec -it myingresscontrollerpodname -n kube-system bash
3. Open the generated configuration file of nginx:
cat /etc/nginx/nginx.conf
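You can also look at the logs of the Ingress Controller Pod; errors while processing the Ingress resources usually show up there (a sketch, reusing the Pod name from above):
kubectl logs myingresscontrollerpodname -n kube-system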

All files can be found on my GitHub account.

In a future blog post I will show you how to configure Ingress to support traffic over HTTPS using TLS.
