Knative: A Comprehensive Guide to Serverless Applications on Kubernetes

03.12.2024
Last Updated: 15.01.2025

In today’s competitive world, companies are increasingly seeking ways to optimize and reduce the costs associated with their operations in the cloud. Cloud-native technologies evolve daily, and finding efficient solutions that balance performance and affordability has become essential. By using Knative, organizations can unleash the power of Kubernetes and minimize unnecessary expenses. In this blog, we will explore what exactly Knative is, its main components, and how it is used and deployed.

Knative is an open-source platform built on Kubernetes that enables you to deploy, run, and manage serverless workloads on Kubernetes clusters. It removes the need for server management tasks, allowing developers to focus on their code without worrying about the underlying infrastructure. Knative automates a number of complex tasks around scaling, traffic routing, and event-driven execution. For DevOps engineers it can significantly reduce operational overhead, simplify CI/CD pipelines, and optimize resource utilization in the cluster through its autoscaling capability, which can scale a workload down to zero after it has been idle for a configurable period. This also makes it an excellent choice for teams looking to enhance stability, reduce costs, and accelerate application delivery.
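
For a quick taste of that developer experience, the optional kn CLI can deploy a service with a single command; the service name and sample image here are only illustrative, and the rest of this post uses plain YAML with kubectl instead:

kn service create hello --image gcr.io/knative-samples/helloworld-go   # Deploys a scale-to-zero service and prints its URL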

Knative has two main components:
  1. Knative Serving
    • Configuring and deploying services
    • Revision management and service updates: every configuration change creates a new revision of the service, while previous revisions are retained
    • Dynamic traffic routing: routes traffic between different revisions of a service, enabling deployment strategies such as blue/green or canary releases
    • Autoscaling services: scales services up to thousands of pods under load and down to zero when idle
  2. Knative Eventing
    • Responsible for triggering services and functions: enables an event-driven architecture in which services react to events
    • Uses standard HTTP POST requests to send and receive events
    • Queues and routes events through components such as the following (see the example manifest after this list):
      • Brokers – event routing and management
      • Triggers – event subscription and filtering
      • Channels – event transport mechanism
      • Subscriptions – bind message consumers to channels
    • Simplifies event handling: eliminates the need for custom scripts or additional glue solutions
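
To make the Eventing components more concrete, here is a minimal, hedged sketch of a Broker and a Trigger that forwards matching events to the sample service deployed later in this post. The broker name and event type filter are illustrative assumptions, and Knative Eventing itself must be installed separately:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default                      # Illustrative broker name
  namespace: default
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-service-trigger           # Illustrative trigger name
  namespace: default
spec:
  broker: default                    # Subscribe to the broker defined above
  filter:
    attributes:
      type: dev.example.event        # Hypothetical CloudEvent type to filter on
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service               # The Knative Service deployed later in this guide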

Knative can be used in many scenarios:

  • Serverless workloads: you can deploy an application that scales up and down automatically based on incoming traffic and demand, without any human intervention, and you pay only for what you use
  • Event-driven architectures: you can build systems that react to events such as database changes, HTTP requests, or message queue events
  • CI/CD pipeline integration: new revisions can be rolled out directly from your pipelines (see the sketch after this list)
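
As a rough illustration of the CI/CD use case, a pipeline step could build and push an image and then roll out a new revision with the kn CLI. The registry, image name, and variable below are placeholders rather than part of the original setup:

docker build -t registry.example.com/my-app:${GIT_SHA} .     # Build the application image
docker push registry.example.com/my-app:${GIT_SHA}           # Push it to your container registry
kn service update my-service --image registry.example.com/my-app:${GIT_SHA}   # Creates a new revision and routes traffic to it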

Prerequisites

  • Running Kubernetes cluster
  • kubectl configured to interact with the cluster
  • A container registry for storing images (e.g. Docker Hub, ECR, GCR, etc.)
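
Before installing anything, it is worth confirming that kubectl is pointed at the intended cluster:

kubectl config current-context   # Shows which cluster context is active
kubectl get nodes                # Confirms the cluster is reachable and its nodes are Ready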

Installation Steps

1. Install Knative Serving CRDs

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.16.0/serving-crds.yaml
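
To confirm the CRDs were registered, you can list them (the grep filter is just a convenience):

kubectl get crds | grep knative.dev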

2. Install Knative Serving Core

kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.16.0/serving-core.yaml

3. Install Kourier as Networking Layer

kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.16.0/kourier.yaml

What is Kourier and why do we use it?

Kourier is a lightweight, high-performance ingress controller designed for Knative Serving. It manages incoming HTTP requests and routes them to the appropriate Knative services.

Benefits:

  • Simplicity: Easier to set up and manage than Istio
  • Performance: High-performance routing with minimal resource consumption
  • Seamless integration: Built specifically for Knative Serving, supporting features such as autoscaling and traffic splitting

4. Configure Knative to use Kourier as the default ingress

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

5. Verify the installation

kubectl get pods -n knative-serving
kubectl get pods -n kourier-system
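
If you plan to expose services outside the cluster, you can also check the external address assigned to the Kourier gateway (on cloud providers this is typically a load balancer):

kubectl get svc kourier -n kourier-system   # The EXTERNAL-IP column shows the ingress address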

Deploying a Knative Service

1. Create a YAML file (my-service.yaml)

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service		# Replace with your desired service name
  namespace: default 		# Replace with your desired namespace
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev" 	# Use "kpa.autoscaling.knative.dev" for the Knative Pod Autoscaler (KPA) or "hpa.autoscaling.knative.dev" for the Horizontal Pod Autoscaler (HPA)
        autoscaling.knative.dev/target: "50"  	# Target average concurrent requests per pod; adjust based on workload
        autoscaling.knative.dev/minScale: "0"		# Minimum number of pods (0 allows scale-to-zero)
        autoscaling.knative.dev/maxScale: "10"	# Maximum number of pods
        autoscaling.knative.dev/scale-to-zero-pod-retention-period: "5m"	 # Time to wait before scaling to zero
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go		# Replace with your container image URL
          imagePullPolicy: Always
          ports:
            - containerPort: 8080 		# Replace with your container port if different
          # optional - resources can be specified if required
          # resources: 
          #   requests:
          #     cpu: 100m
          #     memory: 640M
          #   limits:
          #     cpu: 1
      timeoutSeconds: 120		# Request timeout in seconds

2. Apply the YAML file

kubectl apply -f my-service.yaml

3. Check if the service is available and ready

kubectl get ksvc my-service
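
You can also list the revisions Knative has created for the service; after the first deployment there should be exactly one:

kubectl get revisions -l serving.knative.dev/service=my-service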

Testing the Autoscaling

To verify what we deployed in the steps above, we will deploy a curl pod in the Kubernetes cluster, which we can use to manually send traffic to the Knative service and watch it scale up and down. Please follow the steps below:

Create a YAML file for the curl-pod

apiVersion: v1
kind: Pod
metadata:
  name: curl-pod
  namespace: default
spec:
  containers:
  - name: curl-container
    image: curlimages/curl:latest
    command: ["/bin/sh"]
    args: ["-c", "sleep infinity"]
  restartPolicy: Never

Once the curl-pod is up and running, shell into it (kubectl exec -it curl-pod -- /bin/sh) and execute the command below. It will generate concurrent traffic to the service so we can see whether it actually scales up (you may need to lower the ‘autoscaling.knative.dev/target’ value in the my-service.yaml file to see more pods):

for i in {1..100}; do curl -s http://my-service.default.svc.cluster.local & done
wait

This command should trigger the Knative service and scale it up. Knative will keep the pods running for as long as the scale-to-zero retention annotation in the YAML file specifies (5 minutes in this example) before scaling back down to zero.

Monitor the pods created by Knative

kubectl get pods -l serving.knative.dev/service=my-service
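
Once you deploy a second revision (for example by changing the image or an environment variable), the blue/green routing mentioned earlier comes down to a traffic block in the Service spec. The sketch below is illustrative: the TARGET value is just a way to force a new revision of the helloworld-go sample, and the revision name should be replaced with the one reported by kubectl get revisions:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # Same sample image as before
          env:
            - name: TARGET
              value: "green"                            # Changing an env var is enough to create a new revision
  traffic:
    - revisionName: my-service-00001                    # Illustrative name of the existing ("blue") revision
      percent: 90
    - latestRevision: true                              # The newly created ("green") revision
      percent: 10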

Knative revolutionizes serverless applications by seamlessly integrating with Kubernetes to automate deployment, scaling, and event handling. By adopting Knative, organizations can not only reduce operational costs but also enhance the stability and efficiency of their workloads. Its ability to scale resources dynamically, down to zero during idle times, ensures optimal resource utilization. Moreover, the integration with CI/CD pipelines and event-driven architecture simplifies the development process for DevOps teams, enabling faster and more reliable application delivery.
