[Serverless Knative] Knative - Getting Started
Knative
Knative is an enterprise-grade, Kubernetes-based platform for deploying and managing modern serverless workloads: serverless on your own terms.
Istio simplifies observability, traffic management, security, and policy with a leading service mesh. Istio addresses the challenges developers and operators face with a distributed or microservices architecture; whether you’re building from scratch or migrating existing applications to cloud native, Istio can help. This guide lets you quickly evaluate Istio. If you are already familiar with Istio, or are interested in other configuration profiles or advanced deployment models, refer to the “Which Istio installation method should I use?” FAQ page.
Knative Quickstart Environments are for experimentation use only. For production installation, see our Installing Guide - https://knative.dev/docs/install/
Before you can get started with a Knative Quickstart deployment, you must install kind, the Kubernetes CLI kubectl, and the Knative CLI kn.
In this tutorial, you will deploy a “Hello world” service. This service will accept an environment variable, TARGET, and print “Hello ${TARGET}!”.
Since our “Hello world” Service is being deployed as a Knative Service, not a Kubernetes Service, it gets some super powers out of the box 🚀.
The last super power 🚀 of Knative Serving we’ll go over in this tutorial is traffic splitting.
Splitting traffic is useful for a number of common modern infrastructure needs, such as blue/green deployments - https://martinfowler.com/bliki/BlueGreenDeployment.html and canary deployments - https://martinfowler.com/bliki/CanaryRelease.html. Bringing these industry standards to Kubernetes is as simple as a single CLI command or a small YAML tweak in Knative. Let’s see how!
You may have noticed that when you created your Knative Service you assigned it a revision-name, “world”. If you used kn, Knative returned both a URL and a “latest revision” for your Knative Service when it was created. But what happens if you make a change to your Service?
You can think of a Revision - https://knative.dev/docs/serving/#serving-resources as a stateless, autoscaling, snapshot-in-time of application code and configuration.
A new Revision will get created each and every time you make changes to your Knative Service, whether you assign it a name or not. When splitting traffic, Knative splits traffic between different Revisions of your Knative Service.
Instead of TARGET="World", update the environment variable TARGET on your Knative Service hello to greet “Knative” instead. Name this new revision hello-knative:
$ kn service update hello \
--env TARGET=Knative \
--revision-name=knative
As before, kn prints out some helpful information to the CLI.
Expected output:
Service 'hello' updated to latest revision 'hello-knative' is available at URL:
http://hello.default.127.0.0.1.nip.io
# hello.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      # Name of the new Revision; keep the rest of your existing hello.yaml unchanged
      name: hello-knative
    spec:
      containers:
        - image: <your-image>  # unchanged from your existing hello.yaml
          env:
            - name: TARGET
              value: "Knative"
Once you’ve edited your existing YAML file:
$ kubectl apply -f hello.yaml
Expected output:
service.serving.knative.dev/hello configured
Note: since we are updating an existing Knative Service hello, the URL doesn’t change, but our new Revision should have the new name hello-knative.

Let’s access our Knative Service again in the browser at http://hello.default.127.0.0.1.nip.io to see the change, or use curl in your terminal:
$ curl http://hello.default.127.0.0.1.nip.io
Expected output:
Hello Knative!
You may at this point be wondering, “where did ‘Hello World!’ go?” Remember, Revisions are a stateless snapshot-in-time of application code and configuration, so your “hello-world” Revision is still available to you.
We can easily see a list of our existing Revisions with the kn CLI:

$ kn revision list
Expected output:

NAME            SERVICE   TRAFFIC   TAGS   GENERATION   AGE   CONDITIONS   READY   REASON
hello-knative   hello     100%             2            ...   3 OK / 4     True
hello-world     hello                      1            ...   3 OK / 4     True
Though the following example doesn’t cover it, you can also peek under the hood at Kubernetes to see the Revisions as Kubernetes sees them:
$ kubectl get revisions
Expected output:
NAME   SERVICE   TRAFFIC   TAGS   GENERATION   AGE   CONDITIONS   READY   REASON
The column most relevant for our purposes is TRAFFIC. It looks like 100% of traffic is going to our latest Revision (“hello-knative”) and 0% of traffic is going to the Revision we configured earlier (“hello-world”).
When you create a new Revision of a Knative Service, Knative defaults to directing 100% of traffic to this latest Revision. We can change this default behavior by specifying how much traffic we want each of our Revisions to receive.
Let’s split traffic between our two Revisions:
Info: @latest will always point to our “latest” Revision which, at the moment, is hello-knative.
$ kn service update hello \
--traffic hello-world=50 \
--traffic @latest=50
Alternatively, to do the same with YAML, add the following traffic section to the bottom of the spec in your existing YAML file:
# hello.yaml (appended under spec:)
traffic:
  - latestRevision: true
    percent: 50
  - revisionName: hello-world
    percent: 50
Once you’ve edited your existing YAML file:
$ kubectl apply -f hello.yaml
Verify the traffic split is configured correctly by listing the Revisions again:
$ kn revision list
Though the following example doesn’t cover it, you can also peek under the hood at Kubernetes to see the Revisions as Kubernetes sees them:
$ kubectl get revisions
Expected output:
NAME            SERVICE   TRAFFIC   TAGS   GENERATION   AGE   CONDITIONS   READY   REASON
hello-knative   hello     50%              2            10m   3 OK / 4     True
hello-world     hello     50%              1            36m   3 OK / 4     True
Access your Knative Service in the browser again at http://hello.default.127.0.0.1.nip.io and refresh multiple times to see the different output served by each Revision. Similarly, you can curl the Service URL multiple times to see the traffic being split between the Revisions:
$ curl http://hello.default.127.0.0.1.nip.io

Expected output (alternating between the two Revisions):

Hello Knative!
Hello World!
Congratulations, 🎉 you’ve successfully split traffic between 2 different Revisions of a Knative Service. Up next, Knative Eventing!
You won’t need the hello Service in the Knative Eventing tutorial, so it’s best to clean up before you move forward:
$ kn service delete hello

Or, if you used the YAML file:

$ kubectl delete -f hello.yaml
[1] Traffic Splitting - Knative - https://knative.dev/docs/getting-started/first-traffic-split/
[2] Home - Knative - https://knative.dev/docs/
[3] Installing Guide - https://knative.dev/docs/install/
[4] blue/green deployments - https://martinfowler.com/bliki/BlueGreenDeployment.html
[5] canary deployments - https://martinfowler.com/bliki/CanaryRelease.html
[6] Revision - https://knative.dev/docs/serving/#serving-resources
Remember those super powers 🚀 we talked about? One of Knative Serving’s powers is built-in automatic scaling (autoscaling). This means your Knative Service only spins up your application to perform its job – in this case, saying “Hello world!” – if it is needed; otherwise, it will “scale to zero” by spinning down and waiting for a new request to come in.
Knative Serving provides automatic scaling, or autoscaling, for applications to match incoming demand. This is provided by default using the Knative Pod Autoscaler (KPA). The following settings are specific to the KPA.
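As a sketch of what such settings look like, KPA behavior can be tuned with Knative’s autoscaling annotations on the Revision template; the specific values and the image reference below are illustrative assumptions, not taken from this document:

```yaml
# Sketch: tuning the Knative Pod Autoscaler via Revision annotations.
# Values shown are illustrative only.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        # Use the KPA (the default autoscaler class)
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        # Target 10 concurrent requests per replica
        autoscaling.knative.dev/target: "10"
        # min-scale "0" permits scale-to-zero; max-scale caps replicas
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "5"
    spec:
      containers:
        - image: <your-image>  # placeholder, not from the original document
```

With min-scale left at 0, the Service spins down to zero replicas when idle and spins back up on the next request.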
Knative Services are used to deploy an application. To create an application using Knative, you must create a YAML file that defines a Service. This YAML file specifies metadata about the application, points to the hosted image of the app, and allows the Service to be configured.
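For example, a minimal Service definition might look like the following; the image reference and port are placeholders for illustration, not details from this document:

```yaml
# hello.yaml: a minimal Knative Service sketch.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                 # metadata about the application
spec:
  template:                   # template used for each new Revision
    spec:
      containers:
        - image: <registry>/<your-app-image>  # points to the hosted image of the app
          ports:
            - containerPort: 8080             # port your app listens on
          env:
            - name: TARGET                    # Service configuration passed to the app
              value: "World"
```

Applying this file with kubectl apply -f hello.yaml creates the Service and its first Revision.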
Knative Serving provides components that enable:
Rapid deployment of serverless containers.
Autoscaling, including scaling pods down to zero.
Support for multiple networking layers, such as Ambassador, Contour, Kourier, Gloo, and Istio, for integration into existing environments.
Point-in-time snapshots of deployed code and configurations.