ksonnet - Tutorial
STREAMLINE HOW YOU WRITE AND DEPLOY KUBERNETES CONFIGURATIONS
Learn how to use ksonnet
Throughout this tutorial, you’ll see italicized text next to an expander icon [+]. You can click the [+] to get more context about the topic in question.
Overview
This tutorial assumes no prior knowledge of ksonnet. You don’t need expertise with Kubernetes, but it will be helpful to have seen the command `kubectl apply`, which is used to deploy applications onto Kubernetes clusters.
What we’ll build
In this tutorial, we’ll walk through the steps of using ksonnet to configure and run a basic web app on your cluster. This app is based on the classic Kubernetes guestbook example, a form for submitting and searching through simple messages. When deployed, your guestbook will look like the following:
Along the way, you’ll see the most common ksonnet workflows in action, learn about best practices, and understand how ksonnet concepts tie together to streamline the process of writing Kubernetes manifests.
Additional context
If you have any of the following questions, click the corresponding [+] to learn more:
- What do you mean by a manifest? [+]
- Why not YAML or JSON manifests? [+]
- What is Jsonnet, and why does ksonnet use it for manifests? [+]
- What sort of tool is ksonnet? [+]
- How is this tutorial different from the “Tour of ksonnet”? [+]
If you have outstanding questions that remain unanswered by the end of this tutorial, help us improve by raising a documentation issue.
Now, let’s get started!
0. Prerequisites
Before we begin, ensure that:
- You have ksonnet installed locally. If not, follow the install instructions.
- You have access to an up-and-running Kubernetes cluster. Supported Kubernetes versions are 1.7 (stable) and 1.8 (beta). If you do not have a cluster, choose a setup solution from the official Kubernetes docs.
- You should have `kubectl` installed. If not, follow the instructions for installing via Homebrew (macOS) or building the binary (Linux).
- Your environment variable `$KUBECONFIG` should specify a valid kubeconfig file, which points at the cluster you want to use for this demonstration. [+]
- Your cluster should have `kube-dns` running, which the application you’ll be building depends on. [+]
1. Initialize your app
In this section, we’ll be using the ksonnet CLI to set up your application.
Define “application”
First off, what exactly do we mean by a ksonnet application? Think of an application as a well-structured directory of Kubernetes manifests, which typically tie together in some way.
In this case, our app manifests collectively define the following architecture:
Our UI, datastore, search service, and logging stack are each going to be defined by a separate manifest. Note that this tutorial only covers the UI and datastore for your app. A future tutorial will address the search service and logging stack.
(Does a ksonnet application have to be some sort of web app?) [+]
Commands
Now let’s run some commands:
- First, create a “sandbox” namespace on your Kubernetes cluster that we can use for this tutorial. It may look like a lot of commands, but don’t worry! They’re meant to be copied and pasted, and are as cluster-agnostic as possible. (Why are we doing this?) [+]

```shell
kubectl create namespace ks-dev
CURRENT_CONTEXT=$(kubectl config current-context)
CURRENT_CLUSTER=$(kubectl config get-contexts $CURRENT_CONTEXT | tail -1 | awk '{print $3}')
CURRENT_USER=$(kubectl config get-contexts $CURRENT_CONTEXT | tail -1 | awk '{print $4}')
kubectl config set-context ks-dev \
  --namespace ks-dev \
  --cluster $CURRENT_CLUSTER \
  --user $CURRENT_USER
```
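If the `awk` pipelines above look opaque: they simply pull the CLUSTER and AUTHINFO (user) columns out of the table that `kubectl config get-contexts` prints. Here is a minimal sketch of that parsing, run against hypothetical output (the `minikube` values are made up for illustration, so no cluster is needed):

```shell
# Hypothetical output of `kubectl config get-contexts <context>`:
# a header row plus one data row, with made-up values
sample='CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube   default'

# `tail -1` keeps only the data row; awk then picks the CLUSTER (3rd)
# and AUTHINFO/user (4th) whitespace-separated columns
echo "$sample" | tail -1 | awk '{print $3}'   # the cluster name
echo "$sample" | tail -1 | awk '{print $4}'   # the user name
```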
- Initialize your app, using the `ks-dev` context that we created in step (1). If you are running Kubernetes 1.8, you will also need to append `--api-spec=version:v1.8.0` to the end of the following command:

```shell
ks init guestbook --context ks-dev
```

(What’s happening here?) [+]
- See your results:

```shell
cd guestbook
```

(What’s inside?) [+]
- Check your ksonnet app into version control:

```shell
git init
git add .
git commit -m "initialize guestbook app"
```

(Why is this neat?) [+]
Key takeaways
The structure of a ksonnet app is very important. Not only is it more modular than the standard “pile of YAML”, it is responsible for the ksonnet magic. In other words, this structure allows the ksonnet CLI to make assumptions about the app and thereby automate certain workflows.
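For reference, the layout that `ks init` produces looks roughly like the following (the exact files can vary by ksonnet version, so treat this as a sketch rather than a verbatim listing):

```
guestbook/
├── app.yaml            # app metadata (name, registries, environments)
├── components/         # one Jsonnet manifest per component, plus params.libsonnet
├── environments/       # per-environment overrides (default/, later dev/ and prod/)
├── lib/                # helper libraries generated for your Kubernetes version
└── vendor/             # downloaded packages, e.g. incubator/redis
```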
2. Generate and deploy an app component
Now that we have a working directory for your app, let’s start adding manifests that we can deploy! These manifests define the following components of your app:
- A UI (AngularJS/PHP) - the webpage that your user interacts with
- A basic datastore (Redis) - where user messages are stored
This process is mostly automated by the ksonnet CLI. Any boilerplate YAML will be autogenerated, so you can avoid all that copying and pasting.
Define “component”
We’ve alluded to this a bit before, but any set of discrete components can be combined to make a ksonnet app:
(Can you please be more precise than that?) [+]
To iteratively add new components, we’ll use the following command pattern:
- `ks generate` - Generate the manifest for a particular component
- `ks apply` - Apply all available manifests to your cluster
Commands (UI component)
First we’ll begin with the Guestbook UI. Its manifest will declare two Kubernetes API resources:
- A Deployment to run
- A Service to expose it to external users’ requests.
The container image itself is written with PHP and AngularJS.
To set up the Guestbook UI component:
- First generate the manifest that describes the Guestbook UI:
```shell
ks generate deployed-service guestbook-ui \
  --image gcr.io/heptio-images/ks-guestbook-demo:0.1 \
  --type ClusterIP
```

(I have a lot of questions about what just happened.) [+]
- View the YAML equivalent:

```shell
ks show default
```

(So my actual manifest file is `*.jsonnet`, not `*.yaml`?) [+]

- Now deploy the UI onto your cluster:

```shell
ks apply default
```

Note that `default` refers to the `ks-dev` context (and implicit namespace) that we used during `ks init`.

(How is this different from `kubectl apply`?) [+]

- Take a look at the live Guestbook app. Again, don’t worry about these commands! They expose the Guestbook service so you can access it from your browser. They are as cluster-agnostic as possible, so you can copy and paste.

Note that you won’t be able to submit messages yet! Because we haven’t yet deployed the Redis component, clicking the buttons in your Guestbook UI will fail.

```shell
# Set up an API proxy so that you can access the guestbook-ui service locally
kubectl proxy > /dev/null &
KC_PROXY_PID=$!
SERVICE_PREFIX=http://localhost:8001/api/v1/proxy
GUESTBOOK_URL=$SERVICE_PREFIX/namespaces/ks-dev/services/guestbook-ui

# Check out the guestbook app in your browser
open $GUESTBOOK_URL
```
- Version control these changes:

```shell
git add .
git commit -m "autogenerate ui component"
```
Takeaways
How do we know what components are available for us to generate, and furthermore, how are they generated?
Components are based on common manifest patterns, called prototypes because they make it easy to prototype a new component on your cluster with minimal effort. You just saw the `deployed-service` prototype, which comes with ksonnet out of the box.
If you review what we’ve just done, we only really needed steps (1) `ks generate` and (3) `ks apply` to get the Guestbook UI up and running on your cluster. Not bad! But we can do even better. You might be familiar with existing `kubectl` commands like `run` and `expose` that seem pretty similar. When we deploy a prototype that is more specialized than a Service and Deployment combo (Redis!), the advantages of the ksonnet commands will make more sense.
3. Understand how prototypes build components
Define “prototype”
Before we figure out how to get Redis working, let’s take a moment to formalize our understanding of prototypes. In addition to general combinations of Kubernetes API objects like `deployed-service`, prototypes can also define common off-the-shelf components like databases.

We’ll actually be using the `redis-stateless` prototype next, which sets up a basic Redis instance (stateless because it is not backed by persistent volumes). More complex prototypes like this do not come out of the box and need to be downloaded; in this section, we’ll show you how.

By itself, a prototype is an incomplete, skeleton manifest, written in Jsonnet. During `ks generate`, you can specify certain command-line parameters to “fill in the blanks” of a prototype and output a component:
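To make this more concrete, here is a sketch of where those parameters end up. The file name follows ksonnet’s conventions, but the exact fields shown are illustrative rather than a verbatim dump:

```jsonnet
// components/params.libsonnet (illustrative): ks generate records the
// flags you passed as fields under the component's name; the component's
// Jsonnet manifest then reads these values to fill in its skeleton
{
  global: {},
  components: {
    "guestbook-ui": {
      image: "gcr.io/heptio-images/ks-guestbook-demo:0.1",
      name: "guestbook-ui",
      replicas: 1,
      servicePort: 80,
      type: "ClusterIP",
    },
  },
}
```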
(Why is this useful?) [+]
You’ll see this process in action a few more times, as we set up the rest of the Guestbook app.
Commands (Datastore component)
Now let’s use the `redis-stateless` prototype to generate the datastore component of our app, as depicted below:

We’ll need to do a little extra package management first, since the `redis-stateless` prototype is not available by default.
- Start by seeing what prototypes we have available out of the box:

```shell
ks prototype list
```

- See what packages are currently available for us to download:

```shell
ks pkg list
```

(Where do these packages come from?) [+]
- Download a specific version of the ksonnet Redis library (which contains definitions for various Redis prototypes):

```shell
ks pkg install incubator/redis@master
```
- Check the updated list of packages and prototypes (you should see `redis` and `redis-stateless`):

```shell
ks pkg list
ks prototype list
```
- Figure out the parameters we need for this prototype:

```shell
ks prototype describe redis-stateless
```

- At this point, we’re ready to generate the manifest for our Redis component:

```shell
ks generate redis-stateless redis
```

(This is familiar, right?) [+]

- View the YAML equivalent (we’re still in our `default` “sandbox”):

```shell
ks show default
```

- Now deploy Redis to our cluster:

```shell
ks apply default
```

(Is there a way to see what will happen first, without actually changing our cluster?) [+]

- Let’s check out the Guestbook page again:

```shell
open $GUESTBOOK_URL
```
Enter something into the main textbox (it should say “Messages” in grayed out text), and click the Submit button. Unlike before, you should now see it appear below. This should look something like the following:
- Version control these changes:

```shell
git add .
git commit -m "autogenerate redis component"
```
Awesome, we have the main functionality of the Guestbook working!
(Hm, but how does the Guestbook UI know how to talk to the Redis database?) [+]
Takeaways
Using `ks generate` and `ks apply`, you can use prototypes and parameters to quickly get the components of your app up and running on a Kubernetes cluster. You can use additional helper commands like `ks show` and `ks prototype describe` to supplement the process of developing your manifests.
Full disclosure: even with parameter customization, your autogenerated manifests will not always match up perfectly with what you need. However, as the ksonnet tour demonstrates, you can leverage the flexibility of the Jsonnet language to tweak them accordingly.
(Great, but how do I keep track of all of these `ks` commands?) [+]
4. Set up another environment for your app
At this point, we have the basics of our Guestbook app working. Users are able to submit messages via the UI, and these are persisted in our Redis datastore.
We aren’t covering fancier features in this tutorial (like search or logging), but we are going to show how you can use the same set of component manifests in your ksonnet application to deploy to multiple environments. In practice, you might imagine developing your manifests in a dev environment and vetting the results before promoting them to an official prod environment.
Define “environment”
Below is a visualization of two environments that represent different namespaces on the same cluster:
More formally, you can think of an environment as a combination of four elements, some of which can be pulled from your current kubeconfig context:
- A name — Used to identify a specific environment, and must be unique within a given ksonnet app.
- A server URI — The address and port of a Kubernetes API server. In other words, it identifies a unique cluster.
- A namespace — A specific namespace within the cluster specified by the server URI. The default is `default`.
- A Kubernetes API version — The version of Kubernetes that your API server is running. Used to generate the appropriate helper libraries from Kubernetes’s OpenAPI spec.
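In ksonnet releases of this vintage, these four elements are recorded on disk under `environments/<name>/` when you run `ks env add`. As a rough, illustrative sketch (the exact file and field names may differ by version, and the server URL here is made up):

```json
{
  "server": "https://my-cluster.example.com",
  "namespace": "ks-dev"
}
```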
We’re going to set up something very similar to the diagram above (e.g. two environments on the same cluster), in order to mock the process of release management.
Commands
- Create a new namespace and context, both named `ks-prod`, for your second environment:

```shell
kubectl create namespace ks-prod
kubectl config set-context ks-prod \
  --namespace ks-prod \
  --cluster $CURRENT_CLUSTER \
  --user $CURRENT_USER
```
- Add the prod environment under the name `prod`, and rename the existing `default` environment to `dev` for clarity:

```shell
ks env list
ks env add prod --context=ks-prod
ks env set default --name dev
ks env list
```
- Apply all existing manifests (Guestbook UI and Redis) to your `prod` environment:

```shell
ks apply prod
```
- Now you have a parallel version of Guestbook running in `prod` (same cluster, `ks-prod` namespace):

```shell
PROD_GUESTBOOK_URL=$SERVICE_PREFIX/namespaces/ks-prod/services/guestbook-ui
open $PROD_GUESTBOOK_URL
```
- Check your changes into version control:

```shell
git add .
git commit -m "add prod env"
```
(Do we need version control if we didn’t add any new components?) [+]
Takeaways
Environments allow you to deploy a common set of manifests to multiple destinations. If you’re wondering why you might do this, here are some potential use cases:
- Release Management (dev vs test vs prod)
- Multi-AZ (us-west-2 vs us-east-1)
- Multi-cloud (AWS vs GCP vs Azure)
Environments are represented hierarchically, so if you’re dealing with many environments, you can nest them as `us-west-2/dev` and `us-east-1/prod`. As you’ll see next, this lets the parameters of any specific environment override those of its base/parent environments in an intuitive way.
5. Customize an environment with parameters
Alright, so it’s great to be able to apply the same manifests to multiple environments—but oftentimes the whole point of distinct environments is slightly different configurations.
It’s a bit restrictive and unrealistic if our prod Guestbook has to run in exactly the same way as our dev Guestbook, so let’s start customizing our environments with parameters. Up until this point, we’ve been setting parameters during `ks generate`, when we pass in command-line flags to customize a new component. Here we’ll show how you can change these parameters after the fact, for specific environments.
Define “parameters”
As we’ve alluded to, parameters can be set for the entire app or per-environment. In this tutorial, all the parameters you’ll see are specific to a component. A future tutorial will address the idea of global parameters, which can be shared across multiple components.
Under the hood, the `ks param` commands update a couple of local Jsonnet files, so that you always have a version-controllable representation of what you `ks apply` onto your Kubernetes cluster.
(What does this look like?) [+]
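As a sketch of what those files contain (illustrative, not a verbatim dump): each environment’s `params.libsonnet` imports the app-wide component params and layers its own overrides on top, using Jsonnet’s `+:` merge operator so untouched fields are inherited.

```jsonnet
// environments/dev/params.libsonnet (illustrative): after a command like
// `ks param set guestbook-ui image <new-image> --env dev`, the dev
// environment overrides just the `image` field; everything else is
// inherited from components/params.libsonnet
local params = import "../../components/params.libsonnet";

params + {
  components+: {
    "guestbook-ui"+: {
      image: "gcr.io/heptio-images/ks-guestbook-demo:0.2",
    },
  },
}
```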
Commands
- First let’s see the difference between our environments’ parameters (there should be none):

```shell
ks param diff dev prod
```
- Now let’s set some environment-specific params:

```shell
ks param set guestbook-ui image gcr.io/heptio-images/ks-guestbook-demo:0.2 --env dev
ks param set guestbook-ui replicas 3 --env prod
```
(What’s the story here?) [+]
- Now let’s see if `ks param diff` surfaces any differences:

```shell
ks param diff dev prod
```
Notice that the params we’ve changed have been highlighted!
- Alright, now let’s deploy to our two environments (remember, same cluster):
```shell
ks apply dev && ks apply prod
```
- Let’s check the difference between what’s actually running on `dev` and `prod`:

```shell
ks diff remote:dev remote:prod
```
(What does the output mean?) [+]
(What’s the syntax?) [+]
- Compare the two guestbook UIs (the one for `dev` should look pretty different!):

```shell
# Check out the dev guestbook
open $GUESTBOOK_URL

# Make sure that the changes didn't affect prod
open $PROD_GUESTBOOK_URL
```
- Once again, check your files into version control:

```shell
git add .
git commit -m "update guestbook-ui parameters"
```
Takeaways
With the added power of parameters, environments allow you to do more than run identical copies of your app in different clusters and namespaces. Using parameters, you can fine-tune your deployment to the needs of each environment, whether that means different load requirements or just more accurate labels.
6. Tie it together
Congrats! You’ve just developed and deployed the main components of the Guestbook using ksonnet, and you now have a sustainable set of manifests that you can continue to use if you decide to add more functionality later on.
We realize that we’ve gone over a lot, so the following diagram provides a quick overview of the key ksonnet concepts you’ve used:
In plain English:
- Prototypes and parameters can combine to form components.
- Multiple components make up an app.
- An app can be deployed to multiple environments.
Cleanup
If you’d like to remove the Guestbook app and other residual traces from your cluster, run the following commands in the root of your Guestbook app directory:
```shell
# Remove your app from your cluster (everything defined in components/)
ks delete dev && ks delete prod

# If you used 'kubectl proxy' to connect to your Guestbook service, make sure
# to end that process
kill $KC_PROXY_PID

# Remove the "sandbox" namespaces and contexts
kubectl delete namespace ks-dev ks-prod
kubectl config delete-context ks-dev && kubectl config delete-context ks-prod
```
Next steps
We’ve only just skimmed the surface of the ksonnet framework, and much of what you’ve seen has been focused on the CLI. To learn more, check out the following resources:
- Tour of ksonnet
  - Learn more about how some CLI commands are implemented
  - Experiment directly with Jsonnet
- CLI Reference and Core Concepts
  - See the full list of commands and concepts
(What about the rest of the Guestbook (e.g. search)?) [+]
Troubleshooting
ERROR user: Current not implemented on linux/amd64
If you encounter this error when running the ksonnet Linux binary, you can temporarily work around it by setting the `USER` environment variable (e.g. `export USER=your-username`).
This error results from cross-compilation (building the Linux binary on a Mac). To avoid it, future binaries will be built on the appropriate target machines.
GitHub rate-limiting errors
If you get an error to the effect of `403 API rate limit of 60 still exceeded`, you can work around it by creating a GitHub personal access token and setting it up so that `ks` can use it. GitHub has higher rate limits for authenticated users than for unauthenticated users.
- Go to https://github.com/settings/tokens and generate a new token. You don’t have to give it any access at all as you are simply authenticating.
- Make sure you save that token someplace, because you can’t see it again. If you lose it, you’ll have to delete it and create a new one.
- Set an environment variable in your shell: `export GITHUB_TOKEN=<token>`. You may want to do this as part of your shell startup scripts (e.g. `.profile`).