Kubernetes container orchestration

Kubernetes is one of the strongest contenders among Docker orchestration frameworks, especially since version 1.2. If you're looking for a way to deploy Docker containers to any of your environments, Kubernetes gives you at least seven reasons to choose it.

Deployments

In K8S 1.1, Deployments were an alpha feature and disabled in the default settings. In 1.2, when you start a new cluster, the Deployments feature is enabled by default and has graduated to beta, which is considered stable and operational.

Deploying applications in K8S 1.1 was a bit tedious. I won't go into the details here, but the main pain points were:

  1. You had to generate a unique value for each deployment yourself and put it in the Replication-Controller definition file.

  2. Creating a Replication-Controller for the first time and updating an existing one required different processes.

  3. Before you could roll out a new version via rolling update, you had to look up the existing Replication-Controller in the system.

Deployments are gradually replacing the Replication-Controller/rolling-update procedures. Deployments are declarative, which is great: you don't have to tell the cluster what to do, you just declare the state you want, and the cluster schedules whatever is needed to reach that desired state. You don't need to calculate a unique value yourself, and you no longer need to look up the existing configuration when you want to update it.
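As a sketch of what that declarative style looks like, here is a minimal Deployment definition for K8S 1.2 (the name, labels, and image are hypothetical; in 1.2 Deployments lived under the extensions/v1beta1 API group):

```yaml
# Deployments were served from extensions/v1beta1 in K8S 1.2
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 3                 # desired state: three pods of this template
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myregistry/my-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

You declare three replicas of this pod template; the Deployment controller takes care of creating, replacing, and rolling pods to match it.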

The official introductory guide uses kubectl create to create Deployments and kubectl apply to update them. But in my experience, you can use kubectl apply for both cases, which means you no longer need different procedures for creating and updating.
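The workflow then collapses to a single command (the file name is hypothetical):

```shell
# "kubectl apply" creates the Deployment if it doesn't exist
# and updates it if it does, so one command covers both cases.
kubectl apply -f my-app-deployment.yaml

# To ship a new version: edit the image tag in the file, then re-run:
kubectl apply -f my-app-deployment.yaml
```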

A final great Deployment feature is support for rollbacks. In K8S 1.1, a rollback was done by redeploying the old Replication-Controller. In K8S 1.2, you can pass the record flag when creating a Deployment. This lets you roll the Deployment back to a previous revision whenever you want.
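A quick sketch of the rollback workflow (the file and Deployment names are hypothetical):

```shell
# Create the Deployment with --record so each revision's command is kept
kubectl create -f my-app-deployment.yaml --record

# Inspect the recorded revision history
kubectl rollout history deployment/my-app

# Roll back to the previous revision
kubectl rollout undo deployment/my-app

# Or roll back to a specific recorded revision
kubectl rollout undo deployment/my-app --to-revision=2
```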

Support for multiple availability zones

Before K8S 1.2, one of the biggest drawbacks of K8S was its lack of support for spreading workloads across availability zones (AZs). This meant your cluster lived in a single AZ, and if something went wrong with that AZ, you lost your entire cluster. The only way to handle such failures was to manage multiple clusters, but the overhead of doing so was prohibitive.

K8S 1.2 brings full Multi-AZ support. You can easily spawn nodes in any AZ, and the scheduler is aware of each node's zone when scheduling your pods.
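In a multi-zone cluster, nodes carry a zone label that the scheduler uses to spread pods across zones automatically; if you need to pin a pod to one zone, you can select on that label. A minimal sketch (the pod name, image, and zone value are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned-pod        # hypothetical name
spec:
  nodeSelector:
    # Zone label applied to nodes in multi-zone clusters of this era
    failure-domain.beta.kubernetes.io/zone: us-east-1a   # example zone
  containers:
  - name: app
    image: myregistry/my-app:1.0   # hypothetical image
```

Without the nodeSelector, the scheduler simply spreads replicas across the available zones on its own.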

While this is a significant improvement, Multi-AZ support does not yet extend to the K8S master and its components. Your master still lives in a single AZ, and if that AZ goes down you end up in a strange state: the nodes keep working, but the master doesn't, which means the cluster can't handle deployments or other control-plane operations.


ConfigMaps & secrets as environment variables

K8S 1.1 already had a built-in option for storing configuration: Secrets. Secrets are still the recommended place for sensitive data, but the new ConfigMap resource lets us store non-sensitive configuration in a more direct and convenient way.

A great tweak in K8S 1.2 is that Secrets and ConfigMaps can be consumed not only as data volumes (the only option in 1.1) but also as environment variables in your definition files. That's much more convenient than mounting a volume and reading a file in your application just to fetch a simple configuration item.
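A sketch of both styles of environment-variable injection in one pod spec (the pod, ConfigMap, Secret, and key names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo               # hypothetical name
spec:
  containers:
  - name: app
    image: myregistry/my-app:1.0   # hypothetical image
    env:
    - name: LOG_LEVEL          # plain config item from a ConfigMap
      valueFrom:
        configMapKeyRef:
          name: app-config     # hypothetical ConfigMap
          key: log-level
    - name: DB_PASSWORD        # sensitive value from a Secret
      valueFrom:
        secretKeyRef:
          name: app-secrets    # hypothetical Secret
          key: db-password
```

The application just reads ordinary environment variables; no volume mount or file parsing required.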

Daemon-Sets

Having a K8S cluster sometimes makes us forget that there are still nodes in the cluster. We create containers, but most of the time we don't even know which node they run on.

Still, there are times when we need to handle node-level tasks. One example is an application that collects statistics from the node and ships them to a metrics server. Another is collecting the logs of all containers running on a node and sending them to our logging system. In these cases, we need exactly one container running on every node.

K8S 1.1 only gave us static pods for this purpose. To define a static pod, we had to place a pod definition file in a specific folder on every node. This is obviously inconvenient because:

  1. If we wanted to add or change static pods, we had to touch every node running in the cluster.

  2. Static pods are managed locally by the kubelet, so we can't query them through the API or perform any other operations on them.

K8S 1.2 introduces Daemon-Sets, which give us a much more convenient way to run one pod on every node. Pods belonging to a Daemon-Set are visible just like any other pod in the system. You can delete a Daemon-Set and create the Daemon-Sets you want through the API. No more changing files on the nodes.
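The log-collection example above maps directly onto a Daemon-Set definition. A minimal sketch for K8S 1.2 (the names and image are hypothetical; Daemon-Sets were served from extensions/v1beta1 at the time):

```yaml
# DaemonSets were served from extensions/v1beta1 in K8S 1.2
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-collector            # hypothetical name
spec:
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: log-collector
        image: myregistry/log-collector:1.0   # hypothetical image
        volumeMounts:
        - name: varlog
          mountPath: /var/log    # read the node's container logs
      volumes:
      - name: varlog
        hostPath:
          path: /var/log         # node-local log directory
```

One pod from this template runs on every node, and nodes that join the cluster later get one automatically.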

Cluster size and performance

Cluster size is an important question for any company deciding on its core infrastructure. We can never know today how large we will need to scale a year from now, but we need to be absolutely sure that the tools we choose now won't limit us later.

The newly released 1.2 version officially supports 1,000 nodes per cluster, with 30,000 pods running simultaneously.

Whether these numbers look good or bad depends on your point of view, but it is encouraging to see the pace the team is moving at: the 1.2 release is already a 10x scalability improvement over 1.1.

Looking forward to seeing even higher numbers in 1.3.

Jobs

Jobs let you run pods and require that a certain number of them complete successfully. In K8S 1.1, we could create bare pods (without a Replication-Controller), but there was no guarantee those pods would ever complete. For example, if the node running a pod rebooted mid-execution, the pod would not be restarted on another node. With a Job verifying completion, that scenario can no longer happen.
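A minimal Job definition might look like this (the names and image are hypothetical; the API group is what I'd expect on 1.2, where Jobs moved to batch/v1, while 1.1 used extensions/v1beta1):

```yaml
apiVersion: batch/v1             # batch/v1 on K8S 1.2; extensions/v1beta1 on 1.1
kind: Job
metadata:
  name: data-migration           # hypothetical name
spec:
  completions: 1                 # the Job is done after one successful pod run
  template:
    metadata:
      name: data-migration
    spec:
      containers:
      - name: migrate
        image: myregistry/migrate:1.0   # hypothetical image
      restartPolicy: OnFailure   # keep retrying/rescheduling until success
```

If the node dies mid-run, the Job controller schedules a replacement pod until the requested number of completions is reached.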

While this isn't a world-changing feature, it is definitely a useful one!

Project progress

Beyond the features and improvements described above, it's easy to see the huge progress made since the 1.1 release. Issues get responses within hours and are prioritized by the owners. Long-awaited features are on their way. More and more contributors are joining the party, helping to improve the project by submitting code, extending it, and discussing issues. This is probably one of my favorite OSS projects to work with.

 

Source: https://segmentfault.com/a/1190000005020508

 

http://www.open-open.com/lib/view/open1461806367305.html
