Original link: Podman Guide
Podman
Podman was originally part of the CRI-O project and was later split out into a separate project called libpod . Using Podman feels much like using Docker; the key difference is that Podman has no daemon. With Docker, the Docker CLI talks to the Docker Engine over a gRPC API to say "I want to start a container", and the Docker Engine then starts the container through an OCI container runtime (runc
by default). This process means the container cannot be a child process of the Docker CLI; it is a child process of the Docker Engine.
Podman is comparatively simple and direct: it has no daemon, and it starts the container through the OCI runtime (again runc
by default) itself, so the container process is a child process of podman. This is closer to the classic Linux fork/exec
model, whereas Docker uses a C/S
(client/server) model. Compared with the C/S model, the fork/exec
model has several advantages, for example:
- The system administrator can tell exactly who started a given container process.
- If you constrain podman itself with a cgroup
, then all the containers it creates are subject to the same limits.
- SD_NOTIFY : if a podman command is put into a systemd
unit file, the container process can send a notification back through podman to indicate that the service is ready to accept work.
- Socket activation : a socket
can be passed from systemd to podman and handed on to the container process for it to use.
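The fork/exec parentage described above is easy to see with plain shell and no container runtime at all: a process launched in the background is a direct child of the shell that started it, just as a podman-launched container is a child of podman/conmon rather than of a remote daemon. A minimal sketch:

```shell
#!/bin/sh
# Start a background child the way fork/exec works: the new process
# is a direct child of this shell, with no daemon in between.
sleep 30 &
child=$!

# Read the child's parent PID back from ps and compare it to our own.
ppid=$(ps -o ppid= -p "$child" | tr -d ' ')
echo "shell pid: $$"
echo "child ppid: $ppid"
kill "$child"
```

Under Docker's client/server model, the same check on a container process would show the daemon, not your shell, as the parent.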
Enough background; let's get hands-on. This article shows you how to deploy a static blog with podman and, using the Sidecar pattern, add an Envoy
container alongside the blog to join it to a mesh.
1. Solution Architecture
My deployment involves two Envoys:
First, a front-end proxy running as a standalone container. The proxy's job is to give visitors a single entry point and forward requests from outside to the appropriate back-end service.
Second, the blog pages are served by a static nginx, with an Envoy
container running alongside it in Sidecar mode, sharing nginx's network namespace
. All the Envoys form a mesh and share routing information among themselves.
I previously wrote an article about deploying a hugo static blog with Docker
and configuring HTTPS
certificates; here we take the same approach, simply replacing docker with podman. For details, see Envoy TLS authentication in practice.
2. Deploy hugo and the sidecar proxy
My blog consists of static pages generated by hugo and served by nginx
; if you use a similar static-site tool (hexo, etc.), the same approach applies. What I need to do now is have the nginx container and the envoy container share the same network namespace, while also letting the front-end proxy discover the service by its domain name. This was very simple with docker: docker-compose handled everything. Podman is more troublesome: docker-compose
cannot be used, and service discovery does not seem to work out of the box.
I eventually found a project on GitHub called podman-compose and thought I was saved, but a quick trial proved otherwise: when podman-compose creates a container, it translates the field network_mode: "service:hugo"
literally into the podman CLI parameter --network service:hugo
(truly brain-dead), so container creation fails with the error CNI network "service:hugo" not found
. Changing the field value to network_mode: "container:hugo_hugo_1"
does start successfully, but that raises another question: podman-compose's practice is to create a pod
for each docker-compose.yml (the pod
named after the directory where docker-compose.yml lives), then add containers into that pod. Surely I shouldn't cram the front-end proxy and the back-end service into a single pod? So the only option was to create two directories, one for the front-end proxy and one for hugo, each with its own docker-compose.yml. With that settled, the next problem appeared: podman-compose does not support service discovery by service name. After digging around I found it supports links
(which in fact just adds an --add-host
parameter), but links only take effect within the same pod, and I had split things across two pods, so links were out of reach and useless here. What could I do? The only way left was to do it by hand on the command line.
I just mentioned a new term, pod
, so let me spend thirty seconds introducing it. If you are a heavy Kubernetes
user, the word will be familiar, and a podman pod means essentially the same thing: first a pause
container is created, then the service containers are created, and the service containers share the pause
container's various Linux namespaces, so the containers within a pod can easily communicate with each other over localhost. Beyond that, a podman pod can also be exported as a declarative Kubernetes resource definition. For example:
Create a pod:
$ podman pod create --name hugo
View pod:
$ podman pod ls
POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID
88226423c4d2 hugo Running 2 minutes ago 2 7e030ef2e7ca
Start a container inside the hugo pod:
$ podman run -d --pod hugo nginx:alpine
View container:
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c91cab1e99d docker.io/library/nginx:alpine nginx -g daemon o... 3 minutes ago Up 3 minutes ago reverent_kirch
View all containers, including the pause container:
$ podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3c91cab1e99d docker.io/library/nginx:alpine nginx -g daemon o... 4 minutes ago Up 4 minutes ago reverent_kirch
7e030ef2e7ca k8s.gcr.io/pause:3.1 6 minutes ago Up 6 minutes ago 88226423c4d2-infra
View all containers, including the pause container, and show the pod ID each container belongs to:
$ podman ps -ap
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD
3c91cab1e99d docker.io/library/nginx:alpine nginx -g daemon o... 4 minutes ago Up 4 minutes ago reverent_kirch 88226423c4d2
7e030ef2e7ca k8s.gcr.io/pause:3.1 6 minutes ago Up 6 minutes ago 88226423c4d2-infra 88226423c4d2
View the resource usage of the processes in the pod:
$ podman pod top hugo
USER PID PPID %CPU ELAPSED TTY TIME COMMAND
root 1 0 0.000 8m5.045493912s ? 0s nginx: master process nginx -g daemon off;
nginx 6 1 0.000 8m5.045600833s ? 0s nginx: worker process
nginx 7 1 0.000 8m5.045638877s ? 0s nginx: worker process
0 1 0 0.000 9m41.051039367s ? 0s /pause
Export the pod as a declarative deployment manifest:
$ podman generate kube hugo > hugo.yaml
View the contents of the manifest:
$ cat hugo.yaml
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.0.2-dev
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-10-17T04:17:40Z
  labels:
    app: hugo
  name: hugo
spec:
  containers:
  - command:
    - nginx
    - -g
    - daemon off;
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    - name: NGINX_VERSION
      value: 1.17.4
    - name: NJS_VERSION
      value: 0.3.5
    - name: PKG_RELEASE
      value: "1"
    image: docker.io/library/nginx:alpine
    name: reverentkirch
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    workingDir: /
status: {}
Look familiar? This is a Kubernetes-compatible pod definition: you can deploy it directly into a Kubernetes cluster with kubectl apply -f hugo.yaml
, or deploy it directly with podman. The podman steps go roughly like this:
Delete the pod created earlier:
$ podman pod rm -f hugo
Then create the pod from the deployment manifest:
$ podman play kube hugo.yaml
Back to the earlier problem: even with pods, a declarative definition still cannot solve service discovery, unless I switch to a CNI
plugin that supports static IPs. But the CNI plugins that support static IPs need etcd as a backing store, and my machine is short on resources; I did not want to add an etcd on top of everything, so doing it by hand on the command line it is.
First, create the hugo container with a specified IP:
$ podman run -d --name hugo \
--ip=10.88.0.10 \
-v /opt/hugo/public:/usr/share/nginx/html \
-v /etc/localtime:/etc/localtime \
nginx:alpine
Then create the envoy container, sharing the hugo container's network namespace:
$ podman run -d --name hugo-envoy \
-v /opt/hugo/service-envoy.yaml:/etc/envoy/envoy.yaml \
-v /etc/localtime:/etc/localtime \
--net=container:hugo envoyproxy/envoy-alpine:latest
service-envoy.yaml reads as follows:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          access_log:
          - name: envoy.file_access_log
            config:
              path: "/dev/stdout"
          route_config:
            name: local_route
            virtual_hosts:
            - name: service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: local_service
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: local_service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 80
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8081
For what the configuration means, see Envoy TLS authentication in practice.
At the beginning I said that containers created by podman are child processes of podman; that description is a bit loose. In fact podman consists of two parts: the podman CLI and a container-runtime component handled by conmon
, which takes care of monitoring, logging, TTY allocation, and chores such as handling out-of-memory
situations. In other words, conmon is the parent of all the containers.
conmon has to do everything that systemd
does not do or does not want to do. Even when CRI-O is used without systemd managing the containers, it assigns each container a systemd-compatible cgroup
, so that conventional systemd tools such as systemctl
can still see the containers' resource usage.
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42762bf7d37a docker.io/envoyproxy/envoy-alpine:latest /docker-entrypoin... About a minute ago Up About a minute ago hugo-envoy
f0204fdc9524 docker.io/library/nginx:alpine nginx -g daemon o... 2 minutes ago Up 2 minutes ago hugo
If you are not familiar with cgroups, you can refer to a cgroup tutorial series; beginners are advised to work through it in order from top to bottom. Good luck!
3. Deploy the front-end proxy
This step is simple and direct: just create a container like this:
$ podman run -d --name front-envoy \
--add-host=hugo:10.88.0.10 \
-v /opt/hugo/front-envoy.yaml:/etc/envoy/envoy.yaml \
-v /etc/localtime:/etc/localtime \
-v /root/.acme.sh/yangcs.net:/root/.acme.sh/yangcs.net \
--net host envoyproxy/envoy
Because there is no automatic service discovery, the --add-host
parameter is needed to add hosts entries to the container manually; the envoy configuration then refers to the cluster by that domain name. front-envoy.yaml is as follows:
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          access_log:
          - name: envoy.file_access_log
            config:
              path: "/dev/stdout"
          route_config:
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                redirect:
                  https_redirect: true
                  response_code: "FOUND"
          http_filters:
          - name: envoy.router
            config: {}
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 443
    filter_chains:
    - filter_chain_match:
        server_names: ["yangcs.net", "www.yangcs.net"]
      tls_context:
        common_tls_context:
          alpn_protocols: h2
          tls_params:
            tls_maximum_protocol_version: TLSv1_3
          tls_certificates:
          - certificate_chain:
              filename: "/root/.acme.sh/yangcs.net/fullchain.cer"
            private_key:
              filename: "/root/.acme.sh/yangcs.net/yangcs.net.key"
      filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "yangcs.net"
              - "www.yangcs.net"
              routes:
              - match:
                  prefix: "/admin"
                route:
                  prefix_rewrite: "/"
                  cluster: envoy-ui
              - match:
                  prefix: "/"
                route:
                  cluster: hugo
                response_headers_to_add:
                - header:
                    key: "Strict-Transport-Security"
                    value: "max-age=63072000; includeSubDomains; preload"
          http_filters:
          - name: envoy.router
            config: {}
  clusters:
  - name: hugo
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: hugo
        port_value: 8080
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
For what the configuration means, see Envoy TLS authentication in practice.
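Since there is no real service discovery here, the --add-host flag is doing all the work: it simply writes an /etc/hosts-style entry into the container. This toy sketch (the scratch file stands in for the container's /etc/hosts; no podman required) shows the entry it produces and how a hosts-file lookup resolves it:

```shell
#!/bin/sh
# --add-host=hugo:10.88.0.10 appends an entry like this to the
# container's /etc/hosts. Here we write the same entry to a scratch
# file and resolve it the way a hosts-file scan would.
hosts_file=$(mktemp)
printf '10.88.0.10\thugo\n' > "$hosts_file"

# Look up the address registered for the name "hugo".
ip=$(awk '$2 == "hugo" { print $1 }' "$hosts_file")
echo "hugo -> $ip"   # prints: hugo -> 10.88.0.10
rm -f "$hosts_file"
```

Every time a new backend is added, this mapping has to be updated by hand, which is exactly why the front-end proxy must be recreated with extra --add-host parameters.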
Now the blog can be reached through its public domain name. If you deploy other applications later, you can repeat step two and then recreate the front-end proxy with additional --add-host
parameters. Take my site https://www.yangcs.net as an example:
It seems I have revealed some things I shouldn't have, so let's stop here; say no more, ask no more.
4. Start on boot
Since podman no longer uses a daemon to manage services, the --restart
parameter was dropped; to start containers automatically at boot, they have to be managed by systemd. Create the systemd service configuration files:
$ vim /etc/systemd/system/hugo_container.service
[Unit]
Description=Podman Hugo Service
After=network.target
After=network-online.target
[Service]
Type=simple
ExecStart=/usr/bin/podman start -a hugo
ExecStop=/usr/bin/podman stop -t 10 hugo
Restart=always
[Install]
WantedBy=multi-user.target
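As an aside, the SD_NOTIFY integration mentioned at the start can take this further. The sketch below assumes a newer podman release that supports the --sdnotify flag (it did not exist in the 1.0.x build used in this article), so treat both the flag and the unit as a hypothetical variant of the unit above, not something to copy verbatim:

```ini
[Unit]
Description=Podman Hugo Service (Type=notify sketch)
After=network-online.target

[Service]
# Type=notify: systemd waits for a readiness notification instead of
# assuming the service is up as soon as the process starts.
Type=notify
NotifyAccess=all
# --sdnotify=conmon (assumed flag, newer podman releases): conmon
# reports readiness to systemd on the container's behalf.
ExecStart=/usr/bin/podman run --name hugo --sdnotify=conmon \
    -v /opt/hugo/public:/usr/share/nginx/html nginx:alpine
Restart=always

[Install]
WantedBy=multi-user.target
```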
$ vim /etc/systemd/system/hugo-envoy_container.service
[Unit]
Description=Podman Hugo Sidecar Service
After=network.target
After=network-online.target
After=hugo_container.service
[Service]
Type=simple
ExecStart=/usr/bin/podman start -a hugo-envoy
ExecStop=/usr/bin/podman stop -t 10 hugo-envoy
Restart=always
[Install]
WantedBy=multi-user.target
$ vim /etc/systemd/system/front-envoy_container.service
[Unit]
Description=Podman Front Envoy Service
After=network.target
After=network-online.target
After=hugo_container.service hugo-envoy_container.service
[Service]
Type=simple
ExecStart=/usr/bin/podman start -a front-envoy
ExecStop=/usr/bin/podman stop -t 10 front-envoy
Restart=always
[Install]
WantedBy=multi-user.target
Then stop the containers created earlier. Note: stop them, do not delete them!
$ podman stop $(podman ps -aq)
Finally, start these containers via their systemd services:
$ systemctl start hugo_container
$ systemctl start hugo-envoy_container
$ systemctl start front-envoy_container
Enable them to start at boot:
$ systemctl enable hugo_container
$ systemctl enable hugo-envoy_container
$ systemctl enable front-envoy_container
After every system reboot, systemd will automatically start the services and their corresponding containers.
5. Summary
That covers all the work of migrating this blog from Docker to Podman. Overall it was quite a winding road, but that is because Podman is designed with Kubernetes in mind and I was asking too much of it: on a resource-strapped vps I neither wanted to run Kubernetes
nor wanted to add etcd
, yet I still wanted a sidecar and automatic service discovery. What could I do? I had it coming. None of this is podman's fault, and I state so here to avoid leaving the impression that "podman doesn't work".