Networking also plays a vital role in cloud computing. I skimmed the table of contents and summary, and the book fits my needs well. I have long been looking for a similar English-language book but have never found one. I strongly agree with one sentence from the author's preface: engineers should not only be users, they should also understand the underlying implementation. Many new technologies are really repackagings and creative applications of existing ones, so truly understanding the essentials helps a lot with learning new things quickly.

I have another blog post dedicated to summarizing the book Kubernetes Operators: Kubernetes Operators

A demo I put together from the material in the book: https://github.com/chengdol/k8s-operator-sdk-demo

Recently I was assigned a task to write an operator; it started as a helm-based operator, then evolved to a Go-based one, which is quite interesting. How to explain Kubernetes Operators in plain English: https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english

Brief introduction: https://www.youtube.com/watch?v=DhvYfNMOh6A

First, get a rough idea from the official K8s site of what an Operator is:

CNCF:

Some resources in Chinese. I have been reading the book Kubernetes Operators recently, got stuck on Go, and plan to start learning it:

01/22/2020

02/26/2020

  • If you don't want an executable to run more than once at a time, on each run create a hidden file in the current (or a fixed) folder, for example: .command.lock, and write into it the arguments of the command currently running. Then check whether this file exists to decide whether an instance is already executing.

  • A very good Red Hat site: https://www.redhat.com/sysadmin/

  • The command uuidgen can be used to generate a random unique identifier (UUID).
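The lock-file idea from the first bullet above can be sketched in bash; the lock path and the `run_once` function name are arbitrary choices for this sketch:

```shell
## Guard against running the same executable twice at a time.
## The lock path is an arbitrary choice for this sketch.
LOCK_FILE="${TMPDIR:-/tmp}/.command.lock"

run_once() {
    if [ -e "$LOCK_FILE" ]; then
        echo "already running: $(cat "$LOCK_FILE")" >&2
        return 1
    fi
    ## record the currently running command arguments
    echo "$*" > "$LOCK_FILE"
    ## remove the lock when the shell exits
    trap 'rm -f "$LOCK_FILE"' EXIT
    echo "working on: $*"
}

run_once backup --full
```

Note that checking and creating the file in two steps leaves a small race window; `mkdir` or `flock` gives an atomic alternative.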

04/07/2020

  • find softlink file, use -type l:

find . -type l -name dsenv

or use -L; then the -type predicate will always match against the type of the file that a symbolic link points to, rather than the link itself (unless the symbolic link is broken).

find -L / -type f -name dsenv

Similar to readlink, this follows the softlink into its target directory to find the original file.
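A self-contained check of both behaviors (the scratch directory and file names are made up):

```shell
## create a regular file plus a symlink to it in a scratch dir
tmp=$(mktemp -d)
touch "$tmp/dsenv"
ln -s "$tmp/dsenv" "$tmp/dsenv.link"

## -type l matches the link itself
links=$(find "$tmp" -type l)
echo "$links"

## with -L, -type f matches what a link points to, so both names match
matches=$(find -L "$tmp" -type f -name 'dsenv*' | wc -l)
echo "$matches"
rm -rf "$tmp"
```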

04/09/2020

  • > /dev/null 2>&1 can be written as &> /dev/null

  • su vs su -. Both log in to another user; - means a normal (login) shell: after login you fully get that user's initial environment, e.g. you land in that user's home dir and $PATH is that user's. Without -, the previous user's environment is inherited, e.g. the working dir stays the same. Both logins execute ~/.bashrc.

04/18/2020

  • In a pod container, if the script process is not the init process (pid 1), the trap handlers in the script will not be executed. I am still not sure why (it is a characteristic of docker containers).
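The note above is about containers; as a local sanity check of how trap itself behaves, here is a minimal sketch where a child shell traps SIGTERM while waiting on a background job (a commonly cited container workaround is to exec the script so it becomes pid 1 and to trap signals explicitly):

```shell
## A child bash process traps SIGTERM; the handler fires because the
## signal is delivered while the shell waits on a background job.
out=$(
    bash -c 'trap "echo cleanup; exit 0" TERM
             sleep 30 >/dev/null 2>&1 &
             wait' &
    pid=$!
    sleep 1
    kill -TERM "$pid"
    wait "$pid"
)
echo "$out"   # prints: cleanup
```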

04/26/2020

  • declare -F shows the function names in the current shell
  • declare -f shows the function definitions in the current shell; declare -f <name> gets just that definition
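For example:

```shell
greet() { echo "hi, $1"; }

## -F prints only the name, -f prints the whole definition
name_only=$(declare -F greet)
full_def=$(declare -f greet)

echo "$name_only"   # prints: greet
echo "$full_def"
```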

06/16/2020

  • Many builds can be implemented with a Makefile + the make command; it seems to be a general-purpose tool.

06/21/2020

06/24/2020

  • bash -x ./script, no need for set -x inside the script

06/25/2020

  • script performance comparison: strace -c ./script. The takeaway is to prefer bash internal commands (listed by help). -c outputs the system time spent on each system call.

06/26/2020

07/05/2020

  • shopt -s nocasematch sets bash case-insensitive matching in case or [[ ]] conditions. I learned this from the Chinese edition of a bash tutorial. shopt is a bash built-in setting, unlike set, which comes from POSIX.
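A quick demonstration:

```shell
shopt -s nocasematch

answer="YES"
case "$answer" in
    yes) result="matched" ;;
    *)   result="no match" ;;
esac
echo "$result"   # prints: matched

## also applies to [[ ]]
[[ "HELLO" == hello ]] && echo "equal"

shopt -u nocasematch
```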

07/09/2020

  • vim can directly operate on files inside a tarball: vim xx.tgz, then save the changes

07/12/2020

07/29/2020

  • !! re-executes the last command.
  • !<beginning token> re-executes the last command starting with this token.
  • $_ expands to the last token of the previous command line
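!! and !<token> are interactive history expansions, but $_ also works in scripts; a small sketch (the directory path is made up):

```shell
## $_ expands to the last argument of the previous command
mkdir -p /tmp/demo/deep/dir
cd "$_"               # same as cd /tmp/demo/deep/dir
where=$(pwd)
echo "$where"   # prints: /tmp/demo/deep/dir
```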

08/22/2020

  • Interesting: top 10 most used CLI commands: history | awk '{print $2}' | sort | uniq -c | sort -nr | head -n 10
  • same idea to check my blog themes: ls -ltr | awk 'NR>1 {print $NF}' | cut -d'-' -f1 | sort | uniq -c | sort

09/09/2020

  • Mac netstat lists listening ports; netstat on Mac is not as powerful as on Linux:

    ## -p: protocol
    ## -n: numerical address
    ## -a: show all connections
    netstat -an -ptcp | grep -i listen
    ## or
    ## -i: lists IP sockets
    ## -P: do not resolve port names
    ## -n: do not resolve hostnames
    lsof -i -P -n | grep -i "listen"

09/18/2020

10/25/2020

11/03/2020

Known from IBM channel, docker security talk

11/20/2020

  • shell no-op command: :, its synopsis is true, do nothing; similar to pass in python

    ## : as true
    while :; do
      sleep 1
      echo 1
    done
  • shell multiline comment, can use a heredoc

    : <<'COMMENT'
    ...
    COMMENT

11/24/2020

  • python shebang:
#!/usr/bin/env python
# -*- coding: utf-8 -*-

11/27/2020

  • ls -1 outputs one filename per line.
  • mv -v for verbose output

12/06/2020

  • bash read file line by line
cat $file | while read line
do
  echo $line
done

# or (this form preserves leading whitespace and backslashes)
input="/path/to/txt/file"
while IFS= read -r line
do
  echo "$line"
done < "$input"

12/09/2020

  • sudo to edit restricted permission file:
# executing as non-root user
# wrong, because the redirection is done by the calling shell, which doesn't have write permission.
sudo echo "sth" >> /root/.bashrc

# correct
echo "sth" | sudo tee -a /root/.bashrc
# or
sudo sh -c 'echo sth >> /root/.bashrc'
# or mutli-line
sudo tee -a /root/.bashrc << EOF
dexec()
{
docker exec -it \${1} sh
}
EOF

12/10/2020

12/25/2020

  • cat jsonfile | python -m json.tool is similar to jq: a JSON formatting tool, but it can only pretty print.
  • sudo: unable to resolve host: I don't quite understand why sudo involves hostname resolution.
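For example (python3 is assumed here; the jq comparison is in the comments):

```shell
## pretty print only
pretty=$(echo '{"name":"demo","id":1}' | python3 -m json.tool)
echo "$pretty"

## jq can pretty print too, but also query/transform, e.g.:
##   echo '{"name":"demo","id":1}' | jq -r '.name'
```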

Entry

https://kubernetes.io/docs/concepts/storage/volumes/#nfs

This demo is interesting. Previously, to set up NFS for K8s we needed a physical NFS server, then created a PV based on that NFS server and had a PVC claim from the PV. That way, we had to maintain the health of NFS ourselves to guarantee high availability.

This example instead hands NFS entirely over to K8s to manage. It first provides a piece of storage to construct a PVC (that storage may come from a provisioner or another PV), then uses that PVC to build an NFS server pod and binds a cluster IP to the pod, which effectively acts as a virtual "physical NFS server node". When we need a PV, we take it from this NFS server pod (essentially constructing new PVs out of one PV).

Of course, to satisfy the NFS semantics the NFS server image has to be built specially: the NFS components installed, the relevant ports exposed, and the /etc/exports parameters configured in the init/startup script. It also has to ensure the previous shares are unaffected when the NFS server pod is rebuilt.

The benefit is that everything is managed by K8s: no need to worry about NFS high availability, and no need to build a physical NFS cluster ourselves. Given a piece of storage, it can be turned into NFS.

Demo

At the time of writing, the demo yaml had some wrong parameters; use my blog as the reference. This is the demo git repository: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs

Under provisioner folder, it uses storageclass (internal provisioner) to create a PV, for example, in GCE:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 20Gi

If no internal provisioner is available, you can create an external provisioner (for example: NFS), or just create a PV with hostPath:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-pv-provisioning-demo
  labels:
    demo: nfs-pv-provisioning
spec:
  capacity:
    storage: "20Gi"
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs-server-pv"

This /nfs-server-pv folder will be created on the host where the nfs server pod resides.

Then it creates a ReplicationController for the NFS server (which acts just like a physical NFS server):

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        ## the nfs exports folder
        - mountPath: /exports
          name: mypvc
      volumes:
      - name: mypvc
        persistentVolumeClaim:
          ## mount the pvc from provisioner
          claimName: nfs-pv-provisioning-demo

Notice that the image is a dedicated one with nfs-utils pre-installed, and it exposes some nfs-specific ports; see the dockerfile:

FROM centos
RUN yum -y install /usr/bin/ps nfs-utils && yum clean all
RUN mkdir -p /exports
ADD run_nfs.sh /usr/local/bin/
ADD index.html /tmp/index.html
RUN chmod 644 /tmp/index.html

## expose mountd 20048/tcp and nfsd 2049/tcp and rpcbind 111/tcp
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
## init script to set up this nfs server
ENTRYPOINT ["/usr/local/bin/run_nfs.sh", "/exports"]

Then create a service with cluster IP to expose the NFS server pod.

kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server

Then we can create the application PV and PVC from this service; refer to DNS for Services and Pods.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    ## service name
    ## the cluster IP is needed here; with extra DNS configuration you could use the service name directly
    server: <nfs server service cluster IP>
    path: "/exports"

Then you can create a PVC to bind this PV for other uses.

ceph github

https://github.com/ceph/ceph

Ceph for object storage, block storage and network file system

Ceph uniquely delivers object, block, and file storage in one unified system. Differences between them: https://cloudian.com/blog/object-storage-vs-block-storage/

for cephFS, https://docs.ceph.com/docs/master/

what does NFS/CIFS deployable mean?

how to begin:

https://docs.ceph.com/docs/master/start/intro/ Whether you want to provide Ceph Object Storage and/or Ceph Block Device services to Cloud Platforms, deploy a Ceph File System or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and the Ceph Storage Cluster.

ceph ansible playbook?

ceph can be installed by cephadm (like k8s by kubeadm):

https://docs.ceph.com/docs/master/bootstrap/#installation-cephadm cannot get cephadm from yum install, need to install ceph repos and get rpms

With the ceph cluster built, the question now is how to integrate it with k8s?

https://medium.com/flant-com/to-rook-in-kubernetes-df13465ff553

Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.

Rook uses the power of the Kubernetes platform to deliver its services: cloud-native container management, scheduling, and orchestration.

Another similar tool is Kubernetes External Provisioner.

NFS

Rook NFS is currently in alpha. To use it, we must have an NFS provisioner or a PV first; rook then works on top of that, managing the NFS storage and provisioning it again to other applications.

https://rook.io/docs/rook/v1.2/nfs.html Understand this passage well: The desired volume to export needs to be attached to the NFS server pod via a PVC. Any type of PVC can be attached and exported, such as Host Path, AWS Elastic Block Store, GCP Persistent Disk, CephFS, Ceph RBD, etc. The limitations of these volumes also apply while they are shared by NFS. You can read further about the details and limitations of these volumes in the Kubernetes docs.

NFS is just an access pattern; the underlying file system can be anything.

NFS client packages must be installed on all nodes where Kubernetes might run pods with NFS mounted. Install nfs-utils on CentOS nodes or nfs-common on Ubuntu nodes.

Ceph

rook will create a software-defined ceph cluster for us.

The steps to set up a secure docker registry service in K8s are different from plain docker. There are some adjustments and changes to apply.

Toolkits we need to achieve our goal:

  1. openssl
  2. htpasswd
  3. skopeo

Create SSL/TLS Certificate and Key

Use the openssl command to generate a certificate and private key to set up a secure connection:

## create certs
mkdir -p /root/registry-certs
## get from master

DOCKER_REGISTRY_URL=blair1.fyre.com
## make a copy of .crt and give suffix .cert
openssl req \
-newkey rsa:4096 -nodes -x509 -sha256 \
-keyout /root/registry-certs/tls.key \
-out /root/registry-certs/tls.cert \
-days 3650 \
-subj "/C=US/ST=CA/L=San Jose/O=IBM/OU=Org/CN=${DOCKER_REGISTRY_URL}"

cp /root/registry-certs/tls.cert /root/registry-certs/tls.crt

Then copy the .crt file to every host under the /etc/docker/certs.d/<${DOCKER_REGISTRY_URL}>:5000 folder for self-signed certificate trust.

Notice that if the docker daemon json file has the insecure registry enabled, it will not verify the ssl/tls cert! With just the docker user account and password, you can log in without certs!

Create Docker User Info

##  create auth file
DOCKER_USER=demo
DOCKER_PASSWORD=demo

mkdir -p /tmp/registry-auth
htpasswd -Bbn ${DOCKER_USER} ${DOCKER_PASSWORD} > /tmp/registry-auth/htpasswd

Generate Secret

## create secrets
## we want to setup docker registry in default namespace
kubectl create secret tls docker-registry-tls \
--key=/root/registry-certs/tls.key \
--cert=/root/registry-certs/tls.cert \
-n default

kubectl create secret generic docker-registry-auth \
--from-file=htpasswd=/tmp/registry-auth/htpasswd \
-n default

## Assume the working namespace is test-1
WORKING_NAME_SPACE=test-1
DOCKER_REGISTRY_SERVER="${DOCKER_REGISTRY_URL}:5000"

kubectl create namespace ${WORKING_NAME_SPACE}
kubectl create secret docker-registry docker-registry-creds \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
-n ${WORKING_NAME_SPACE}

Bind Image Pull Secret to Service Account

See the documentation in K8s.

## patch creds to default service account in test-1
## assume we use default service account in yaml
kubectl patch serviceaccount default \
-p '{"imagePullSecrets": [{"name": "docker-registry-creds"}]}' \
-n ${WORKING_NAME_SPACE}

Or you can specify imagePullSecrets in yaml explicitly, for example:

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>
  imagePullSecrets:
  - name: <secret name>

Create Secure Docker Registry

## notice the env field and the secret mount fields
## deletion is enabled
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - {key: docker-registry, operator: In, values: ["true"]}
      hostNetwork: true
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: docker-registry
        image: localhost:5000/registry:2.7.1
        imagePullPolicy: IfNotPresent
        env:
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
        - name: REGISTRY_AUTH
          value: "htpasswd"
        - name: REGISTRY_AUTH_HTPASSWD_REALM
          value: "Registry Realm"
        - name: REGISTRY_AUTH_HTPASSWD_PATH
          value: "/auth/htpasswd"
        - name: REGISTRY_HTTP_TLS_CERTIFICATE
          value: /certs/tls.crt
        - name: REGISTRY_HTTP_TLS_KEY
          value: /certs/tls.key
        ports:
        - name: registry
          containerPort: 5000
          hostPort: 5000
        volumeMounts:
        - name: docker-data
          mountPath: /var/lib/registry
        - name: docker-tls
          mountPath: /certs
          readOnly: true
        - name: docker-auth
          mountPath: /auth
          readOnly: true
      volumes:
      - name: docker-data
        persistentVolumeClaim:
          claimName: registry-pv-claim
      - name: docker-tls
        secret:
          secretName: docker-registry-tls
      - name: docker-auth
        secret:
          secretName: docker-registry-auth

So far the secure docker registry in K8s is up and running in the default namespace; since hostNetwork is true, it can be accessed remotely. Later it can be exposed via ingress.

Update Docker User Info

See this post.

## create new htpasswd file
DOCKER_USER=demonew
DOCKER_PASSWORD=demonew

mkdir -p /tmp/registry-auth
htpasswd -Bbn ${DOCKER_USER} ${DOCKER_PASSWORD} > /tmp/registry-auth/htpasswd
## then encode base64
AUTH_BASE64=$(cat /tmp/registry-auth/htpasswd | base64 -w 0)
## replace old auth secret
## the change will be populated to registry pod
kubectl get secret docker-registry-auth -o yaml -n default \
| sed -e "/htpasswd/c\ htpasswd: ${AUTH_BASE64}" \
| kubectl replace -f -

## replace old docker config creds secret in used working namespaces
NEW_REGISTRY_CREDS=$(kubectl create secret docker-registry docker-registry-creds \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
-n default \
-o yaml --dry-run \
| grep "\.dockerconfigjson" | cut -d":" -f2)

kubectl get secret docker-registry-creds -o yaml -n ${WORKING_NAME_SPACE} \
| sed -e "/\.dockerconfigjson/c\ .dockerconfigjson: ${NEW_REGISTRY_CREDS}" \
| kubectl replace -f -

Skopeo Operation

Please refer to my skopeo blog for more details.

skopeo copy \
--dest-creds ${DOCKER_USER}:${DOCKER_PASSWORD} \
--dest-cert-dir /root/registry-certs \
docker-archive:/root/busybox.tar.gz \
docker://${DOCKER_REGISTRY_SERVER}/busybox:latest

skopeo inspect \
--creds ${DOCKER_USER}:${DOCKER_PASSWORD} \
--cert-dir /root/registry-certs \
docker://${DOCKER_REGISTRY_SERVER}/busybox:latest

2021/01/04

  • GoCD, the CI/CD tool like Jenkins

2021/01/17

  • !<command prefix letter> will rerun the last command that starts with this prefix.

2021/01/23

  • a runbook/playbook documents solutions and instructions for common problems.

2021/01/25

  • pipeline echo 123 |& cat: |& pipes both stdout and stderr of the left-hand command into the right-hand one (shorthand for 2>&1 |).
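A quick way to see the difference (grep -c '' just counts lines):

```shell
## |& is shorthand for 2>&1 | : stderr follows stdout into the pipe
total=$( { echo out; echo err >&2; } |& grep -c '' )
echo "$total"        # prints: 2

## without |&, stderr bypasses the pipe entirely
only_stdout=$( { echo out; echo err >&2; } 2>/dev/null | grep -c '' )
echo "$only_stdout"  # prints: 1
```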

2021/01/27

  • envsubst command: substitutes environment variables in shell format strings.

2021/04/03

  • yq, similar to the jq command but for yaml parsing.

2021/05/11

  • ps -o etime -o time -p <pid> shows the elapsed time since the process started, and the CPU time it has consumed.
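For example, inspecting the current shell process (any pid works):

```shell
## etime: elapsed wall-clock time since start; time: consumed CPU time
## the trailing = suppresses the header line
elapsed=$(ps -o etime= -p $$)
cputime=$(ps -o time= -p $$)
echo "elapsed=$elapsed cpu=$cputime"
```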

2021/09/05

  • docker container run is the same as docker run
  • docker container run = docker container create + docker container start + [docker container attach]

2021/10/06

  • Does Docker have an OS? Obviously not; remember docker vs virtual machine. Docker is lightweight virtualization: for Linux docker, the container always runs on and shares the linux/host kernel, while the docker image supplies the necessary files, libs, utilities, etc.

  • We have ubuntu, centos, alpine docker images; they can all run on the same host OS. The key differences are the filesystem/libraries; they share the host OS kernel. See this post, and also the second answer: “Since the core kernel is common technology, the system calls are expected to be compatible even when the call is made from an Ubuntu user space code to a Redhat kernel code. This compatibility make it possible to share the kernel across containers which may all have different base OS images.”

  • Docker on MacOS: Docker on Mac actually creates a Linux VM via LinuxKit, and containers run inside it.

2021/10/31

  • DNS record types TXT and A: TXT holds owner-defined custom info; A maps a hostname to an IP address.

2021/12/25

  • Why ls -l does not show the group name: because of the alias alias ls="ls -G"; -G suppresses the group name in long format. Use /bin/ls -l instead.

Make recordings on MacOS that capture both internal and external sound, for screen and audio.

This configuration summary is from this Youtube episode; credit to it :)

This configuration only works with the Mac microphone/speaker and wired headphones; the bluetooth-connected AirPods Max does not work =(

Please don't use the older Soundflower (deprecated); ignore it and download BlackHole instead.

  1. Download and install the BlackHole 2ch audio driver for MacOS from its Github website here.

Go to the download page; first you need to input your email, and it will send you a download link. Choose BlackHole 2ch and download/install it.

  2. On your Mac, open the Audio MIDI Setup app; you can see BlackHole 2ch in the left bar. Then use the + button to create an Aggregate Device, name it Quicktime Player Input, and check BlackHole 2ch and External Microphone. (Usually if I need to say something in the recording I use the external headphone microphone; if not, check the built-in MacBook Pro Microphone instead. Don't check both, because that will generate a big recording file!) Then select BlackHole 2ch as the Clock Source.

  3. Next, use the + button to create a Multi-Output Device, name it Screen Record w/Audio, and check BlackHole 2ch and External Headphones (if no headphones are used, check the MacBook Pro Speakers instead). Then select BlackHole 2ch as the Clock Source.

  4. In the Audio MIDI Setup left bar, set the MacBook Pro and External Microphone volumes both to the highest value, otherwise the sound will be faint in the recording.

  5. Open the system preference Sound; in Output select Screen Record w/Audio; in the Input section, select External Microphone if you are using headphones, etc.

  6. Now we are all set. Use command + shift + 5 as the shortcut to launch the screen recording, and in the Options select Quicktime Player Input; that's it. If you only want to record internal sounds, selecting BlackHole 2ch is enough.

  7. After recording, please revert the Output in the Sound app.

NOTE: Audio recording is the same; just replace step 6 with launching an audio recording.

In my blog Create Working Branch, when we run git push origin <non-master>, we are actually setting up a pull request.

More references; this one also talks about creating a pull request from a fork.

I am sometimes confused by the term: why is it called a pull request and not a push request (since I push my code to the remote repo)? I am not alone; a reasonable explanation is here: because you are asking the target repository to pull in your changes, so from their side it is a pull operation.

If the changes in the local branch develop are ready but the remote branch develop is out of date, and after git pull origin develop your local branch gets messy, you can use git reset --hard to roll back to the last commit you made, then delete the remote branch in the GitHub GUI, then git push origin develop to recreate it.
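The flow can be rehearsed against a local bare repository standing in for GitHub; all paths and names below are made up, and git push --delete stands in for deleting the branch in the GUI:

```shell
work=$(mktemp -d)

## a bare repo stands in for the remote on GitHub
git init -q --bare "$work/origin.git"
git clone -q "$work/origin.git" "$work/local"
cd "$work/local"
git config user.email demo@example.com
git config user.name  demo

echo one > f.txt && git add f.txt && git commit -qm one
git checkout -q -b develop               # work on develop locally
git push -q origin develop               # publish it

echo messy >> f.txt                      # simulate the messy state
git reset -q --hard                      # roll back to the last commit
git push -q origin --delete develop      # drop the remote branch
git push -q origin develop               # recreate it from local
```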
