Kubernetes 网络权威指南 (The Definitive Guide to Kubernetes Networking)
Networking plays a crucial role in cloud computing. I skimmed the table of contents and chapter summaries, and this book matches my needs well. I have long been looking for a similar English-language book but have never found one. I strongly agree with a point the author makes in the preface: engineers should not just be users of a technology, they should also understand the underlying implementation. Many new technologies are re-packagings and novel applications of existing ones, and truly understanding the essentials helps you learn new things quickly.
I have another blog post dedicated to summarizing the book Kubernetes Operators: Kubernetes Operators
A demo built from the book's material: https://github.com/chengdol/k8s-operator-sdk-demo
I was recently assigned a task to write an operator; it started as a Helm-based operator and then evolved to a Go-based one, quite interesting. How to explain Kubernetes Operators in plain English: https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english
Brief introduction: https://www.youtube.com/watch?v=DhvYfNMOh6A
First, get a rough idea from the official K8s docs of what an Operator is:
Getting started with the Operator SDK. This one is from Red Hat, excellent!
Advanced Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, like skipping a software backup process to save time.
Complex and stateful applications are where an Operator can shine. The cloud-like capabilities that are encoded into the Operator code can provide an advanced user experience, automating such features as updates, backups and scaling.
Introducing Operators: Putting Operational Knowledge into Software
Best practices for building Kubernetes Operators and stateful apps
CNCF:
Build Kubernetes Operators from Helm Charts in 5 steps. However, when it comes to stateful applications, there is more to the upgrade process than upgrading the application itself.
Helm Charts are often insufficient for upgrading stateful applications and services (e.g. PostgreSQL or Elasticsearch), which require a complex and controlled upgrade process beyond bumping the application version.
Some Chinese-language resources. I have recently been reading the book Kubernetes Operators, got stuck on Go, and plan to start learning it:
NFS (Network File System). To list the NFS ports: https://serverfault.com/questions/377170/which-ports-do-i-need-to-open-in-the-firewall-to-use-nfs
```bash
rpcinfo -p | grep nfs
```
It depends on the version of the protocol you intend to use. NFSv4 only requires port 2049, while older versions require more.
Set up an NFS cluster to prevent a single point of failure: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-nfsserver-haaa
If you don't want an executable to run more than once concurrently, you can create a hidden file in the current (or a fixed) directory each time it runs, for example .command.lock, and write the currently running command's arguments into it. By checking whether this file exists, you can decide whether an instance is already running.
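A minimal sketch of that guard in bash (the file name .command.lock is just an example; note that this check-then-create is not atomic, so mkdir or flock is more robust):

```bash
#!/usr/bin/env bash
# guard: refuse to start if a previous run left its lock file behind
LOCK=./.command.lock    # location and name are just an example

if [ -e "$LOCK" ]; then
  echo "already running: $(cat "$LOCK")" >&2
  exit 1
fi
echo "$$ $0 $*" > "$LOCK"     # record pid and the arguments of this run
trap 'rm -f "$LOCK"' EXIT     # remove the lock when the script exits

# ... actual work goes here ...
sleep 30
```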
A very good Red Hat site: https://www.redhat.com/sysadmin/
The uuidgen command can be used to generate a random unique identifier.
To find symlink files, use -type l:

```bash
find . -type l -name dsenv
```
Or use -L; then the -type predicate will always match against the type of the file that a symbolic link points to rather than the link itself (unless the symbolic link is broken):
```bash
find -L / -type f -name dsenv
```
Similar to readlink: it looks inside the symlinked directory for the original file.
> /dev/null 2>&1 can be written as &> /dev/null
su vs su -: both log in to another user. The - means a normal (full) login: after it you get that user's fresh login environment, e.g. you land in that user's home dir and $PATH is theirs. Without -, the previous user's environment is inherited, e.g. the working directory stays where it was. Both forms execute ~/.bashrc.
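A quick way to see the difference (the user name alice is a placeholder):

```bash
su alice -c 'pwd; echo "$PATH"'    # keeps the caller's working directory
su - alice -c 'pwd; echo "$PATH"'  # full login: starts in alice's home, with alice's login $PATH
```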
- declare -F shows function names in the current shell; declare -f shows function definitions; declare -f <name> shows just that definition.
- #!/usr/bin/env NAME makes the shell search for the first match of NAME in the $PATH environment variable. It is useful if you aren't aware of the absolute path or don't want to search for it.
- bash -x ./script — no need for set -x inside the script.
- strace -c ./script — the -c option summarizes the system time spent on each system call. In other words, prefer bash built-in commands (see help) where possible.
- shopt -s nocasematch sets bash case-insensitive matching in case or [[ ]] conditions. (Learned from the Chinese bash tutorial; shopt is a bash built-in setting, unlike set, which comes from POSIX.)
- vim xx.tgz, then save the changes (vim can edit inside the archive).
- !! re-executes the last command; !<beginning token> re-executes the last command starting with that token; $_ is the last token of the last command line.
- history | awk '{print $2}' | sort | uniq -c | sort -nr | head -n 10
- ls -ltr | awk 'NR>1 {print $NF}' | cut -d'-' -f1 | sort | uniq -c | sort
- netstat on Mac is not as powerful as on Linux:

```bash
# -p: protocol
```
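For example, on macOS -p selects a protocol (on Linux -p shows the owning process; lsof is the closer substitute there):

```bash
netstat -an -p tcp               # all TCP sockets, numeric addresses
lsof -iTCP -sTCP:LISTEN -n -P    # listening TCP sockets with owning processes
```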
- seq command: e.g. seq 1 9 generates the sequence 1 to 9, useful as a counter in shell for loops.
- ed editor: https://sanctum.geek.nz/arabesque/actually-using-ed/
- Dash is a Mac API documentation browser app.
- Typora markdown editor.
- Learned from the IBM channel: a talk on Docker security.
The shell no-op command `:` — its synopsis is essentially true: do nothing, successfully. Similar to pass in Python.
```bash
# : as true
```
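Two idiomatic uses of the : command:

```bash
while :; do           # infinite loop, same as `while true`
  date
  sleep 1
done

: "${VAR:=default}"   # set VAR to a default without executing anything
```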
Shell multiline comments can use a heredoc:
```bash
: <<'COMMENT'
everything in here is ignored
COMMENT
```
```bash
#!/usr/bin/env python
```
- ls -1 outputs one file name per line.
- mv -v: verbose.
- Read a file line by line (fragment; completed in the sketch below):

```bash
cat $file | while read line
```
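A hedged completion of the truncated loop above ($file is a placeholder):

```bash
cat "$file" | while read -r line; do
  echo "$line"
done
# caveat: the pipe runs the loop in a subshell, so variables set inside
# do not survive it; `while read -r line; do ...; done < "$file"` avoids that
```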
```bash
# executing as non-root user
```
cat jsonfile | python -m json.tool is similar to jq: a JSON formatting tool, but it can only pretty-print.
https://kubernetes.io/docs/concepts/storage/volumes/#nfs
This demo is very interesting. Previously, to set up NFS for K8s we needed a physical NFS server, then created a PV backed by that NFS server, and a PVC claiming from the PV. That meant maintaining the NFS server's health ourselves to guarantee high availability.
This example instead hands NFS over to K8s entirely. It first provides a piece of storage to build a PVC (that storage may come from a provisioner or another PV), then uses that PVC to build an NFS server pod and binds a cluster IP to it, which effectively acts as a virtual version of a physical NFS server node. When we need a PV, we take it from this NFS server pod (essentially building new PVs out of one PV).
Of course, to provide NFS semantics, the NFS server image must be specially built: NFS components installed, the relevant ports exposed, and the /etc/exports parameters configured in the startup script. It must also guarantee that existing shares are unaffected when the NFS server pod is recreated.
The benefit is that everything is managed by K8s: there is no need to worry about NFS high availability or to build a physical NFS cluster yourself. Given a piece of storage, you can turn it into NFS.
At the time of writing, the demo YAML contained some incorrect parameters; where they differ, follow this blog. This is the demo git repository: https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
Under the provisioner folder, it uses a StorageClass (internal provisioner) to create a PV, for example on GCE:
```yaml
apiVersion: v1
```
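Only the first line of the snippet remains; a sketch of what such a PVC may look like, assuming the cluster's default StorageClass provisions a GCE PD (the name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-provisioning-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
```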
If no internal provisioner is available, you can create an external provisioner (for example: NFS), or just create a PV with hostPath:
```yaml
kind: PersistentVolume
```
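A sketch of such a hostPath PV, using the /nfs-server-pv path mentioned below (the capacity is illustrative):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: nfs-server-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /nfs-server-pv   # created on the node where the NFS server pod lands
```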
This /nfs-server-pv folder will be created on the host where the NFS server pod resides.
Then it creates a ReplicationController for the NFS server (acting like a physical NFS server):
```yaml
apiVersion: v1
```
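A sketch of the ReplicationController, modeled on the upstream demo (the image is something like k8s.gcr.io/volume-nfs; the PVC name is assumed from the provisioning step above):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: k8s.gcr.io/volume-nfs:0.8     # NFS-capable image, see Dockerfile below
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true                   # needed to run the NFS daemons
        volumeMounts:
        - mountPath: /exports
          name: nfs-export
      volumes:
      - name: nfs-export
        persistentVolumeClaim:
          claimName: nfs-pv-provisioning-demo
```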
Notice that the image is purpose-built with nfs-utils pre-installed, and it exposes the NFS-specific ports; see the Dockerfile:
```dockerfile
FROM centos
```
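A sketch of such a Dockerfile, assuming a run_nfs.sh startup script that writes /etc/exports and starts the NFS daemons:

```dockerfile
FROM centos
# install the NFS server components
RUN yum -y install nfs-utils && yum clean all
# directory that will be exported over NFS
RUN mkdir -p /exports
# startup script: configures /etc/exports and launches rpcbind/nfsd
ADD run_nfs.sh /usr/local/bin/
EXPOSE 2049/tcp 20048/tcp 111/tcp 111/udp
ENTRYPOINT ["/usr/local/bin/run_nfs.sh", "/exports"]
```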
Then create a service with cluster IP to expose the NFS server pod.
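A sketch of the Service (ports and selector assumed to match the NFS server pod above):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nfs-server
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
  selector:
    role: nfs-server
```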
Then we can create the application PV and PVC from this service (refer to DNS for Services and Pods):
```yaml
---
```
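A sketch of the PV/PVC pair, addressing the server by the service DNS name (if the node's kubelet cannot resolve cluster DNS, use the service's cluster IP instead):

```yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.default.svc.cluster.local   # the Service created above
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""    # bind to the pre-created PV, not a provisioner
  resources:
    requests:
      storage: 1Mi
```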
Then you can create a PVC bound to this PV for other uses.
Ceph uniquely delivers object, block, and file storage in one unified system. Differences between object and block storage: https://cloudian.com/blog/object-storage-vs-block-storage/
What does it mean for NFS/CIFS to be "deployable"?
https://docs.ceph.com/docs/master/start/intro/ Whether you want to provide Ceph Object Storage and/or Ceph Block Device services to Cloud Platforms, deploy a Ceph File System or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and the Ceph Storage Cluster.
A Ceph Ansible playbook?
https://docs.ceph.com/docs/master/bootstrap/#installation-cephadm — cephadm cannot be obtained via a plain yum install; you need to install the Ceph repos first and get the RPMs.
With the Ceph cluster built, the question now is how to integrate it with K8s.
https://medium.com/flant-com/to-rook-in-kubernetes-df13465ff553
Rook turns distributed storage systems into self-managing, self-scaling, self-healing storage services. It automates the tasks of a storage administrator: deployment, bootstrapping, configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring, and resource management.
Rook uses the power of the Kubernetes platform to deliver its services: cloud-native container management, scheduling, and orchestration.
Another similar tool is Kubernetes External Provisioner.
Rook NFS is currently an alpha version. To use it, we must have an NFS provisioner or a PV first; Rook then works on top of that, managing the NFS storage and provisioning it again for other applications.
https://rook.io/docs/rook/v1.2/nfs.html — take time to understand this passage: The desired volume to export needs to be attached to the NFS server pod via a PVC. Any type of PVC can be attached and exported, such as Host Path, AWS Elastic Block Store, GCP Persistent Disk, CephFS, Ceph RBD, etc. The limitations of these volumes also apply while they are shared by NFS. You can read further about the details and limitations of these volumes in the Kubernetes docs.
NFS is just an access pattern; the underlying file system can be anything.
NFS client packages must be installed on all nodes where Kubernetes might run pods with NFS mounted. Install nfs-utils on CentOS nodes or nfs-common on Ubuntu nodes.
Rook will create a software-defined Ceph cluster for us.
The steps to set up a secure Docker registry service in K8s are different from plain Docker; there are some adjustments and changes to apply.
Toolkits we need to achieve our goal:
Use the openssl command to generate a certificate and private key for setting up the secure connection:
```bash
# create certs
```
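A sketch of such a command (the subject and file names are illustrative; -addext needs OpenSSL 1.1.1+):

```bash
# create a self-signed cert and key for the registry host
mkdir -p certs
openssl req -x509 -newkey rsa:4096 -nodes -sha256 -days 365 \
  -subj "/CN=${DOCKER_REGISTRY_URL}" \
  -addext "subjectAltName=DNS:${DOCKER_REGISTRY_URL}" \
  -keyout certs/domain.key \
  -out certs/domain.crt
```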
Then copy the .crt file to every host under the /etc/docker/certs.d/<${DOCKER_REGISTRY_URL}>:5000 folder so the self-signed certificate is trusted.
Notice that if insecure registries are enabled in the Docker daemon's JSON config, it will not verify the SSL/TLS cert! With a Docker user account and password, you can then log in without certs!
```bash
# create auth file
```
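For example, with the Apache htpasswd tool (user/password and paths are placeholders):

```bash
# bcrypt-hashed (-B) credentials, batch mode (-b), printed to stdout (-n)
mkdir -p auth
htpasswd -Bbn testuser testpassword > auth/htpasswd
```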
```bash
# create secrets
```
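A sketch with kubectl (secret names are illustrative):

```bash
# TLS secret for the registry's cert/key
kubectl create secret tls registry-tls \
  --cert=certs/domain.crt --key=certs/domain.key

# pull credentials for pods
kubectl create secret docker-registry registry-cred \
  --docker-server=${DOCKER_REGISTRY_URL}:5000 \
  --docker-username=testuser \
  --docker-password=testpassword
```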
See the K8s documentation.
```bash
# patch creds to default service account in test-1
```
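For example (registry-cred is the secret name assumed above):

```bash
# make every pod in test-1 pull with these credentials by default
kubectl patch serviceaccount default -n test-1 \
  -p '{"imagePullSecrets": [{"name": "registry-cred"}]}'
```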
Or you can specify imagePullSecrets in yaml explicitly, for example:
```yaml
apiVersion: v1
```
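A minimal sketch (registry host, image, and secret name are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
spec:
  containers:
  - name: app
    image: my-registry.example.com:5000/demo:latest   # image in the private registry
  imagePullSecrets:
  - name: registry-cred
```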
So far the secure Docker registry in K8s is up and running in the default namespace; it has hostNetwork: true, so it can be accessed remotely. Later it can be exposed via an Ingress.
See this post.
```bash
# create new htpasswd file
```
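A sketch, assuming the Apache htpasswd tool and the auth/htpasswd path from above:

```bash
# recreate the htpasswd file (-c), then append another user to it
htpasswd -Bbc auth/htpasswd user1 password1
htpasswd -Bb  auth/htpasswd user2 password2
# remember to recreate the K8s secret that carries this file afterwards
```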
Please refer to my skopeo blog for more details.
```bash
skopeo copy \
```
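A sketch of a full invocation (image names and credentials are placeholders):

```bash
# copy an image from Docker Hub into the private registry
skopeo copy \
  --src-tls-verify=false \
  --dest-creds testuser:testpassword \
  docker://docker.io/library/alpine:latest \
  docker://my-registry.example.com:5000/alpine:latest
```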
- !<command prefix> will rerun the last command that starts with this prefix.
- echo 123 |& cat: |& pipes both stdout and stderr to the next command (shorthand for 2>&1 |).
- envsubst command: substitutes environment variables in shell format strings.
- yq: similar to the jq command but for YAML parsing.
- ps -o etime -o time -p <pid>: shows the elapsed time since the process started, and the CPU time it has consumed.
- docker container run is the same as docker run.
- docker container run = docker container create + docker container start + [docker container attach].
- Does Docker have an OS? Obviously not; remember docker vs virtual machine. Docker is lightweight virtualization: for Linux docker, the container always runs on and shares the Linux host kernel, while the docker image supplies the necessary files, libs, utilities, etc.
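Quick demos of two of these (pid 1234 is a placeholder):

```bash
# envsubst replaces $VARS in a template with their environment values
export GREETING=hello
echo 'say $GREETING' | envsubst    # -> say hello

# elapsed wall-clock time vs consumed CPU time of one process ('=' hides headers)
ps -o etime= -o time= -p 1234
```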
We have ubuntu, centos, and alpine docker images; they can all run on the same host OS. The key differences are the filesystem/libraries, and they share the host OS kernel. See this post, and also the second answer: "Since the core kernel is common technology, the system calls are expected to be compatible even when the call is made from an Ubuntu user space code to a Redhat kernel code. This compatibility make it possible to share the kernel across containers which may all have different base OS images."
Docker on MacOS: Docker for Mac actually creates a Linux VM via LinuxKit, and the containers run inside it.
- TXT and A records: TXT holds owner-defined custom info; A is the IP-hostname mapping.
- ls -l does not show the group name: because of the alias alias ls="ls -G"; -G suppresses the group name in long format. Use /bin/ls -l instead.
- Making a recording on MacOS with both internal and external sounds, for screen and audio:
This configuration summary is from this Youtube episode, and credit goes to it :)
This configuration only works with the Mac microphone and speaker and a wired headphone; the Bluetooth-connected AirPods Max does not work =(
1. Please don't use the older Soundflower (deprecated); ignore it and go download BlackHole: the BlackHole 2ch audio driver for MacOS from its Github website. On the download page, you first need to input your email, and it will send you another download link; choose BlackHole 2ch and download/install it.
2. On your Mac, open the Audio MIDI Setup app; you can see BlackHole 2ch in the left bar. Then use the + button to create an Aggregate Device and name it Quicktime Player Input; check BlackHole 2ch and External Microphone. (Usually, if I need to say something in the recording, I use an external headphone; if not, please check the built-in MacBook Pro Microphone instead. Don't check both, because that will generate a big recording file!) Then select the Clock Source as BlackHole 2ch.
3. Next, use the + button to create a Multi-Output Device and name it Screen Record w/Audio; check BlackHole 2ch and External Headphones (if no headphone is used, check MacBook Pro Speakers instead). Then select the Clock Source as BlackHole 2ch.
4. In the Audio MIDI Setup left bar, set both MacBook Pro and External Microphone to the highest value, otherwise the sounds will be faint in the recording.
5. Open the system preference Sound: in Output select Screen Record w/Audio; in the Input section, select External Microphone if you are using a headphone, etc.
6. Now we are all set. Use command + shift + 5 as the shortcut to launch the screen recording, and in the Options select Quicktime Player Input; that's it. If you only want to record internal sounds, selecting BlackHole 2ch is enough. After recording, please revert the Output in the Sound app.
NOTE: Audio-only recording is the same; just replace step 6 with launching an audio recording instead.
In my blog Create Working Branch, when we run git push origin <non-master>, we actually create the remote branch from which a pull request is opened.
More references here; it also covers creating a pull request from a fork.
I am sometimes confused by the term: why is it called a pull request and not a push request (after all, I push my code to the remote repo)? And I am not alone; a reasonable explanation is here: because you are asking the target repository to grab (pull) your changes, and from their side it is a pull operation.
If the changes in your local branch develop are ready but the remote branch develop is out of date, and after git pull origin develop your local branch gets messy, you can use git reset --hard to roll back to the last commit you made, then delete the remote branch in the GitHub GUI, and finally git push origin develop to recreate it.
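That recovery flow as commands, swapping the GUI branch deletion for its CLI equivalent (branch name develop as in the text):

```bash
git reset --hard                   # back to the last local commit, discarding the messy pull
git push origin --delete develop   # CLI equivalent of deleting the branch in the GitHub GUI
git push origin develop            # recreate the remote branch from local
```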