Note: in order to run gitk from a terminal, you need a desktop GUI environment (for example, via VNC).

gitk is a graphical history viewer: think of it as a powerful GUI shell over git log and git grep. This is the tool to use when you are trying to find something that happened in the past, or to visualize your project's history.

Install gitk

For CentOS and RedHat:

yum install -y gitk

Usage

gitk is easy to invoke from the command line: just cd into your target git repository and type:

gitk

Then a dedicated GUI window will be launched for you.

You can get a brief usage introduction from man gitk. For example, if I want to see the commit history of hello.sh:

gitk <path to file>/hello.sh

Show all branches:

gitk --all

Resources

Use gitk to understand git

When you run git pull origin master to bring your local branch up to date, and there are files that need to be merged, you will be prompted to confirm the merge and create a commit. This is bad for automation, so the prompt needs to be muted.

Note that this method fits this situation, but I am not sure it is a general approach, because git pull has several syntax forms.

Actually, git pull does a git fetch followed by a git merge, so we can separate it into two steps:

git pull origin master

changes to:

git fetch origin master
git merge FETCH_HEAD --no-edit

Note that the --no-edit option can be used to accept the auto-generated merge message (outside of automation this is generally discouraged).
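The fetch-then-merge flow can be reproduced end to end in a throwaway repository; this is only a sketch, and all paths, file names, and identities below are invented for the demo:

```shell
# Build a tiny upstream repo, clone it, add an upstream commit,
# then pull it into the clone without any editor prompt.
tmp=$(mktemp -d)
git init -q "$tmp/origin"
cd "$tmp/origin"
git config user.email demo@example.com
git config user.name demo
echo a > a.txt && git add a.txt && git commit -qm "first"
git clone -q "$tmp/origin" "$tmp/clone"
echo b > b.txt && git add b.txt && git commit -qm "second"   # new upstream commit
cd "$tmp/clone"
git config user.email demo@example.com
git config user.name demo
git fetch -q origin                  # step 1: fetch, which updates FETCH_HEAD
git merge FETCH_HEAD --no-edit       # step 2: merge without opening an editor
ls                                   # the clone now contains a.txt and b.txt
```

The merge completes without ever opening an editor, which is exactly what an automation script needs.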

I got a chance to learn something about SELinux (Security-Enhanced Linux) through an online training course from O'Reilly.

The Linux operating system was never designed with overall security in mind, and that’s exactly where SELinux comes in. Using SELinux adds 21st century security to the Linux operating system. It is key to providing access control and is also an important topic in the Red Hat RHCSA, CompTIA Linux+ and Linux Foundation LFCS exams.

The course is Security-Enhanced Linux; I am using a CentOS machine in this training.

SELinux implements mandatory access control: all syscalls are denied by default unless specifically enabled.

  • All objects (files, ports, processes) are given a security label (the context)
  • A context consists of a user, a role, and a type
  • The type part is the most important
  • The SELinux policy contains rules showing which source context has access to which target context

To check whether SELinux is disabled, permissive, or enforcing, see Enable SELinux.
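To my knowledge, the current mode can be queried with the getenforce and sestatus utilities (they ship with the SELinux userland); the snippet below is guarded so it is safe on hosts without SELinux tools:

```shell
# Print the current SELinux mode; fall back gracefully where the tools are absent
if command -v getenforce >/dev/null 2>&1; then
  getenforce   # prints Enforcing, Permissive, or Disabled
  sestatus     # more detail: mode, policy name, SELinux root directory
else
  echo "SELinux tools not installed"
fi
# setenforce 0 / setenforce 1 toggles permissive/enforcing at runtime;
# the persistent mode is SELINUX=... in /etc/selinux/config
```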

The -Z flag is the magic that shows SELinux information:

ls -Z /boot
netstat -Ztunlp
ps auxZ

I watched until 22:00 and did not manage to finish 😂. Ah well, this topic is of little use to me right now, and rather obscure. Still, this setting does get called out from time to time: disabled or permissive.

I recently read a book called <<Docker进阶与实战>>. It is not an introduction to the basics; instead, building on what you already know and use, it explains the principles and technical details behind it. Many phenomena I had noticed in daily work are explained there, so the key points are well worth writing down.

Chapter 3: Images

This chapter introduces the Docker image, which is essentially a read-only template for starting containers: the rootfs that a container needs at startup.

An image is written as:

dockerhub-web-address/namespace/repository:tag
  • namespace: separates users or organizations; sometimes it is not used
  • repository: similar to a Git repository; one repository holds many images
  • tag: distinguishes different versions of the same image
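The naming scheme can be illustrated with plain shell string operations; this is a toy illustration, not Docker's actual reference parser, and the image reference below is made up:

```shell
# Split an image reference of the form registry/namespace/repository:tag
ref="registry.example.com/demo1/busybox:v1"
registry=${ref%%/*}      # registry.example.com  (everything before the first slash)
rest=${ref#*/}           # demo1/busybox:v1
tag=${rest##*:}          # v1                    (after the last colon)
repo=${rest%:*}          # demo1/busybox         (namespace/repository)
echo "$registry | $repo | $tag"   # registry.example.com | demo1/busybox | v1
```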

A layer is something like a git commit. The image ID is the ID of the topmost layer.

Docker open-sourced the image-storage part of the code as the docker registry. In my early days with Docker I never quite understood the difference between registry and repository: the two words look alike, a beginner can easily mix them up, and their meanings are similar too (one is an archive room, the other a warehouse; both hold things).

Generally speaking, a docker registry needs basic authentication added in front of it via nginx to count as a proper, secure private image repository, though sometimes I have not bothered to do so.

Images that have been downloaded locally are stored under /var/lib/docker by default.

cd /var/lib/docker/image/devicemapper
ls -ltr

total 4
drwx------ 4 root root 37 May 8 15:21 imagedb
drwx------ 5 root root 45 May 8 15:22 layerdb
drwx------ 4 root root 58 May 8 15:29 distribution
-rw------- 1 root root 1269 Jun 17 09:21 repositories.json

Using Docker Images

A dangling image has no name and tag (shown as <none>); docker commit can sometimes generate dangling images. You can use a filter to show them:

docker images --filter "dangling=true"

Show only image IDs:

docker images -q

Remove all dangling images (newer Docker versions can also do this with docker image prune):

## without {} in xargs, the input arguments are appended
## at the end: docker rmi <id> <id> ...
docker images --filter "dangling=true" -q | xargs docker rmi

There is a tool, dockviz, that can do image analysis and display an image's layer hierarchy graphically.

docker load imports images exported by docker save. There is also docker import, which imports an archive containing a root filesystem and turns it into an image; docker import is commonly used to build Docker base images.

docker save -o busybox.tar busybox
docker load -i busybox.tar

docker commit generates images incrementally and is rather inefficient; it is generally used for early-stage testing (for example, non-root development at the time). Once the steps are finalized, use docker build.

Image Organization

You can use these two commands to peek at an image's structure and metadata:

docker history <image>
docker inspect <image>

More on Images

Docker introduced the union mount technique, which makes image layering possible. The evolution path:

unionfs -> aufs -> overlayfs

Copy-on-write (COW) is a big reason Docker images are so powerful. It is also widely used in operating systems, for example in fork: when a parent process forks a child, memory is not actually allocated for the child but shared, and only when either side modifies the shared memory does a page fault trigger the real allocation. This speeds up child creation and reduces memory consumption.

Union filesystems are the foundation for copy-on-write. Ubuntu ships aufs, while Red Hat and SUSE use the devicemapper scheme (that is what you see under /var/lib/docker/image). As Docker storage drivers they differ significantly in both on-disk layout and performance, so choose according to your situation.

Chapter 4: Advanced Repository Topics

API access to a Docker registry mainly transfers image layer blobs and manifests; layer data is stored in the registry in binary form. The chapter mainly walks through the pull and push processes over the API; each is divided into quite a few steps.

For listing and deleting images, you can refer to my blog post <<Docker Registry API>>.

For authentication, the Docker Engine, the Registry, and an Auth Server work together. The Auth Server is deployed and maintained by the registry's developers, and the Registry fully trusts the Auth Server.

+--------------+                     +------------------------+
|              |                     |                        |
|   Registry   |                     | Authorization Service  |
|              |                     |                        |
+--+---+---+---+                     +------------+---+-------+
   ^   |   ^                                      ^   |
 1 | 2 | 5 |                                    3 |   | 4
   |   v   |                                      |   v
   |  +----+------------+                         |
   +--+  Docker Daemon  +-------------------------+
      |                 +<------------------------+
      +--------+--------+
               ^
               |
     +---------+--------------------+
     |        Docker Client         |
     |    $ docker pull busybox     |
     +------------------------------+
  1. The Docker Engine tries to attach an auth token to the HTTP request; if there is none, the daemon tries to fetch or refresh a token.
  2. If the request has not been authenticated and carries no token, the Registry returns a 401 Unauthorized status.
  3. The user takes the information returned by the Registry, together with the credentials, to the Auth Server to request a token. (How exactly is this done? Where do the credentials live?)
  4. The Auth Server keeps usernames and passwords in its backend accounts; after receiving the request, it returns a token carrying the authorization info to the user.
  5. The user accesses the Registry again with the token; the HEADER contains:
    Authorization: Bearer <token content>
  6. The Registry validates the token and, if it passes, starts serving.
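The token dance above can be sketched with curl. The auth and registry hostnames below are made up, the live requests are left as comments, and only the token extraction from the JSON response is run offline against a canned payload:

```shell
# Steps 3/4: ask the auth server for a pull-scoped token (hypothetical hosts):
#   resp=$(curl -s -u demo:demo \
#     "https://auth.example.com/token?service=registry&scope=repository:busybox:pull")
# Step 5: replay the registry request with the bearer token:
#   curl -H "Authorization: Bearer $TOKEN" \
#     https://registry.example.com/v2/busybox/manifests/latest
resp='{"token":"abc123","expires_in":300}'                  # canned auth response
TOKEN=$(printf '%s' "$resp" | sed -E 's/.*"token":"([^"]+)".*/\1/')
echo "Authorization: Bearer $TOKEN"                         # the header sent in step 5
```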

Building a Private Registry

In general, we can simply run a private docker registry:

docker run -d \
--hostname localhost \
--name registry-v2 \
-v /opt/data:/var/lib/registry \
-p 5000:5000 \
registry:2.0

Here the local directory /opt/data is mounted to the container's image storage directory /var/lib/registry, which makes it easy to inspect and manage image data (the DataStage installer does the same). At this point the registry is insecure: anyone who can reach port 5000 on this host can push and pull images.

We need to add an HTTPS reverse proxy in front of it, implemented here with Nginx: the proxy server accepts HTTPS requests, forwards them to the registry server on the internal network, and returns the registry's responses to the user.

You can refer to my blog post on how to set up a Secure Docker Registry.

Chapter 5: Docker Networking

OpenShift version: 3.10

I had some doubts about the Security Context Constraint (SCC) in OpenShift. For example, I granted the privileged SCC to a service account, but some containers were still running as a non-root user.

First, what is an SCC used for? It controls the actions a pod can perform and what it has the ability to access; it is also very useful for managing access to persistent storage.

Prerequisite

Spin up a fresh OpenShift Enterprise cluster with version:

openshift v3.9.31
kubernetes v1.9.1+a0ce1bc657

Create a regular user, demo1:

htpasswd -b /etc/origin/master/htpasswd demo1 demo1

After logging in as demo1, its records exist in the cluster; if you run as the system:admin user:

oc get user
oc get identity

You will get demo1 information.

Fetch the integrated docker registry address and port as the system:admin user:

oc get svc -n default | grep -E "^docker-registry"

docker-registry ClusterIP 172.30.159.11 <none> 5000/TCP 1h

Experiment

This experiment will show you:

  1. How to enable pulling image from other project.
  2. How to run container as root user.

Start as demo1: log in with oc login -u demo1 and create 2 projects:

oc new-project demo1-proj ## this one is for deploying app
oc new-project demo1-ds ## this one is for storing imagestream

Pull busybox and update entrypoint to tail /dev/null:

docker pull busybox
docker run -d \
--name mybb \
--entrypoint=/bin/sh \
busybox \
-c 'tail -f /dev/null'

Then commit:

docker commit <container id> busybox

Then use docker tag to add the docker registry address prefix:

docker tag docker.io/busybox 172.30.159.11:5000/demo1-ds/busybox:v1

Log in to the integrated docker registry and push:

docker login -u openshift -p `oc whoami -t` 172.30.159.11:5000
docker push 172.30.159.11:5000/demo1-ds/busybox:v1

Go back to the demo1-proj project with oc project demo1-proj and write a simple deployment YAML, bb-deploy.yml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-deployment
  labels:
    app: bb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bb
  template:
    metadata:
      labels:
        app: bb
    spec:
      containers:
      - name: bb
        image: 172.30.159.11:5000/demo1-ds/busybox:v1

Enable Pull from Other Projects

If you now run:

oc apply -f bb-deploy.yml

it will fail to pull the image from demo1-ds (visible via describe pod): we deploy objects in project demo1-proj, which has no permission to pull images from another project. Let's enable it:

oc policy add-role-to-user \
system:image-puller system:serviceaccount:demo1-proj:default \
--namespace=demo1-ds

Note that if you run this several times, it will create several duplicate system:image-puller bindings.

Then, if you check the rolebindings in demo1-ds, you will see a new binding, system:image-puller, with the service account demo1-proj/default:

oc get rolebindings -n demo1-ds

NAME ROLE USERS GROUPS SERVICE ACCOUNTS SUBJECTS
admin /admin demo1
system:deployers /system:deployer deployer
system:image-builders /system:image-builder builder
system:image-puller /system:image-puller demo1-proj/default
system:image-pullers /system:image-puller system:serviceaccounts:demo1-ds

OK, then we can deploy the busybox in the demo1-proj project:

NAME                            READY     STATUS    RESTARTS   AGE
bb-deployment-78bdb8c4f-lzqfj 1/1 Running 0 6s

Next, let's check the container's UID:

kubectl exec -it bb-deployment-78bdb8c4f-lzqfj sh

/ $ id
uid=1000130000 gid=0(root) groups=1000130000

Set Security Context Constraint

Correct: by default, OpenShift does not spin up containers running as the root user, for security reasons. Actually this is governed by the SCC, so let's dig deeper.

These articles cover a lot of ground on SCCs and service accounts: Managing Security Context Constraints, Security Context Constraints (official), and Configuring Service Accounts.

You must have the cluster-admin privilege to manage SCCs (you can grant cluster-admin to a regular user). There are 7 SCCs:

oc get scc

NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP PRIORITY READONLYROOTFS VOLUMES
anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny 10 false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
hostaccess false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir hostPath persistentVolumeClaim projected secret]
hostmount-anyuid false [] MustRunAs RunAsAny RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir hostPath nfs persistentVolumeClaim projected secret]
hostnetwork false [] MustRunAs MustRunAsRange MustRunAs MustRunAs <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
nonroot false [] MustRunAs MustRunAsNonRoot RunAsAny RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]
privileged true [*] RunAsAny RunAsAny RunAsAny RunAsAny <none> false [*]
restricted false [] MustRunAs MustRunAsRange MustRunAs RunAsAny <none> false [configMap downwardAPI emptyDir persistentVolumeClaim projected secret]

By default, when a container or pod does not request a user ID under which it should run, the effective UID depends on the SCC that admits the pod. Because the restricted SCC is granted to all authenticated users by default, it is available to all users and service accounts and is used in most cases.

oc describe scc restricted

Name: restricted
Priority: <none>
Access:
Users: <none>
Groups: system:authenticated
Settings:
Allow Privileged: false
Default Add Capabilities: <none>
Required Drop Capabilities: KILL,MKNOD,SETUID,SETGID
Allowed Capabilities: <none>
Allowed Seccomp Profiles: <none>
Allowed Volume Types: configMap,downwardAPI,emptyDir,persistentVolumeClaim,projected,secret
Allowed Flexvolumes: <all>
Allow Host Network: false
Allow Host Ports: false
Allow Host PID: false
Allow Host IPC: false
Read Only Root Filesystem: false
Run As User Strategy: MustRunAsRange
UID: <none>
UID Range Min: <none>
UID Range Max: <none>
SELinux Context Strategy: MustRunAs
User: <none>
Role: <none>
Type: <none>
Level: <none>
FSGroup Strategy: MustRunAs
Ranges: <none>
Supplemental Groups Strategy: RunAsAny
Ranges: <none>

The restricted SCC uses the MustRunAsRange strategy for constraining and defaulting the possible values of the securityContext.runAsUser field. Because the SCC itself does not provide the range, the admission plug-in looks for the openshift.io/sa.scc.uid-range annotation on the current project to populate the range fields. In the end, the container gets a runAsUser equal to the first value of the range, which is hard to predict because every project has a different range.

oc describe project demo1-proj

Name: demo1-proj
Created: 3 hours ago
Labels: <none>
Annotations: openshift.io/description=
openshift.io/display-name=
openshift.io/requester=demo1
openshift.io/sa.scc.mcs=s0:c11,c10
openshift.io/sa.scc.supplemental-groups=1000130000/10000
openshift.io/sa.scc.uid-range=1000130000/10000
Display Name: <none>
Description: <none>
Status: Active
Node Selector: <none>
Quota: <none>
Resource limits: <none>

As you can see, openshift.io/sa.scc.uid-range starts from 1000130000, which is the UID of our busybox container.

SCCs are not granted directly to a project. Instead, you add a service account to an SCC and either specify the service account name on your pod or, when unspecified, run as the default service account.
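For completeness, a pod (or pod template) selects a non-default service account via spec.serviceAccountName; the fragment below is a hypothetical extension of the deployment above, not something run in this experiment:

```yaml
spec:
  template:
    spec:
      serviceAccountName: default   # replace with any account you added to an SCC
      containers:
      - name: bb
        image: 172.30.159.11:5000/demo1-ds/busybox:v1
```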

Add and Remove SCC

Add the service account default in project demo1-proj to the SCC privileged:

oc adm policy add-scc-to-user privileged system:serviceaccount:demo1-proj:default

scc "privileged" added to: ["system:serviceaccount:demo1-proj:default"]

To examine the result:

oc describe scc privileged

Name: privileged
Priority: <none>
Access:
Users: system:admin,system:serviceaccount:openshift-infra:build-controller,system:serviceaccount:management-infra:management-admin,system:serviceaccount:management-infra:inspector-admin,system:serviceaccount:glusterfs:default,system:serviceaccount:glusterfs:router,system:serviceaccount:glusterfs:heketi-storage-service-account,system:serviceaccount:demo1-proj:default
...

In the Users field, we now have system:serviceaccount:demo1-proj:default.

How to remove the SCC from a service account?

oc adm policy remove-scc-from-user privileged system:serviceaccount:demo1-proj:default

scc "privileged" removed from: ["system:serviceaccount:demo1-proj:default"]

Deploy Again

Then log in as demo1, go to demo1-proj, and deploy again:

oc apply -f bb-deploy.yml
oc exec -it bb-deployment-78bdb8c4f-rgj88 sh

/ $ id
uid=1000130000 gid=0(root) groups=1000130000

Why is the UID still 1000130000 when we have applied privileged? Because privileged is only a constraint: you must ensure that at least one of the pod's containers requests privileged mode in its security context.

So update the YAML file to add a securityContext, and deploy again:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: bb-deployment
  labels:
    app: bb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bb
  template:
    metadata:
      labels:
        app: bb
    spec:
      containers:
      - name: bb
        image: 172.30.159.11:5000/demo1-ds/busybox:v1
        securityContext:
          runAsUser: 0

Now if you check the UID, it’s 0:

oc rsh bb-deployment-58cb44b56b-zmdcw

/ # id
uid=0(root) gid=0(root) groups=10(wheel)

Note that oc rsh is the same as kubectl exec -it ... sh

Conclusion

Adding the privileged SCC to the service account is not enough; you also need to specify runAsUser: 0 in the YAML file.

Thinking back now, this could be done with cert-manager and Let's Encrypt, which would avoid configuring cert trust in the OS and would renew certificates automatically. Ah, I had no idea at the time!

This post is about configuring your own secure docker registry in the form of docker container, check this to set up a secured docker registry in K8s.

More about SSL please check my blog SSL Demystify. It contains the theory, workflow and practice.

Securing access to your docker images is paramount. The docker registry natively supports TLS and basic authentication, so let's set it up.

Generate Self-signed Certificate

See document from docker.

mkdir -p /root/certs

## generate domain.key and self-signed domain.crt
## I use -days 3650
openssl req \
-newkey rsa:4096 -nodes -x509 -sha256 \
-keyout certs/domain.key -out certs/domain.crt -days 3650 \
-subj "/C=US/ST=CA/L=San Jose/O=<Company Name>/OU=Org/CN=chengdol.registry.com"

Notice that CN=chengdol.registry.com must be the registry access URL, with no port number suffix.
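You can double-check what went into the certificate with openssl x509. The snippet below is self-contained: it first generates a throwaway key and cert (with a shortened, hypothetical subject mirroring the command above), then prints the subject and validity window:

```shell
dir=$(mktemp -d)
# Generate a throwaway self-signed cert (smaller key so it is quick)
openssl req -newkey rsa:2048 -nodes -x509 -sha256 \
  -keyout "$dir/domain.key" -out "$dir/domain.crt" -days 3650 \
  -subj "/C=US/ST=CA/O=Demo/CN=chengdol.registry.com" 2>/dev/null
# Inspect the subject (the CN must match the registry URL) and the validity dates
openssl x509 -in "$dir/domain.crt" -noout -subject -dates
```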

Parameter explanations, from here:

openssl req: 
The req command primarily creates and processes certificate requests in PKCS#10 format. It can additionally create self signed certificates for use as root CAs for example.
-newkey:
this option creates a new certificate request and a new private key.
rsa:nbits:
where nbits is the number of bits, generates an RSA key nbits in size.
-nodes:
if this option is specified then if a private key is created it will not be encrypted.
-x509:
this option outputs a self signed certificate instead of a certificate request. This is typically used to generate a test certificate or a self signed root CA .
-[digest]:
this specifies the message digest to sign the request with (such as -md5, -sha1, -sha256)
-keyout:
this gives the filename to write the newly created private key to.
-out:
this specifies the output filename to write to or standard output by default.
-days:
when the -x509 option is being used this specifies the number of days to certify the certificate for. The default is 30 days.
-subj:
replaces subject field of input request with specified data and outputs modified request. The arg must be formatted as /type0=value0/type1=value1/type2=..., characters may be escaped by \ (backslash), no spaces are skipped.

There are multiple ways to do the same thing; to build a self-signed certificate step by step, see OpenSSL Essentials: Working with SSL Certificates, Private Keys and CSRs.

Setup Secure Docker Registry

See the documentation from Docker.

We now have the certs folder with the crt and key created by openssl. Start the docker registry container using the TLS certificate:

docker run -d \
--restart=always \
--name registry \
-v /root/certs:/certs \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 443:443 \
registry:2

Here we override some environment variables to change the default configuration.

Also, following the instructions on the Docker site, instruct every docker daemon to trust that certificate. The way to do this depends on your OS; for Linux:

mkdir -p /etc/docker/certs.d/<docker registry domain>/
## copy domain.crt (generated by openssl) to this folder on every Docker host
cp /root/certs/domain.crt /etc/docker/certs.d/<docker registry domain>/

Note: the official Docker documentation names the folder <docker registry domain>:5000/, but if you configured port 443 this fails. From the Docker daemon logs I found that for port 443 the :5000 suffix is not needed. If you set up basic authentication on port 5000, however, it is.

Something odd also happened at the time: I found I could push even without this trust step. It turned out that an old setting in the docker daemon JSON file had marked it as an insecure registry, so the certificate was never checked at all.

If you skip this step, docker push will give this error:

Error response from daemon: Get https://chengdol.registry.com/v2/: x509: certificate signed by unknown authority

What if docker still complains about the certificate when using authentication? With authentication enabled, some versions of Docker also require you to trust the certificate at the OS level.

For Red Hat, do:

cp certs/domain.crt /etc/pki/ca-trust/source/anchors/myregistrydomain.com.crt
update-ca-trust

Now you can push and pull as below. There is no need to specify a port number; port 443 will be used:

docker pull ubuntu
docker tag ubuntu chengdol.registry.com/ubuntu:v1
docker push chengdol.registry.com/ubuntu:v1
docker pull chengdol.registry.com/ubuntu:v1

So far the secure configuration is done: the docker registry will now use HTTPS on port 443 to communicate with the docker client. If you want to set up basic authentication, see below.

Setup Basic Authentication

Warning: you cannot enable authentication that sends credentials as clear text. You must configure TLS first for authentication to work.

Use htpasswd to create the user info:

mkdir -p /root/auth
htpasswd -Bbn demo demo > /root/auth/htpasswd

Then we switch back to port 5000 (note that port 443 is not used here):

docker run -d \
-p 5000:5000 \
--restart=always \
--name registry \
-v /root/auth:/auth \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
-e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
-v /root/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2

Do the same trust step on every docker host: under the /etc/docker/certs.d/ directory, create a folder named <docker registry domain>:5000 and put domain.crt in it:

mkdir -p /etc/docker/certs.d/<docker registry domain>:5000/
## copy domain.crt (generated by openssl) to this folder on every Docker host
cp domain.crt /etc/docker/certs.d/<docker registry domain>:5000/

Then you need to log in first in order to push or pull:

docker login <docker registry domain>:5000 -u demo -p demo
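docker login simply stores base64(user:password) in ~/.docker/config.json, and the same Basic credentials work against the registry's HTTP API, e.g. the standard /v2/_catalog endpoint. The registry host below is a placeholder, so the live request is left as a comment and only the header construction is run:

```shell
# Equivalent of the credential that docker login stores and sends:
auth=$(printf 'demo:demo' | base64)
echo "Authorization: Basic $auth"    # Authorization: Basic ZGVtbzpkZW1v
# Live check against the catalog endpoint (placeholder host):
#   curl -u demo:demo https://<docker registry domain>:5000/v2/_catalog
```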

Conclusion

OK, now a secure docker registry container with basic authentication is up and running. You can push and pull after docker login.

From JFrog:

  • A Docker repository is a hosted collection of tagged images that, together, create the file system for a container
  • A Docker registry is a host that stores Docker repositories
  • An Artifactory repository is a hosted collection of Docker repositories, effectively, a Docker registry in every way, and one that you can access transparently with the Docker client.

https://docs.docker.com/registry/configuration/

If you change a setting for the running docker registry container, try:

docker restart <container id>

It is very tedious to type full commands when operating on a K8s cluster; this post summarizes the aliases I use in daily work.

Put these aliases into $HOME/.bashrc or $HOME/.zshrc, then source the file, or they will take effect the next time you log in.

Some quick commands used to create resources:

## create nodeport svc to expose access
## --type default is ClusterIP
## --target-port default is --port
kubectl expose pod <podname> [--type=NodePort] --port=80 [--target-port=80] --name=<svcname>

## run command in pod
kubectl exec -n <namespace> <podname> -- sh -c "commands"

## check log
kubectl logs -f <pod name>

## scale up/down
kubectl scale deploy <deploy name> --replicas=5

These are some aliases used to run test:

## exit will delete the pod automatically
## --restart=Never: create a pod instead of deployment
## praqma/network-multitool: network tools image
alias kbt='kubectl run testpod -it --rm --restart=Never --image=praqma/network-multitool -- /bin/sh'

Many of these are from my time at IBM; I am keeping them here as a memento.

## docker shortcut
alias di='docker images'
alias dp='docker ps -a'
alias dri='docker rmi -f'
alias drp='docker rm -f'

alias k='kubectl'
alias kbn='kubectl get nodes'
## can replace with your working namespace
## pods
alias kbp='kubectl get pods --all-namespaces'
## deployments
alias kbd='kubectl get deploy -n zen | grep -E "xmeta|services"'
## statefulsets
alias kbsts='kubectl get sts -n zen | grep -E "conductor|compute"'
## services
alias kbs='kubectl get svc -n test-1'


## get into pods
kbl()
{
pod=$1
## get namepace
ns=$(kubectl get pod --all-namespaces | grep "$pod" | awk '{print $1}')
kubectl exec -it $pod sh -n $ns
}
### for fixed pod name
alias gocond='kubectl exec -it is-en-conductor-0 bash -n test-1'
alias gocomp0='kubectl exec -it is-engine-compute-0 bash -n test-1'
### for dynamic pod name
goxmeta()
{
isxmetadockerpod=$(kubectl get pods --field-selector=status.phase=Running -n test-1 | grep is-xmetadocker-pod | awk '{print $1}')
kubectl exec -it ${isxmetadockerpod} bash -n test-1
}

gosvc()
{
isservicesdockerpod=$(kubectl get pods --field-selector=status.phase=Running -n test-1 | grep is-servicesdocker-pod | awk '{print $1}')
kubectl exec -it ${isservicesdockerpod} bash -n test-1
}

## clean pods
alias rmxmeta='kubectl delete svc is-xmetadocker -n test-1; kubectl delete deploy is-xmetadocker-pod -n test-1; rm -rf /mnt/IIS_test-1/Repository/*'
alias rmsvc='kubectl delete svc is-servicesdocker -n test-1; kubectl delete deploy is-servicesdocker-pod -n test-1; rm -rf /mnt/IIS_test-1/Services/*'
alias rmcond='kubectl delete svc is-en-conductor-0 -n test-1; kubectl delete svc en-cond -n test-1; kubectl delete statefulset is-en-conductor -n test-1; rm -rf /mnt/IIS_test-1/Engine/test-1/is-en-conductor-0/'
alias rmcomp='kubectl delete svc conductor-0 -n test-1; kubectl delete statefulset is-engine-compute -n test-1; rm -rf /mnt/IIS_test-1/Engine/test-1/is-engine-compute*'

Sometimes after I git pull from the master branch, running git status shows some modified files (for me, mainly xxx.dsx) that need to be added and committed. That's strange.

It seems to be a formatting issue that can be solved by editing the top-level .gitattributes in your local repository. Open .gitattributes and comment out the rules, for example:

#* text=auto

Now if I run git status again, the clutter is gone and the git output only shows:

modified:   .gitattributes

Then revert .gitattributes back to its original content and run git status again; the branch will be clean.

Actually, a few commands can eliminate the manual editing:

sed -i -e 's/^\(\*[ ][ ]*text.*\)/#\1/' .gitattributes
git status
sed -i -e 's/^#\(\*[ ][ ]*text.*\)/\1/' .gitattributes
git status