//TODO:

  1. python socket module demos: https://pymotw.com/2/socket/uds.html

I have been working on proxy-related things recently and revisited a lot of the fundamentals; this post sorts out the categories and concepts around sockets.

Socket

https://en.wikipedia.org/wiki/Socket

  • Network socket, an end-point in a bidirectional communication across a network or the Internet
  • Unix domain socket, an end-point in local bidirectional inter-process communication
  • socket(), a system call defined by the Berkeley sockets API

Unix socket vs Network socket

Unix Domain Socket

https://en.wikipedia.org/wiki/Unix_domain_socket A Unix domain socket or IPC socket (inter-process communication socket) is a data communications endpoint for exchanging data between processes executing on the same host operating system.

IPC itself is a general concept with multiple concrete implementations; Unix domain sockets are one of them.

The API for Unix domain sockets is similar to that of an Internet socket, but rather than using an underlying network protocol, all communication occurs entirely within the operating system kernel. Unix domain sockets may use the file system as their address name space. (Some operating systems, like Linux, offer additional namespaces.) Processes reference Unix domain sockets as file system inodes, so two processes can communicate by opening the same socket.

Valid socket types in the UNIX domain are:

  • SOCK_STREAM (compare to TCP) – for a stream-oriented socket
  • SOCK_DGRAM (compare to UDP) – for a datagram-oriented socket that preserves message boundaries (as on most UNIX implementations, UNIX domain datagram sockets are always reliable and don’t reorder datagrams)
  • SOCK_SEQPACKET (compare to SCTP) – for a sequenced-packet socket that is connection-oriented, preserves message boundaries, and delivers messages in the order that they were sent
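
In the spirit of the pymotw demos linked at the top, here is a minimal sketch of a SOCK_STREAM server in the UNIX domain using Python's socket module (the path /tmp/demo.sock is an arbitrary choice):

import os
import socket

SOCK_PATH = "/tmp/demo.sock"  # hypothetical filesystem address

# remove a stale socket file left over from a previous run
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)   # the address is a filesystem path
server.listen(1)

conn, _ = server.accept()   # blocks until a client connects
data = conn.recv(1024)      # read up to 1024 bytes
conn.sendall(data)          # echo it back
conn.close()
server.close()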

Network Socket

Read the wiki carefully: a network socket is an endpoint abstraction, and there are many concrete implementations of it. https://en.wikipedia.org/wiki/Network_socket Also referred to as an Internet socket.

A network socket is a software structure within a network node of a computer network that serves as an endpoint for sending and receiving data across the network. The structure and properties of a socket are defined by an application programming interface (API) for the networking architecture. Sockets are created only during the lifetime of a process of an application running in the node.

A socket address is composed of:

  • protocol type
  • IP address
  • port number

On Unix-like operating systems and Microsoft Windows, the command-line tools netstat or ss are used to list established sockets and related information.
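
For example, on a typical Linux box (flags may vary by distribution):

## list listening TCP sockets and the owning process (Linux)
ss -tlnp

## net-tools equivalent
netstat -tlnp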

Several types of Internet socket are available:

  • Datagram sockets: connectionless sockets, which use UDP.
  • Stream sockets: connection-oriented sockets, which use TCP.
  • Raw sockets: operate directly on raw IP packets, with no transport-layer processing.
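
The same Python API as the Unix domain sketch above, but with an (IP address, port) socket address; a minimal loopback example (127.0.0.1:8000 is arbitrary):

import socket

# a stream (TCP) socket bound to an (IP address, port) pair
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 8000))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 8000))
client.sendall(b"ping")

conn, addr = server.accept()   # addr is the client's (IP, port)
print(conn.recv(4))            # b'ping'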

Berkeley Sockets

https://en.wikipedia.org/wiki/Berkeley_sockets Berkeley sockets is an application programming interface (API) for Internet sockets and Unix domain sockets, used for inter-process communication (IPC). It is the unified abstraction over the socket types above.

The term POSIX sockets is essentially synonymous with Berkeley sockets, but they are also known as BSD sockets.

Demo

Git repo, creating basic and secure websocket servers and clients via the Python websockets module: https://github.com/chengdol/websocket-demo

Introduction

Difference between socket and websocket?

If you already understand HTTP, the HTTP CONNECT method, and proxies, then the Wikipedia description is clear enough. WebSocket Wiki WebSocket is a computer communications protocol, providing full-duplex communication channels over a single TCP connection.

WebSocket is distinct from HTTP (which is half-duplex: clients issue requests, servers respond). Both protocols are located at layer 7 in the OSI model and depend on TCP at layer 4. Although they are different, RFC 6455 states that WebSocket “is designed to work over HTTP ports 443 and 80 as well as to support HTTP proxies and intermediaries,” thus making it compatible with the HTTP protocol. To achieve compatibility, the WebSocket handshake uses the HTTP/1.1 Upgrade header to change from the HTTP protocol to the WebSocket protocol.

Some proxies support WebSocket, for example Envoy; others may not.

The WebSocket handshake, using ws:// or wss:// (WebSocket Secure):

// client request
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: x3JJHMbDL1EzLkh9GBhXDw==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
Origin: http://example.com

// server response
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: HSmrc0sMlYUkAGmm5OPpG2HaGWk=
Sec-WebSocket-Protocol: chat

WebSockets use cases:

  • chatting
  • live feed
  • multiplayer gaming
  • showing client progress/logging

Pros:

  • full-duplex (no polling)
  • HTTP compatible
  • firewall friendly

Cons:

  • many proxies and transparent proxies don’t support it yet
  • L7 load balancing is challenging (idle timeouts)
  • stateful, difficult to scale horizontally

Do you have to use WebSockets? (alternatives) It is important to note that WebSocket is not the only HTTP-based realtime solution; there are other ways to achieve realtime behavior, such as EventSource (server-sent events) and long polling.

As seen in the websocket demo, the Python websockets module takes advantage of asyncio, which provides an elegant coroutine-based API.
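
For a flavor of that API, a minimal echo-server sketch with the websockets module (recent versions pass just the connection to the handler; port 8765 is arbitrary):

import asyncio
import websockets  # third-party: pip install websockets

async def echo(websocket):
    # echo every received message back on the same connection
    async for message in websocket:
        await websocket.send(message)

async def main():
    async with websockets.serve(echo, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())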

So what is a coroutine (协程)? This talk is worth rewatching whenever you forget: it covers the pros and cons of multi-processing, multi-threading, and coroutines, then starts from generator + yield and works up to asyncio: Python perspective of view on coroutine. Demo code on GitHub: https://github.com/jreese/pycon

A coroutine is a variant of a function that enables concurrency via cooperative multitasking: a task yields control when it is waiting for external resources (that is, it switches to another task only once it needs something external, which is why coroutines suit IO-bound tasks). All tasks can run in one thread in one process (essentially simulating multi-threading inside a single thread; note that the Python threading module is effectively single-threaded too, because of the GIL), which can beat multi-threading and multi-processing in some situations (switching between kernel mode and user mode costs time and resources, whereas coroutine switches are controlled by the user program and never trap into the kernel).

To analyze function bytecode, the Python dis module can help, for example:

from dis import dis

def func():
    pass

dis(func)

The video walks through a naive Python implementation using the yield keyword: yield not only produces values, it can also receive external values via the generator's send() method. When the whole yield process finishes, a StopIteration exception is raised. This implements the basic idea behind asyncio.
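
A tiny sketch of that idea, a running-average coroutine driven by send():

def averager():
    # a generator-based coroutine: receives numbers, yields the running average
    total, count = 0.0, 0
    while True:
        value = yield (total / count if count else 0.0)
        total += value
        count += 1

gen = averager()
next(gen)            # "prime" the coroutine: run it to the first yield
print(gen.send(10))  # 10.0
print(gen.send(20))  # 15.0
gen.close()          # ends the coroutine; a return would raise StopIteration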

For example, asyncio with aiohttp:

import asyncio
## aiohttp: an HTTP client/server library that supports async IO
from aiohttp import request

URLS = [
    "https://2019.northbaypython.org",
    "https://duckduckgo.com",
    "https://jreese.sh",
    "https://news.ycombinator.com",
    "https://python.org",
]

# Coroutines with aiohttp
async def fetch(url: str) -> str:
    async with request("GET", url) as r:
        return await r.text("utf-8")


async def main():
    coros = [fetch(url) for url in URLS]
    ## gather will run the tasks concurrently
    ## *coros: unpacking the awaitable objects
    results = await asyncio.gather(*coros)
    for result in results:
        print(f"{result[:20]!r}")

if __name__ == "__main__":
    asyncio.run(main())

A few days ago I received the vehicle registration renewal notice; this time a smog check was required (for older non-electric vehicles, generally once every 2 years). I picked a nearby, well-rated smog check station on Google Maps. Note that smog checks in the US are run by private contractors, so you can get a rough sense of pricing from the reviews; also remember to bring the DMV notice to the station.

It cost $50 in total. After the smog check you don't need to do anything else except pay the renewal fee online, but don't pay immediately after the check, because the DMV has not yet received your smog check record (if you go to the payment page you will see a warning saying as much). The warning usually disappears after about a day, which means the DMV has received the record, and then you can pay.

Aside: last time my sticker was mailed to my previous address, so when renewing a driver license after moving, remember to also have the renewal mailing address updated.

Also, don't forget to renew the car sticker every year, otherwise you will be fined for being late. You can pay the DMV online and the sticker arrives by mail in a week or two.

To verify the Envoy CONNECT feature, I plan to set up my own server with SSL/TLS for testing:

client <--------------> envoy proxy <--------------> Apache server
                        (docker container)           (certbot)

Problems encountered so far:

  1. configure a domain name for localhost: https://www.youtube.com/watch?v=gBfZdJFxjew
  2. set up HTTPS on localhost: certbot
  3. set up user/password authentication in Apache: https://www.youtube.com/watch?v=HVt2E_Ny4po

Tools involved:

  1. httpd apache
  2. certbot
  3. vagrant
  4. docker container
  5. envoy

Other links:

  1. set virtualhost domain: https://httpd.apache.org/docs/2.4/vhosts/examples.html

  2. apache config file in centos: https://www.liquidweb.com/kb/apache-configuration-centos/

Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies. It is fault-tolerant and overlaps heavily with curl; both are widely used.

wget -h and the man page are informative.

Note that the set of wget options supported may be incomplete on different Linux distributions.

Continue download

## assume the download was interrupted, leaving an incomplete file in the current folder
## -c, --continue: resume getting a partially-downloaded file
wget <url> -c

Authz

If you go through a proxy, use http_proxy for HTTP connections and https_proxy for HTTPS connections. Wget basic authentication challenge:

## http uses the http_proxy variable

## other options:
## --no-check-certificate: don't check the server certificate against the available certificate authorities
## --connect-timeout: seconds; TCP connections that take longer to establish will be aborted
## -e, --execute: execute a command, e.g. to set an environment-style variable
## --auth-no-challenge: send auth without waiting for the server's HTTP authentication challenge; not recommended
wget --no-check-certificate --connect-timeout 10 -e http_proxy="envoy:10000" http://www.httpbin.org/ip --auth-no-challenge --user xxx --password xxx

## https uses the https_proxy variable
wget --no-check-certificate --connect-timeout 10 -e https_proxy="envoy:10000" https://www.httpbin.org/ip --auth-no-challenge --user xxx --password xxx

Recently, to transfer files inside a LAN, I set up a very simple Python HTTP web server to serve files. Other software can do something similar, even the scp command, but with this simple HTTP web server the file structure is visible at a glance and downloading is very convenient.

Later I can add a basic auth service, or even SSL/TLS support, to use it as a test tool for other services.

How to:

  1. Use virtualenvwrapper to set up a python3 project and its environment.
  2. After activating the environment, write a shell script that prints the serving address and port.

For example, on a Mac, use the following script:

#!/bin/bash

# Use the output in a browser on another machine
echo "$(ifconfig | grep -A 10 ^en0 | grep inet | grep -v inet6 | cut -d" " -f2)":8000

# Launch the server
# port number: 8000
# --directory: older Python versions may not support this option
python3 -m http.server 8000 --bind 0.0.0.0 --directory <absolute path to share folder>

Basic Auth

https://gist.github.com/fxsjy/5465353 You can package it in a container or run it in a virtualenv.

# Works with python2
# port: 8080
# auth: user:password
python2 main.py 8080 chengdol:123456

HTTPS Basic Auth

https://github.com/tianhuil/SimpleHTTPAuthServer You can pip-install it in a python2 virtualenv or build a docker container.

Modify it to build an image with the HTTPS auth server, for example:

# works on python2
FROM python:2

WORKDIR /usr/src/app

USER root
# pip install
RUN pip install SimpleHTTPAuthServer
# to set up a self-signed certificate, pre-generate the key and cert
RUN mkdir -p /root/.ssh
# from the source code, it uses the .ssh folder to store the pem files
RUN openssl req \
    -newkey rsa:2048 -new -nodes -x509 -days 3650 \
    -keyout /root/.ssh/key.pem -out /root/.ssh/cert.pem \
    -subj "/C=US/ST=CA/L=San Jose/O=GOOG/OU=Org/CN=localhost"

# copy other files to WORKDIR
COPY file.txt .

EXPOSE 8080
# start with https
CMD ["python", "-m", "SimpleHTTPAuthServer", "8080", "chengdol:123456", "--https"]

Then go to Firefox and open https://localhost:8080.

Docker Compose vs Docker Swarm: the difference lies mostly in the backend; docker-compose deploys containers on a single Docker host, while Docker Swarm deploys them across multiple nodes.

In retrospect, when my teammates set up DataStage locally, we should have used docker compose too; it is especially good for web development:

  • accelerate onboarding
  • eliminate app conflicts
  • environment consistency
  • ship software faster

Install

https://docs.docker.com/compose/install/ On Mac, docker-compose is self-contained within Docker Desktop:

## check location
which docker-compose
## show version
docker-compose version

For Linux, first install Docker Engine, then download docker-compose into a directory on the executable path:

## 1.26.2 is current stable version
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

Or download the binary directly from the GitHub releases page: https://github.com/docker/compose/releases

To uninstall docker-compose, remove the binary:

sudo rm -f /usr/local/bin/docker-compose

Getting Started

https://docs.docker.com/compose/gettingstarted/ If you know how to write YAML files for Kubernetes, it is quick and easy to understand docker-compose.yml.

Some basic commands:

## build or rebuild service images
## if no service name is specified, builds all images
docker-compose build [--no-cache] [service name]

## run services
## will build images if not built yet
## -d: detach
## --no-deps: just bring up the specified service
docker-compose up [-d | --no-deps] [service name]

## check logs
docker-compose logs [-f] <service name>

## see what is running
docker-compose ps

## one-off command in a container
docker-compose run <service name> <commands>

## start/stop services, will not remove containers
docker-compose start
docker-compose stop

## remove stopped containers
docker-compose rm [service name]

## shutdown, similar to docker rm -f ...
## -v, --volumes: remove the mounted data volumes
## --rmi all: remove all images
docker-compose down [-v] [--rmi all]

Command Completion

https://docs.docker.com/compose/completion/#install-command-completion For oh-my-zsh, add docker and docker-compose to the plugins list in ~/.zshrc:

plugins=(... docker docker-compose)

Config File

When you encounter an unfamiliar directive, look it up in the Compose file reference (see version 3 in the left sidebar).

Define services relationship in docker-compose.yml file.

## docker-compose.yml can read env variables from the current running environment
## use such variables to control which files are loaded, e.g. development, staging, production
## export APP_ENV=development

## this example assumes the folders and files below (dockerfiles, etc.) already exist

## docker-compose file format version
version: "3.7"
services:
  web:
    container_name: web
    build:
      ## relative path is based on the docker-compose.yml directory
      ## can be a url
      context: .
      ## args used in the dockerfile, must be declared with ARG in the dockerfile
      ## only visible during the build process!
      args:
        - buildno=1
      ## relative path to context
      dockerfile: ./web/web.dockerfile
    ## specify the built image and tag
    ## if no image is given here, docker-compose uses its own naming convention
    image: webapp:v1

    ## dependencies
    ## docker-compose will start mongo and redis first
    depends_on:
      - mongo
      - redis

    ## host:container
    ## external access uses the host port
    ## containers reaching each other do not need ports exposed here
    ports:
      - "80:80"
      - "443:443"
      - "9090-9100:8080-8100"

    ## add env vars from a file
    env_file:
      - web.env/app.${APP_ENV}.env
      - /opt/runtime_opts.env

    ## other form of env vars
    environment:
      RACK_ENV: development
      ## booleans must be quoted
      SHOW: 'true'
      ## value is taken from the current running environment
      SESSION_SECRET:

    ## short syntax
    ## https://docs.docker.com/compose/compose-file/#short-syntax-3
    volumes:
      ## host path, relative to docker-compose.yml
      - "./web:/opt/web"
      ## named volume
      - "mydata:/opt/data"

    ## overwrite WORKDIR in Dockerfile
    working_dir: /opt/web

    ## containers in the same network are reachable by each other
    networks:
      - nodeapp-network

  mongo:
    container_name: mongo
    build:
      context: .
      dockerfile: mongo/mongo-dockerfile
    ports:
      - "27017"
    env_file:
      - mongo.env/mongo.${APP_ENV}.env
    networks:
      nodeapp-network:
        ## can be reached as `mongo` or `db` inside nodeapp-network
        aliases:
          - db

  redis:
    container_name: redis
    ## pull image from elsewhere
    image: redis:latest

    ## long syntax
    ## https://docs.docker.com/compose/compose-file/#long-syntax-3
    volumes:
      - type: volume
        source: dbdata
        target: /data
        volume:
          nocopy: true

    ports:
      - "6379"
    env_file:
      - redis.env/redis.${APP_ENV}.env
    networks:
      - nodeapp-network

networks:
  nodeapp-network:
    ## docker defaults to the bridge driver on a single host
    driver: bridge

## named volumes
volumes:
  ## default to the 'local' driver
  mydata:
  dbdata:

More about environment variables: by default, the docker-compose command looks for a file named .env in the directory where you run the command. By passing the file as an argument, you can store it anywhere and name it appropriately, for example, .env.ci, .env.dev, .env.prod. Passing the file path is done using the --env-file option:

docker-compose --env-file ./config/.env.dev up 

A .env file contains key=value pairs.
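
For example, a hypothetical .env.dev could contain:

# values consumed by docker-compose.yml via ${APP_ENV} etc.
APP_ENV=development
TAG=v1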

Storage

An image is a set of read-only layers (shared), whereas a container adds its own thin read-write layer, which is ephemeral.

Docker's storage documentation: https://docs.docker.com/storage/ The main goal here is to understand the differences among volumes, bind mounts, and tmpfs, and when to use each:

If the docker host's filesystem differs from the one the container uses, how do bind mounts handle it? And the users and groups inside and outside differ as well: if a new file is created inside the container, how does the bind mount map it back onto the host?

Docker Compose volumes: https://docs.docker.com/compose/compose-file/#volumes This covers how to mount host paths or named volumes.

Volumes

https://docs.docker.com/storage/volumes/ Similar to a K8s volume claim; generally the preferred option.

Stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.

A given volume can be mounted into multiple containers simultaneously. When no running container is using a volume, the volume is still available to Docker and is not removed automatically.

docker volume create <volume name>
docker volume ls
docker volume rm <volume name>
docker volume inspect
## remove unused volumes
docker volume prune

When you mount a volume, it may be named or anonymous.

Volumes also support the use of volume drivers, which allow you to store your data on remote hosts or cloud providers, among other possibilities.

If you need to specify volume driver options, you must use --mount; the -v form is rather limited. The example below is deliberately simple; --mount actually accepts many configuration options:

## docker will create the volume myvol2 automatically if it does not exist
docker run -d \
  --name devtest \
  -v myvol2:/app[:ro] \
  nginx:latest

## or
docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app[,readonly] \
  nginx:latest

If the container has files or directories in the directory to be mounted (such as /app/ above), the directory's contents are copied into the volume, and other containers which use the volume also have access to the pre-populated content.

Bind Mounts

https://docs.docker.com/storage/bind-mounts/ Similar to K8s hostPath; an example use case is mounting source code for development.

May be stored anywhere on the host system. They may even be important system files or directories. Non-Docker processes on the Docker host or a Docker container can modify them at any time.

The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist.

Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available.

docker run -d \
  -it \
  --name devtest \
  -v "$(pwd)"/target:/app[:ro] \
  nginx:latest
## or
docker run -d \
  -it \
  --name devtest \
  --mount type=bind,source="$(pwd)"/target,target=/app[,readonly] \
  nginx:latest

## check the Mounts section
docker inspect devtest

If you bind-mount into a non-empty directory on the container, the directory’s existing contents are obscured by the bind mount.

tmpfs

https://docs.docker.com/storage/tmpfs/ Only Linux has this option; it is useful for temporarily storing sensitive files. Data is stored in the host system's memory only and is never written to the host system's filesystem.

docker run -d \
  -it \
  --name tmptest \
  --tmpfs /app \
  nginx:latest
## or
docker run -d \
  -it \
  --name tmptest \
  --mount type=tmpfs,destination=/app,tmpfs-mode=1770,tmpfs-size=1024 \
  nginx:latest

Network

This chapter covers all the docker network types, what they are for, and how they differ: https://docs.docker.com/network/

Docker network labs: https://github.com/docker/labs/tree/master/networking

From docker’s perspective, the steps to create a container network are:

  1. Create a custom bridge network:

## create a bridge network named isolated_network
docker network create --driver bridge isolated_network

## see the created network, the driver is bridge
## you can see host and none are there too
docker network ls

## inspect
docker network inspect isolated_network

  2. Run containers in the network and ping each other by container name:

## create 2 busybox containers in the same network
docker run -d --name=test1 --net=isolated_network --entrypoint=/bin/sh busybox -c "tail -f /dev/null"
docker run -d --name=test2 --net=isolated_network --entrypoint=/bin/sh busybox -c "tail -f /dev/null"

## you can see the containers in this network
docker network inspect isolated_network

## ping each other
docker exec test1 ping -c 5 test2
docker exec test2 ping -c 5 test1

  3. Remove the created network:

docker network rm isolated_network
docker network ls

Following the same approach, you can use docker commands to inspect the networks that docker-compose creates.

This reference lists the options for creating top-level networks in docker compose; once created, top-level networks can be referenced at the service level: https://docs.docker.com/compose/compose-file/#network-configuration-reference

This article explains clearly how networks behave in docker compose: https://docs.docker.com/compose/networking/

This example is interesting: it separates the frontend and backend networks, and only app can reach both: https://docs.docker.com/compose/networking/#specify-custom-networks

version: "3"
services:

  proxy:
    build: ./proxy
    networks:
      - frontend
  app:
    build: ./app
    networks:
      - frontend
      - backend
  db:
    image: postgres
    networks:
      - backend

networks:
  frontend:
    # Use a custom driver
    driver: custom-driver-1
  backend:
    # Use a custom driver which takes special options
    driver: custom-driver-2
    driver_opts:
      foo: "1"
      bar: "2"

Resource

Like K8s, there is quota configuration under the deploy key, but docker compose file v3 does not support it: it is ignored by docker-compose up and docker-compose run. You can run in v2-compatible mode, for example:

docker-compose --compatibility up

But this is best effort; see the discussion here: How to specify Memory & CPU limit in docker compose version 3

Docker Compose file version 2 does support quota settings: https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
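
For reference, a v3 deploy.resources sketch (honored by Docker Swarm, and by docker-compose only in the best-effort --compatibility mode; service and values are illustrative):

version: "3.7"
services:
  web:
    image: webapp:v1
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 256M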

//TODO:

  1. https tunnel setup
  2. docker image build and test

Introduction

http://www.squid-cache.org/ Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages. Squid has extensive access controls and makes a great server accelerator.

Squid docker image on GitHub, including the Dockerfile and entrypoint.sh: https://github.com/sameersbn/docker-squid DockerHub: https://hub.docker.com/r/sameersbn/squid

Useful features:

  • store the cache on disk (persistent data) vs. in memory
  • fine-tune cache size and age
  • user authz and authn
  • filter traffic
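
A minimal squid.conf sketch touching those features (values are illustrative, not tuned):

# listen port for the forward proxy
http_port 3128
# on-disk cache: 1000 MB under /var/spool/squid
cache_dir ufs /var/spool/squid 1000 16 256
# in-memory cache size
cache_mem 256 MB
# filter traffic: only allow clients from the local network
acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all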

Github

This repository contains squid demo as both HTTP(s) forward proxy and reverse proxy: https://github.com/chengdol/proxy/tree/main/squid

I have recently been working on migrating Squid to Envoy, which touches a lot of HTTP internals. It is a good opportunity to study HTTP in depth; the proxy-related content has been summarized separately.

Introduction

A very good tutorial that covers every part in detail: MDN web docs: HTTP

A quick pass over the basics: HTTP Crash Course & Exploration. The typical HTTP message format:

METHOD PATH PROTOCOL
HEADERS

BODY
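
For instance, a sketch of a concrete exchange against httpbin (response shortened):

// request
GET /ip HTTP/1.1
Host: www.httpbin.org

// response
HTTP/1.1 200 OK
Content-Type: application/json

{"origin": "..."}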

HTTP status code:

  • 1xx, informational
  • 2xx, success
  • 3xx, redirect
  • 4xx, client error
  • 5xx, server error

HTTP/2 is faster, more efficient, and more secure, with request and response multiplexing.

Other Tools

  1. httpbin: A simple HTTP/HTTPS Request & Response Service.
  2. ip4.me: check your public IPv4 address.
  3. noip.com: free hostname + domain <-> public IP mapping. To map this hostname to your router's public IP, you need to configure the router to forward that traffic to a port on your laptop.

CONNECT Method

CONNECT is mainly used to establish a tunnel. Tunneling can allow communication using a protocol that normally wouldn't be supported on the restricted network. A tunnel is just a channel that can carry various protocols inside; a tunnel does not have to be SSL/TLS. For example, suppose you access a server through a forward proxy using HTTPS. Assuming the proxy is a benign middleman, it cannot know what the encrypted traffic contains, so it cannot peek into and dissect packets the way it can with plain HTTP. The client therefore sends a CONNECT HTTP request to set up a tunnel through the proxy for communicating with the server.

–>> When should one use CONNECT: with SSL (HTTPS), only the two remote end-points understand the requests, and the proxy cannot decipher them. Hence, all it does is open that tunnel using CONNECT, and lets the two end-points (webserver and client) talk to each other directly.

–>> MDN web docs: CONNECT Some proxy servers might need authority to create a tunnel. See also the Proxy-Authorization header.

For example, the CONNECT method can be used to access websites that use SSL (HTTPS). The client asks an HTTP Proxy server to tunnel the TCP connection to the desired destination. The server then proceeds to make the connection on behalf of the client. Once the connection has been established by the server, the Proxy server continues to proxy the TCP stream to and from the client.
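
A sketch of that exchange, following the MDN example (hostname and credentials are placeholders):

// client asks the proxy to open a raw TCP tunnel to the destination
CONNECT server.example.com:443 HTTP/1.1
Host: server.example.com:443
Proxy-Authorization: basic aGVsbG86d29ybGQ=

// proxy response; afterwards, bytes are relayed verbatim in both directions
HTTP/1.1 200 Connection established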

This article is very good: –>> MDN web docs: proxy servers and tunneling There are two types of proxies: forward proxies (or tunnel, or gateway) and reverse proxies (used to control and protect access to a server for load-balancing, authentication, decryption or caching).

Forward proxies can hide the identities of clients whereas reverse proxies can hide the identities of servers.

The HTTP protocol specifies a request method called CONNECT. It starts two-way communications with the requested resource and can be used to open a tunnel. This is how a client behind an HTTP proxy can access websites using SSL (i.e. HTTPS, port 443). Note, however, that not all proxy servers support the CONNECT method or limit it to port 443 only.

Basic Authorization

A note on the difference between authz and authn:

  • authz: authorization, what you are allowed to do.
  • authn: authentication, who you are.

This covers HTTP's basic authentication mechanics. https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication HTTP provides a general framework for access control and authentication. This page is an introduction to the HTTP framework for authentication, and shows how to restrict access to your server using the HTTP “Basic” schema.

Basic auth is not secure: credentials are sent in plain text, merely base64-encoded, and can be decoded, for example:

echo <base64 string> | base64 --decode

But if sent over HTTPS, the traffic is encrypted; you can demonstrate this locally in Wireshark. Is BASIC-Auth secure if done over HTTPS?
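
A minimal Python sketch of how the header value is built and decoded (reusing the demo credentials above):

import base64

# Authorization: Basic <base64(user:password)>
token = base64.b64encode(b"chengdol:123456").decode()
print("Authorization: Basic " + token)

# trivially reversible, which is why HTTPS matters
print(base64.b64decode(token))  # b'chengdol:123456'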
