Proxy with gVisor service mesh

As shown in the diagram below, User A and User B use the kubevpn proxy command to proxy the same service, authors:

  • User A: kubevpn proxy deployment/authors --headers user=A
  • User B: kubevpn proxy deployment/authors --headers user=B

When the authors service in the cluster receives traffic:

  • Traffic with user: A in the HTTP header will hit User A's local computer.
  • Traffic with user: B in the HTTP header will hit User B's local computer.
  • Traffic that matches neither header will hit the original authors service in the cluster.

The principle is to use Envoy as the data plane and implement a custom control plane for it.
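
For illustration, below is a minimal sketch of the kind of route configuration such a control plane could push to Envoy. The cluster names (local-user-a, local-user-b, origin-authors) are hypothetical placeholders, not names taken from KubeVPN:

virtual_hosts:
- name: authors
  domains: ["*"]
  routes:
  # Traffic with HTTP header user: A is routed to User A's local computer.
  - match:
      prefix: "/"
      headers:
      - name: user
        string_match:
          exact: "A"
    route:
      cluster: local-user-a
  # Traffic with HTTP header user: B is routed to User B's local computer.
  - match:
      prefix: "/"
      headers:
      - name: user
        string_match:
          exact: "B"
    route:
      cluster: local-user-b
  # Everything else falls through to the original authors service.
  - match:
      prefix: "/"
    route:
      cluster: origin-authors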

Default mode (needs Privileged: true and cap NET_ADMIN)

The key is how to implement the behavior described above: routing traffic correctly when the authors service in the cluster receives it.

The default mode uses iptables to DNAT traffic to port :15006, so it works at the Pod level and provides the best experience.
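
Conceptually, the interception resembles the following iptables rule (a hedged sketch; the chain, port match, and exact mechanism KubeVPN installs may differ):

# Redirect inbound TCP traffic for the app port to Envoy's inbound listener on :15006
iptables -t nat -A PREROUTING -p tcp --dport 9080 -j REDIRECT --to-ports 15006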

Example:

kubevpn proxy deployment/authors --headers user=A

gVisor mode

gVisor mode modifies the Kubernetes Service's targetPort to point to the Envoy listener port, for example:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: authors
    service: authors
  name: authors
  namespace: default
spec:
  clusterIP: 172.21.5.157
  clusterIPs:
  - 172.21.5.157
  ports:
  - name: http
    port: 9080
    protocol: TCP
    targetPort: 64071
  selector:
    app: authors
  sessionAffinity: None
  type: ClusterIP

This mode therefore works at the Kubernetes Service level, so the workload must be accessed through the Service. If Pods register their own IPs with a registration center and clients access them through the registration center directly, this mode will not work.
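
For example, a request through the Service name goes through the rewritten targetPort and hits Envoy, while a request sent straight to a Pod IP bypasses the mesh entirely (the path and Pod IP here are hypothetical):

curl http://authors.default.svc.cluster.local:9080/health   # via the Service: intercepted by Envoy
curl http://10.244.1.23:9080/health                         # direct to the Pod IP: not intercepted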

Example:

kubevpn proxy deployment/authors --headers user=A --netstack gvisor

This mode can be used on AWS Fargate nodes, because Fargate does not support Privileged: true or the NET_ADMIN capability.

[Diagram: gvisor-mesh.svg]