Cannot access a Service IP or Service name, but can access a Pod IP?

Answer:

Your cluster's kube-proxy may be running in ipvs mode; in that case kubevpn needs the option --netstack gvisor.

For example:

  • connect mode
    • kubevpn connect --netstack gvisor
  • proxy mode
    • kubevpn proxy deployment/authors --netstack gvisor
  • clone mode
    • kubevpn clone deployment/authors --netstack gvisor
  • dev mode
    • kubevpn dev deployment/authors --netstack gvisor
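After connecting, you can verify that Service access works again. A minimal check, assuming a Service named authors serving on port 80 in the default namespace (substitute your own Service name and port):

    # Reach the Service by its cluster-internal DNS name from your local machine
    curl http://authors.default.svc.cluster.local:80

    # Or look up the Service's cluster IP and reach it directly
    SVC_IP=$(kubectl get service authors -o jsonpath='{.spec.clusterIP}')
    curl http://$SVC_IP:80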

Why:

When kube-proxy runs in ipvs mode, the iptables SNAT rules that kubevpn installs do not take effect. Because kubevpn depends on iptables SNAT to reach Service IPs, Service IPs become unreachable when kube-proxy is in ipvs mode. (Access to Pod IPs is not affected.)
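To confirm which mode your cluster's kube-proxy is actually using, you can inspect its configuration; this sketch assumes the common setup where kube-proxy reads its config from the kube-proxy ConfigMap in kube-system:

    # Inspect the configured proxy mode (an empty value means the iptables default)
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode

    # Or, on a cluster node, ask the running kube-proxy via its metrics port
    curl -s http://localhost:10249/proxyMode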

Solution:

With --netstack gvisor, kubevpn uses gVisor's user-space network stack to reach Service IPs instead of relying on iptables SNAT.

Reference:

https://kubernetes.io/docs/reference/networking/virtual-ips/#proxy-modes

The kube-proxy starts up in different modes, which are determined by its configuration. On Linux nodes, the available modes for kube-proxy are:

iptables

A mode where the kube-proxy configures packet forwarding rules using iptables.

In this mode, kube-proxy configures packet forwarding rules using the iptables API of the kernel netfilter subsystem. For each endpoint, it installs iptables rules which, by default, select a backend Pod at random.
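To see what these rules look like, you can dump the chains kube-proxy creates on a node running in iptables mode (KUBE-SERVICES, KUBE-SVC-* and KUBE-SEP-* are the chain names current kube-proxy versions install):

    # On a cluster node: dump kube-proxy's Service-related iptables chains
    iptables-save | grep -E 'KUBE-(SERVICES|SVC|SEP)' | head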

ipvs

A mode where the kube-proxy configures packet forwarding rules using ipvs.

In ipvs mode, kube-proxy uses the kernel IPVS and iptables APIs to create rules to redirect traffic from Service IPs to endpoint IPs.

The IPVS proxy mode is based on netfilter hook function that is similar to iptables mode, but uses a hash table as the underlying data structure and works in the kernel space. That means kube-proxy in IPVS mode redirects traffic with lower latency than kube-proxy in iptables mode, with much better performance when synchronizing proxy rules. Compared to the iptables proxy mode, IPVS mode also supports a higher throughput of network traffic.
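On a node running in ipvs mode, the corresponding virtual servers can be listed with ipvsadm (assuming the tool is installed on the node); each Service IP:port shows up as a virtual server, with its endpoint Pods as real servers underneath:

    # On a cluster node: list IPVS virtual servers and their backends
    ipvsadm -Ln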

nftables

A mode where the kube-proxy configures packet forwarding rules using nftables.

In this mode, kube-proxy configures packet forwarding rules using the nftables API of the kernel netfilter subsystem. For each endpoint, it installs nftables rules which, by default, select a backend Pod at random.

The nftables API is the successor to the iptables API and is designed to provide better performance and scalability than iptables. The nftables proxy mode is able to process changes to service endpoints faster and more efficiently than the iptables mode, and is also able to more efficiently process packets in the kernel (though this only becomes noticeable in clusters with tens of thousands of services).
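In nftables mode, kube-proxy keeps its rules in a dedicated nftables table, which can be inspected with the nft tool (kube-proxy in the ip family is the table name the upstream implementation uses):

    # On a cluster node: list kube-proxy's nftables rules
    nft list table ip kube-proxy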

Architecture

kubevpn uses gVisor's user-space network stack to reach the cluster's service CIDR and pod CIDR.

Traffic destined for kubevpn's inner CIDR is still sent directly to the TUN device.

(Diagram: connect_network_stack_gvisor.svg)
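One way to see this split locally is to look at the routes kubevpn installs after connecting; the CIDR values below are illustrative and will differ per cluster:

    # After kubevpn connect --netstack gvisor, list routes pointing at the tun device
    ip route show | grep tun
    # 10.96.0.0/12  dev utun4    service CIDR (example value) -> handled by gVisor
    # 10.244.0.0/16 dev utun4    pod CIDR (example value)     -> handled by gVisor
    # 198.19.0.0/16 dev utun4    kubevpn inner CIDR (example) -> plain TUN traffic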