- Preparing your virtual network for Kubernetes
- Current virtual network setup
- Target network scenario
- Creating an isolated Linux bridge
- Bridges management
- Relevant system paths
- References
- Navigation
## Preparing your virtual network for Kubernetes

The upcoming chapters explain how to set up a small Kubernetes cluster running on virtual machines. Those VMs will need to be networked with each other and also with your LAN. Therefore, you need to revise your current virtual network setup and make it fit for the needs you will face later.
## Current virtual network setup

The network setup of your Proxmox VE standalone system is kept at its node level. To see it, get into your PVE web console and browse to the `System > Network` view of your `pve` node:
In the screenshot above you can see the setup on this guide's Proxmox VE host, which has these network interfaces:
- `enp3s0`: the host's real Ethernet NIC.
- `vmbr0`: the Linux bridge generated in the installation of Proxmox VE. It holds the IP of this host, and "owns" the `enp3s0` NIC. If you remember, all this was set up back in the Proxmox VE installation.
Your system should have, at least, one `en*` device and the `vmbr0` Linux bridge.
Any changes you make in this `Network` page are saved in the `/etc/network/interfaces` file of your PVE host. Open a remote shell as `mgrsys`, then make a backup of that file before you start changing your PVE network:
```console
$ sudo cp /etc/network/interfaces /etc/network/interfaces.orig
```

## Target network scenario

The idea is to create a small Kubernetes cluster running on a few virtual machines. Each of those VMs will have two network cards. Why two network cards? To separate the internal communications that a Kubernetes cluster has between its nodes from the traffic between the cluster and the external, or LAN, network.
Each VM will have access to the external (LAN) network, and be reachable, through one NIC, and will communicate with the other cluster nodes only through the other NIC. This is achieved simply by setting up the NICs on different IP subnets. But, to guarantee true isolation of the internal-cluster-communication NICs, you can emulate what you would do with real hardware: set up another Linux bridge that is not connected to the external network, and connect the internal-cluster-communication NICs to it.
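For instance, under this scenario each VM could end up with a Debian-style `/etc/network/interfaces` along these lines. Be aware that the interface names and IP addresses below are hypothetical, purely to illustrate the two subnets; the actual values are configured in later chapters:

```
# NIC attached to vmbr0: reachable from the LAN (hypothetical addresses)
auto ens18
iface ens18 inet static
    address 10.1.0.21/8
    gateway 10.0.0.1

# NIC attached to the isolated vmbr1: cluster-internal traffic only
auto ens19
iface ens19 inet static
    address 172.16.0.1/24
```

The key detail is that the second NIC sits on its own subnet and has no gateway: its traffic can only flow through the isolated bridge.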
## Creating an isolated Linux bridge

Creating a new, isolated Linux bridge in your Proxmox VE system is rather simple through the web console:
- Browse to the `System > Network` view of your `pve` node, then click on the `Create` button to unfold a list of options. Notice that there are two option groups in the unfolded list:
    - `Linux` options: networking technology included in the Linux kernel, meaning that it is already available in your PVE system.
    - `OVS` options: relative to the Open vSwitch technology. Since it is not installed in your system, these options cannot work in your setup.
- Click on the `Linux Bridge` option to raise the `Create: Linux Bridge` form window. The default values are just fine:
    - `Name`: you could put a different name if you wanted to but, since Proxmox VE follows a naming convention for device names like these, it is better to leave the default one to avoid potential issues.
    - `IPv4/CIDR` and `Gateway (IPv4)`: left empty, because you do not really need an IP assigned to a bridge for it to do its job at the MAC level.
    - `IPv6/CIDR` and `Gateway (IPv6)`: also left empty, for the same reason as the IPv4 values; besides, the setup explained in this guide does not use IPv6 at all.
    - `Autostart`: you want this bridge to be always available when the system boots up.
    - `VLAN aware`: for the scenario contemplated in this guide series, there is no need for you to use VLANs at all. In fact, the existing `vmbr0` bridge does not have this option enabled either.
    - `Bridge ports`: here all the interfaces connected to this bridge will be listed. Right now, this field has to be left empty for this bridge. Notice that, in the `vmbr0` bridge, the `enp3s0` interface appears listed in this field.
    - `Comment`: here you could enter a string like `K3s cluster inner networking` (Rancher K3s will be the Kubernetes distribution used to set up the cluster later).
- Click on `Create` to see your new `vmbr1` Linux bridge added to the list of network devices. Realize that:
    - The `Apply Configuration` button has been enabled.
    - Your new Linux bridge has been added to the network list, but it is not active yet.
    - A log console has appeared right below the network devices list, showing you the "pending changes" you have to apply.
- Press the `Apply Configuration` button to make the underlying `ifupdown2` commands apply the changes. This action demands confirmation in the window shown below. Press `Yes`, and a small progress window appears briefly.
- The `Network` page refreshes automatically, showing your new `vmbr1` Linux bridge active in the devices list:
- You can also check out the changes applied to the `/etc/network/interfaces` configuration file of your PVE host. Open a shell as `mgrsys` and open the file with `less`:

```console
$ less /etc/network/interfaces
```

The file should now look like this:
```
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
	address 10.1.0.1/8
	gateway 10.0.0.1
	bridge-ports enp3s0
	bridge-stp off
	bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
	bridge-ports none
	bridge-stp off
	bridge-fd 0
#K3s cluster inner networking

source /etc/network/interfaces.d/*
```
Find your new `vmbr1` bridge appended to this `interfaces` file, with a set of `bridge-` options similar to those of the original `vmbr0` bridge.
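Since everything the web console configured ends up in this plain-text file, you can also inspect it programmatically. The script below is a hypothetical helper, not part of the guide's procedure: it parses an interfaces(5)-style file and prints each bridge together with its ports. It works on a self-contained sample here so it runs anywhere, but on your PVE host you could point it at `/etc/network/interfaces` instead. Also remember that Proxmox VE applies changes to this file through `ifupdown2`, so `sudo ifreload -a` reloads it if you ever edit it by hand.

```shell
#!/bin/sh
# Hypothetical helper: list every bridge stanza in an interfaces(5)-style file.
# A sample file is written to a temp path so the sketch is self-contained;
# on the PVE host you would pass /etc/network/interfaces instead.
sample=$(mktemp)
cat > "$sample" <<'EOF'
auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.1.0.1/8
    gateway 10.0.0.1
    bridge-ports enp3s0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
EOF

# An iface stanza defines a bridge when its body carries a bridge-ports line.
awk '
/^iface/       { name = $2 }
/bridge-ports/ { print name ": ports = " $2 }
' "$sample"

rm -f "$sample"
```

Run against the sample above, it prints `vmbr0: ports = enp3s0` followed by `vmbr1: ports = none`; on the real file you should see your `enp3s0` port under `vmbr0`, and `none` under the isolated `vmbr1`.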
## Bridges management

You can handle your bridges through your Proxmox VE web console, but that is a rather limited tool for solving more complex situations:
- You can use the `ip` command to handle bridges like any other network device. For instance, you can compare the traffic statistics of your new `vmbr1` bridge with the ones from `vmbr0`:

```console
$ ip -s link show vmbr0
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 98:ee:cb:03:05:a3 brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    3682639    12855    0       0        0       5660
    TX: bytes  packets  errors  dropped  carrier  collsns
    3750287    4968     0       0        0        0

$ ip -s link show vmbr1
4: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 32:06:ef:79:b5:9d brd ff:ff:ff:ff:ff:ff
    RX: bytes  packets  errors  dropped  missed  mcast
    0          0        0       0        0       0
    TX: bytes  packets  errors  dropped  carrier  collsns
    0          0        0       0        0       0
```

See how, at this point, the `vmbr1` bridge has no traffic whatsoever, while `vmbr0` has some networking flow going through it.

- There is a command with functionality specifically meant for managing bridges, called `bridge`. It is already installed in your Proxmox VE system, so you can use it right away. Also, be aware that the `bridge` command requires `sudo` to be executed.

> [!NOTE]
> To understand the `bridge` command, you also need to study the particularities of bridges in general. Please take a look at the references linked at the end of this chapter.
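As a starting point, these two `bridge` subcommands give a quick view of your setup. They are shown here as a sketch to run on your PVE host; the output depends entirely on your system, and on a freshly created `vmbr1` the forwarding database will be nearly empty:

```console
$ sudo bridge link show
$ sudo bridge fdb show br vmbr1
```

`bridge link show` lists which interfaces are attached to each bridge, while `bridge fdb show` prints the forwarding database (the MAC addresses a bridge has learned) of the given bridge.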
## Relevant system paths

- `/etc/network`
- `/etc/network/interfaces`
- `/etc/network/interfaces.orig`
## References

- Linux Expert. Deep Guide to Bridge Command Line in Linux
- Linux-Blog – Dr. Mönchmeyer / anracon. Fun with veth devices, Linux virtual bridges, KVM, VMware – attach the host and connect bridges via veth
- Kernel Virtual Machine. Networking
- nixCraft. Howto. Debian Linux. How to setup and configure network bridge on Debian Linux
- Hechao's Blog. Linux Bridge - Part 1
- Hechao's Blog. Mini Container Series Part 5
- ServerFault. Linux: bridges, VLANs and RSTP
- Ubuntu. Community Help Wiki. NetworkConnectionBridge. Bridging Ethernet Connections (as of Ubuntu 16.04)
## Navigation

<< Previous (G016. Host optimization 02) | +Table Of Contents+ | Next (G018. K3s cluster setup 01) >>