
Commit 6f74ef1

Xartos and davidumea authored
Upcloud: Add possibility to setup cluster using nodes with no public IPs (#11696)
* terraform upcloud: Added possibility to set up nodes with only private IPs
* terraform upcloud: add support for gateway in private zone
* terraform upcloud: split LB proxy protocol config per backend
* terraform upcloud: fix flexible plans
* terraform upcloud: Removed overview of cluster setup

Co-authored-by: davidumea <david.andersson@elastisys.com>
1 parent fe2ab89 commit 6f74ef1

9 files changed: +296 -119 lines changed

contrib/terraform/upcloud/README.md

Lines changed: 29 additions & 29 deletions

````diff
@@ -2,35 +2,6 @@
 
 Provision a Kubernetes cluster on [UpCloud](https://upcloud.com/) using Terraform and Kubespray
 
-## Overview
-
-The setup looks like following
-
-```text
-Kubernetes cluster
-+--------------------------+
-| +--------------+ |
-| | +--------------+ |
-| --> | | | |
-| | | Master/etcd | |
-| | | node(s) | |
-| +-+ | |
-| +--------------+ |
-| ^ |
-| | |
-| v |
-| +--------------+ |
-| | +--------------+ |
-| --> | | | |
-| | | Worker | |
-| | | node(s) | |
-| +-+ | |
-| +--------------+ |
-+--------------------------+
-```
-
-The nodes use a private network for node to node communication and a public interface for all external communication.
-
 ## Requirements
 
 * Terraform 0.13.0 or newer
@@ -100,6 +71,8 @@ terraform destroy --var-file cluster-settings.tfvars \
 * `template_name`: The name or UUID of a base image
 * `username`: a user to access the nodes, defaults to "ubuntu"
 * `private_network_cidr`: CIDR to use for the private network, defaults to "172.16.0.0/24"
+* `dns_servers`: DNS servers that will be used by the nodes. Until [this is solved](https://github.com/UpCloudLtd/terraform-provider-upcloud/issues/562) this is done using user_data to reconfigure resolved. Defaults to `[]`
+* `use_public_ips`: If a NIC connected to the Public network should be attached to all nodes by default. Can be overridden by `force_public_ip` if this is set to `false`. Defaults to `true`
 * `ssh_public_keys`: List of public SSH keys to install on all machines
 * `zone`: The zone where to run the cluster
 * `machines`: Machines to provision. Key of this object will be used as the name of the machine
@@ -108,6 +81,8 @@ terraform destroy --var-file cluster-settings.tfvars \
   * `cpu`: number of cpu cores
   * `mem`: memory size in MB
   * `disk_size`: The size of the storage in GB
+  * `force_public_ip`: If `use_public_ips` is set to `false`, this forces a public NIC onto the machine anyway when set to `true`. Useful if you're migrating from public nodes to only private. Defaults to `false`
+  * `dns_servers`: This works the same way as the global `dns_servers` but only applies to a single node. If set to `[]` while the global `dns_servers` is set to something else, then it will not add the user_data and thus the node will not be recreated. Useful if you're migrating from public nodes to only private. Defaults to `null`
   * `additional_disks`: Additional disks to attach to the node.
     * `size`: The size of the additional disk in GB
     * `tier`: The tier of disk to use (`maxiops` is the only one you can choose atm)
@@ -139,6 +114,7 @@ terraform destroy --var-file cluster-settings.tfvars \
   * `port`: Port to load balance.
   * `target_port`: Port to the backend servers.
   * `backend_servers`: List of servers that traffic to the port should be forwarded to.
+  * `proxy_protocol`: If the loadbalancer should set up the backend using proxy protocol.
 * `router_enable`: If a router should be connected to the private network or not
 * `gateways`: Gateways that should be connected to the router, requires router_enable is set to true
   * `features`: List of features for the gateway
@@ -171,3 +147,27 @@ terraform destroy --var-file cluster-settings.tfvars \
 * `server_groups`: Group servers together
   * `servers`: The servers that should be included in the group.
   * `anti_affinity_policy`: Defines if a server group is an anti-affinity group. Setting this to "strict" or "yes" will result in all servers in the group being placed on separate compute hosts. The value can be "strict", "yes" or "no". "strict" is a strict policy that doesn't allow servers in the same server group to be on the same host. "yes" is a best-effort policy that tries to put servers on different hosts, but this is not guaranteed.
+
+## Migration
+
+When `null_resource.inventories` and `data.template_file.inventory` were changed to `local_file.inventory`, the old state file needs to be cleaned of the old resources.
+The error messages you'll see if you encounter this are:
+
+```text
+Error: failed to read schema for null_resource.inventories in registry.terraform.io/hashicorp/null: failed to instantiate provider "registry.terraform.io/hashicorp/null" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/null"
+Error: failed to read schema for data.template_file.inventory in registry.terraform.io/hashicorp/template: failed to instantiate provider "registry.terraform.io/hashicorp/template" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/template"
+```
+
+This can be fixed with the following commands:
+
+```bash
+terraform state rm -state=terraform.tfstate null_resource.inventories
+terraform state rm -state=terraform.tfstate data.template_file.inventory
+```
+
+### Public to Private only migration
+
+Since there's no way to remove the public NIC on a machine without recreating its private NIC, it's not possible to change a cluster in place to only use private IPs.
+The way to migrate is to first set `use_public_ips` to `false`, `dns_servers` to some DNS servers, and then update all existing servers to have `force_public_ip` set to `true` and `dns_servers` set to `[]`.
+After that you can add new nodes without `force_public_ip` and `dns_servers` set and create them.
+Add the new nodes into the cluster and when all of them are added, remove the old nodes.
````
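The migration steps above can be sketched as a `cluster-settings.tfvars` fragment. This is an illustrative sketch: the machine names, sizes, and DNS addresses are examples, not values from the commit.

```
# Global: new nodes get no public NIC and use the listed DNS servers
use_public_ips = false
dns_servers    = ["1.1.1.1", "8.8.8.8"]

machines = {
  # Existing node: keep its public NIC and skip the resolved user_data
  # so Terraform does not recreate it
  "worker-old-0" : {
    "node_type" : "worker",
    "cpu" : "2",
    "mem" : "4096",
    "disk_size" : 250,
    "force_public_ip" : true,
    "dns_servers" : []
  },
  # New node: private-only, inherits the global dns_servers
  "worker-new-0" : {
    "node_type" : "worker",
    "cpu" : "2",
    "mem" : "4096",
    "disk_size" : 250
  }
}
```

Once the new nodes have joined the cluster, the old entries can be removed from `machines`.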

contrib/terraform/upcloud/cluster-settings.tfvars

Lines changed: 3 additions & 3 deletions

```diff
@@ -122,11 +122,11 @@ k8s_allowed_remote_ips = [
 master_allowed_ports = []
 worker_allowed_ports = []
 
-loadbalancer_enabled        = false
-loadbalancer_plan           = "development"
-loadbalancer_proxy_protocol = false
+loadbalancer_enabled = false
+loadbalancer_plan    = "development"
 loadbalancers = {
   # "http" : {
+  #   "proxy_protocol" : false
   #   "port" : 80,
   #   "target_port" : 80,
   #   "backend_servers" : [
```
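With proxy protocol now configured per backend instead of via a single global `loadbalancer_proxy_protocol` flag, different backends can differ in this setting. A hypothetical fragment (backend names, ports, and server names are examples):

```
loadbalancers = {
  "http" : {
    "proxy_protocol" : true, # this backend expects PROXY protocol headers
    "port" : 80,
    "target_port" : 80,
    "backend_servers" : ["worker-0", "worker-1"]
  },
  "api" : {
    "proxy_protocol" : false, # plain TCP to this backend
    "port" : 6443,
    "target_port" : 6443,
    "backend_servers" : ["master-0"]
  }
}
```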

contrib/terraform/upcloud/main.tf

Lines changed: 22 additions & 40 deletions

```diff
@@ -20,24 +20,26 @@ module "kubernetes" {
   username = var.username
 
   private_network_cidr = var.private_network_cidr
+  dns_servers          = var.dns_servers
+  use_public_ips       = var.use_public_ips
 
   machines = var.machines
 
   ssh_public_keys = var.ssh_public_keys
 
-  firewall_enabled          = var.firewall_enabled
-  firewall_default_deny_in  = var.firewall_default_deny_in
-  firewall_default_deny_out = var.firewall_default_deny_out
-  master_allowed_remote_ips = var.master_allowed_remote_ips
-  k8s_allowed_remote_ips    = var.k8s_allowed_remote_ips
-  master_allowed_ports      = var.master_allowed_ports
-  worker_allowed_ports      = var.worker_allowed_ports
+  firewall_enabled           = var.firewall_enabled
+  firewall_default_deny_in   = var.firewall_default_deny_in
+  firewall_default_deny_out  = var.firewall_default_deny_out
+  master_allowed_remote_ips  = var.master_allowed_remote_ips
+  k8s_allowed_remote_ips     = var.k8s_allowed_remote_ips
+  bastion_allowed_remote_ips = var.bastion_allowed_remote_ips
+  master_allowed_ports       = var.master_allowed_ports
+  worker_allowed_ports       = var.worker_allowed_ports
 
-  loadbalancer_enabled                 = var.loadbalancer_enabled
-  loadbalancer_plan                    = var.loadbalancer_plan
-  loadbalancer_outbound_proxy_protocol = var.loadbalancer_proxy_protocol ? "v2" : ""
-  loadbalancer_legacy_network          = var.loadbalancer_legacy_network
-  loadbalancers                        = var.loadbalancers
+  loadbalancer_enabled        = var.loadbalancer_enabled
+  loadbalancer_plan           = var.loadbalancer_plan
+  loadbalancer_legacy_network = var.loadbalancer_legacy_network
+  loadbalancers               = var.loadbalancers
 
   router_enable = var.router_enable
   gateways      = var.gateways
@@ -52,32 +54,12 @@ module "kubernetes" {
 # Generate ansible inventory
 #
 
-data "template_file" "inventory" {
-  template = file("${path.module}/templates/inventory.tpl")
-
-  vars = {
-    connection_strings_master = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s etcd_member_name=etcd%d",
-      keys(module.kubernetes.master_ip),
-      values(module.kubernetes.master_ip).*.public_ip,
-      values(module.kubernetes.master_ip).*.private_ip,
-      range(1, length(module.kubernetes.master_ip) + 1)))
-    connection_strings_worker = join("\n", formatlist("%s ansible_user=ubuntu ansible_host=%s ip=%s",
-      keys(module.kubernetes.worker_ip),
-      values(module.kubernetes.worker_ip).*.public_ip,
-      values(module.kubernetes.worker_ip).*.private_ip))
-    list_master = join("\n", formatlist("%s",
-      keys(module.kubernetes.master_ip)))
-    list_worker = join("\n", formatlist("%s",
-      keys(module.kubernetes.worker_ip)))
-  }
-}
-
-resource "null_resource" "inventories" {
-  provisioner "local-exec" {
-    command = "echo '${data.template_file.inventory.rendered}' > ${var.inventory_file}"
-  }
-
-  triggers = {
-    template = data.template_file.inventory.rendered
-  }
+resource "local_file" "inventory" {
+  content = templatefile("${path.module}/templates/inventory.tpl", {
+    master_ip  = module.kubernetes.master_ip
+    worker_ip  = module.kubernetes.worker_ip
+    bastion_ip = module.kubernetes.bastion_ip
+    username   = var.username
+  })
+  filename = var.inventory_file
 }
```
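The new `local_file` resource passes the raw `master_ip`/`worker_ip` maps and the username into the template, rather than pre-rendering connection strings with `formatlist`. A minimal sketch of a template consuming those variables, using Terraform's string-template `for` directives (illustrative only, not the repository's actual `templates/inventory.tpl`):

```text
[all]
%{ for name, ips in master_ip ~}
${name} ansible_user=${username} ansible_host=${ips.public_ip} ip=${ips.private_ip}
%{ endfor ~}
%{ for name, ips in worker_ip ~}
${name} ansible_user=${username} ansible_host=${ips.public_ip} ip=${ips.private_ip}
%{ endfor ~}
```

Rendering in the resource (instead of a `null_resource` with a `local-exec` echo) means the inventory file is tracked by Terraform and rewritten whenever the module outputs change.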
