diff --git a/.github/workflows/link-checker.yaml b/.github/workflows/link-checker.yaml
index c55e7933e7a3..feb88d1a2d43 100644
--- a/.github/workflows/link-checker.yaml
+++ b/.github/workflows/link-checker.yaml
@@ -21,4 +21,4 @@ jobs:
         uses: actions/checkout@v2
       - name: Check external links using lychee
         run: |
-          lychee -v '**/*.md' --exclude-mail --exclude '.*?ref=.*'
+          lychee -v '**/*.md' --exclude-mail --exclude-path examples/interdomain/nsm_consul_vl3/README.md --exclude '.*?ref=.*'
diff --git a/examples/interdomain/README.md b/examples/interdomain/README.md
index 8ce24bf8ddcc..a5004472d200 100644
--- a/examples/interdomain/README.md
+++ b/examples/interdomain/README.md
@@ -15,4 +15,5 @@ This setup is basic for interdomain examples on two clusters. This setup can be
 ## Includes
 - [NSM + Consul](./nsm_consul)
+- [NSM + Consul + vl3](./nsm_consul_vl3)
 - [NSM + Istio](./nsm_istio)
diff --git a/examples/interdomain/nsm_consul_vl3/README.md b/examples/interdomain/nsm_consul_vl3/README.md
new file mode 100644
index 000000000000..995d73016561
--- /dev/null
+++ b/examples/interdomain/nsm_consul_vl3/README.md
@@ -0,0 +1,470 @@
# NSM + Consul + vl3 interdomain example over kind clusters

This example shows how Consul can be used over NSM with vl3.
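Before starting, it may help to confirm that both kind clusters are reachable. The sketch below assumes `KUBECONFIG1` and `KUBECONFIG2` were exported by the interdomain setup listed under Requires; the `check_kubeconfigs` helper name is ours, not part of this example:

```shell
# Fail fast if any kubeconfig path is unset or not a file.
# (check_kubeconfigs is a hypothetical helper, not part of this example.)
check_kubeconfigs() {
  for cfg in "$@"; do
    if [ -z "$cfg" ] || [ ! -f "$cfg" ]; then
      echo "missing kubeconfig: '$cfg'" >&2
      return 1
    fi
  done
}

# Usage, mirroring the flags used throughout this example:
#   check_kubeconfigs "$KUBECONFIG1" "$KUBECONFIG2" &&
#     kubectl --kubeconfig="$KUBECONFIG1" cluster-info &&
#     kubectl --kubeconfig="$KUBECONFIG2" cluster-info
```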

## Requires

- [Load balancer](../loadbalancer)
- [Interdomain DNS](../dns)
- [Interdomain spire](../spire)
- [Interdomain nsm](../nsm)


## Run

References:
https://learn.hashicorp.com/tutorials/consul/deployment-guide?in=consul/production-deploy
https://learn.hashicorp.com/tutorials/consul/tls-encryption-secure
https://learn.hashicorp.com/tutorials/consul/service-mesh-with-envoy-proxy?in=consul/developer-mesh

Start vl3, then install the Consul control plane and the counting service on the first cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG1 create ns ns-nsm-consul-vl3
kubectl --kubeconfig=$KUBECONFIG1 apply -k ./examples/interdomain/nsm_consul_vl3/cluster1
```

Install the Consul dashboard service on the second cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG2 create ns ns-nsm-consul-vl3
kubectl --kubeconfig=$KUBECONFIG2 apply -k ./examples/interdomain/nsm_consul_vl3/cluster2
```

Wait for the pods to be ready:
```bash
kubectl --kubeconfig=$KUBECONFIG1 wait --for=condition=ready --timeout=5m pod -l app=nse-vl3-vpp -n ns-nsm-consul-vl3
kubectl --kubeconfig=$KUBECONFIG1 wait --for=condition=ready --timeout=5m pod -l app=vl3-ipam -n ns-nsm-consul-vl3
kubectl --kubeconfig=$KUBECONFIG1 wait --for=condition=ready --timeout=5m pod -l name=control-plane -n ns-nsm-consul-vl3
kubectl --kubeconfig=$KUBECONFIG1 wait --for=condition=ready --timeout=5m pod counting -n ns-nsm-consul-vl3
kubectl --kubeconfig=$KUBECONFIG2 wait --for=condition=ready --timeout=5m pod dashboard -n ns-nsm-consul-vl3
```

Save the name of the control plane pod:
```bash
export CP=$(kubectl --kubeconfig=$KUBECONFIG1 get pods -n ns-nsm-consul-vl3 -l name=control-plane --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
```

(On the control plane pod) Generate the gossip encryption key:
```bash
ENCRYPTION_KEY=$(kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 exec -it ${CP} -c ubuntu -- consul keygen)
```

(On the control plane pod) Get the control plane vl3 IP address:
```bash
CP_IP_VL3_ADDRESS=$(kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 exec -it ${CP} -c ubuntu -- ifconfig nsm-1 | grep -Eo 'inet addr:[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | cut -c 11-)
```

(On the control plane pod) Initialize the Consul CA:
```bash
kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 exec -it ${CP} -c ubuntu -- consul tls ca create
```

(On the control plane pod) Create the server certificates:
```bash
kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 exec -it ${CP} -c ubuntu -- consul tls cert create -server -dc dc1
```

Update the control plane configuration. Use here the saved encryption key and the control plane vl3 IP address:
```bash
cat > consul.hcl < server.hcl </dev/null 2>&1 &'
```

Check that the Consul server has started:
```bash
kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 exec ${CP} -c ubuntu -- consul members
```

Port-forward the Consul UI:
```bash
kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 port-forward $CP 8500:8500 &
```

(On the counting pod) Get the counting pod vl3 IP address:
```bash
COUNTING_IP_VL3_ADDRESS=$(kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 exec -it counting -c ubuntu -- ifconfig nsm-1 | grep -Eo 'inet [0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | cut -c 6-)
```

Copy the certificates from the control plane into the counting pod:
```bash
kubectl --kubeconfig=$KUBECONFIG1 cp ns-nsm-consul-vl3/${CP}:consul-agent-ca.pem consul-agent-ca.pem
kubectl --kubeconfig=$KUBECONFIG1 cp ns-nsm-consul-vl3/${CP}:consul-agent-ca-key.pem consul-agent-ca-key.pem

kubectl --kubeconfig=$KUBECONFIG1 cp consul-agent-ca.pem ns-nsm-consul-vl3/counting:/etc/consul.d
kubectl --kubeconfig=$KUBECONFIG1 cp consul-agent-ca-key.pem ns-nsm-consul-vl3/counting:/etc/consul.d
```

Update the counting configuration.
Use here the saved encryption key and the counting service pod vl3 IP address:
```bash
cat > consul-counting.hcl < consul.service < counting.hcl < consul-envoy.service <
```

Configure the dashboard pod on the second cluster in the same way:
```bash
cat > consul-dashboard.hcl < consul.service < dashboard.hcl < consul-envoy.service <
```
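The `ifconfig | grep | cut` pipelines above hard-code column offsets (`cut -c 11-` for `inet addr:` output, `cut -c 6-` for `inet` output), which breaks if the container image changes its `ifconfig` variant. A small helper can handle both formats; this is a sketch, and `extract_ipv4` is our name, not part of this example:

```shell
# Extract the first IPv4 address from `ifconfig <interface>` output.
# Handles both "inet addr:10.0.0.1" (busybox/older net-tools) and
# "inet 10.0.0.1" formats seen in the commands above.
extract_ipv4() {
  grep -Eo 'inet (addr:)?[0-9]{1,3}(\.[0-9]{1,3}){3}' |
    head -n 1 |
    grep -Eo '[0-9]{1,3}(\.[0-9]{1,3}){3}'
}

# Usage against the control plane pod, mirroring the command above:
#   CP_IP_VL3_ADDRESS=$(kubectl --kubeconfig=$KUBECONFIG1 -n ns-nsm-consul-vl3 \
#     exec ${CP} -c ubuntu -- ifconfig nsm-1 | extract_ipv4)
```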