This is a follow-up post on [1], which showed how to connect two Kubernetes-based hybrid clouds (Google GKE and AWS EKS) with JGroups' TUNNEL and GossipRouter.
In the meantime, I've discovered Skupper [2], which (1) simplifies this task and (2), as a bonus, encrypts the data exchanged between the different clouds.
In this post, I'm going to provide step-by-step instructions on how to connect a Google Kubernetes Engine (GKE) cluster with a cluster running on my local box.
To run the demo yourself, you must have Skupper installed and a GKE account. However, any other cloud provider works, too.
For the local cloud, I'm using docker-desktop. Alternatively, minikube could be used.
So let's get cracking and start the GKE cluster. To avoid having to switch kubectl contexts all the time, I suggest starting 2 separate shells and setting KUBECONFIG in the public (GKE) shell to a copy of the config; the local shell keeps the default kubeconfig:
Shell 1 (GKE): cp $HOME/.kube/config $HOME/.kube/gke; export KUBECONFIG=$HOME/.kube/gke
Now start a GKE cluster (in shell 1):
gcloud container clusters create gke --num-nodes 4
NOTE: if you use a different cloud, simply start your cluster and set kubectl's context to point to your cluster. The rest of the instructions below apply regardless of the specific cloud.
This sets the Kubernetes context (shell 1):
kubectl config current-context
gke_ispnperftest_us-central1-a_gke
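Note: gcloud normally writes the new cluster's credentials into the active kubeconfig, which is why the context above is already set. If it isn't, the credentials can be fetched explicitly (zone and project flags as configured on your side):
gcloud container clusters get-credentials gke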
In shell 2, confirm that the context is local:
kubectl config current-context
docker-desktop
This shows that kubectl is pointing to docker-desktop.
Let's now start a GossipRouter in both clouds. To do this, we have to modify the YAML used in [1] slightly:
curl https://raw.githubusercontent.com/belaban/jgroups-docker/master/yaml/gossiprouter.yaml > gossiprouter.yaml
Now comment out lines 42-43:
spec:
# type: LoadBalancer
# externalTrafficPolicy: Local
This is needed because Skupper requires the service to be exposed as a ClusterIP rather than a LoadBalancer.
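If you'd rather not edit the file by hand, a sed one-liner does the same; this is just a sketch, and the line numbers assume the file layout at the time of writing, so double-check them first:
sed -i.bak -e '42s/^/# /' -e '43s/^/# /' gossiprouter.yaml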
Now deploy it in both shells:
kubectl apply -f gossiprouter.yaml
deployment.apps/gossiprouter created
service/gossiprouter created
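Before continuing, it's worth verifying that the service is indeed a ClusterIP now (in both shells):
kubectl get svc gossiprouter -o jsonpath='{.spec.type}'
This should print ClusterIP.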
Now it is time to initialize Skupper in both shells:
skupper init
Waiting for LoadBalancer IP or hostname...
Skupper is now installed in namespace 'default'. Use 'skupper status' to get more information.
This installs some pods and services/proxies:
kubectl get po,svc
NAME READY STATUS RESTARTS AGE
pod/gossiprouter-6d6dcd6d79-q9p2f 1/1 Running 0 4m6s
pod/skupper-proxy-controller-dcf99c6bf-whns4 1/1 Running 0 86s
pod/skupper-router-7976948d9f-b58wn 1/1 Running 0 2m50s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gossiprouter ClusterIP 10.27.252.196 <none> 8787/TCP,9000/TCP,12001/TCP 4m6s
service/kubernetes ClusterIP 10.27.240.1 <none> 443/TCP 27m
service/skupper-controller LoadBalancer 10.27.241.112 35.223.80.171 8080:30508/TCP 2m49s
service/skupper-internal LoadBalancer 10.27.243.17 35.192.126.100 55671:30671/TCP,45671:31522/TCP 2m48s
service/skupper-messaging ClusterIP 10.27.247.95 <none> 5671/TCP 2m49s
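As the init output suggests, skupper status prints a summary of the installation (in either shell):
skupper status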
Next, we create a connection token in one of the clouds. This creates a file containing a certificate and keys that allows a Skupper instance in one cluster to connect to a Skupper instance in another cluster.
Note that this file must be kept secret as it contains the private keys of the (server) Skupper instance!
We only need to connect from one cloud to the other; Skupper automatically creates a bi-directional connection.
Let's pick the public cloud (shell 1):
skupper connection-token gke.secret
Connection token written to gke.secret
We now need to copy this file to the other (local) cloud. In my example, both shells run on the same box, so I simply use the home directory; in real life, this transfer would have to be done securely.
The local Skupper instance now uses this file to connect to the Skupper instance in the public cluster and establish an encrypted VPN tunnel:
skupper connect gke.secret
Skupper is now configured to connect to 35.192.126.100:55671 (name=conn1)
Next, we have to expose the GossipRouter service in each cloud to Skupper. Skupper then creates a proxy for it in the other cloud, a symbolic name that transparently connects to the exposed service:
Shell 1:
skupper expose deployment gossiprouter --port 12001 --address gossiprouter-1
Shell 2:
skupper expose deployment gossiprouter --port 12001 --address gossiprouter-2
The symbolic names gossiprouter-1 and gossiprouter-2 are now available to any pod in both clusters.
Traffic sent from the local cluster to gossiprouter-1 in the public cluster is forwarded by Skupper between the sites, transparently and in encrypted form!
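To confirm that the proxies were created, the two symbolic names should now show up as services in both clusters (the exact output varies with the Skupper version):
kubectl get svc gossiprouter-1 gossiprouter-2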
This means we can set TUNNEL_INITIAL_HOSTS (as used in the bridge cluster) to gossiprouter-1[12001],gossiprouter-2[12001].
This is used in bridge.xml:
<TUNNEL bind_addr="match-interface:eth0,site-local" gossip_router_hosts="${TUNNEL_INITIAL_HOSTS:127.0.0.1[12001]}"
...
Let's now run RelayDemo in the public and local clusters. This is the same procedure as in [1].
Shell 1:
curl https://raw.githubusercontent.com/belaban/jgroups-docker/master/yaml/nyc.yaml > public.yaml
Shell 2:
curl https://raw.githubusercontent.com/belaban/jgroups-docker/master/yaml/sfc.yaml > local.yaml
In both YAML files, change the number of replicas to 3 and the value of TUNNEL_INITIAL_HOSTS to "gossiprouter-1[12001],gossiprouter-2[12001]".
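After the edit, the relevant fragment of each Deployment should look roughly like this (a sketch: the container name is illustrative, keep whatever the downloaded files use):
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: jgroups   # illustrative; keep the name from the original YAML
        env:
        - name: TUNNEL_INITIAL_HOSTS
          value: "gossiprouter-1[12001],gossiprouter-2[12001]"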
Then start 3 pods in the public (NYC) and local (SFC) clusters:
Shell 1:
kubectl apply -f public.yaml
deployment.apps/nyc created
service/nyc created
Shell 2:
kubectl apply -f local.yaml
deployment.apps/sfc created
service/sfc created
Verify that there are 3 pods running in each cluster.
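For example, in shell 1:
kubectl get pods | grep nyc-
(The same check for the local cluster is shown below.)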
Let's now run RelayDemo on the local cluster:
Shell 2:
> kubectl get pods |grep sfc-
sfc-7f448b7c94-6pb9m 1/1 Running 0 2m44s
sfc-7f448b7c94-d7zkp 1/1 Running 0 2m44s
sfc-7f448b7c94-ddrhs 1/1 Running 0 2m44s
> kubectl exec -it sfc-7f448b7c94-6pb9m -- bash
bash-4.4$ relay.sh -props sfc.xml -name Local
-------------------------------------------------------------------
GMS: address=Local, cluster=RelayDemo, physical address=10.1.0.88:7801
-------------------------------------------------------------------
View: [sfc-7f448b7c94-6pb9m-4056|3]: sfc-7f448b7c94-6pb9m-4056, sfc-7f448b7c94-ddrhs-52643, sfc-7f448b7c94-d7zkp-11827, Local
: hello
: << hello from Local
<< response from sfc-7f448b7c94-6pb9m-4056
<< response from sfc-7f448b7c94-ddrhs-52643
<< response from sfc-7f448b7c94-d7zkp-11827
<< response from Local
<< response from nyc-6b4846f777-g2gqk-7743:nyc
<< response from nyc-6b4846f777-7jm9s-23105:nyc
<< response from nyc-6b4846f777-q2wrl-38225:nyc
We first list all pods, then exec into one of them.
Next, we run RelayDemo and send a message to all members of the local and remote clusters. We get a response from ourselves (Local) and the other 3 members of the local (SFC) cluster, plus responses from the 3 members of the remote public (NYC) cluster.
JGroups load-balances messages across the two GossipRouters. Whenever the chosen router is remote, Skupper transparently forwards the traffic over its VPN tunnel to the other site.
[1] http://belaban.blogspot.com/2019/12/spanning-jgroups-kubernetes-based.html
[2] https://skupper.io/