Ted Hahn is an SRE for hire working on planet-scale distributed systems. His clients include Epic Games and startups in Seattle and New York.
Mark Hahn is Director of Cloud Strategies and DevOps for Ciber Global, a consulting firm. In this role he is responsible for all things related to software velocity.
We are a father-and-son team that put this idea and demonstration together
Your core infrastructure should include an opinionated web server with middleware for metrics, logging, and security
We propose a Kubernetes controller (and libraries for using it) that allows you to easily stand up secure pod-to-pod communications
The controller creates certificates for each pod, allowing pods to easily set up TLS and support mTLS
Using certificates and TLS is all goodness and light, but getting the certificates set up and distributed is hard work
A developer can easily follow these steps to create certificates for development and use them for testing, but getting certificates for test environments or production can be much harder
A common approach is to use CFSSL, Cloudflare's PKI/TLS toolkit, to operate your own certificate authority
This is a painful process, particularly when you are both running the CA and acting as a client of the CA; it is easy to get the roles mixed up and distribute the wrong parts to the wrong people
This is a pain in CFSSL, as there are a bunch of moving parts and the two roles get confused. This example lives in branch ted/mtlsexample /docs/manual-ca/
You will find more detailed examples in our repository
This is a pain in OpenSSL.
There's:
- config files in the old Windows INI format
- cobbled-together support for batch mode
- wonky support for newer x509 extensions (Netscape, anyone?)
We use TLS throughout; we should always say TLS and never SSL. Call us out if we accidentally say SSL
This is a simple Go server which . . .
But all servers and services should use TLS, and we will show how that is done
these are the modifications needed to enable the server to use TLS
This code shouldn't be copy+pasted throughout your repositories: It should be in one simple central place
But creating the PKCS#12 Java keystore is an antiquated process, and it is encrypted with DES.
Introduce the topic of mTLS and motivate the difficulty with certificates
Reader's note: The view that the cluster's CA is the definition of the cluster is shared by older documentation, but changes in 1.19 seem to disagree
ALTS = Application Layer Transport Security
This is how kubelets already join the cluster; they generate a private key and a CSR, which is sent to the master
The CSR is turned into a certificate, which is then used to secure all further control-plane communication
Certificates are already distributed to each node, so why shouldn't they be distributed to each pod?
We have implemented this as a mutating webhook service, which Kubernetes calls whenever a pod is created.
The webhook dynamically creates and attaches a secret to your pod's container.
The secret contains the certificate and private key needed for inter-pod communication.
Demo: two tabs in the terminal: one in the infocontainer base; one in the grpc branch
Demo prep: Working KubeTLS server, then: Delete the mutatingwebhookconfigurations (have a backup!), delete secret (and matching csr) infocontainers are using, restart infocontainers (so they have no secret). Remove grpc pods
Show pods; show running KubeTLS server
Show a pod with no cert (infocontainer)
install KubeTLS server as the mutating webhook controller
watch the KubeTLS logs
kill pod
look at pod with new secret (infocontainer)
This is the body of the request to the mutating webhook controller,
but hand pretty-printed to get it to fit on the slide. The full pod object
is provided in the object: field.
This is the main Go code for the logic of the KubeTLS Mutating
Webhook Controller. All the error checking has been deleted so it fits
on one screen. This follows the bullet point plan we outlined earlier
in our presentation.
Demo
In ted/mtlsexample:
cd kubessl/tlslib/example/golang-grpc
kubectl apply -f greeter_server.yaml
kubectl create -f greeter_client.yaml
(copy the created job name)
kubectl logs CTRL+V
RootCAs is the certificate pool used for verifying servers when acting as a client. ClientCAs is the certificate pool used for verifying clients when acting as a server
There's example code in our repository, as well as libraries.
The libraries provide features like authentication based on the requesting pod's ServiceAccount
The magic here is not in the libraries but in the distribution of the certificates to every place that needs them
Service Meshes: Istio, Linkerd, Consul, etc.
<!-- Ted makes the point that this is service mesh done Kubernetes-native
Service mesh adds too many points of contact
Article on Istio and Wireshark
Mark makes the point that service mesh takes the responsibility away from developers
Developers throw their requests into the stream, rather than understanding and coordinating with the teams they are in direct contact with -->
This complements tools like [KubeResolver](https://github.com/everflow-io/kuberesolver), which allow load-balancing by using the Kubernetes API
Why not Istio?
I recently read [an article](https://shrayk.medium.com/ten-tips-for-running-istio-in-production-4ea2b158440a) in which the first piece of advice is "Use Wireshark to debug". This is pretty much the opposite of one of the most-stated purposes of Istio: to prevent sniffing or interception of traffic.
# FAQ:
## What does the KubeTLS Certificate look like?
The Certificate generated is unique to the set of services that the pod matches, as well as the serviceAccount the pod is running as. The relevant bits are as follows:
*SANs*: A list of services matching the pod, i.e. if a pod can be found in the "nginx" and "web-frontend" endpoints objects in the "default" namespace, it will have ["nginx", "web-frontend"]. Support for namespace specific suffixes and cluster-specific suffixes is planned.
*CN*: ServiceAccount, in the format "system:serviceaccount:Namespace:ServiceAccount", i.e. "system:serviceaccount:default:nginx"
The commonName of the certificate is intended for use as the client's name. Use of CommonName for DNS hostname validation has long been deprecated, and starting in Go 1.15 the system no longer accepts server certificates without a SubjectAlternativeName field.
## Future of the KubeTLS Certificate:
The KubeTLS certificate should be unique per-pod. Generating a unique certificate per-pod is a future goal of KubeTLS, and the current method will be deprecated. The following changes will be made:
*SANs*: Will include a pod-specific identifier. Will *not* include the pod's IP address, for both security (IP Addresses may be reused) and practical (KubeTLS does not know the Pod IP at webhook time) reasons.
*CN*: Will stay as is, representing the ServiceAccount.
*Subject*: Will include some specific OU or other structure specifying the pod ID.
## Future of KubeTLS:
Kubernetes 1.19 deprecated the `certificates.k8s.io/v1beta1` API. Included in the new `certificates.k8s.io/v1` API are Signing Profiles, specified by the `signerName` field. This represents a significant change to the way certificates are generated and stored. KubeTLS will no longer be able to create dual-use "client auth" and "server-auth" certificates, so those functionalities may be split into separate files.
In the farther future, we believe that this functionality belongs in the kubelet: the kubelet should be responsible for generating the certificate and presenting it to the Kubernetes API server for signing.
Kubernetes simplifies this a lot: you don't need to keep your highly sensitive CA certificate in an accessible place or have a special signing computer. You simply call the k8s API
This example follows from our k8s/bootstrap directory