Create Self-Signed Multi-Domain (SAN) Certificates

TL;DR

The SAN-extension is removed during signing unless it is explicitly respecified.
To create a private CA with self-signed multi-domain certificates for your development setup, you simply have to:

  1. Run create-ca.sh to generate the root-certificate for your private CA.
  2. Run gencert.sh NAME to generate self-signed certificates for the CN NAME with an exemplary SAN-extension.

Subject Alternative Name (SAN) And Self-Signed Certificates

Multi-Domain certificates are implemented as a certificate-extension called Subject Alternative Name (SAN).
One can simply specify the additional domains (or IPs) when creating a certificate.

The following example shows the syntax for the keytool-command, which comes with the JDK and is frequently used by Java-programmers to create certificates:

keytool \
 -keystore test.jks -storepass confidential -keypass confidential \
 -genkey -alias test -validity 365 \
 -dname "CN=test,OU=security,O=juplo,L=Juist,ST=Niedersachsen,C=DE" \
 -ext "SAN=DNS:test,DNS:localhost,IP:127.0.0.1"

If you list the content of the newly created keystore with…

keytool -list -v -keystore test.jks

…you should see a section like the following one:

#1: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: test
  DNSName: localhost
  IPAddress: 127.0.0.1
]

The certificate is also valid for these additionally specified domains and IPs.

The problem is that this certificate is not signed and will not be trusted unless you distribute it explicitly through a truststore.
This is feasible if you just want to authenticate and encrypt a single point-to-point communication.
But if more clients and/or servers have to authenticate to each other, updating and distributing the truststore soon becomes hell.

The common solution in this situation is to create a private CA that can sign newly created certificates.
This way, only the root-certificate of that private CA has to be distributed.
Clients that know the root-certificate of the private CA will automatically trust all certificates signed by that CA.

But unfortunately, if you sign your certificate, the SAN-extension vanishes: the signed certificate is only valid for the CN.
(One may think that you just have to export the SAN-extension into the certificate-signing-request – it is not exported by default – but the SAN will still be lost after signing the extended request…)

This removal of the SAN-extension is not a bug, but a feature.
A CA has to be in control of which domains and IPs it signs certificates for.
If a client could write arbitrary additional domains into the SAN-extension of his certificate-signing-request, he could fool the CA into signing a certificate for any domain.
Hence, all entries in a SAN-extension are removed by default during signing.

This default behavior is very annoying if you just want to run your own private CA to authenticate all your services to each other.

In the following sections, I will walk you through a solution that circumvents this pitfall.
If you just need a working solution for your development setup, you may skip the explanation and just download the scripts, which combine the presented steps.

Recipe To Create A Private CA With Self-Signed Multi-Domain Certificates

Create And Distribute The Root-Certificate Of The CA

We are using openssl to create the root-certificate of our private CA:

openssl req \
  -new -x509 -subj "/C=DE/ST=Niedersachsen/L=Juist/O=juplo/OU=security/CN=Root-CA" \
  -keyout ca-key -out ca-cert -days 365 -passout pass:extraconfidential

This should create two files:

  • ca-cert, the root-certificate of your CA
  • ca-key, the private key of your CA with the password extraconfidential

Be sure to protect ca-key and its password, because anyone who has access to both of them can sign certificates in the name of your CA!
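
A first simple precaution is to restrict the file-permissions of the key, so that only its owner can read it. The following sketch demonstrates this on a dummy file in a temporary directory (GNU stat assumed); apply the same chmod to the real ca-key:

```shell
# demonstrated on a dummy file; apply the same chmod to your real ca-key
cd "$(mktemp -d)"
touch ca-key
chmod 600 ca-key          # owner may read and write, nobody else has access
stat -c '%a %n' ca-key    # prints '600 ca-key'
```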

To distribute the root-certificate, so that your Java-clients can trust all certificates signed by your CA, you have to import the root-certificate into a truststore and make that truststore available to your Java-clients:

keytool \
  -keystore truststore.jks -storepass confidential \
  -import -alias ca-root -file ca-cert -noprompt

Create A Certificate-Signing-Request For Your Certificate

We are reusing the already created certificate here.
If you create a new one, there is no need to specify the SAN-extension, since it will not be exported into the request, and this version of the certificate will be overwritten when the signed certificate is reimported:

keytool \
  -keystore test.jks -storepass confidential \
  -certreq -alias test -file cert-file

This will create the file cert-file, which contains the certificate-signing-request.
This file can be deleted after the certificate has been signed (which is done in the next step).

Sign The Request, Adding The Additional Domains In A SAN-Extension

We use openssl x509 to sign the request:

openssl x509 \
  -req -CA ca-cert -CAkey ca-key -in cert-file -out test.pem \
  -days 365 -CAcreateserial -passin pass:extraconfidential \
  -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1")

This can also be done with openssl ca, which has a slightly different and a little more complicated API.
openssl ca is meant to manage a real, full-blown CA.
But we do not need its extra options and complexity for our simple private CA.

The important part here is everything that comes after -extensions SAN.
It specifies the Subject-Alternative-Name-section that we want to additionally include in the signed certificate.
Because we are in full control of our private CA, we can specify any domains and/or IPs here that we want.
The other options are ordinary certificate-signing stuff that is already better explained elsewhere.

We use a special syntax with the option -extfile that allows us to specify the contents of a virtual file as part of the command (process substitution).
You can just as well write your SAN-extension into a file and hand over the name of that file here, as is usually done.
If you want to specify the same SAN-extension in a file, that file would have to contain:

[SAN]
subjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1

Note that the name you give the extension on the command-line with -extensions SAN has to match the header in the (virtual) file ([SAN]).

As a result of the command, the file test.pem will be created, which contains the signed x509-certificate.
You can display the contents of that certificate in a human-readable form with:

openssl x509 -in test.pem -text

It should display something similar to this example-output.
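
If you are only interested in the SAN-section, newer OpenSSL-versions (1.1.1 and above) can print a single extension. The following sketch demonstrates the check on a throwaway certificate; for the real certificate, replace the file-name with test.pem:

```shell
# create a throwaway self-signed certificate with the same SAN-entries
openssl req -new -x509 -nodes -subj "/CN=test" -days 1 \
  -keyout /tmp/san-demo.key -out /tmp/san-demo.pem \
  -addext "subjectAltName=DNS:test,DNS:localhost,IP:127.0.0.1"
# print only the SubjectAlternativeName-extension of the certificate
openssl x509 -in /tmp/san-demo.pem -noout -ext subjectAltName
```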

Import The Root-Certificate Of The CA And The Signed Certificate Into The Keystore

If you want your clients, which only know the root-certificate of your CA, to trust your Java-service, you have to build up a Chain-of-Trust that leads from the known root-certificate to the signed certificate that your service uses to authenticate itself.
(Note: SSL-encryption always includes the authentication of the service a client connects to through its certificate!)
In our case, that chain only has two entries, because our certificate was signed directly by the root-certificate.
Therefore, you have to import the root-certificate (ca-cert) and your signed certificate (test.pem) into a keystore and make that keystore available to the Java-service, so that it can authenticate itself with the signed certificate when a client connects.
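
The trust-relation that the keystore will encode can also be illustrated on the OpenSSL-side with openssl verify. The following throwaway sketch (hypothetical file-names, not the files created above) builds such a minimal two-entry chain and validates it against the root:

```shell
# build a minimal two-entry Chain-of-Trust: a root-CA and one signed certificate
cd "$(mktemp -d)"
openssl req -new -x509 -nodes -subj "/CN=Demo-CA" -keyout ca.key -out ca.pem -days 1
openssl req -new -nodes -subj "/CN=leaf" -keyout leaf.key -out leaf.csr
openssl x509 -req -CA ca.pem -CAkey ca.key -CAcreateserial -in leaf.csr -out leaf.pem -days 1
# validation succeeds, because ca.pem is the root of the chain: prints 'leaf.pem: OK'
openssl verify -CAfile ca.pem leaf.pem
```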

Import the root-certificate of the CA:

keytool \
 -keystore test.jks -storepass confidential \
 -import -alias ca-root -file ca-cert -noprompt

Import the signed certificate (this will overwrite the unsigned version):

keytool \
 -keystore test.jks -storepass confidential \
 -import -alias test -file test.pem

That’s it: we are done!

You can validate the contents of the created keystore with:

keytool \
 -keystore test.jks -storepass confidential \
 -list -v

It should display something similar to this example-output.

To authenticate service A against client B you will have to:

  • make the keystore test.jks available to the service A
  • make the truststore truststore.jks available to the client B

If you want your clients to authenticate themselves to your services as well, so that only clients with a trusted certificate can connect (2-Way-Authentication), client B also needs its own signed certificate to authenticate against service A, and service A also needs access to the truststore, to be able to trust that certificate.

Simple Example-Scripts To Create A Private CA And Self-Signed Certificates With SAN-Extension

The following two scripts automate the presented steps and may be useful when setting up a private CA for Java-development:

  • Run create-ca.sh to create the root-certificate for the CA and import it into a truststore (creates ca-cert and ca-key and the truststore truststore.p12)
  • Run gencert.sh CN to create a certificate for the common name CN, sign it using the private CA (also exemplarily adding alternative names) and build up a valid Chain-of-Trust in a keystore (creates CN.pem and the keystore CN.p12)
  • Global options can be set in the configuration file settings.conf

Read the source for more options…

Differing from the steps shown above, these scripts use the keystore-format PKCS12.
This is because otherwise keytool nags about the non-standard default-format JKS in each and every step.

Note: PKCS12 does not distinguish between a store-password and a key-password. Hence, only a store-password is specified in the scripts.
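
This single-password property can be observed when creating a PKCS12-store with openssl directly. The following throwaway sketch (hypothetical file-names, not part of the scripts) bundles a key and certificate into one store with exactly one password:

```shell
# create a throwaway key + certificate and bundle both into a PKCS12-store;
# 'openssl pkcs12 -export' takes exactly one password for the whole store
cd "$(mktemp -d)"
openssl req -new -x509 -nodes -subj "/CN=demo" -keyout demo.key -out demo.pem -days 1
openssl pkcs12 -export -in demo.pem -inkey demo.key -name demo \
  -out demo.p12 -passout pass:confidential
# read the store back, using the same single password
openssl pkcs12 -in demo.p12 -passin pass:confidential -nokeys | grep subject
```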

Encrypt Communication Between Kafka And ZooKeeper With TLS

TL;DR

  1. Download and unpack zookeeper+tls.tgz.
  2. Run README.sh for a fully automated example of the presented setup.

Copy and paste to execute the two steps on Linux:

curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv && cd zookeeper+tls && ./README.sh

A German translation of this article can be found on http://trion.de.

Current Kafka Cannot Encrypt ZooKeeper-Communication

Up to now (version 2.3.0 of Apache Kafka), it is not possible to encrypt the communication between the Kafka-Brokers and their ZooKeeper-ensemble.
This is because ZooKeeper 3.4.14, which is shipped with Apache Kafka 2.3.0, lacks support for TLS-encryption.

The documentation deemphasizes this with the observation that usually only non-sensitive data (configuration-data and status-information) is stored in ZooKeeper, and that it would not matter if this data were world-readable, as long as it can be protected against manipulation, which can be achieved through proper authentication and ACLs for zNodes:

The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of znodes can cause cluster disruption. (Kafka-Documentation)

This quote obscures the fact, mentioned elsewhere, that there are use-cases that store sensitive data in ZooKeeper.
For example, if authentication via SASL/SCRAM or Delegation Tokens is used.
Accordingly, the documentation often stresses that usually there is no need to make ZooKeeper accessible to normal clients.
Nowadays, only admin-tools need direct access to the ZooKeeper-ensemble.
Hence, it is stated as a best practice to make the ensemble available only on a local network, hidden behind a firewall or the like.

In clear text: one must not run a Kafka-Cluster that spans more than one data-center, or must at least make sure that all communication is tunneled through a virtual private network.

ZooKeeper 3.5.5 To The Rescue

On May 20th, 2019, version 3.5.5 of ZooKeeper was released.
Version 3.5.5 is the first stable release of the 3.5.x branch, which introduces the TLS-support that the community has long yearned for.
It supports the encryption of all communication between the nodes of a ZooKeeper-ensemble and between ZooKeeper-Servers and -Clients.

Part of ZooKeeper is a sophisticated client-API that provides a convenient abstraction for the communication between clients and servers over the Atomic Broadcast Protocol.
The TLS-encryption is applied by this API transparently.
Because of that, all client-implementations can profit from this new feature through a simple library-upgrade from 3.4.14 to 3.5.5.

This article will walk you through an example that shows how to carry out such a library-upgrade for Apache Kafka 2.3.0 and how to configure a cluster to use TLS-encryption when communicating with a standalone ZooKeeper.

Disclaimer

The presented setup is meant for evaluation only!

It fiddles with the libraries used by Kafka, which might cause unforeseen issues.
Furthermore, using TLS-encryption in ZooKeeper requires one to switch from the battle-tested NIOServerCnxnFactory, which uses the NIO-API directly, to the newly introduced NettyServerCnxnFactory, which is built on top of Netty.

Recipe To Enable TLS Between Broker And ZooKeeper

The article will now walk you step by step through the setup.
If you just want to evaluate the example, you can jump ahead to the download-links.

All commands must be executed in the same directory.
We recommend creating a new directory for that purpose.

Download Kafka and ZooKeeper

First of all: Download version 2.3.0 of Apache Kafka and version 3.5.5 of Apache ZooKeeper:

curl -sc - http://ftp.fau.de/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz | tar -xzv
curl -sc - http://ftp.fau.de/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz | tar -xzv

Switch Kafka 2.3.0 from ZooKeeper 3.4.14 to ZooKeeper 3.5.5

Remove the 3.4.14-version from the libs-directory of Apache Kafka:

rm -v kafka_2.12-2.3.0/libs/zookeeper-3.4.14.jar

Then copy the JARs of the new version of Apache ZooKeeper into that directory. (The last JAR is only needed for CLI-clients such as zookeeper-shell.sh.)

cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-3.5.5.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/zookeeper-jute-3.5.5.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/netty-all-4.1.29.Final.jar kafka_2.12-2.3.0/libs/
cp -av apache-zookeeper-3.5.5-bin/lib/commons-cli-1.2.jar kafka_2.12-2.3.0/libs/

That is all there is to do to upgrade ZooKeeper.
If you run one of the Kafka-commands, it will use ZooKeeper 3.5.5 from now on.

Create A Private CA And The Needed Certificates


You can read more about setting up a private CA in this post.

Create the root-certificate for the CA and store it in a Java-truststore:

openssl req -new -x509 -days 365 -keyout ca-key -out ca-cert -subj "/C=DE/ST=NRW/L=MS/O=juplo/OU=kafka/CN=Root-CA" -passout pass:superconfidential
keytool -keystore truststore.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt

The following commands will create a self-signed certificate in zookeeper.jks.
What happens is:

  1. Create a new key-pair and certificate for zookeeper
  2. Generate a certificate-signing-request for that certificate
  3. Sign the request with the key of the private CA and also add a SAN-extension, so that the signed certificate is also valid for localhost
  4. Import the root-certificate of the private CA into the keystore zookeeper.jks
  5. Import the signed certificate for zookeeper into the keystore zookeeper.jks


You can read more about creating self-signed certificates with multiple domains and building a Chain-of-Trust here.

NAME=zookeeper
keytool -keystore $NAME.jks -storepass confidential -alias $NAME -validity 365 -genkey -keypass confidential -dname "CN=$NAME,OU=kafka,O=juplo,L=MS,ST=NRW,C=DE"
keytool -keystore $NAME.jks -storepass confidential -alias $NAME -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out $NAME.pem -days 365 -CAcreateserial -passin pass:superconfidential -extensions SAN -extfile <(printf "\n[SAN]\nsubjectAltName=DNS:$NAME,DNS:localhost")
keytool -keystore $NAME.jks -storepass confidential -import -alias ca-root -file ca-cert -noprompt
keytool -keystore $NAME.jks -storepass confidential -import -alias $NAME -file $NAME.pem

Repeat this with:

  • NAME=kafka-1
  • NAME=kafka-2
  • NAME=client

Now we have signed certificates for all participants in our small example, stored in separate keystores, each with a Chain-of-Trust set up that is rooted in our private CA.
We also have a truststore that will validate all of these certificates, because it contains the root-certificate of the Chain-of-Trust: the certificate of our private CA.

Configure And Start ZooKeeper

We highlight/explain only the configuration-options that are needed for TLS-encryption here!

In our setup, the standalone ZooKeeper essentially needs two specially tweaked configuration-files to use encryption.

Create the file java.env:

SERVER_JVMFLAGS="-Xms512m -Xmx512m -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory"
ZOO_LOG_DIR=.

  • The Java system property zookeeper.serverCnxnFactory switches the connection-factory to the Netty-Framework.
    Without this, TLS is not possible!

Create the file zoo.cfg:

dataDir=/tmp/zookeeper
secureClientPort=2182
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=zookeeper.jks
ssl.keyStore.password=confidential
ssl.trustStore.location=truststore.jks
ssl.trustStore.password=confidential

  • secureClientPort: We only allow encrypted connections!
    (If we want to allow unencrypted connections too, we can simply specify clientPort additionally.)
  • authProvider.1: Selects authentication through client-certificates
  • ssl.keyStore.*: Specifies the path to and password of the keystore with the zookeeper-certificate
  • ssl.trustStore.*: Specifies the path to and password of the common truststore with the root-certificate of our private CA
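
If you need to accept unencrypted connections as well, e.g. during a migration-phase, zoo.cfg can simply specify both ports. A sketch (clients then have to pick the right port; the plaintext port-number is an assumption):

```properties
# plaintext connections (legacy clients)
clientPort=2181
# TLS-encrypted connections
secureClientPort=2182
```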

Copy the file log4j.properties into the current working directory to enable logging for ZooKeeper (see also java.env):

cp -av apache-zookeeper-3.5.5-bin/conf/log4j.properties .

Start the ZooKeeper-Server:

apache-zookeeper-3.5.5-bin/bin/zkServer.sh --config . start

  • --config .: The script should search the current directory for the configuration-data and certificates.

Configure And Start The Brokers


We highlight/explain only the configuration-options and start-parameters that are needed to encrypt the communication between the Kafka-Brokers and the ZooKeeper-Server here!

The other parameters shown here that are concerned with SSL are only needed for securing the communication among the brokers themselves and between brokers and clients.
You can read all about them in the standard documentation.
In short: this example is set up to use SSL for authentication between the brokers and SASL/PLAIN for client-authentication; both channels are encrypted with TLS.

TLS for the ZooKeeper client-API is configured through Java system properties.
Hence, most of the SSL-configuration for connecting to ZooKeeper has to be specified when starting the broker.
Only the address and port for the connection itself are specified in the configuration-file.

Create the file kafka-1.properties:

broker.id=1
zookeeper.connect=zookeeper:2182
listeners=SSL://kafka-1:9193,SASL_SSL://kafka-1:9194
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=kafka-1.jks
ssl.keystore.password=confidential
ssl.key.password=confidential
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_consumer="pw4consumer" user_producer="pw4producer";
sasl.enabled.mechanisms=PLAIN
log.dirs=/tmp/kafka-1-logs
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2

  • zookeeper.connect: If you additionally allow unsecured connections, be sure to specify the right port here!
  • All other options are not relevant for encrypting the connections to ZooKeeper

Start the broker in the background and remember its PID in the file KAFKA-1:

(
  export KAFKA_OPTS="
    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
    -Dzookeeper.client.secure=true
    -Dzookeeper.ssl.keyStore.location=kafka-1.jks
    -Dzookeeper.ssl.keyStore.password=confidential
    -Dzookeeper.ssl.trustStore.location=truststore.jks
    -Dzookeeper.ssl.trustStore.password=confidential
  "
  kafka_2.12-2.3.0/bin/kafka-server-start.sh kafka-1.properties & echo $! > KAFKA-1
) > kafka-1.log &

Check the logfile kafka-1.log to confirm that the broker starts without errors!

  • zookeeper.clientCnxnSocket: Switches from NIO to the Netty-Framework.
    Without this, the ZooKeeper Client-API (just like the ZooKeeper-Server) cannot use TLS!
  • zookeeper.client.secure=true: Switches on TLS-encryption, for all connections to any ZooKeeper-Server
  • zookeeper.ssl.keyStore.*: Specifies the path to and password of the keystore with the kafka-1-certificate
  • zookeeper.ssl.trustStore.*: Specifies the path to and password of the common truststore with the root-certificate of our private CA


Do the same for kafka-2!
And do not forget to adapt the config-file accordingly, or better: just download a copy...
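
If you write kafka-2.properties by hand instead, it differs from kafka-1.properties only in the broker-id, the hostname in the listeners, the keystore and the log-directory; everything else is carried over unchanged. A sketch under these assumptions:

```properties
broker.id=2
zookeeper.connect=zookeeper:2182
listeners=SSL://kafka-2:9193,SASL_SSL://kafka-2:9194
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=kafka-2.jks
ssl.keystore.password=confidential
ssl.key.password=confidential
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required user_consumer="pw4consumer" user_producer="pw4producer";
sasl.enabled.mechanisms=PLAIN
log.dirs=/tmp/kafka-2-logs
offsets.topic.replication.factor=2
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=2
```

For the start-command, replace kafka-1.jks with kafka-2.jks in KAFKA_OPTS and use kafka-2.properties, the PID-file KAFKA-2 and the logfile kafka-2.log accordingly.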

Configure And Execute The CLI-Clients

All scripts from the Apache-Kafka-Distribution that connect to ZooKeeper are configured in the same way as shown for kafka-server-start.sh.
For example, to create a topic, you run:

export KAFKA_OPTS="
  -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
  -Dzookeeper.client.secure=true
  -Dzookeeper.ssl.keyStore.location=client.jks
  -Dzookeeper.ssl.keyStore.password=confidential
  -Dzookeeper.ssl.trustStore.location=truststore.jks
  -Dzookeeper.ssl.trustStore.password=confidential
"
kafka_2.12-2.3.0/bin/kafka-topics.sh \
  --zookeeper zookeeper:2182 \
  --create --topic test \
  --partitions 1 --replication-factor 2

Note: A different keystore is used here (client.jks)!

CLI-clients that connect to the brokers can be called as usual.

In this example, they use an encrypted listener on port 9194 (for kafka-1) and are authenticated using SASL/PLAIN.
The client-configuration is kept in the files consumer.config and producer.config.
Take a look at those files and compare them with the broker-configuration above.
If you want to learn more about securing the broker/client-communication, we refer you to the official documentation.
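
The contents of consumer.config are not reproduced in this article. Matching the broker-configuration shown above (SASL_SSL-listener on port 9194, the user/password-pair user_consumer/pw4consumer from the JAAS-setting), a minimal sketch could look like this; everything beyond those given values is an assumption:

```properties
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="consumer" password="pw4consumer";
# the common truststore validates the broker-certificates signed by our private CA
ssl.truststore.location=truststore.jks
ssl.truststore.password=confidential
```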


If you have trouble starting these clients, download the scripts and take a look at the examples in README.sh.

TBD: Further Steps To Take...

This recipe only activates TLS-encryption between Kafka-Brokers and a standalone ZooKeeper.
It does not show how to enable TLS between ZooKeeper-Nodes (which should be easy) or whether it is possible to authenticate Kafka-Brokers via TLS-certificates. These topics will be covered in future articles...

Fully Automated Example Of The Presented Setup

Download and unpack zookeeper+tls.tgz for an evaluation of the presented setup:

curl -sc - https://juplo.de/wp-uploads/zookeeper+tls.tgz | tar -xzv

The archive contains a fully automated example.
Just run README.sh in the unpacked directory.

It downloads the required software, carries out the library-upgrade, creates the required certificates and starts a standalone ZooKeeper and two Kafka-Brokers that use TLS to encrypt all communication.
It also executes a console-consumer and a console-producer, which read from and write to a topic, and a zookeeper-shell, which communicates directly with the ZooKeeper-node, to prove that the setup is working.
The ZooKeeper- and broker-instances are left running, to enable the evaluation of the fully encrypted cluster.

Usage

  • Run README.sh to execute the automated example
  • After running README.sh, the Kafka-Cluster will still be running, so that one can experiment by hand with commands from README.sh
  • README.sh can be executed repeatedly: it will automatically skip all setup-steps that have already been done
  • Run README.sh stop to stop the Kafka-Cluster (it can be restarted by re-running README.sh)
  • Run README.sh cleanup to stop the cluster and remove all created files and data (only the downloaded packages will be left untouched)

Separate Downloads For The Packaged Files