Preliminaries

Set Up a Kafka Cluster

Download Kafka: https://kafka.apache.org/downloads

Unpack it to some location, like /opt/kafka, which will be referred to as <kafkaInstallationDir>. (See https://adityasridhar.com/posts/how-to-easily-install-kafka-without-zookeeper for a how-to.)


Kafka historically relied on ZooKeeper for its controller layer, but as of Kafka 3.3.1 (October 2022) the native Kafka Raft ("KRaft") mode is recommended: it improves performance and consolidates each node down to a single server configuration file and a single process.

Configure server.properties

In KRaft mode, the relevant configuration file is located at <kafkaInstallationDir>/config/kraft/server.properties.

A minimum cluster size of 3 servers is recommended for production environments, with each server node functioning as both a broker and a controller. This is enabled by setting process.roles to broker,controller, as noted in the Server Basics section:


Code Block
languagetext
themeRDark
titleserver.properties
# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller


node.id is one of the few parameters that must be unique to each server node; use sequential integers (1, 2, 3, etc.).
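
For example, the first server node might use:

Code Block
languagetext
themeRDark
titleserver.properties
node.id=1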

controller.quorum.voters is a comma-separated list of every server node's id, hostname or IP, and port, and can be identical across nodes if using FQDNs:

Code Block
languagetext
themeRDark
titleserver.properties
controller.quorum.voters=1@kafka-1.yourDomain.com:9093,2@kafka-2.yourDomain.com:9093,3@kafka-3.yourDomain.com:9093

Assuming the server nodes all reside in the same private IP space, you may wish to use local DNS to map the FQDNs to localhost and private IP addresses so that traffic remains within that space. Alternatively, use localhost and private IPs directly in this parameter.
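
For example, a minimal local mapping on the first node might look like the following (the private addresses below are placeholders, not prescribed values):

Code Block
languagetext
themeRDark
title/etc/hosts on kafka-1
127.0.0.1    kafka-1.yourDomain.com
10.0.0.2     kafka-2.yourDomain.com
10.0.0.3     kafka-3.yourDomain.com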

server.properties provides examples of possible listener, protocol, and port configurations. We recommend one listener for the broker function and one for the controller function, as follows:


Code Block
languagetext
themeRDark
titleserver.properties
listeners=BROKER://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
inter.broker.listener.name=BROKER
advertised.listeners=BROKER://<thisServersFQDN>:9092
listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_PLAINTEXT


where advertised.listeners is the second parameter (after node.id) that is unique to each server node.
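
For example, the second server node (using the hostnames from the quorum example above) would set:

Code Block
languagetext
themeRDark
titleserver.properties
advertised.listeners=BROKER://kafka-2.yourDomain.com:9092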

SASL_SSL on the broker listener is required to enforce client/server user authentication and authorization, since that traffic traverses the public internet. SASL_SSL may also be enabled for the controller listener with properly configured keystores and truststores; however, if the server nodes communicate exclusively in private network space (as described above), then SASL_PLAINTEXT may be considered sufficient.
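
If you do opt to secure the controller listener with SASL_SSL, the protocol map from the earlier example would instead read:

Code Block
languagetext
themeRDark
titleserver.properties
listener.security.protocol.map=BROKER:SASL_SSL,CONTROLLER:SASL_SSL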

log.dirs specifies not the location of server logs (those live in <kafkaInstallationDir>/logs) but the actual topic data, so provide a reliable location outside the default /tmp; for example, /var/opt/kafka.
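
For example:

Code Block
languagetext
themeRDark
titleserver.properties
log.dirs=/var/opt/kafka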

Set the SASL mechanism parameters to PLAIN:


Code Block
languagetext
themeRDark
titleserver.properties
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.mechanism.controller.protocol=PLAIN


Set the authorizer (there are multiple to choose from; StandardAuthorizer is recommended for KRaft mode):

Code Block
languagetext
themeRDark
titleserver.properties
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer


Come up with an admin username, e.g. “kafka-admin”, to be used for admin tasks from the SHRINE hub instance and for inter-broker communication. We’ll create the user later. Set it as a super user:


Code Block
languagetext
themeRDark
titleserver.properties
super.users=User:kafka-admin

Configure Kafka Users

Kafka supports SASL authentication with a few possible user-management mechanisms, but version 3.3.1 in KRaft mode only makes the PLAIN mechanism available (not to be confused with the insecure PLAINTEXT protocol). The more full-featured SCRAM mechanism will be available for KRaft in a future release. User authentication via the PLAIN mechanism consults a static user list in the <kafkaInstallationDir>/config/kraft/kafka_server_jaas.conf file present on each server node.

Define the admin user/password to be used for admin tasks and for inter-broker communication, as well as one user/password for the SHRINE hub and one for each SHRINE node in the network.


Code Block
languagetext
themeRDark
titlekafka_server_jaas.conf
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    serviceName="kafka"
    username="kafka-admin"
    password="<yourKafkaAdminPassword>"
    user_kafka-admin="<yourKafkaAdminPassword>"
    user_<yourShrineHubUser>="<yourShrineHubUserPassword>"
    user_<shrineNode1User>="<shrineNode1UserPassword>"
    user_<shrineNode2User>="<shrineNode2UserPassword>";
};


The username and password lines define the user this broker itself uses for inter-broker communication. All lines beginning with user_ define the users the broker can authenticate (including other brokers, in the inter-broker communication context). When sharing SHRINE node user credentials, be sure to use a secure transfer mechanism.

Provide the Kafka application with the path to this file via the KAFKA_OPTS environment variable:



Code Block
languagebash
themeRDark
export KAFKA_OPTS="-Djava.security.auth.login.config=<kafkaInstallationDir>/config/kraft/kafka_server_jaas.conf"



Changes to this file (user additions or removals) require a Kafka process restart on each node (one drawback of the PLAIN mechanism, which will be alleviated when SCRAM becomes available for KRaft).
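
A restart on each node would look like the following (using the same -daemon start command shown in Run Kafka below):

Code Block
languagebash
themeRDark
<kafkaInstallationDir>/bin/kafka-server-stop.sh
<kafkaInstallationDir>/bin/kafka-server-start.sh -daemon <kafkaInstallationDir>/config/kraft/server.properties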

Create Server Keystores and Truststores

TODO: section on creating a keystore unique to each server node, plus a common truststore. Catalyst has its own CA with self-signed wildcard certs, enabling one shared server keystore and one shared truststore. Production clusters must get certs signed by a real CA with no wildcard CNs. This is as yet untested on internal systems, so we have no proven documentation for it; see the sketch below.
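
As an untested sketch of what that might look like with keytool (the aliases, filenames, and CA workflow below are illustrative assumptions, not verified steps):

Code Block
languagebash
themeRDark
# On kafka-1: create this node's keystore holding a key pair for its FQDN
keytool -genkeypair -alias kafka-1 -keyalg RSA -keysize 2048 \
    -dname "CN=kafka-1.yourDomain.com" -validity 365 \
    -storetype PKCS12 -keystore kafka-1.keystore.pkcs12 \
    -storepass <thisServersKeystorePassword>
# Once, for all nodes: import the signing CA's root certificate into a shared truststore
keytool -importcert -alias ca-root -file <path/to/ca-root.pem> \
    -storetype PKCS12 -keystore server.truststore.pkcs12 \
    -storepass <sharedServerTruststorePassword> -noprompt

For a real CA, you would additionally generate a CSR from each keystore with keytool -certreq and import the signed certificate chain back into that keystore.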

Add the keystore and truststore locations and passwords to the end of server.properties:


Code Block
languagetext
themeRDark
titleserver.properties
# ssl.key.password= needed if using real CA?
ssl.keystore.location=<path/to/this/servers/keystore.pkcs12>
ssl.keystore.password=<thisServersKeystorePassword>
ssl.truststore.location=<path/to/shared/server/truststore.pkcs12>
ssl.truststore.password=<sharedServerTruststorePassword>


Format the Kafka storage directories

Now that server.properties is complete, format the Kafka storage directories.

On one server node, generate a cluster UUID:

Code Block
languagebash
themeRDark
<kafkaInstallationDir>/bin/kafka-storage.sh random-uuid

On all server nodes, format the storage directories:



Code Block
languagebash
themeRDark
<kafkaInstallationDir>/bin/kafka-storage.sh format --cluster-id <uuid> --config <kafkaInstallationDir>/config/kraft/server.properties



Run Kafka

On all server nodes, run:



Code Block
languagebash
themeRDark
<kafkaInstallationDir>/bin/kafka-server-start.sh -daemon <kafkaInstallationDir>/config/kraft/server.properties




Configure the Hub's shrine.conf

...