How to Install Apache Kafka on Linux without Zookeeper? (KRaft mode)
Start and run Apache Kafka on Linux without Zookeeper.
Install Java JDK version 11
Download Apache Kafka v2.8+ from https://kafka.apache.org/downloads under Binary Downloads
Extract the contents on Linux
Generate a cluster ID and format the storage using kafka-storage.sh
Start Kafka using the binaries
Set up the $PATH environment variable for easy access to the Kafka binaries
To install Apache Kafka on Linux, Java 11 is the only prerequisite.
Navigate to the Amazon Corretto 11 Linux install page and follow the steps, which work for Debian, RPM, Alpine and Amazon Linux. Alternatively, you can download from the Amazon Corretto 11 download page and install the correct package for your Linux distribution (x64, aarch64, x86, arch32, etc.).
For example, on Ubuntu (Debian-based systems):
wget -O- https://apt.corretto.aws/corretto.key | sudo apt-key add -
sudo add-apt-repository 'deb https://apt.corretto.aws stable main'
sudo apt-get update; sudo apt-get install -y java-11-amazon-corretto-jdk
Follow the instructions in the Amazon Corretto documentation to verify your installation of Amazon Corretto 11 and set the JDK as your default Java on your Linux system.
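On Debian-based systems, one way to check which JDK is the system default and switch between installed JDKs is update-alternatives (a minimal sketch; other distributions have their own mechanisms):

# List the installed Java alternatives and interactively pick the default
sudo update-alternatives --config java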
Upon completion, you should get a similar output when running java -version:
openjdk version "11.0.10" 2021-01-19 LTS
OpenJDK Runtime Environment Corretto-11.0.10.9.1 (build 11.0.10+9-LTS)
OpenJDK 64-Bit Server VM Corretto-11.0.10.9.1 (build 11.0.10+9-LTS, mixed mode)
1. Download the latest version of Apache Kafka from https://kafka.apache.org/downloads under Binary downloads.
2. Click on any of the binary downloads (it is preferred to choose the most recent Scala version, for example 2.13). For this illustration, we will assume version 2.13-3.0.0.
Alternatively, you can run a wget command (see the example commands after these steps).
3. Download and extract the contents to a directory of your choice, for example ~/kafka_2.13-3.0.0.
4. Open a shell and navigate to the root directory of Apache Kafka. For this example, we will assume that the Kafka download is expanded into the ~/kafka_2.13-3.0.0 directory.
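The commands below are one way to perform steps 2 to 4 from the terminal. The download URL is an assumption based on the Apache archive layout for version 2.13-3.0.0; adjust it to match the version and mirror you picked on the downloads page.

# Download the Kafka binary package (URL assumed, pick your own version/mirror)
wget https://archive.apache.org/dist/kafka/3.0.0/kafka_2.13-3.0.0.tgz
# Extract it into your home directory
tar -xzf kafka_2.13-3.0.0.tgz -C ~
# Navigate to the root directory of the Kafka installation
cd ~/kafka_2.13-3.0.0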
The first step is to generate a new ID for your cluster:
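Assuming the same ~/kafka_2.13-3.0.0 directory as above, the kafka-storage.sh tool included with Kafka can generate one:

~/kafka_2.13-3.0.0/bin/kafka-storage.sh random-uuid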
This returns a UUID, for example 76BLQI7sT_ql1mBfKsOk9Q
Next, format your storage directory (replacing <uuid> with the UUID obtained above):
~/kafka_2.13-3.0.0/bin/kafka-storage.sh format -t <uuid> -c ~/kafka_2.13-3.0.0/config/kraft/server.properties
This will format the directory specified by log.dirs in the config/kraft/server.properties file (by default /tmp/kraft-combined-logs).
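If you want to check or change where that data lives, you can look at the property directly (assuming the same install directory as above):

grep log.dirs ~/kafka_2.13-3.0.0/config/kraft/server.properties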
Now you can launch the broker itself by running the command below.
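A minimal example, assuming the same ~/kafka_2.13-3.0.0 directory and the storage formatted in the previous step:

~/kafka_2.13-3.0.0/bin/kafka-server-start.sh ~/kafka_2.13-3.0.0/config/kraft/server.properties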
Don't close this shell window, as doing so will shut down the broker.
Congratulations, the broker is now running on its own in KRaft mode!
In order to easily access the Kafka binaries, you can edit your PATH variable by adding the following line to your shell startup file (for example ~/.zshrc if you use zsh):
PATH="$PATH:$HOME/kafka_2.13-3.0.0/bin"
This ensures that you can now run the kafka commands from anywhere without prefixing them with the full path.
After reloading your shell, the following should work from any directory:
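For example, listing the topics on the broker you started earlier (this assumes the default listener on localhost:9092 from config/kraft/server.properties):

kafka-topics.sh --bootstrap-server localhost:9092 --list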
You can read and learn more about KRaft mode in the Apache Kafka documentation.
Get a free Kafka cluster with Conduktor Platform
Conduktor Platform provides the easiest way to get started with Apache Kafka. Just sign up for our web app and you'll get a free managed Kafka cluster that you can use to learn and experiment. There is no trial period or constant pestering; just head to the signup page and get started.