How to Install Apache Kafka on Linux without Zookeeper? (KRaft mode)

Start and run Apache Kafka on Linux without Zookeeper.

Kafka with KRaft (without Zookeeper) on Linux

  1. Install Java JDK 11

  2. Download Apache Kafka v2.8+ (Binary downloads)

  3. Extract the contents on Linux

  4. Generate a cluster ID and format the storage directories

  5. Start Kafka using the binaries

  6. Set up the $PATH environment variable for easy access to the Kafka binaries

Installing Java JDK 11

To install Apache Kafka on Linux, Java 11 is the only prerequisite.

  1. Navigate to the Amazon Corretto 11 Linux install page and follow the steps, which work for Debian, RPM, Alpine and Amazon Linux. Alternatively, you can download from the Amazon Corretto 11 download page and install the correct package for your Linux distribution and architecture (x64, aarch64, x86, arm32, etc.).

  2. For example on Ubuntu (Debian-based systems)

wget -O- https://apt.corretto.aws/corretto.key | sudo apt-key add -
sudo add-apt-repository 'deb https://apt.corretto.aws stable main'
sudo apt-get update; sudo apt-get install -y java-11-amazon-corretto-jdk

Please follow the instructions in the Amazon Corretto documentation to verify your installation of Amazon Corretto 11 and set the JDK as your default Java on your Linux system.

Upon completion, you should see similar output when running java -version:

openjdk version "11.0.10" 2021-01-19 LTS
OpenJDK Runtime Environment Corretto- (build 11.0.10+9-LTS)
OpenJDK 64-Bit Server VM Corretto- (build 11.0.10+9-LTS, mixed mode)

Install Apache Kafka

1. Download the latest version of Apache Kafka from the Apache Kafka downloads page, under Binary downloads.

The download page for Apache Kafka where you can download and install Kafka.

2. Click on any of the binary downloads (it is preferred to choose the build for the most recent Scala version, for example 2.13). For this illustration, we will assume Kafka 3.0.0 built for Scala 2.13 (kafka_2.13-3.0.0).

Alternatively, you can run a wget command with the binary download URL:

wget <kafka-download-url>

3. Extract the contents to a directory of your choice, for example ~/kafka_2.13-3.0.0.

tar xzf kafka_2.13-3.0.0.tgz
mv kafka_2.13-3.0.0 ~

4. Open a Shell and navigate to the root directory of Apache Kafka. For this example, we will assume that the Kafka download is expanded into the ~/kafka_2.13-3.0.0 directory.

Start Kafka

The first step is to generate a new ID for your cluster:

~/kafka_2.13-3.0.0/bin/kafka-storage.sh random-uuid

This returns a UUID, for example 76BLQI7sT_ql1mBfKsOk9Q

Next, format your storage directory (replace <uuid> with the UUID obtained above):

~/kafka_2.13-3.0.0/bin/kafka-storage.sh format -t <uuid> -c ~/kafka_2.13-3.0.0/config/kraft/server.properties

This formats the directory set by log.dirs in the config/kraft/server.properties file (by default /tmp/kraft-combined-logs).
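For reference, the KRaft-related keys in that properties file look roughly like the sketch below. The values shown are the defaults shipped with Kafka 3.0; treat them as an illustration rather than an exact copy of your file.

```properties
# The roles this server plays; "broker,controller" runs both in one process
process.roles=broker,controller
# Unique node ID within the cluster
node.id=1
# Controller quorum voters, in the form <node.id>@<host>:<controller-port>
controller.quorum.voters=1@localhost:9093
# Where Kafka stores its log segments (the directory you just formatted)
log.dirs=/tmp/kraft-combined-logs
```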

Now you can launch the broker by running this command.

~/kafka_2.13-3.0.0/bin/kafka-server-start.sh ~/kafka_2.13-3.0.0/config/kraft/server.properties

Don’t close this shell window, as doing so will shut down the broker.

Congratulations, the broker is now running on its own in KRaft mode!

Set up the $PATH environment variable

In order to easily access the Kafka binaries, you can edit your PATH variable in your shell startup file (for example ~/.zshrc if you use zsh) so that it includes the Kafka bin directory.
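A minimal sketch of that change, assuming Kafka was extracted to ~/kafka_2.13-3.0.0 (adjust the directory name to match your version):

```shell
# Append the Kafka bin directory to PATH (hypothetical install location;
# change kafka_2.13-3.0.0 to the version you actually extracted)
export PATH="$PATH:$HOME/kafka_2.13-3.0.0/bin"
```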


This ensures that you can now run the Kafka commands without prefixing them with the full path.

After reloading your shell (for example with source ~/.zshrc), Kafka commands such as kafka-topics.sh --version should work from any directory.


Read more about Kafka KRaft

You can read and learn more about KRaft mode in the Kafka documentation.


Get a free Kafka cluster with Conduktor Platform

Conduktor Platform provides the easiest way to get started with Apache Kafka. Just sign up to our web app and you'll get a free managed Kafka cluster that you can use to learn and experiment. There is no trial period or constant pestering; just head to the signup page and get started.
