Apache Kafka is a distributed streaming platform. A Kafka cluster is not only highly scalable and fault-tolerant, but also offers much higher throughput than other message brokers such as ActiveMQ and RabbitMQ. Though it is generally used as a pub/sub messaging system, many organizations also use it for log aggregation because it offers persistent storage for published messages.
Environment
- Ubuntu Server 17.10 Artful Aardvark
- 4 GB RAM
- Scala 2.11
- Kafka 1.0.0
Install Java
Before installing additional packages, update the list of available packages so you are installing the latest versions available in the repository:
$ sudo apt-get update
As Apache Kafka needs a Java runtime environment, use apt-get to install the default-jre package:
$ sudo apt-get install default-jre
$ java -version
openjdk version "1.8.0_151"
OpenJDK Runtime Environment (build 1.8.0_151-8u151-b12-0ubuntu0.17.10.2-b12)
OpenJDK 64-Bit Server VM (build 25.151-b12, mixed mode)
Create a user called kafka using the useradd command:
$ sudo useradd kafka -m
Set its password using passwd:
$ sudo passwd kafka
Add it to the sudo group so that it has the privileges required to install Kafka's dependencies. This can be done using the adduser command:
$ sudo adduser kafka sudo
The Kafka user is now ready. Log into it using su:
$ su - kafka
Install ZooKeeper
Apache ZooKeeper is an open source service built to coordinate and synchronize configuration information of nodes that belong to a distributed system. A Kafka cluster depends on ZooKeeper to perform—among other things—operations such as detecting failed nodes and electing leaders.
Since the ZooKeeper package is available in Ubuntu's default repositories, install it using apt-get:
$ sudo apt-get install zookeeperd
After the installation completes, ZooKeeper will be started as a daemon automatically. By default, it will listen on port 2181.
To make sure that it is working, connect to it via Telnet:
$ telnet localhost 2181
At the Telnet prompt, type in ruok and press ENTER. If everything's fine, ZooKeeper will say imok and end the Telnet session.
Alternatively, you can send the command in one shot using netcat:
$ echo ruok | nc localhost 2181
Download and Extract Kafka Binaries
Now that Java and ZooKeeper are installed, it is time to download and extract Kafka.
To start, create a directory called Downloads to store all your downloads.
$ mkdir -p ~/Downloads
Use wget to download the Kafka binaries.
$ wget "http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz" -O ~/Downloads/kafka.tgz
Create a directory called kafka and change to this directory. This will be the base directory of the Kafka installation.
$ mkdir -p ~/kafka && cd ~/kafka
Extract the archive you downloaded using the tar command.
$ tar -xvzf ~/Downloads/kafka.tgz --strip 1
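The --strip 1 option (shorthand for --strip-components=1) removes the archive's top-level directory during extraction, so the Kafka files land directly in ~/kafka rather than in ~/kafka/kafka_2.11-1.0.0. A throwaway demonstration of the effect, using a tiny archive with entirely hypothetical paths:

```shell
# Build a small archive whose contents sit under a single top-level
# directory, mimicking the layout of kafka_2.11-1.0.0.tgz.
mkdir -p /tmp/stripdemo/kafka_2.11-1.0.0/bin
echo 'echo hello' > /tmp/stripdemo/kafka_2.11-1.0.0/bin/tool.sh
tar -czf /tmp/stripdemo/archive.tgz -C /tmp/stripdemo kafka_2.11-1.0.0

# Extract with --strip-components=1: the kafka_2.11-1.0.0/ prefix is
# dropped, so bin/ appears at the top of the target directory.
mkdir -p /tmp/stripdemo/out
tar -xzf /tmp/stripdemo/archive.tgz -C /tmp/stripdemo/out --strip-components=1

ls /tmp/stripdemo/out
```

Without the option, you would get /tmp/stripdemo/out/kafka_2.11-1.0.0/bin/tool.sh instead of /tmp/stripdemo/out/bin/tool.sh.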
Configure the Kafka Server
The next step is to configure the Kafka server. Open server.properties in a text editor:
$ vim ~/kafka/config/server.properties
By default, Kafka doesn't allow you to delete topics. To be able to delete topics, add the following line at the end of the file:
delete.topic.enable=true
Save the file and exit the text editor.
Start the Kafka Server
Run the kafka-server-start.sh script to start the Kafka server (also called the Kafka broker):
$ ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties
To run it instead as a background process that is independent of your shell session, use nohup and redirect its output to a log file:
$ nohup ~/kafka/bin/kafka-server-start.sh ~/kafka/config/server.properties > ~/kafka/kafka.log 2>&1 &
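As an alternative to nohup, Ubuntu 17.10 uses systemd, which can supervise the broker and restart it on failure. Below is a minimal unit-file sketch, assuming Kafka lives in /home/kafka/kafka as in this guide; the unit name kafka.service is our own choice, and the zookeeper.service dependency assumes ZooKeeper runs as a service on the same host:

```ini
# /etc/systemd/system/kafka.service (hypothetical unit name)
[Unit]
Description=Apache Kafka broker
After=network.target zookeeper.service

[Service]
Type=simple
User=kafka
ExecStart=/home/kafka/kafka/bin/kafka-server-start.sh /home/kafka/kafka/config/server.properties
ExecStop=/home/kafka/kafka/bin/kafka-server-stop.sh
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
```

Enable and start it with sudo systemctl enable kafka and sudo systemctl start kafka; the broker's output then goes to the journal (viewable with journalctl -u kafka) instead of ~/kafka/kafka.log.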
Wait for a few seconds for it to start. You can be sure that the server has started successfully when you see the following messages in ~/kafka/kafka.log:
[2017-12-09 17:16:29,521] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
You now have a Kafka server which is listening on port 9092.
Test the Installation
Let us now create a topic named test with a single partition and only one replica:
$ ~/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
We can now see that topic if we run the list topic command:
$ ~/kafka/bin/kafka-topics.sh --list --zookeeper localhost:2181
Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
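Auto-creation is controlled by broker settings in server.properties. A sketch of the relevant keys (the values shown are, to our knowledge, the broker defaults in Kafka 1.0.0):

```ini
# Create topics automatically when a producer or consumer first uses them
auto.create.topics.enable=true
# Partition count and replication factor applied to auto-created topics
num.partitions=1
default.replication.factor=1
```

Note that auto-created topics always get these defaults, so manual creation remains the safer choice when a topic needs a specific partition count.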
Send some messages
Run the producer and then type a few messages into the console to send to the server.
$ ~/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>This is a message
>This is another message
Kafka also has a command line consumer that will dump out messages to standard output.
$ ~/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
Restrict the Kafka User
Now that all installations are done, you can remove the kafka user's admin privileges. Before you do so, log out and log back in as any other non-root sudo user. If you are still in the shell session in which you started the installation, simply type exit.
To remove the Kafka user's admin privileges, remove it from the sudo group.
$ sudo deluser kafka sudo
To further improve your Kafka server's security, lock the kafka user's password using the passwd command. This makes sure that nobody can directly log into it.
$ sudo passwd kafka -l
At this point, only root or a sudo user can log in as kafka by typing in the following command:
$ sudo su - kafka
In the future, if you want to unlock it, use passwd with the -u option:
$ sudo passwd kafka -u
Conclusion
You now have a secure Apache Kafka server running on your Ubuntu machine. You can easily make use of it in your projects by creating Kafka producers and consumers using Kafka clients, which are available for most programming languages. To learn more about Kafka, go through its documentation.