Creating a Kafka Click-through Application

Apache Kafka is a distributed streaming platform. You can use Kafka to stream data directly from an application into the MapD Core Database.

This is an example of a bare-bones click-through application that captures user activity.

This example assumes you have already installed and configured Apache Kafka. See the Kafka website. The FlavorPicker example also has dependencies on Swing/AWT classes. See the Oracle Java SE website.

Creating a Kafka Producer

The FlavorPicker producer sends the choice of Chocolate, Strawberry, or Vanilla to the Kafka broker. This example uses only one column of information, but the mechanism is the same for records of any size.

package flavors;

// Swing/AWT Interface classes
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.EventQueue;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;

// Generic Java properties object
import java.util.Properties;

// Kafka Producer-specific classes
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlavorPicker {

   private JFrame frmFlavors;
   private Producer<String, String> producer;

   /**
    * Launch the application.
    */
   public static void main(final String[] args) {
      EventQueue.invokeLater(new Runnable() {
         public void run() {
            try {
               FlavorPicker window = new FlavorPicker(args);
               window.frmFlavors.setVisible(true);
            } catch (Exception e) {
               e.printStackTrace();
            }
         }
      });
   }

   /**
    * Create the application.
    */
   public FlavorPicker(String[] args) {
      initialize(args);
   }

   /**
    * Initialize the contents of the frame.
    */
   private void initialize(final String[] args) {
      frmFlavors = new JFrame();
      frmFlavors.setBounds(100, 100, 408, 177);
      frmFlavors.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frmFlavors.getContentPane().setLayout(null);

      final JLabel lbl_yourPick = new JLabel("You picked nothing.");
      lbl_yourPick.setBounds(130, 85, 171, 15);
      frmFlavors.getContentPane().add(lbl_yourPick);

      JButton btnStrawberry = new JButton("Strawberry");
      btnStrawberry.addActionListener(new ActionListener() {
         public void actionPerformed(ActionEvent arg0) {
            lbl_yourPick.setText("You picked strawberry.");
            pick(args, 1);
         }
      });
      btnStrawberry.setBounds(141, 12, 114, 25);
      frmFlavors.getContentPane().add(btnStrawberry);

      JButton btnVanilla = new JButton("Vanilla");
      btnVanilla.addActionListener(new ActionListener() {
         public void actionPerformed(ActionEvent e) {
            lbl_yourPick.setText("You picked vanilla.");
            pick(args, 2);
         }
      });
      btnVanilla.setBounds(278, 12, 82, 25);
      frmFlavors.getContentPane().add(btnVanilla);

      JButton btnChocolate = new JButton("Chocolate");
      btnChocolate.addActionListener(new ActionListener() {
         public void actionPerformed(ActionEvent e) {
            lbl_yourPick.setText("You picked chocolate.");
            pick(args, 0);
         }
      });
      btnChocolate.setBounds(12, 12, 105, 25);
      frmFlavors.getContentPane().add(btnChocolate);
   }

   public void pick(String[] args, int x) {
      String topicName = args[0];
      String[] value = {"chocolate", "strawberry", "vanilla"};

      // Set the producer configuration properties.
      Properties props = new Properties();
      props.put("bootstrap.servers", "localhost:9097"); // 9097 to avoid colliding with Immerse on 9092
      props.put("acks", "all");
      props.put("retries", 0);
      props.put("batch.size", 100);
      props.put("linger.ms", 1);
      props.put("buffer.memory", 33554432);
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

      // Instantiate a producer.
      producer = new KafkaProducer<String, String>(props);

      // Send a 1,000-record stream to the Kafka broker.
      for (int y = 0; y < 1000; y++) {
         producer.send(new ProducerRecord<String, String>(topicName, value[x]));
      }
      producer.close();
   }
}

Creating a Kafka Consumer

The FlavorConsumer consumer polls the Kafka broker periodically, pulls any new records added since the last poll, and loads them into the MapD Core Database. Ideally, each batch should be fairly substantial in size, minimally 1,000 rows or more, so as not to overburden the server.

package flavors;

import java.util.Properties;
import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.ConsumerRecord;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Usage: FlavorConsumer <kafka-topic-name> <mapd-database-password>

public class FlavorConsumer {
   public static void main(String[] args) throws Exception {
      if (args.length < 2) {
         System.out.println("Usage:\n\nFlavorConsumer <kafka-topic-name> <mapd-database-password>");
         return;
      }

      // Configure the Kafka Consumer.
      String topicName = args[0];
      Properties props = new Properties();

      props.put("bootstrap.servers", "localhost:9097"); // Use 9097 so as not
                                                        // to collide with
                                                        // MapD Immerse
      props.put("group.id", "test");
      props.put("enable.auto.commit", "true");
      props.put("auto.commit.interval.ms", "1000");
      props.put("session.timeout.ms", "30000");
      props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
      KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

      // Subscribe the Kafka Consumer to the topic.
      consumer.subscribe(Arrays.asList(topicName));
      System.out.println("Subscribed to topic " + topicName);

      String flavorValue = "";

      while (true) {
         ConsumerRecords<String, String> records = consumer.poll(1000);
         if (records.isEmpty()) {
            continue; // Nothing new since the last poll.
         }

         // Create connection and prepared statement objects.
         Connection conn = null;
         PreparedStatement pstmt = null;

         try {
            // JDBC driver name and database URL
            final String JDBC_DRIVER = "com.mapd.jdbc.MapDDriver";
            final String DB_URL = "jdbc:mapd:localhost:9091:mapd";

            // Database credentials
            final String USER = "mapd";
            final String PASS = args[1];

            // STEP 1: Register the JDBC driver.
            Class.forName(JDBC_DRIVER);

            // STEP 2: Open a connection.
            conn = DriverManager.getConnection(DB_URL, USER, PASS);

            // STEP 3: Prepare a statement template.
            pstmt = conn.prepareStatement("INSERT INTO flavors VALUES (?)");

            // STEP 4: Populate the prepared statement batch.
            for (ConsumerRecord<String, String> record : records) {
               flavorValue = record.value();
               pstmt.setString(1, flavorValue);
               pstmt.addBatch();
            }

            // STEP 5: Execute the batch statement (send records to MapD
            // Core Database).
            pstmt.executeBatch();

            // Commit and close the connection.
            conn.close();

         } catch (SQLException se) {
            // Handle errors for JDBC.
            se.printStackTrace();
         } catch (Exception e) {
            // Handle errors for Class.forName.
            e.printStackTrace();
         } finally {
            try {
               if (pstmt != null) {
                  pstmt.close();
               }
            } catch (SQLException se2) {
               // nothing we can do
            }
            try {
               if (conn != null) {
                  conn.close();
               }
            } catch (SQLException se) {
               se.printStackTrace();
            } // end finally try
         } // end try
      }
   } // end main
} // end FlavorConsumer

Running the Kafka Click-through Application

To run the application, you need to perform the following tasks:

  • Compile FlavorPicker.java and FlavorConsumer.java
  • Create a table in MapD Core Database
  • Start the Zookeeper server
  • Start the Kafka server
  • Start the Kafka consumer
  • Start the Kafka producer
  • View the results using mapdql and MapD Immerse
  1. Compile FlavorPicker.java and FlavorConsumer.java, storing the resulting class files in $MAPD_PATH/SampleCode/kafka-clickthrough/bin.
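The exact compile command depends on your local paths; a sketch, assuming the Kafka client JARs are under <kafka-directory-path>/libs and the MapD JDBC driver is under $MAPD_PATH/bin:

```shell
# Sketch only: <kafka-directory-path> is a placeholder; adjust to your installation.
javac -cp ".:<kafka-directory-path>/libs/*:$MAPD_PATH/bin/*" \
      -d $MAPD_PATH/SampleCode/kafka-clickthrough/bin \
      FlavorPicker.java FlavorConsumer.java
```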

  2. Using mapdql, create the table flavors with one column, flavor, in the MapD Core Database. See mapdql for more information.

    mapdql> CREATE TABLE flavors (flavor TEXT ENCODING DICT);
  3. Open a new terminal window.

  4. Go to your kafka directory.

  5. Start the Zookeeper server with the following command.

    ./bin/zookeeper-server-start.sh config/zookeeper.properties
  6. Open a new terminal window.

  7. Go to the kafka directory.

  8. Start the Kafka server with the following command.

    ./bin/kafka-server-start.sh config/server.properties
  9. Open a new terminal window.

  10. Go to the kafka directory.

  11. Create a new Kafka topic with the following command. This creates a basic topic with only one replica and one partition. See the Kafka documentation for more information.

    bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic myflavors
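You can optionally confirm that the topic was created before launching the producer and consumer; the standard Kafka topics tool lists every topic the broker knows about:

```shell
# Lists all topics registered with this ZooKeeper instance;
# "myflavors" should appear in the output.
bin/kafka-topics.sh --list --zookeeper localhost:2181
```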
  12. Open a new terminal window.

  13. Launch FlavorConsumer with the following command, substituting the actual path to the Kafka directory and your MapD Database password.

    java -cp .:<kafka-directory-path>/libs/*:$MAPD_PATH/bin/*:$MAPD_PATH/SampleCode/kafka-clickthrough/bin flavors.FlavorConsumer myflavors <myPassword>
  14. Launch FlavorPicker with the following command.

    java -cp .:<kafka-directory-path>/libs/*:$MAPD_PATH/bin/*:$MAPD_PATH/SampleCode/kafka-clickthrough/bin flavors.FlavorPicker myflavors
  15. Click Chocolate, Strawberry, and Vanilla several times to create records. Each click generates 1,000 records.

  16. Use mapdql to see that the results have arrived in MapD Core Database.
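For example, a simple aggregate over the flavors table created in step 2 shows how many records each button click produced:

```sql
-- Run from the mapdql prompt; records arrive in batches of 1,000 per click.
SELECT flavor, COUNT(*) AS num_picks
FROM flavors
GROUP BY flavor;
```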

  17. Use MapD Immerse to visualize the results.