HDFS2 sink connector

The HDFS2 sink connector pulls messages from Pulsar topics and persists them to HDFS files.


The HDFS2 sink connector supports the following configuration properties.


| Name | Type | Required | Default | Description |
|------|------|----------|---------|-------------|
| `hdfsConfigResources` | String | true | None | A file or a comma-separated list of files containing the Hadoop file system configuration. |
| `directory` | String | true | None | The HDFS directory from which files are read or to which files are written. |
| `encoding` | String | false | None | The character encoding for the files. |
| `compression` | Compression | false | None | The compression codec used to compress or decompress the files on HDFS. Available options: BZIP2, GZIP, LZ4, SNAPPY. |
| `kerberosUserPrincipal` | String | false | None | The Kerberos user principal account used for authentication. |
| `keytab` | String | false | None | The full pathname of the Kerberos keytab file used for authentication. |
| `filenamePrefix` | String | true, if `compression` is set to None | None | The prefix of the files created inside the HDFS directory. For example, a value of `topicA` results in files named `topicA-`. |
| `fileExtension` | String | true | None | The extension added to the files written to HDFS. |
| `separator` | char | false | None | The character used to separate records in a text file. If no value is provided, the contents of all records are concatenated into one continuous byte array. |
| `syncInterval` | long | false | 0 | The interval, in milliseconds, between calls to flush data to HDFS disk. |
| `maxPendingRecords` | int | false | Integer.MAX_VALUE | The maximum number of records held in memory before acking. Setting this property to 1 sends every record to disk before the record is acked; a higher value allows buffering records before flushing them to disk. |
| `subdirectoryPattern` | String | false | None | A date-time pattern used to create a subdirectory, named after the sink's creation time, under `directory`. See `DateTimeFormatter` for the pattern syntax. |
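To illustrate how a `subdirectoryPattern` maps the sink's creation time to a subdirectory name, the sketch below applies the same `DateTimeFormatter` pattern syntax. This is a standalone illustration, not connector code; the class and method names are hypothetical.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Hypothetical illustration of subdirectoryPattern, not connector code.
public class SubdirectoryPatternDemo {
    // Formats a creation date using DateTimeFormatter pattern syntax,
    // the same syntax subdirectoryPattern accepts.
    static String subdirectoryFor(LocalDate createdTime, String pattern) {
        return createdTime.format(DateTimeFormatter.ofPattern(pattern));
    }

    public static void main(String[] args) {
        // A sink created on 2024-01-15 with pattern "yyyy-MM-dd" would
        // write its files under <directory>/2024-01-15/
        System.out.println(subdirectoryFor(LocalDate.of(2024, 1, 15), "yyyy-MM-dd"));
        // prints 2024-01-15
    }
}
```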


Before using the HDFS2 sink connector, you need to create a configuration file in one of the following formats.

• JSON

  ```json
  {
    "configs": {
      "hdfsConfigResources": "core-site.xml",
      "directory": "/foo/bar",
      "filenamePrefix": "prefix",
      "fileExtension": ".log",
      "compression": "SNAPPY",
      "subdirectoryPattern": "yyyy-MM-dd"
    }
  }
  ```

• YAML

  ```yaml
  configs:
    hdfsConfigResources: "core-site.xml"
    directory: "/foo/bar"
    filenamePrefix: "prefix"
    fileExtension: ".log"
    compression: "SNAPPY"
    subdirectoryPattern: "yyyy-MM-dd"
  ```
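With a configuration file in place, the sink can be deployed with the `pulsar-admin` CLI. This is a sketch, assuming the connector NAR sits in a `connectors` directory and the YAML above is saved as `hdfs2-sink.yaml`; the archive filename, sink name, and topic are illustrative:

```shell
# Deploy the HDFS2 sink; paths, names, and the NAR version are illustrative.
bin/pulsar-admin sinks create \
  --archive connectors/pulsar-io-hdfs2.nar \
  --tenant public \
  --namespace default \
  --name hdfs2-sink \
  --inputs persistent://public/default/my-topic \
  --sink-config-file hdfs2-sink.yaml
```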