How to develop Pulsar connectors
This guide describes how to develop Pulsar connectors to move data between Pulsar and other systems.
Pulsar connectors are special Pulsar Functions, so creating a Pulsar connector is similar to creating a Pulsar function.
Pulsar connectors come in two types:
| Type | Description | Example |
|---|---|---|
| Source | Imports data from another system into Pulsar. | The RabbitMQ source connector imports the messages of a RabbitMQ queue into a Pulsar topic. |
| Sink | Exports data from Pulsar to another system. | The Kinesis sink connector exports the messages of a Pulsar topic to a Kinesis stream. |
Develop
You can develop Pulsar source connectors and sink connectors.
Source
To develop a source connector, you need to implement the Source interface, which means implementing the open method and the read method.
1. Implement the open method.

   ```java
   /**
    * Open connector with configuration
    *
    * @param config initialization config
    * @param sourceContext
    * @throws Exception IO type exceptions when opening a connector
    */
   void open(final Map<String, Object> config, SourceContext sourceContext) throws Exception;
   ```

   This method is called when the source connector is initialized. In this method, you can retrieve all connector-specific settings through the passed-in `config` parameter and initialize all necessary resources. For example, a Kafka connector can create a Kafka client in this `open` method.

   Besides, the Pulsar runtime also provides a `SourceContext` for the connector to access runtime resources for tasks like collecting metrics. The implementation can save the `SourceContext` for future use.
2. Implement the read method.

   ```java
   /**
    * Reads the next message from source.
    * If source does not have any new messages, this call should block.
    * @return next message from source. The return result should never be null
    * @throws Exception
    */
   Record<T> read() throws Exception;
   ```

   If there is nothing to return, the implementation should block rather than return `null`. The returned `Record` should encapsulate the following information, which is needed by the Pulsar IO runtime. (A complete sketch combining `open` and `read` appears after this list.)
   - `Record` should provide the following variables:

     | Variable | Required | Description |
     |---|---|---|
     | `TopicName` | No | Pulsar topic name from which the record originated. |
     | `Key` | No | Messages can optionally be tagged with keys. For more information, see Routing modes. |
     | `Value` | Yes | Actual data of the record. |
     | `EventTime` | No | Event time of the record from the source. |
     | `PartitionId` | No | If the record originated from a partitioned source, its `PartitionId`. `PartitionId` is used as part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee. |
     | `RecordSequence` | No | If the record originated from a sequential source, its `RecordSequence`. `RecordSequence` is used as part of the unique identifier by the Pulsar IO runtime to deduplicate messages and achieve the exactly-once processing guarantee. |
     | `Properties` | No | If the record carries user-defined properties, those properties. |
     | `DestinationTopic` | No | Topic to which the message should be written. |
     | `Message` | No | A class that carries data sent by users. For more information, see Message.java. |
   - `Record` should provide the following methods:

     | Method | Description |
     |---|---|
     | `ack` | Acknowledge that the record is fully processed. |
     | `fail` | Indicate that the record fails to be processed. |
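Putting the two steps together, the following is a minimal sketch of a complete source connector. The class name, the in-memory queue standing in for a real external system, and the `capacity` setting are illustrative assumptions, not part of the Pulsar API:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Source;
import org.apache.pulsar.io.core.SourceContext;

public class InMemorySource implements Source<String> {

    // Stands in for a client of the external system (e.g. a RabbitMQ consumer).
    private BlockingQueue<String> queue;

    @Override
    public void open(Map<String, Object> config, SourceContext sourceContext) throws Exception {
        // Retrieve connector-specific settings from the passed-in config map.
        int capacity = Integer.parseInt(config.getOrDefault("capacity", "100").toString());
        queue = new LinkedBlockingQueue<>(capacity);
    }

    @Override
    public Record<String> read() throws Exception {
        // Block until a message is available; never return null.
        String value = queue.take();
        return new Record<String>() {
            @Override
            public String getValue() {
                return value;
            }

            @Override
            public Optional<String> getKey() {
                // No key in this sketch; real connectors may tag records with keys.
                return Optional.empty();
            }
        };
    }

    @Override
    public void close() throws Exception {
        // Release any resources acquired in open().
    }
}
```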
Handle schema information
Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics. If you know the schema type that you are producing, you can declare the Java class relative to that type in your source declaration.
```java
public class MySource implements Source<String> {
    public Record<String> read() {}
}
```
If you want to implement a source that works with any schema, you can use `byte[]` (or `ByteBuffer`) and `Schema.AUTO_PRODUCE_BYTES()`.
```java
public class MySource implements Source<byte[]> {

    public Record<byte[]> read() {
        Schema wantedSchema = ....
        Record<byte[]> myRecord = new MyRecordImplementation();
        ....
    }

    class MyRecordImplementation implements Record<byte[]> {
        public byte[] getValue() {
            return ....encoded byte[]...that represents the value
        }
        public Schema<byte[]> getSchema() {
            return Schema.AUTO_PRODUCE_BYTES(wantedSchema);
        }
    }
}
```
To handle the `KeyValue` type properly, follow these guidelines for your record implementation:

- It must implement the `Record` interface and implement `getKeySchema`, `getValueSchema`, and `getKeyValueEncodingType`.
- It must return a `KeyValue` object as `Record.getValue()`.
- It may return null in `Record.getSchema()`.
When the Pulsar IO runtime encounters a `KVRecord`, it automatically applies the following changes:

- Sets the `KeyValueSchema` properly.
- Encodes the message key and the message value according to the `KeyValueEncodingType` (`SEPARATED` or `INLINE`).
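The following is a minimal sketch of such a record, assuming String keys and values and an illustrative class name:

```java
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.common.schema.KeyValue;
import org.apache.pulsar.common.schema.KeyValueEncodingType;
import org.apache.pulsar.functions.api.KVRecord;

// A hypothetical KeyValue record with String key and value schemas.
public class MyKVRecord implements KVRecord<String, String> {

    @Override
    public Schema<String> getKeySchema() {
        return Schema.STRING;
    }

    @Override
    public Schema<String> getValueSchema() {
        return Schema.STRING;
    }

    @Override
    public KeyValueEncodingType getKeyValueEncodingType() {
        // INLINE stores key and value together in the message payload;
        // SEPARATED stores the key in the message key.
        return KeyValueEncodingType.INLINE;
    }

    @Override
    public KeyValue<String, String> getValue() {
        return new KeyValue<>("my-key", "my-value");
    }
}
```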
For more information about how to create a source connector, see KafkaSource.
Sink
Developing a sink connector is similar to developing a source connector: you need to implement the Sink interface, which means implementing the open method and the write method.
1. Implement the open method.

   ```java
   /**
    * Open connector with configuration
    *
    * @param config initialization config
    * @param sinkContext
    * @throws Exception IO type exceptions when opening a connector
    */
   void open(final Map<String, Object> config, SinkContext sinkContext) throws Exception;
   ```
2. Implement the write method.

   ```java
   /**
    * Write a message to Sink
    * @param record record to write to sink
    * @throws Exception
    */
   void write(Record<T> record) throws Exception;
   ```

   During the implementation, you can decide how to write the `Value` and the `Key` to the external system, and leverage all the provided information such as `PartitionId` and `RecordSequence` to achieve different processing guarantees. You also need to ack records (if messages are sent successfully) or fail records (if messages fail to send). A complete sketch follows this list.
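The following is a minimal sketch of a complete sink connector. The class name and the use of standard output in place of a real external system are illustrative assumptions:

```java
import java.util.Map;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

public class StdoutSink implements Sink<String> {

    @Override
    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
        // Initialize resources here, e.g. a client for the external system.
    }

    @Override
    public void write(Record<String> record) throws Exception {
        try {
            // "Write" the value to the external system (stdout in this sketch).
            System.out.println(record.getValue());
            // Acknowledge the record once it is safely persisted downstream.
            record.ack();
        } catch (Exception e) {
            // Mark the record as failed so the runtime can handle the failure.
            record.fail();
            throw e;
        }
    }

    @Override
    public void close() throws Exception {
        // Release any resources acquired in open().
    }
}
```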
Handle schema information
Pulsar IO automatically handles the schema and provides a strongly typed API based on Java generics. If you know the schema type that you are consuming, you can declare the Java class relative to that type in your sink declaration.
```java
public class MySink implements Sink<String> {
    public void write(Record<String> record) {}
}
```
If you want to implement a sink that works with any schema, you can use the special GenericObject interface.
```java
public class MySink implements Sink<GenericObject> {
    public void write(Record<GenericObject> record) {
        Schema schema = record.getSchema();
        GenericObject genericObject = record.getValue();
        if (genericObject != null) {
            SchemaType type = genericObject.getSchemaType();
            Object nativeObject = genericObject.getNativeObject();
            ...
        }
        ....
    }
}
```
In the case of AVRO, JSON, and Protobuf records (`schemaType=AVRO,JSON,PROTOBUF_NATIVE`), you can cast the `genericObject` variable to `GenericRecord` and use the `getFields()` and `getField()` API. You can access the native AVRO record using `genericObject.getNativeObject()`.
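The following is a minimal sketch of a sink that iterates over the fields of such a record; the class name is an illustrative assumption:

```java
import java.util.Map;

import org.apache.pulsar.client.api.schema.Field;
import org.apache.pulsar.client.api.schema.GenericRecord;
import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

public class FieldPrintingSink implements Sink<GenericRecord> {

    @Override
    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
    }

    @Override
    public void write(Record<GenericRecord> record) throws Exception {
        GenericRecord genericRecord = record.getValue();
        // Iterate over all fields declared by the record's schema.
        for (Field field : genericRecord.getFields()) {
            Object fieldValue = genericRecord.getField(field);
            System.out.println(field.getName() + " = " + fieldValue);
        }
    }

    @Override
    public void close() throws Exception {
    }
}
```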
In the case of the KeyValue type, you can access both the schema for the key and the schema for the value using the following code.
```java
public class MySink implements Sink<GenericObject> {
    public void write(Record<GenericObject> record) {
        Schema schema = record.getSchema();
        GenericObject genericObject = record.getValue();
        SchemaType type = genericObject.getSchemaType();
        Object nativeObject = genericObject.getNativeObject();
        if (type == SchemaType.KEY_VALUE) {
            KeyValue keyValue = (KeyValue) nativeObject;
            Object key = keyValue.getKey();
            Object value = keyValue.getValue();

            KeyValueSchema keyValueSchema = (KeyValueSchema) schema;
            Schema keySchema = keyValueSchema.getKeySchema();
            Schema valueSchema = keyValueSchema.getValueSchema();
        }
        ....
    }
}
```
Test
Testing connectors can be challenging because Pulsar IO connectors interact with two systems that may be difficult to mock: Pulsar and the system to which the connector connects. It is recommended to write the following kinds of tests to verify connector functionality while mocking the external service.
Unit test
You can create unit tests for your connector.
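The following is a minimal sketch of such a test, assuming the hypothetical StdoutSink shown earlier and a hand-rolled Record stub in place of a mocking framework:

```java
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.pulsar.functions.api.Record;
import org.junit.Test;

public class StdoutSinkTest {

    @Test
    public void testWriteAcksRecord() throws Exception {
        StdoutSink sink = new StdoutSink();
        sink.open(new HashMap<>(), null);

        // Track whether the sink acknowledged the record.
        AtomicBoolean acked = new AtomicBoolean(false);
        Record<String> record = new Record<String>() {
            @Override
            public String getValue() {
                return "hello";
            }

            @Override
            public void ack() {
                acked.set(true);
            }
        };

        sink.write(record);
        assertTrue(acked.get());
    }
}
```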
Integration test
Once you have written sufficient unit tests, you can add separate integration tests to verify end-to-end functionality.
Pulsar uses testcontainers for all integration tests.
For more information about how to create integration tests for Pulsar connectors, see IntegrationTests.
Package
Once you've developed and tested your connector, you need to package it so that it can be submitted to a Pulsar Functions cluster.
There are two methods to package a connector for the Pulsar Functions runtime: NAR and uber JAR.
If you plan to package and distribute your connector for others to use, you are obligated to license and copyright your own code properly. Remember to add the license and copyright to all libraries your code uses and to your distribution.

If you use the NAR method, the NAR plugin automatically creates a `DEPENDENCIES` file in the generated NAR package, including the proper licensing and copyrights of all libraries of your connector.
NAR
NAR stands for NiFi Archive, a custom packaging mechanism used by Apache NiFi to provide a bit of Java ClassLoader isolation.
For more information about how NAR works, see here.
Pulsar uses the same mechanism for packaging all built-in connectors.
The easiest approach to package a Pulsar connector is to create a NAR package using nifi-nar-maven-plugin.
Include the nifi-nar-maven-plugin in the Maven project for your connector, as below.

```xml
<plugins>
  <plugin>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-nar-maven-plugin</artifactId>
    <version>1.5.0</version>
  </plugin>
</plugins>
```
You must also create a `resources/META-INF/services/pulsar-io.yaml` file with the following contents:

```yaml
name: connector name
description: connector description
sourceClass: fully qualified class name (only if source connector)
sinkClass: fully qualified class name (only if sink connector)
```
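For instance, a pulsar-io.yaml for a source connector might look like the following; the name, description, and class name are purely illustrative:

```yaml
name: in-memory
description: A demo source connector that reads from an in-memory queue
sourceClass: org.example.connectors.InMemorySource
```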
For Gradle users, there is a Gradle Nar plugin available on the Gradle Plugin Portal.
For more information about how to use NAR for Pulsar connectors, see TwitterFirehose.
Uber JAR
An alternative approach is to create an uber JAR that contains all of the connector's JAR files and other resource files. No internal directory structure is necessary.
You can use the maven-shade-plugin to create an uber JAR, as below:
```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <filters>
          <filter>
            <artifact>*:*</artifact>
          </filter>
        </filters>
      </configuration>
    </execution>
  </executions>
</plugin>
```
Monitor
Pulsar connectors enable you to move data in and out of Pulsar easily. It is important to ensure that the running connectors are healthy at all times. You can monitor deployed Pulsar connectors with the following methods:
- Check the metrics provided by Pulsar.

  Pulsar connectors expose metrics that can be collected and used for monitoring the health of Java connectors. You can check the metrics by following the monitoring guide.

- Set and check your customized metrics.

  In addition to the metrics provided by Pulsar, you can customize metrics for Java connectors. Function workers collect user-defined metrics to Prometheus automatically, and you can check them in Grafana.

Here is an example of how to customize metrics for a Java connector.
```java
import java.util.Map;

import org.apache.pulsar.functions.api.Record;
import org.apache.pulsar.io.core.Sink;
import org.apache.pulsar.io.core.SinkContext;

public class TestMetricSink implements Sink<String> {

    @Override
    public void open(Map<String, Object> config, SinkContext sinkContext) throws Exception {
        // Record a user-defined metric named "foo" with value 1.
        sinkContext.recordMetric("foo", 1);
    }

    @Override
    public void write(Record<String> record) throws Exception {
    }

    @Override
    public void close() throws Exception {
    }
}
```