```shell
# libgtest-dev version is 1.10.0 or above
$ cd /usr/src/googletest
$ sudo cmake .
$ sudo make
$ sudo cp ./googlemock/libgmock.a ./googlemock/gtest/libgtest.a /usr/lib/

# less than 1.10.0
$ cd /usr/src/gtest
$ sudo cmake .
$ sudo make
$ sudo cp libgtest.a /usr/lib

$ cd /usr/src/gmock
$ sudo cmake .
$ sudo make
$ sudo cp libgmock.a /usr/lib
```
Compile the Pulsar client library for C++ inside the Pulsar repository.
```shell
$ cd pulsar-client-cpp
$ cmake .
$ make
```
After the build completes successfully, the libpulsar.so and libpulsar.a files are in the lib folder of the repository, and the perfProducer and perfConsumer tools are in the perf directory.
Since the 2.1.0 release, Pulsar ships pre-built RPM and Debian packages that you can download and install directly.
After you download and install RPM or DEB, the libpulsar.so, libpulsarnossl.so, libpulsar.a, and libpulsarwithdeps.a libraries are in your /usr/lib directory.
By default, they are built in the ${PULSAR_HOME}/pulsar-client-cpp code path. You can build with the command below.
These libraries rely on some other libraries. If you want a detailed list of dependency versions, see the RPM or DEB files.
libpulsar.so is a shared library that statically links Boost and OpenSSL and dynamically links all other necessary libraries. You can use this Pulsar library with the command below.
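For example, assuming a source file named PulsarTest.cpp (a placeholder; use your own file name), a link line for the shared library might look like this:

```shell
$ g++ --std=c++11 PulsarTest.cpp -o pulsar-test -lpulsar -L/usr/lib
```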
libpulsarnossl.so is a shared library, similar to libpulsar.so except that the OpenSSL libraries libssl and libcrypto are dynamically linked. You can use this Pulsar library with the command below.
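Again assuming a placeholder source file PulsarTest.cpp, linking against the nossl variant also requires linking the system OpenSSL libraries explicitly:

```shell
$ g++ --std=c++11 PulsarTest.cpp -o pulsar-test -lpulsarnossl -lssl -lcrypto -L/usr/lib
```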
libpulsarwithdeps.a is a static library, based on libpulsar.a. It additionally bundles the static dependencies libboost_regex, libboost_system, libcurl, libprotobuf, libzstd, and libz into one archive. You can use this Pulsar library with the command below.
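A sketch of a link line for the static archive, again with a placeholder source file PulsarTest.cpp; since the archive excludes OpenSSL, libssl and libcrypto must come from the system:

```shell
$ g++ --std=c++11 PulsarTest.cpp -o pulsar-test /usr/lib/libpulsarwithdeps.a -lssl -lcrypto -ldl -lpthread
```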
libpulsarwithdeps.a does not include the OpenSSL libraries libssl and libcrypto. Because these two libraries are security-related, it is more reasonable and easier to use the versions provided by the local system, so that security issues and library upgrades are handled there.
After you install RPM successfully, Pulsar libraries are in the /usr/lib directory, for example:
```
lrwxrwxrwx 1 root root 18 Dec 30 22:21 libpulsar.so -> libpulsar.so.2.9.1
lrwxrwxrwx 1 root root 23 Dec 30 22:21 libpulsarnossl.so -> libpulsarnossl.so.2.9.1
```
> **Note**
>
> If you get the error `libpulsar.so: cannot open shared object file: No such file or directory` when starting a Pulsar client, you may need to run `ldconfig` first.
Install GCC and g++ using the following command; otherwise, errors occur when installing Node.js.
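On a Debian-based system (an assumption; use your distribution's package manager otherwise), that command could be:

```shell
$ sudo apt-get update
$ sudo apt-get install -y gcc g++
```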
If you want to build RPM and Debian packages from the latest master, follow the instructions below. You should run all the instructions at the root directory of your cloned Pulsar repository.
There are recipes that build RPM and Debian packages containing a statically linked libpulsar.so / libpulsarnossl.so / libpulsar.a / libpulsarwithdeps.a with all required dependencies.
To build the C++ library packages, you need to build the Java packages first.
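A sketch of the flow, assuming the standard Maven layout of the Pulsar repository; the exact paths of the packaging scripts may differ between Pulsar versions, so verify them in your checkout:

```shell
# Build the Java modules first (from the repository root)
$ mvn install -DskipTests

# Then build the RPM or the Debian package
$ pulsar-client-cpp/pkg/rpm/docker-build-rpm.sh
$ pulsar-client-cpp/pkg/deb/docker-build-deb.sh
```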
Pulsar releases are available in the Homebrew core repository. You can install the C++ client library with the following command. The package installs both the library and the headers.
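For example:

```shell
$ brew install libpulsar
```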
To connect Pulsar using client libraries, you need to specify a Pulsar protocol URL.
Pulsar protocol URLs are assigned to specific clusters and use the pulsar URI scheme. The default port is 6650. The following is an example for localhost.
```
pulsar://localhost:6650
```
In a Pulsar cluster in production, the URL looks as follows.
```
pulsar://pulsar.us-west.example.com:6650
```
If you use TLS authentication, you need to use the pulsar+ssl scheme, and the default port is 6651. The following is an example.
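For example:

```
pulsar+ssl://pulsar.us-west.example.com:6651
```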
This example sends 100 messages using the blocking style. While simple, it does not produce high throughput because it waits for each ack to come back before sending the next message.
```cpp
#include <pulsar/Client.h>

#include <chrono>
#include <iostream>
#include <thread>

using namespace pulsar;

int main() {
    Client client("pulsar://localhost:6650");

    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic", producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages synchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        Result sendResult = producer.send(msg);
        if (sendResult != ResultOk) {
            std::cout << "The message " << content << " could not be sent, received code: " << sendResult << std::endl;
        } else {
            std::cout << "The message " << content << " sent successfully" << std::endl;
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    std::cout << "Finished producing synchronously!" << std::endl;
    client.close();
    return 0;
}
```
This example sends 100 messages using the non-blocking style, calling sendAsync instead of send. This allows the producer to have multiple messages in flight at a time, which increases throughput.
The producer configuration blockIfQueueFull is useful here to avoid ResultProducerQueueIsFull errors when the internal queue for outgoing send requests becomes full. Once the internal queue is full, sendAsync becomes blocking which can make your code simpler.
Without this configuration, the result code ResultProducerQueueIsFull is passed to the callback. You must decide how to deal with that (retry, discard etc).
```cpp
#include <pulsar/Client.h>

#include <atomic>
#include <chrono>
#include <functional>
#include <iostream>
#include <thread>

using namespace pulsar;

std::atomic<uint32_t> acksReceived(0);

void callback(Result code, const MessageId& msgId, std::string msgContent) {
    // message processing logic here
    std::cout << "Received ack for msg: " << msgContent << " with code: " << code
              << " -- MsgID: " << msgId << std::endl;
    acksReceived++;
}

int main() {
    Client client("pulsar://localhost:6650");

    ProducerConfiguration producerConf;
    producerConf.setBlockIfQueueFull(true);

    Producer producer;
    Result result = client.createProducer("persistent://public/default/my-topic", producerConf, producer);
    if (result != ResultOk) {
        std::cout << "Error creating producer: " << result << std::endl;
        return -1;
    }

    // Send 100 messages asynchronously
    int ctr = 0;
    while (ctr < 100) {
        std::string content = "msg" + std::to_string(ctr);
        Message msg = MessageBuilder().setContent(content).setProperty("x", "1").build();
        producer.sendAsync(msg, std::bind(callback, std::placeholders::_1, std::placeholders::_2, content));
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        ctr++;
    }

    // wait for 100 messages to be acked
    while (acksReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished producing asynchronously!" << std::endl;
    client.close();
    return 0;
}
```
When scaling out a Pulsar topic, you may configure a topic to have hundreds of partitions. Likewise, you may have scaled out your producers so that there are hundreds or even thousands of them. This combination can strain the Pulsar brokers: when you create a producer on a partitioned topic, the client internally creates one internal producer per partition, and each internal producer communicates with the brokers. So for a topic with 1000 partitions and 1000 producers, the applications end up creating 1,000,000 internal producers, each of which has to look up which broker it should connect to and then perform the connection handshake.
You can reduce the load caused by this combination of a large number of partitions and many producers by doing the following:
- Use the SinglePartition partition routing mode; this ensures that all messages are sent to a single, randomly selected partition.
- Use non-keyed messages; when messages are keyed, routing is based on the hash of the key, so messages end up being sent to multiple partitions.
- Use lazy producers; this ensures that an internal producer is created on demand only when a message needs to be routed to a partition.
With our example above, that reduces the number of internal producers spread out over the 1000 producer apps from 1,000,000 to just 1000.
Note that there can be extra latency for the first message sent. If you set a low send timeout, this timeout could be reached if the initial connection handshake is slow to complete.
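As a sketch, the producer configuration for the approach above could look like the following; setLazyStartPartitionedProducers is available in recent C++ client versions, so check your client's API reference before relying on it:

```cpp
ProducerConfiguration producerConf;
// Route all (non-keyed) messages from this producer to one randomly chosen partition
producerConf.setPartitionsRoutingMode(ProducerConfiguration::UseSinglePartition);
// Create internal per-partition producers lazily, only when a message is routed to that partition
producerConf.setLazyStartPartitionedProducers(true);
```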
Message chunking enables Pulsar to process large payload messages by splitting the message into chunks at the producer side and aggregating chunked messages at the consumer side.
The message chunking feature is OFF by default. The following is an example of how to enable message chunking when creating a producer.
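A minimal configuration sketch, assuming the ProducerConfiguration API of the C++ client; chunking cannot be combined with batching, so batching is disabled here:

```cpp
ProducerConfiguration producerConf;
producerConf.setBatchingEnabled(false);  // chunking and batching are mutually exclusive
producerConf.setChunkingEnabled(true);   // split large payloads into chunks on send

Producer producer;
client.createProducer("persistent://public/default/my-topic", producerConf, producer);
```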
Instead of running a loop of blocking calls, you can use an event-based style with a message listener, which is invoked for each message that is received.
This example starts a subscription at the earliest offset and consumes 100 messages.
```cpp
#include <pulsar/Client.h>

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

using namespace pulsar;

std::atomic<uint32_t> messagesReceived(0);

void handleAckComplete(Result res) {
    std::cout << "Ack res: " << res << std::endl;
}

void listener(Consumer consumer, const Message& msg) {
    std::cout << "Got message " << msg << " with content '" << msg.getDataAsString() << "'" << std::endl;
    messagesReceived++;
    consumer.acknowledgeAsync(msg.getMessageId(), handleAckComplete);
}

int main() {
    Client client("pulsar://localhost:6650");

    Consumer consumer;
    ConsumerConfiguration config;
    config.setMessageListener(listener);
    config.setSubscriptionInitialPosition(InitialPositionEarliest);
    Result result = client.subscribe("persistent://public/default/my-topic", "consumer-1", config, consumer);
    if (result != ResultOk) {
        std::cout << "Failed to subscribe: " << result << std::endl;
        return -1;
    }

    // wait for 100 messages to be consumed
    while (messagesReceived < 100) {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    std::cout << "Finished consuming asynchronously!" << std::endl;
    client.close();
    return 0;
}
```
You can limit the maximum number of chunked messages a consumer maintains concurrently by configuring the setMaxPendingChunkedMessage and setAutoAckOldestChunkedMessageOnQueueFull parameters. When the threshold is reached, the consumer drops pending messages by silently acknowledging them or asking the broker to redeliver them later.
The following is an example of how to configure message chunking.
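A configuration sketch, assuming the ConsumerConfiguration API of the C++ client; the limit of 10 is an arbitrary illustrative value:

```cpp
ConsumerConfiguration consumerConf;
// Keep at most 10 partially received chunked messages in memory at a time
consumerConf.setMaxPendingChunkedMessage(10);
// When the limit is reached, silently ack (drop) the oldest pending chunked message
// instead of asking the broker to redeliver it later
consumerConf.setAutoAckOldestChunkedMessageOnQueueFull(true);
```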