What’s New in Apache Pulsar 2.7.3
The Apache Pulsar community has released version 2.7.3! 34 contributors provided improvements and bug fixes across 79 commits. Highlights include:
Cursor reads adhere to the dispatch byte rate limiter setting and no longer cause unexpected results. PR-11249
The ledger rollover scheduled task runs as expected. PR-11226
This blog walks through the most noteworthy changes. For the complete list of enhancements and bug fixes, check out the Pulsar 2.7.3 Release Notes.
Notable bug fixes and enhancements
Issue: When using byte rates, the dispatch rate limits were not respected (whether set as a namespace or a topic policy).
Resolution: Fixed behavior of dispatch byte rate limiter setting. Cursor reads adhere to the setting and no longer cause unexpected results.
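To illustrate the idea behind a byte-based dispatch rate limiter, here is a simplified, self-contained sketch. It is not Pulsar's actual `DispatchRateLimiter` implementation; the class name `ByteRateLimiter` and its methods are hypothetical. The point it shows is that cursor reads must acquire byte permits from a per-period budget before dispatching, and are throttled when the budget is exhausted:

```java
// Simplified sketch of a per-period byte rate limiter (illustrative only;
// not Pulsar's actual DispatchRateLimiter). The byte budget refills when
// the configured period elapses.
public class ByteRateLimiter {
    private final long bytesPerPeriod;   // byte budget for one period
    private final long periodMillis;     // length of one period
    private long availableBytes;         // remaining budget in the current period
    private long periodStartMillis;      // when the current period began

    public ByteRateLimiter(long bytesPerPeriod, long periodMillis, long nowMillis) {
        this.bytesPerPeriod = bytesPerPeriod;
        this.periodMillis = periodMillis;
        this.availableBytes = bytesPerPeriod;
        this.periodStartMillis = nowMillis;
    }

    /** Try to acquire permits for a read of `bytes`; returns false if throttled. */
    public synchronized boolean tryAcquire(long bytes, long nowMillis) {
        if (nowMillis - periodStartMillis >= periodMillis) {
            availableBytes = bytesPerPeriod;   // refill at the period boundary
            periodStartMillis = nowMillis;
        }
        if (bytes > availableBytes) {
            return false;                      // the cursor read must back off
        }
        availableBytes -= bytes;
        return true;
    }
}
```

Time is passed in explicitly (`nowMillis`) so the behavior is deterministic and easy to test.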
Issue: Previously, the ledger rollover scheduled task was executed before reaching the ledger maximum rollover time, which caused the ledger not to roll over in time.
Resolution: Fixed the timing of the ledger rollover schedule, so the task runs only after the ledger is created successfully.
Issue: Previously, when a topic-level retention policy was set for a topic and the broker was restarted, the retention policy did not take effect.
Resolution: Fixed the policy behavior so that it replays all policy messages after initializing `policyCacheInitMap`, and added a test that checks the retention policy after a broker restart.
Issue: Previously, there was a memory leak when calling the `lastMessageId` API, which caused the broker process to be stopped by Kubernetes.
Resolution: Added the missing `entry.release()` call to `PersistentTopic.getLastMessageId` to ensure the broker does not run out of memory.
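Pulsar's entries are backed by reference-counted buffers, so a missing release leaks memory. The sketch below is a hypothetical, minimal reference-counted entry (`CountedEntry` is not Pulsar's real entry class) that demonstrates the fixed pattern: use the entry, then always release it in a finally block:

```java
// Minimal sketch of a reference-counted entry (hypothetical; Pulsar's real
// entries are backed by Netty ByteBufs). Forgetting release() leaks the
// buffer, which is the kind of leak this fix closed.
public class CountedEntry {
    private int refCnt = 1;   // the caller owns one reference on creation

    public synchronized int refCnt() { return refCnt; }

    /** Returns true when the last reference is dropped and memory can be freed. */
    public synchronized boolean release() {
        if (refCnt <= 0) throw new IllegalStateException("already released");
        return --refCnt == 0;
    }

    /** Mirrors the fixed pattern: read the id, then always release the entry. */
    public static String readLastMessageId(CountedEntry entry, String id) {
        try {
            return id;            // use the entry's payload
        } finally {
            entry.release();      // the call that was missing before the fix
        }
    }
}
```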
Issue: When performing the admin operation to get the namespaces of a tenant, reads were issued directly on the ZooKeeper client and were not cached by the brokers.
Resolution: Fixed ZooKeeper caching when fetching a list of namespaces for a tenant.
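The benefit of this kind of caching is easy to see in a small sketch. The class below is illustrative only (not Pulsar's actual metadata cache): it memoizes the namespace list per tenant so repeated admin calls do not trigger another ZooKeeper read:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch (not Pulsar's real metadata cache): memoize the
// namespace list per tenant so repeated admin calls avoid ZooKeeper.
public class NamespaceListCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();
    private final Function<String, List<String>> zkLoader; // the expensive ZooKeeper read
    private int zkReads = 0;                               // how often ZooKeeper was hit

    public NamespaceListCache(Function<String, List<String>> zkLoader) {
        this.zkLoader = zkLoader;
    }

    public List<String> getNamespaces(String tenant) {
        return cache.computeIfAbsent(tenant, t -> {
            zkReads++;                 // only on a cache miss
            return zkLoader.apply(t);
        });
    }

    public int zkReadCount() { return zkReads; }
}
```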
Monitoring threads that call `LeaderService.isLeader()` are no longer blocked. PR-10512
Issue: When `LeaderService` changed leadership status, it was locked with a `synchronized` block, which also blocked other threads calling `LeaderService.isLeader()`.
Resolution: Fixed the deadlock condition on the monitoring thread so it is not blocked by `LeaderService.isLeader()`, by modifying `ClusterServiceCoordinator` to check whether it is the leader from `MembershipManager`.
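The essence of the fix is that monitoring threads should read leadership state through a path that never takes the lock held during a leadership change. The sketch below uses hypothetical class names (not Pulsar's `LeaderService` or `MembershipManager`) to show the lock-free pattern:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the fix's idea (hypothetical classes): instead of
// calling into a synchronized leadership method, the monitoring path reads
// leadership from membership state, which is a lock-free check.
public class Membership {
    private final AtomicBoolean leader = new AtomicBoolean(false);

    // Called from the leadership-change path; readers hold no lock.
    public void setLeader(boolean isLeader) { leader.set(isLeader); }

    // Safe to call from monitoring threads: never blocks.
    public boolean isLeader() { return leader.get(); }
}
```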
`hasMessageAvailable` can read messages successfully. PR-10414
Issue: When `hasMessageAvailable` returned true, the reader could not read messages because the messages were filtered out by the acknowledgments grouping tracker.
Resolution: Fixed the race conditions by modifying `acknowledgmentsGroupingTracker` to filter duplicate messages, and then cleaning up the messages when the connection is opened.
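An acknowledgments grouping tracker batches acks client-side, so redeliveries of messages whose acks are still pending must be filtered as duplicates, and the tracked state must be cleared when the connection is (re)opened. The sketch below is a hypothetical, simplified tracker (`PendingAckTracker` is not the real `acknowledgmentsGroupingTracker`) showing that lifecycle:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch (not the real acknowledgmentsGroupingTracker): track
// message ids with pending grouped acks and filter redeliveries of them.
public class PendingAckTracker {
    private final Set<String> pendingAcks = new HashSet<>();

    public synchronized void addPendingAck(String messageId) {
        pendingAcks.add(messageId);
    }

    /** True if the incoming message duplicates one already queued for ack. */
    public synchronized boolean isDuplicate(String messageId) {
        return pendingAcks.contains(messageId);
    }

    /** When the connection is opened again, pending state is cleaned up. */
    public synchronized void flushOnReconnect() {
        pendingAcks.clear();
    }
}
```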
Issue: Proxies were not creating partitions because they were using the current ZooKeeper metadata.
Resolution: Changed the proxy to handle `PartitionMetadataRequest` by selecting and fetching from an available broker instead of using the current ZooKeeper metadata.
Issue: When creating a partitioned topic in a replicated namespace, the metadata path `/managed-ledgers` was not created on the replicated clusters.
Resolution: Added a flag (`createLocalTopicOnly`) to indicate whether or not to create a metadata path for a partitioned topic in replicated clusters.
Issue: Due to a redirect loop in topic policy handling, it was possible to set a policy for a non-existent topic or a partition of a partitioned topic.
Resolution: Added an authoritative flag for topic policies to avoid the redirect loop. You can no longer set a topic policy for a non-existent topic or a partition of a partitioned topic. If you set a topic policy for a partition of a 0-partition topic, it redirects to the broker.
Issue: When using the lookup discovery service for a partitioned non-persistent topic, it returned zero rather than the number of partitions, and the Pulsar client tried to connect to the topic as if it were a normal topic.
Resolution: Used `topicName.getDomain().value()` rather than hard coding `persistent`. Now you can use the discovery service for a partitioned non-persistent topic successfully.
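A Pulsar topic's domain (`persistent` or `non-persistent`) is carried as the scheme of its full name, which is why using the topic's own domain instead of a hard-coded string fixes the lookup. The helper below is a minimal sketch of that extraction (`TopicDomainUtil` is hypothetical; Pulsar uses its `TopicName`/`TopicDomain` classes):

```java
// Minimal sketch of extracting a topic's domain from its full name instead
// of hard coding "persistent" (illustrative; not Pulsar's TopicName class).
public class TopicDomainUtil {
    public static String domainOf(String fullTopicName) {
        int idx = fullTopicName.indexOf("://");
        if (idx < 0) {
            return "persistent";   // Pulsar's default domain when none is given
        }
        return fullTopicName.substring(0, idx);
    }
}
```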
Other connectors can now use the Kinesis `Backoff` class. PR-10744
Issue: The Kinesis sink connector used the `Backoff` class from the Pulsar client implementation project, which, in combination with the `org.apache.pulsar:pulsar-client-original` dependency, increased the connector size.
Resolution: Added a new `Backoff` class in the function io-core project so that the Kinesis sink connector and other connectors can use it.
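For readers unfamiliar with what such a backoff helper does, here is a hedged sketch of capped exponential backoff (`RetryBackoff` is illustrative, not the io-core `Backoff` class itself): each retry doubles the wait, up to a maximum, and the delay resets after a success:

```java
// Illustrative capped exponential backoff (a sketch, not the io-core Backoff
// class itself): each retry doubles the delay up to a maximum.
public class RetryBackoff {
    private final long initialMillis;
    private final long maxMillis;
    private long nextMillis;

    public RetryBackoff(long initialMillis, long maxMillis) {
        this.initialMillis = initialMillis;
        this.maxMillis = maxMillis;
        this.nextMillis = initialMillis;
    }

    /** Delay before the next retry; doubles on each call, capped at max. */
    public long next() {
        long current = nextMillis;
        nextMillis = Math.min(maxMillis, nextMillis * 2);
        return current;
    }

    /** Reset after a successful call. */
    public void reset() { nextMillis = initialMillis; }
}
```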
A `FLOW` request with zero permits is no longer sent. PR-10506
Issue: When a broker received a `FLOW` request with zero permits, an exception was thrown and the connection was closed. This triggered frequent reconnections and caused duplicate or out-of-order messages.
Resolution: Added a validation that verifies the permits of a `FLOW` request before sending it. If the permits are zero, the `FLOW` request is not sent.
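The guard itself is tiny; the sketch below (with a hypothetical `FlowGuard` helper, not Pulsar's actual consumer code) shows the validation that keeps a zero-permit `FLOW` request from ever reaching the broker:

```java
// Illustrative sketch of the client-side guard: a FLOW request is only sent
// when it carries a positive number of permits (hypothetical helper, not
// Pulsar's actual consumer code).
public class FlowGuard {
    /** Returns true if a FLOW request with this many permits should be sent. */
    public static boolean shouldSendFlow(int permits) {
        // Zero (or negative) permits would make the broker throw an
        // exception and close the connection.
        return permits > 0;
    }
}
```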
Function and connector
Issue: The Kinesis sink connector did not acknowledge messages after they were sent successfully.
Resolution: Added acknowledgement for the Kinesis sink connector once a message is sent successfully.
Issue: When using Kubernetes runtime, if a function was submitted with a valid length (less than 55 characters), a StatefulSet was created but it was unable to spawn pods.
Resolution: Changed the maximum length of a function name from 55 to 53 characters for the Kubernetes runtime. With this fix, the length of a function name cannot exceed 52 characters.
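The limit exists because the runtime derives Kubernetes object names (such as the StatefulSet and its pods) from the function name, and Kubernetes caps those names. A hedged sketch of the length validation, using the 52-character limit stated above (`FunctionNameValidator` is a hypothetical helper, not Pulsar's actual code):

```java
// Illustrative sketch of the Kubernetes-runtime name check (hypothetical
// helper): names longer than 52 characters are rejected so the generated
// StatefulSet can still spawn pods.
public class FunctionNameValidator {
    static final int MAX_NAME_LENGTH = 52;

    public static boolean isValidK8sFunctionName(String name) {
        return name != null && !name.isEmpty() && name.length() <= MAX_NAME_LENGTH;
    }
}
```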
`pulsar-admin` connection to the proxy is stable when TLS is enabled. PR-10907
Issue: `pulsar-admin` was unstable over TLS connections because of a Jetty SSL buffering bug introduced in Jetty 9.4.39, which caused large function jar uploads to fail frequently.
Resolution: Upgraded Jetty to 9.4.42.v20210604, so the `pulsar-admin` connection to the proxy is stable when TLS is enabled.
What's Next?
If you are interested in learning more about Pulsar 2.7.3, you can download 2.7.3 and try it out now!
The first-ever Pulsar Virtual Summit Europe 2021 will take place in October. Register now and help us make it an even bigger success by spreading the word on social!