Sketch Environment/Producer/Consumer API #1


Merged: 46 commits, Aug 6, 2020.

Changes from 1 commit.
93cd259
Sketch producer API
acogoluegnes Jul 9, 2020
8c9d2b7
Merge branch 'master' into producer-spike
acogoluegnes Jul 10, 2020
ba18b3b
Introduce Environment API
acogoluegnes Jul 15, 2020
1087abf
Add URI(s) parameters to build Environment
acogoluegnes Jul 17, 2020
e8f874d
Merge branch 'master' into producer-spike
acogoluegnes Jul 17, 2020
18390f6
Remove some publish methods in Client
acogoluegnes Jul 17, 2020
96dfc24
Organize classes between API/implementation packages
acogoluegnes Jul 17, 2020
be54e5c
Introduce Consumer API
acogoluegnes Jul 20, 2020
80f64e9
Handle publishing error in producer
acogoluegnes Jul 20, 2020
7baa62c
Support stream creation/deletion in environment
acogoluegnes Jul 20, 2020
bc5e567
Document Environment API
acogoluegnes Jul 21, 2020
c6b2d68
Document Producer API
acogoluegnes Jul 21, 2020
d8cc437
Document Consumer API
acogoluegnes Jul 21, 2020
91f52ab
Publish temp API documentation
acogoluegnes Jul 21, 2020
c5c45e2
Support sub-entry batching in producer
acogoluegnes Jul 27, 2020
d8cf412
Synchronize message accumulator access
acogoluegnes Jul 27, 2020
afb7203
Limit number of outstanding publish confirms
acogoluegnes Jul 27, 2020
ea960d0
Document maxUnconfirmedMessages and subEntrySize
acogoluegnes Jul 27, 2020
07f8681
Recover locator connection in environment
acogoluegnes Jul 28, 2020
504a68b
Close producers and consumers in environment
acogoluegnes Jul 28, 2020
7d3bea5
Document environment settings
acogoluegnes Jul 29, 2020
9e2300e
Use new API for sample application
acogoluegnes Jul 29, 2020
a0771a2
Use Consumer in performance tool
acogoluegnes Jul 29, 2020
147b8d5
Downsample latency calculation in performance tool
acogoluegnes Jul 29, 2020
b41aa94
Deal with stream unavailability in consumer
acogoluegnes Jul 30, 2020
4bc6744
Add delay before consumer re-assignment after metadata update
acogoluegnes Jul 31, 2020
87b866d
Add unit tests for DefaultClientSubscriptions
acogoluegnes Jul 31, 2020
c5e3736
Unit test DefaultClientSubscriptions sub/unsub
acogoluegnes Jul 31, 2020
a4e5abe
Update data structure before subscription
acogoluegnes Jul 31, 2020
eae2887
More DefaultClientSubscriptions unit tests
acogoluegnes Jul 31, 2020
6782fa8
More DefaultClientSubscriptions unit tests
acogoluegnes Jul 31, 2020
cd5cf2f
Schedule candidates lookup on metadata update
acogoluegnes Aug 3, 2020
cc0806e
Create async retry utility for metadata update
acogoluegnes Aug 3, 2020
f0ded28
More DefaultClientSubscriptions unit tests
acogoluegnes Aug 3, 2020
c4fea33
Use async retry utility for locator recovery
acogoluegnes Aug 3, 2020
53ea1d9
Add some Environment unit tests
acogoluegnes Aug 4, 2020
5efe862
Handle connection loss in consumer
acogoluegnes Aug 5, 2020
83c3c12
Rename RecoveryBackOffDelayPolicy to BackOffDelayPolicy
acogoluegnes Aug 5, 2020
fe91c97
Add node failure test for consumer
acogoluegnes Aug 6, 2020
6afab81
Add unit test for consumer connection recovery
acogoluegnes Aug 6, 2020
ea52a49
Disable a couple of recovery tests
acogoluegnes Aug 6, 2020
a6bea16
Remove Client documentation
acogoluegnes Aug 6, 2020
b3d4bc0
Improve wording in documentation
acogoluegnes Aug 6, 2020
0190f1d
Add some Javadoc to Client
acogoluegnes Aug 6, 2020
a2f2394
Kill connection instead of stopping node in test
acogoluegnes Aug 6, 2020
298826e
Publish documentation to snapshot directory
acogoluegnes Aug 6, 2020
Handle connection loss in consumer
acogoluegnes committed Aug 5, 2020
commit 5efe8621d49aae4b44556eb22214de4e6d6f997e
@@ -23,4 +23,6 @@ interface ClientSubscriptions {

void unsubscribe(long id);

void close();

}
160 changes: 102 additions & 58 deletions src/main/java/com/rabbitmq/stream/impl/DefaultClientSubscriptions.java
@@ -65,15 +65,17 @@ public long subscribe(StreamConsumer consumer, String stream, OffsetSpecification

long streamSubscriptionId = globalSubscriptionIdSequence.getAndIncrement();
// create stream subscription to track final and changing state of this very subscription
// we should keep this instance when we move the subscription from a client to another one
// we keep this instance when we move the subscription from a client to another one
StreamSubscription streamSubscription = new StreamSubscription(streamSubscriptionId, consumer, stream, messageHandler);

String key = keyForClientSubscriptionState(newNode);

SubscriptionState subscriptionState = clientSubscriptionStates.computeIfAbsent(key, s -> new SubscriptionState(environment
.clientParametersCopy()
.host(newNode.getHost())
.port(newNode.getPort())
SubscriptionState subscriptionState = clientSubscriptionStates.computeIfAbsent(key, s -> new SubscriptionState(
key,
environment
.clientParametersCopy()
.host(newNode.getHost())
.port(newNode.getPort())
));

subscriptionState.add(streamSubscription, offsetSpecification);
@@ -141,6 +143,14 @@ public void unsubscribe(long id) {
}
}

@Override
public void close() {
for (SubscriptionState subscription : this.clientSubscriptionStates.values()) {
subscription.close();
}

}

private static class StreamSubscription {

private final long id;
@@ -183,8 +193,9 @@ private class SubscriptionState {
private final Map<Integer, StreamSubscription> streamSubscriptions = new ConcurrentHashMap<>();
private final Map<String, Set<StreamSubscription>> streamToStreamSubscriptions = new ConcurrentHashMap<>();

private SubscriptionState(Client.ClientParameters clientParameters) {
private SubscriptionState(String name, Client.ClientParameters clientParameters) {
this.client = clientFactory.apply(clientParameters
.clientProperty("name", "rabbitmq-stream-consumer")
.chunkListener((client, subscriptionId, offset, messageCount, dataSize) -> client.credit(subscriptionId, 1))
.creditNotification((subscriptionId, responseCode) -> LOGGER.debug("Received credit notification for subscription {}: {}", subscriptionId, responseCode))
.messageListener((subscriptionId, offset, message) -> {
@@ -197,74 +208,103 @@ private SubscriptionState(Client.ClientParameters clientParameters) {
LOGGER.warn("Could not find stream subscription {}", subscriptionId);
}
})
.shutdownListener(shutdownContext -> {
if (shutdownContext.isShutdownUnexpected()) {
clientSubscriptionStates.remove(name);
LOGGER.debug("Unexpected shutdown notification on subscription client {}, scheduling consumers re-assignment", name);
environment.scheduledExecutorService().schedule(() -> {
for (Map.Entry<String, Set<StreamSubscription>> entry : streamToStreamSubscriptions.entrySet()) {
String stream = entry.getKey();
LOGGER.debug("Re-assigning consumers to stream {} after disconnection", stream);
assignConsumersToStream(
entry.getValue(), stream,
attempt -> environment.recoveryBackOffDelayPolicy().delay(attempt + 1), // already waited once
Duration.ZERO
);
}
}, environment.recoveryBackOffDelayPolicy().delay(0).toMillis(), TimeUnit.MILLISECONDS);
}
})
.metadataListener((stream, code) -> {
LOGGER.debug("Received metadata notification for {}, stream is likely to have become unavailable",
stream);
Set<StreamSubscription> affectedSubscriptions = streamToStreamSubscriptions.remove(stream);
if (affectedSubscriptions != null && !affectedSubscriptions.isEmpty()) {
// scheduling consumer re-assignment, to give the system some time to recover
environment.scheduledExecutorService().schedule(() -> {

if (affectedSubscriptions != null) {
try {
LOGGER.debug("Trying to move {} subscription(s) (stream {})", affectedSubscriptions.size(), stream);
for (StreamSubscription affectedSubscription : affectedSubscriptions) {
streamSubscriptions.remove(affectedSubscription.subscriptionIdInClient);
}
assignConsumersToStream(
affectedSubscriptions, stream,
attempt -> attempt == 0 ? Duration.ZERO : metadataUpdateRetryDelay,
metadataUpdateRetryTimeout
);
} catch (Exception e) {
e.printStackTrace();
}

}, metadataUpdateInitialDelay.toMillis(), TimeUnit.MILLISECONDS);
}
}));
}

Runnable consumersClosingCallback = () -> {
for (StreamSubscription affectedSubscription : affectedSubscriptions) {
try {
affectedSubscription.consumer.closeAfterStreamDeletion();
streamSubscriptionRegistry.remove(affectedSubscription.id);
} catch (Exception e) {
LOGGER.debug("Error while closing consumer: {}", e.getMessage());
void assignConsumersToStream(Collection<StreamSubscription> subscriptions, String stream,
Function<Integer, Duration> delayPolicy, Duration retryTimeout) {
Runnable consumersClosingCallback = () -> {
for (StreamSubscription affectedSubscription : subscriptions) {
try {
affectedSubscription.consumer.closeAfterStreamDeletion();
streamSubscriptionRegistry.remove(affectedSubscription.id);
} catch (Exception e) {
LOGGER.debug("Error while closing consumer: {}", e.getMessage());
}
}
};

AsyncRetry.asyncRetry(() -> findBrokersForStream(stream))
.description("Candidate lookup to consume from " + stream)
.scheduler(environment.scheduledExecutorService())
.retry(ex -> !(ex instanceof StreamDoesNotExistException))
.delayPolicy(delayPolicy)
.timeout(retryTimeout)
.build()
.thenAccept(candidates -> {
if (candidates == null) {
consumersClosingCallback.run();
} else {
for (StreamSubscription affectedSubscription : subscriptions) {
try {
Client.Broker broker = pickBroker(candidates);
LOGGER.debug("Using {} to resume consuming from {}", broker, stream);
String key = keyForClientSubscriptionState(broker);
// FIXME in case the broker is no longer there, we may have to deal with an error here
// we could renew the list of candidates for the stream
SubscriptionState subscriptionState = clientSubscriptionStates.computeIfAbsent(key, s -> new SubscriptionState(
key,
environment
.clientParametersCopy()
.host(broker.getHost())
.port(broker.getPort())
));
if (affectedSubscription.consumer.isOpen()) {
synchronized (affectedSubscription.consumer) {
if (affectedSubscription.consumer.isOpen()) {
subscriptionState.add(affectedSubscription, OffsetSpecification.offset(affectedSubscription.offset));
}
}
};

AsyncRetry.asyncRetry(() -> findBrokersForStream(stream))
.description("Candidate lookup to consume from " + stream)
.scheduler(environment.scheduledExecutorService())
.retry(ex -> !(ex instanceof StreamDoesNotExistException))
.delay(metadataUpdateRetryDelay)
.timeout(metadataUpdateRetryTimeout)
.build()
.thenAccept(candidates -> {
if (candidates == null) {
consumersClosingCallback.run();
} else {
for (StreamSubscription affectedSubscription : affectedSubscriptions) {
try {
Client.Broker broker = pickBroker(candidates);
LOGGER.debug("Using {} to resume consuming from {}", broker, stream);
String key = keyForClientSubscriptionState(broker);
// FIXME in case the broker is no longer there, we may have to deal with an error here
// we could renew the list of candidates for the stream
SubscriptionState subscriptionState = clientSubscriptionStates.computeIfAbsent(key, s -> new SubscriptionState(environment
.clientParametersCopy()
.host(broker.getHost())
.port(broker.getPort())
));
if (affectedSubscription.consumer.isOpen()) {
synchronized (affectedSubscription.consumer) {
if (affectedSubscription.consumer.isOpen()) {
subscriptionState.add(affectedSubscription, OffsetSpecification.offset(affectedSubscription.offset));
}
}
}
} catch (Exception e) {
LOGGER.warn("Error while re-assigning subscription from stream {}: {}", stream, e.getMessage());
}
}
}
}).exceptionally(ex -> {
consumersClosingCallback.run();
return null;
});
}
} catch (Exception e) {
LOGGER.warn("Error while re-assigning subscription from stream {}: {}", stream, e.getMessage());
}
}, metadataUpdateInitialDelay.toMillis(), TimeUnit.MILLISECONDS);
}
}
}));
}).exceptionally(ex -> {
consumersClosingCallback.run();
return null;
});
}

void add(StreamSubscription streamSubscription, OffsetSpecification offsetSpecification) {
@@ -316,5 +356,9 @@ public void remove(StreamSubscription streamSubscription) {
}
});
}

void close() {
this.client.close();
}
}
}
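The re-assignment logic above leans on the AsyncRetry utility introduced in commit cc0806e, whose fluent API (`description`, `scheduler`, `retry`, `delayPolicy`, `timeout`, `build`) appears in the diff. The following is a minimal, hypothetical sketch of the underlying pattern only, not the library's implementation: schedule an attempt on the environment's scheduler; on failure, either give up (attempts exhausted or non-retriable exception) or re-schedule with the delay produced by the delay policy for the next attempt.

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

// Hypothetical sketch of the async-retry pattern used above; class and
// method names are assumptions, not the actual AsyncRetry API.
final class AsyncRetrySketch {

    static <T> CompletableFuture<T> retry(Supplier<T> task,
                                          ScheduledExecutorService scheduler,
                                          Predicate<Exception> shouldRetry,
                                          Function<Integer, Duration> delayPolicy,
                                          int maxAttempts) {
        CompletableFuture<T> result = new CompletableFuture<>();
        schedule(task, scheduler, shouldRetry, delayPolicy, maxAttempts, 0, result);
        return result;
    }

    private static <T> void schedule(Supplier<T> task, ScheduledExecutorService scheduler,
                                     Predicate<Exception> shouldRetry,
                                     Function<Integer, Duration> delayPolicy,
                                     int maxAttempts, int attemptCount,
                                     CompletableFuture<T> result) {
        scheduler.schedule(() -> {
            try {
                result.complete(task.get());
            } catch (Exception e) {
                if (attemptCount + 1 >= maxAttempts || !shouldRetry.test(e)) {
                    // give up: the caller's exceptionally(...) callback
                    // (e.g. consumersClosingCallback) takes over
                    result.completeExceptionally(e);
                } else {
                    // re-schedule the next attempt with the policy's delay
                    schedule(task, scheduler, shouldRetry, delayPolicy,
                             maxAttempts, attemptCount + 1, result);
                }
            }
        }, delayPolicy.apply(attemptCount).toMillis(), TimeUnit.MILLISECONDS);
    }
}
```

An overall timeout, as in the real utility's `timeout(...)`, could be layered on top of the returned future rather than counted in attempts.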
12 changes: 10 additions & 2 deletions src/main/java/com/rabbitmq/stream/impl/StreamEnvironment.java
@@ -115,6 +115,7 @@ class StreamEnvironment implements Environment {
Client newLocator = clientFactory.apply(newLocatorParameters
.host(address.host)
.port(address.port)
.clientProperty("name", "rabbitmq-stream-locator")
);
LOGGER.debug("Locator connected on {}", address);
return newLocator;
@@ -132,6 +133,7 @@ class StreamEnvironment implements Environment {
Client.ClientParameters locatorParameters = clientParametersPrototype
.duplicate()
.host(address.host).port(address.port)
.clientProperty("name", "rabbitmq-stream-locator")
.shutdownListener(shutdownListenerReference.get());
try {
this.locator = clientFactory.apply(locatorParameters);
@@ -249,7 +251,6 @@ public void close() {
}
}


for (Client client : publishingClientPool.values()) {
try {
client.close();
@@ -266,6 +267,8 @@ public void close() {
}
}

this.clientSubscriptions.close();

for (Client client : consumingClientPool.values()) {
try {
client.close();
@@ -303,10 +306,14 @@ public void close() {

}

protected ScheduledExecutorService scheduledExecutorService() {
ScheduledExecutorService scheduledExecutorService() {
return this.scheduledExecutorService;
}

RecoveryBackOffDelayPolicy recoveryBackOffDelayPolicy() {
return this.recoveryBackOffDelayPolicy;
}

Client getClientForPublisher(String stream) {
Map<String, Client.StreamMetadata> metadata = locator().metadata(stream);
if (metadata.size() == 0 || metadata.get(stream) == null) {
@@ -334,6 +341,7 @@ Client getClientForPublisher(String stream) {
clientParametersPrototype.duplicate()
.host(leader.getHost())
.port(leader.getPort())
.clientProperty("name", "rabbitmq-stream-producer")
);
});
}
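The new `recoveryBackOffDelayPolicy()` accessor exposes the delay policy used by the consumer re-assignment code, which calls `delay(0)` before the first attempt and `delay(attempt + 1)` on retries ("already waited once"). Commit 83c3c12 renames the type to BackOffDelayPolicy. Only the `delay(int)` → `Duration` shape is inferred from the diff; the following is a plausible sketch of that contract, with all other names assumed.

```java
import java.time.Duration;

// Plausible sketch of the BackOffDelayPolicy contract; the library's actual
// interface may differ. Only delay(int) -> Duration is inferred from the
// calls delay(0) and delay(attempt + 1) in the diff.
@FunctionalInterface
interface BackOffDelayPolicy {

    // delay to wait before the given attempt (0 = first attempt)
    Duration delay(int attemptNumber);

    // first attempt waits longer (e.g. to let the cluster settle),
    // subsequent attempts use a fixed delay
    static BackOffDelayPolicy fixedWithInitialDelay(Duration initial, Duration fixed) {
        return attemptNumber -> attemptNumber == 0 ? initial : fixed;
    }
}
```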
8 changes: 8 additions & 0 deletions src/test/java/com/rabbitmq/stream/Host.java
@@ -80,6 +80,14 @@ public static Process rabbitmqctl(String command) throws IOException {
" " + command);
}

public static Process killConnection(String connectionName) throws IOException {
return rabbitmqctl("eval 'rabbit_stream:kill_connection(\"" + connectionName + "\").'");
}

public static Process killStreamLeaderProcess(String stream) throws IOException {
return rabbitmqctl("eval 'exit(rabbit_stream_manager:lookup_leader(<<\"/\">>, <<\"" + stream + "\">>),kill).'");
}


public static String rabbitmqctlCommand() {
return System.getProperty("rabbitmqctl.bin");
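The two Host helpers added above build `rabbitmqctl eval` command strings with several layers of quoting (shell single quotes, Erlang binaries, escaped double quotes). Isolating the string construction, as in this sketch with a hypothetical class name, makes the quoting easy to verify without a running broker; the command strings mirror the diff exactly.

```java
// Sketch isolating the rabbitmqctl eval command strings built by the new
// Host helpers. The class name is hypothetical; the strings match the diff.
final class RabbitmqctlEvalCommands {

    // kills a stream connection by its client-provided connection name
    static String killConnection(String connectionName) {
        return "eval 'rabbit_stream:kill_connection(\"" + connectionName + "\").'";
    }

    // kills the Erlang process acting as leader for the given stream
    // on the default virtual host "/"
    static String killStreamLeaderProcess(String stream) {
        return "eval 'exit(rabbit_stream_manager:lookup_leader(<<\"/\">>, <<\"" + stream + "\">>),kill).'";
    }
}
```

Pairing `killConnection` with the `clientProperty("name", ...)` tags added in StreamEnvironment is what lets the tests target a specific connection.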