SwiftMQ's default configuration is optimized for common use cases, so you don't need to change it until you hit its limits. This guide walks you through SwiftMQ's various tuning options.
Persistence means a message is written to disk before the send method returns. The JMS default delivery mode is
PERSISTENT. Therefore, if you start SwiftMQ out of the box, all messages are persisted to disk, which has a big impact on your throughput. Sending persistent messages also requires synchronous sends at the message producer, so the impact is not only the time to write to disk but also the time to wait for a reply before the next message can be sent.
Persistence is required if messages should survive a restart of a SwiftMQ CE/UR Router or a failover of a SwiftMQ HA Router. If you don't need that, you should change the delivery mode to
NON_PERSISTENT. This can be done directly at the sending JMS client's message producer.
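For example, a minimal sketch using the standard JMS API; sender is assumed to be an existing MessageProducer:

```java
// All subsequent send calls on this producer now default to NON_PERSISTENT
sender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
```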
This changes the default delivery mode of this particular message producer.
Or it can be done by passing the delivery mode as an additional parameter to the send method:
sender.send(msg, DeliveryMode.NON_PERSISTENT, Message.DEFAULT_PRIORITY, Message.DEFAULT_TIME_TO_LIVE);
If you can't change your code, you can change the default delivery mode in the JMS connection factory:
<connection-factory name="ConnectionFactory" jms-default-delivery-mode="non_persistent"/>
The next time your JMS client looks up the connection factory, it sends non-persistent messages.
In case you don't do JNDI lookups but use our proprietary connection factory, just change it there:
props.put(SwiftMQConnectionFactory.SOCKETFACTORY, "com.swiftmq.net.JSSESocketFactory");
props.put(SwiftMQConnectionFactory.HOSTNAME, "localhost");
props.put(SwiftMQConnectionFactory.PORT, "4001");
props.put(SwiftMQConnectionFactory.KEEPALIVEINTERVAL, "60000");
props.put(SwiftMQConnectionFactory.JMS_DELIVERY_MODE, String.valueOf(DeliveryMode.NON_PERSISTENT));
QueueConnectionFactory qcf = (QueueConnectionFactory) SwiftMQConnectionFactory.create(props);
There is also a way to overwrite the persistence setting contained in the message itself. This can be done either at a regular queue or at a queue controller. It is NOT recommended because it only changes the persistence mode, not the behavior of the send. That is, your message is still sent as a persistent message (synchronously), but due to the overwrite at the queue/queue controller the message isn't stored persistently. This doesn't make much sense, so please use one of the methods above.
Persistent messages are sent synchronously by default (required by the JMS spec). The send method only returns after the message has been persisted AND a reply has been sent back to the client. This can be relaxed to asynchronous sending by setting this system property at the JMS client:
In that case, the send request is stored in an internal outbound queue at the client and the send method returns immediately. The send requests are transferred to the router in batches in the background. This may double the throughput, but messages still in the client's outbound queue when the client terminates are lost.
JMS Session Type and Acknowledge Mode
A JMS session can be created as a transacted or a non-transacted session.
Messages sent in a transacted session are buffered at the client until a commit is called on the session which then transfers all messages of this transaction to the router. The behavior of the send method on non-transacted sessions depends on the persistence setting of the message. Persistent messages are sent synchronously while non-persistent messages are sent asynchronously. See sections above.
Therefore, when sending persistent messages, a transacted session can be used to implement your own batching and thus increase throughput.
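As an illustration, a hedged sketch of such producer-side batching with a transacted session; connection, queue, the list messages, and the batch size are all assumptions:

```java
// Batch persistent sends so the synchronous round trip to the router
// is paid once per commit instead of once per message.
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageProducer producer = session.createProducer(queue);

int batchSize = 100; // illustrative value, tune to your workload
int inBatch = 0;
for (Message m : messages) {   // messages: a List<Message> (assumption)
    producer.send(m);          // buffered at the client until commit
    if (++inBatch == batchSize) {
        session.commit();      // transfers the whole batch to the router
        inBatch = 0;
    }
}
if (inBatch > 0) {
    session.commit();          // flush the remainder
}
```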
On the consumer side, a commit on a transacted session acknowledges the delivery of all messages received within this transaction. Similarly, for non-transacted sessions in client-acknowledge mode, an acknowledge call acknowledges the delivery of all messages since the last acknowledge call. Both commit and acknowledge are synchronous calls. The modes auto-acknowledge and dups-ok-acknowledge are synonyms in SwiftMQ (there is no further optimization). In both modes, messages are automatically and asynchronously acknowledged after the message has been returned by a receive call or the onMessage method has returned. For non-durable subscribers, messages are auto-committed before they are delivered to the client's cache.
Therefore, the fastest mode for consumer sessions is non-transacted, auto-acknowledged.
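Both the session type and the acknowledge mode are chosen when the session is created; a brief sketch using the standard JMS API, where connection is assumed to be an existing Connection:

```java
// Fastest consumer setup: non-transacted, auto-acknowledge
Session consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

// Transacted session, e.g. for producer-side batching
// (the acknowledge-mode argument is ignored for transacted sessions)
Session txSession = connection.createSession(true, Session.SESSION_TRANSACTED);
```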
Flow control is used to establish a maximum throughput rate between producers and consumers and is enabled by default on all destinations. Each queue (including subscriber queues) measures the producing and consuming rates and, if certain conditions are met, returns a flow control delay to the producer. The delay is in milliseconds, and the producer waits this amount of time before returning from a send or commit call.
By default, flow control strives to keep all messages of a queue in the queue's cache. It is only activated once a threshold defined in the attribute flowcontrol-start-queuesize is reached. The default value is 400 messages. Since the default cache-size of a queue is 500 messages, all messages are stored in the queue's cache and served from there. flowcontrol-start-queuesize can be increased, but the cache-size attribute should be increased as well; otherwise the queue swaps non-persistent messages out to disk, which decreases throughput. The same applies if flowcontrol-start-queuesize is set to -1, which switches flow control off.
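For example, both attributes are set on the queue entity in the router configuration; the attribute names are taken from the text above, while the queue name and values are illustrative:

```xml
<queue name="testqueue"
       flowcontrol-start-queuesize="800"
       cache-size="1000"/>
```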
Flow control should always be used if a maximum throughput rate between producers and consumers is required.
SMQP and Prefetch Settings
A JMS producer acts on flow control within the send/commit method. For asynchronous sends, there is an attribute in the connection factory called smqp-producer-reply-interval, which specifies at which interval a send method has to wait for a reply to act on flow control delays. The default value is 20, which means the send method waits for a reply on every 20th call and may then wait on a flow control delay contained in that reply. This default of 20 is what we found to be optimal, and it is related to the default consumer cache size. It is not recommended to increase it without also changing the default consumer cache size; increasing smqp-producer-reply-interval alone would just increase the flow control delays.
Each MessageConsumer object has its own client-side cache called the consumer cache. This cache is filled asynchronously in the background, and calls to receive or onMessage are served out of it. The cache is dimensioned by the attribute smqp-consumer-cache-size, which defines its size in number of messages (default 500), and can be further limited by the attribute smqp-consumer-cache-size-kb (default 2048), which caps it in kilobytes.
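These attributes are set on the connection factory alongside the producer reply interval; the values shown are the defaults named above:

```xml
<connection-factory name="ConnectionFactory"
                    smqp-producer-reply-interval="20"
                    smqp-consumer-cache-size="500"
                    smqp-consumer-cache-size-kb="2048"/>
```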
The JMS Swiftlet contains an attribute consumer-cache-low-water-mark, which defines the number of messages in the cache at which a fill-cache request is sent from the client to the router. If this attribute were set to 0, the cache would be emptied before a refill was initiated, so there would be a gap in consuming messages. In our tests, the default of 100 messages matched the number of messages consumed between the refill request and the arrival of new messages at the cache, so there is no gap at all.
The most effective change you can make in the standard file-based Store Swiftlet is to enable or disable disk sync of the transaction log. When enabled, every single write to the transaction log is synced to disk. This is the most reliable option, but throughput is then bound to the speed of the disk.
If disk sync is enabled, the disk write cache must be disabled. On Linux systems this can be done with:
hdparm -W 0 <device>
The transaction log disk sync is enabled by setting the corresponding attribute of the transaction-log entity to true.
The throughput-related thread pools are jms.connection and jms.session.

jms.connection is used for batching and outbound writes from the router to the client. Its attribute max-threads may be reduced to 1 to avoid thread context switching and to increase the chance of getting more content into the batches. Since it is then a single thread, a write on a slow network connection would slow down the whole router, which is why the default is higher.
jms.session does the whole JMS work. If you use persistent messages, it is important to get as much work as possible done in parallel so that the Store Swiftlet's log manager can write as many log records as possible in one iteration. Therefore, the attribute max-threads should be set to 100.

If you only use non-persistent messages, the log manager is not involved, and you should instead strive to reduce thread context switching. In that case, set max-threads to 1 and you will get the highest throughput.
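In the thread pool configuration this might look as follows for the non-persistent case; only the max-threads attribute and pool names are taken from above, the exact entity layout is an assumption:

```xml
<pool name="jms.connection" max-threads="1"/>
<pool name="jms.session" max-threads="1"/>
```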
Duplicate Message Detection
SwiftMQ HA Routers and SwiftMQ CE/UR Routers have inbound duplicate message detection enabled by default on all destinations. For a SwiftMQ CE/UR Router in particular, this only covers the case where a router is shut down and restarted (e.g. for a version upgrade) and reconnecting JMS clients may send a message twice. If you don't use this functionality or can afford duplicates, you may switch duplicate message detection off for individual or all destinations. This can be done by setting the attribute
duplicate-detection-enabled of the queue/queue controller to
false (the default is true).
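On a queue entity this might look like the following; the queue name is illustrative:

```xml
<queue name="testqueue" duplicate-detection-enabled="false"/>
```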
Outbound duplicate message detection takes place on the client side and is enabled by default. To disable it, set the attribute
duplicate-message-detection of the connection factory to
false (the default is true).
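Analogous to the delivery mode example above, this is set on the connection factory:

```xml
<connection-factory name="ConnectionFactory" duplicate-message-detection="false"/>
```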
If you don't send your messages with a time-to-live, you don't use message expiration and should switch it off. If you use message expiration, however, you may consider converting the standard cleanup into a job-based cleanup.
Multiple Queue Consumers
If you use multiple concurrent consumers without selectors per queue, you may consider using a Clustered Queue instead.
You should always strive to avoid selectors. If you have a queue with many consumers on it, each with a different selector, every message has to be compared against each selector, which may significantly degrade performance. Multiple queue consumers with selectors can be converted into single consumers, each with its own distinct queue; instead of defining selectors, you then decide at the producer side which queue to send to.