Tuning JMS Programs for Optimum MQ Get API Calls Performance

On Wednesday, I decided to do some testing of simple (1) MQ for JMS, (2) MQ for Java and (3) procedural language (i.e. amqsbcg) programs to show that setting the JVM environment variable com.ibm.mq.jmqi.defaultMaxMsgSize to an appropriate size can make a JMS application 10 to 15% faster because there would not be any double MQ Get API calls.

My original goal was to use MQ Auditor to show everyone what goes on behind the scenes, starting with an MQ for JMS program, then an MQ for Java program and amqsbcg, because the latter 2 do not have an issue with double MQ Get API calls. And then a run of the MQ for JMS program with the JVM environment variable com.ibm.mq.jmqi.defaultMaxMsgSize set, to show that the double gets no longer happen.

What should have taken me an hour turned into a shit-show, with me saying WTF over and over again every time I ran one of my Java programs. I’ve now spent 2 full days on this and I’m pretty sure I have it straight in my head; the question will be, can I actually explain it without confusing the crap out of everyone!!!

So, I’m not going to be showing you the MQ Auditor audit files from an MQ for Java program in this blog posting. I will do a complete write-up of it in the next blog posting because it is the only way I can think of not to confuse everyone. Hence, this blog posting will be about an MQ for JMS program and a procedural language program (i.e. amqsbcg).

For each test run, I used the same 100 non-persistent MQRFH2 (aka JMS) messages (a rough loader sketch follows the list):

  • The first 25 messages are random sized messages between 735 bytes and 200KB
  • The next 25 messages are 8439 bytes in size
  • The next 24 messages are 735 bytes in size
  • The next 15 messages are 8439 bytes in size
  • The next 11 messages are 12298 bytes in size
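
For anyone who wants to recreate similar test data, here is a rough loader sketch; it is not my actual loader program. It uses MQ for JMS in bindings mode against the MQWT2 queue manager and TEST.Q1 queue used in the tests below; the class name, the padding character and the exact random sizing are my own assumptions, and the MQ message lengths will end up a little larger than the text body once the MQRFH2 header is added.

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.*;
import java.util.Arrays;
import java.util.Random;

public class LoadTestMessages
{
   public static void main(String[] args) throws JMSException
   {
      MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
      cf.setQueueManager("MQWT2");                          // local queue manager
      cf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);    // bindings mode

      QueueConnection conn = cf.createQueueConnection();
      QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
      QueueSender sender = session.createSender(session.createQueue("TEST.Q1"));
      sender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);  // non-persistent messages

      Random rnd = new Random();
      int[] sizes = new int[100];
      for (int i = 0;  i < 25;  i++) sizes[i] = 735 + rnd.nextInt(200*1024 - 735); // 25 random sizes
      for (int i = 25; i < 50;  i++) sizes[i] = 8439;                              // 25 x 8439 bytes
      for (int i = 50; i < 74;  i++) sizes[i] = 735;                               // 24 x 735 bytes
      for (int i = 74; i < 89;  i++) sizes[i] = 8439;                              // 15 x 8439 bytes
      for (int i = 89; i < 100; i++) sizes[i] = 12298;                             // 11 x 12298 bytes

      char[] pad = new char[200*1024];
      Arrays.fill(pad, 'X');
      for (int size : sizes)
         sender.send(session.createTextMessage(new String(pad, 0, size)));

      sender.close();
      session.close();
      conn.close();
   }
}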

A message consumer will not always receive the same sized message; hence, the first 25 messages are of a random size. The reason the latter 75 messages are in groups of same sized messages is because of something Steve Webb said in his IBM Developer question/answer post called "MAXMSGL questions in regards to MQ JMS application". Near the bottom he said:

If a number of smaller messages are then consumed, for example 10 messages at 5KB following on from the large message, the MQ-JMS classes will downsize the buffer to 5KB, and so on.

I was curious about when and what auto-resizing was going on. Hence, that is why I structured the last 75 messages the way I did.

Now to the layout of MQ Auditor audit files. An MQ Auditor audit file is a simple CSV (Comma Separated Values) file, so you can load it into your spreadsheet program for easy viewing. Each API call is on a separate line in the audit file.
i.e.
{Date}{Time}{API Call}{PID}{TID}{CC}{RC}{UserId}{HConn}{MQ Structures}

    where

  • PID is Process Id
  • TID is Thread Id
  • CC is completion code
  • RC is reason code
  • HConn is the connection handle

Note: For MQGet API calls, after the MQ Structures are 2 important fields: BufferLength and DataLength. After the BufferLength and DataLength fields, there will be the internal headers (i.e. MQRFH2, MQIMS, etc.), if present, and the message data.

  • The BufferLength field is the length of the buffer used on the MQGet API call.
  • The DataLength field is the length of the message data returned in the buffer that was used on the MQGet API call.

Here is a sample MQGet API line from an MQ Auditor audit file with the BufferLength and DataLength fields highlighted.

2019/08/02 16:06:54.320703, MQXF_GET , A, PID=12456, TID=1, CC=0, RC=0, UserId=tester, HConn=20971526, HObj=2, GMO_Options=MQGMO_NO_WAIT+MQGMO_BROWSE_NEXT+MQGMO_ACCEPT_TRUNCATED_MSG, GMO_WaitInterval=0, GMO_ResolvedQName=TEST.Q1, GMO_MatchOptions=MQMO_NONE, MD_PutDate=2019/08/03, MD_PutTime=00:19:08.00, MD_MsgId=414D51204D5141312020202020202020134CCD4020000B01, MD_Format=MQHRF2, MD_MsgType=MQMT_DATAGRAM, MD_Persistence=MQPER_NOT_PERSISTENT, BufferLength=256000, DataLength=725, RFH2_CCSID=437, RFH2_Encoding=273, RFH2_Format=MQSTR, RFH2_Folder1=jms_text , RFH2_Folder2=THis is a value for j1this is another value, RFH2_Folder3=This is a value for USR property U1 , MsgDataAsHex=496E204D515365726965732C20696E746572636F6D6D756E69,
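
Since every field after the timestamp is a name=value pair, pulling the BufferLength and DataLength values out of an audit line is straightforward. Here is a minimal, hypothetical parsing sketch (plain Java, my own class and method names) that simply assumes the comma-separated name=value layout shown above:

public class ParseAuditLine
{
   // Return the numeric value of a "name=value" field on one audit line, or -1 if absent
   static long fieldAsLong(String line, String name)
   {
      for (String part : line.split(","))
      {
         String p = part.trim();
         if (p.startsWith(name + "="))
            return Long.parseLong(p.substring(name.length() + 1).trim());
      }
      return -1L;
   }

   public static void main(String[] args)
   {
      // any MQXF_GET line from the audit file would do here
      String line = "2019/08/02 16:06:54.320703, MQXF_GET , A, PID=12456, TID=1, CC=0, RC=0, BufferLength=256000, DataLength=725";
      System.out.println("BufferLength = " + fieldAsLong(line, "BufferLength"));
      System.out.println("DataLength   = " + fieldAsLong(line, "DataLength"));
   }
}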

To set the baseline, I am going to be using the MQ sample program called amqsbcg.exe. I updated the BUFFERLENGTH define (line # 70) to be 256000 (from 65536), then compiled and linked the program. The reason I updated the define is that I don’t want to see a reason code of 2079 (MQRC_TRUNCATED_MSG_ACCEPTED) for messages larger than 65536 bytes, as it may confuse some people.

#define    BUFFERLENGTH  256000  /* Max length of message accepted */

    Test #1:

  • Load the 100 MQRFH2 messages into a queue
  • Run amqsbcg in bindings mode against the same queue

Here is the MQ Auditor audit file. You can see that there are exactly 100 successful MQGets and 1 unsuccessful MQGet with an RC of 2033 (MQRC_NO_MSG_AVAILABLE). This is exactly what is to be expected. If you scroll to the right of any MQGet line, you will see that in every case the size of the buffer given to MQ (the BufferLength field) is 256000 bytes.

For the MQ for JMS program, I will be using one of my samples called MQTestJMS12L.java. You can download the source code from here. The structure of the JMS program is very similar to amqsbcg. It loops getting all messages until the queue is empty (it does not wait for more messages).

    Test #2:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTestJMS12L in bindings mode against the same queue
java MQTestJMS12L -m MQWT2 -q TEST.Q1

The loop for getting messages is:

receiver = session.createReceiver(myQ);
while (more)
{
   msg = receiver.receiveNoWait();                    // do not wait for more messages
   if (msg != null)
      MQTestJMS12L.logger("Incoming Msg>>>"+msg);
   else
      more = false;                                   // queue is empty - stop looping
}
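
MQTestJMS12L itself handles the command-line parameters, logging and error handling; below is only a rough, self-contained sketch of the bindings-mode setup around that loop, using the MQWT2 queue manager and TEST.Q1 queue from the command line above. The class name and the rest of the plumbing are my own assumptions, not the actual MQTestJMS12L source.

import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.*;

public class DrainQueue
{
   public static void main(String[] args) throws JMSException
   {
      MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
      cf.setQueueManager("MQWT2");                          // -m parameter
      cf.setTransportType(WMQConstants.WMQ_CM_BINDINGS);    // bindings mode

      QueueConnection conn = cf.createQueueConnection();
      QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
      Queue myQ = session.createQueue("TEST.Q1");           // -q parameter
      QueueReceiver receiver = session.createReceiver(myQ);
      conn.start();                                         // must start before receiving

      int count = 0;
      boolean more = true;
      while (more)
      {
         Message msg = receiver.receiveNoWait();            // do not wait for more messages
         if (msg != null)
            count++;
         else
            more = false;                                   // queue is empty
      }
      System.out.println("Messages consumed: " + count);

      receiver.close();
      session.close();
      conn.close();
   }
}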

Here is the MQ Auditor audit file. You can see that there are a total of 111 MQGets:

  • 100 successful MQGets
  • 10 unsuccessful MQGets with an RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with an RC of 2033 (MQRC_NO_MSG_AVAILABLE)

This means that MQTestJMS12L performed 10% more MQGet API calls than amqsbcg to accomplish the same thing. So, let’s analyze why there were 10 unsuccessful MQGets with an RC of 2080.

  • For the 1st MQGet API call, the internal JMQI routine allocated a 4096 byte buffer and the size of the message was only 725 bytes, so the call succeeded.
  • The 2nd MQGet API call failed with an RC of 2080 because the internal JMQI routine was using a 4096 byte buffer but the message length was 4602 bytes. So, the internal JMQI routine allocated a new 4602 byte buffer and retrieved the message on the 3rd MQGet API call.
  • The 4th, 10th, 12th, 14th, 21st and 31st MQGet API calls all failed with an RC of 2080. For each occurrence, the internal JMQI routine allocated a new buffer the size of the message it had failed to get and retrieved the message on the following MQGet API call.
  • On the 44th MQGet API call, for some unknown and very strange reason, the internal JMQI routine had allocated a new 4096 byte buffer. If you look closely, you will see that the 44th MQGet is the 11th MQGet call with a smaller message size (8439 bytes) than what the internal JMQI routine had previously allocated for a buffer size (202601 bytes). Hence, the 44th MQGet API call failed with an RC of 2080, because the internal JMQI routine was using a 4096 byte buffer but the message length was 8439 bytes. So, the internal JMQI routine allocated a new 8439 byte buffer and retrieved the message on the 45th MQGet API call.
  • On the 70th MQGet API call, for some unknown and very strange reason, the internal JMQI routine again allocated a new 4096 byte buffer. If you look closely, you will see that the 70th MQGet is the 38th MQGet call with a smaller message size (725 bytes) than what the internal JMQI routine had allocated for a buffer size (8439 bytes). This is such strange behaviour.
  • The 83rd MQGet API call failed with an RC of 2080 because the internal JMQI routine was using a 4096 byte buffer but the message length was 8439 bytes. So, the internal JMQI routine allocated a new 8439 byte buffer and retrieved the message on the 84th MQGet API call.
  • The 99th MQGet API call failed with an RC of 2080 because the internal JMQI routine was using an 8439 byte buffer but the message length was 12286 bytes. So, the internal JMQI routine allocated a new 12286 byte buffer and retrieved the message on the 100th MQGet API call.

I have no idea what the internal JMQI routine is doing but it is very, very strange.
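
To make the pattern in the bullets above easier to follow, here is a plain-Java illustration of a resize-and-retry buffer strategy. It is NOT the real JMQI code, just my reading of the audit files: the get is attempted with the current buffer, a truncation (2080) forces a reallocation to the message length plus a second get, and that second get is the "extra" API call.

import java.util.Random;

public class BufferResizeDemo
{
   public static void main(String[] args)
   {
      Random rnd = new Random();
      int buffer = 4096;          // observed JMQI starting buffer size
      int apiCalls = 0;           // total simulated MQGet API calls
      int extraCalls = 0;         // calls caused only by truncation (2080) retries

      for (int i = 0; i < 100; i++)
      {
         int msgLen = 735 + rnd.nextInt(200*1024 - 735);    // a message arrives

         apiCalls++;                                        // first get attempt
         if (msgLen > buffer)                               // simulated RC 2080
         {
            buffer = msgLen;                                // grow buffer to the message length
            apiCalls++;                                     // reissue the get to retrieve it
            extraCalls++;
         }
         // Note: the audit files also show the buffer being shrunk back to 4096
         // after a run of smaller messages, which triggers the later 2080s in Test #2.
      }

      System.out.println("100 messages, " + apiCalls + " MQGet calls, " + extraCalls + " of them retries");
   }
}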

Now, let’s do the very same test again with the MQTestJMS12L program but add the com.ibm.mq.jmqi.defaultMaxMsgSize JVM environment variable.

    Test #3:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTestJMS12L in bindings mode with com.ibm.mq.jmqi.defaultMaxMsgSize set to 250000 against the same queue
java -Dcom.ibm.mq.jmqi.defaultMaxMsgSize=250000 MQTestJMS12L -m MQWT2 -q TEST.Q1

Here is the MQ Auditor audit file. You can see that there are exactly 100 successful MQGets and 1 unsuccessful MQGet with an RC of 2033 (MQRC_NO_MSG_AVAILABLE). This is exactly what is to be expected. If you scroll to the right of any MQGet line, you will see that in every case the size of the buffer given to MQ (the BufferLength field) is 250000 bytes.

There you go. If you don’t want the internal JMQI routine to cause excessive MQGet API calls, then set the com.ibm.mq.jmqi.defaultMaxMsgSize JVM environment variable to a value appropriate for your message sizes and the unnecessary MQGet API calls go away.
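
The -D flag on the command line (as in Test #3) is the simplest way to set it. As an alternative sketch, the same system property could be set programmatically, provided it is set before the MQ classes for JMS first read it; since the property is undocumented, when exactly that happens is an assumption on my part.

public class SetMaxMsgSize
{
   public static void main(String[] args)
   {
      // must happen before the MQ classes for JMS first read the property
      System.setProperty("com.ibm.mq.jmqi.defaultMaxMsgSize", "250000");

      // ... then create the connection factory, connection and receiver as usual
   }
}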

Finally, my 100 test messages are just a simple sampling of messages and, as you can see, the MQTestJMS12L program without the com.ibm.mq.jmqi.defaultMaxMsgSize JVM environment variable issued 10% more MQGet API calls. I can easily see that number climbing to 15%-18% under a normal business load. Something to think about!!

Update: Currently, the JVM environment variable ‘com.ibm.mq.jmqi.defaultMaxMsgSize’ is undocumented. Hence, the usage may change in a future release.

Regards,
Roger Lacroix
Capitalware Inc.
