.NET Performance Issues for MQ Get API Calls

If you have read any of the following blog postings then you will know that I have a bee in my bonnet about the performance of Java/JMS MQGet API calls:

  • Tuning JMS Programs for Optimum MQ Get API Calls Performance
  • Tuning Java Programs for Optimum MQ Get API Calls Performance
  • How to Improve Your Java/JMS MQ Tuning Cred.
  • Pub/Sub Java/JMS MQ MQGet API Issue

    Have you ever test-driven a nice-looking sports car and, every time you stepped on the gas pedal, thought “wow, I expected more zip”? That kind of describes the scenario for .NET applications issuing MQGet API calls: you expected more message throughput than you are getting.

    For the test set of messages used and the MQ Auditor audit file layout (in particular the BufferLength and DataLength fields), please review the information in one of the blog postings listed above.

    Test #1:

  • Load the 100 MQRFH2 messages into a queue
  • Run amqsbcg in bindings mode against the same queue
  • Here is the MQ Auditor audit file. You can see that there are exactly 100 successful MQGets and 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE). This is exactly what is to be expected. If you scroll to the right of any MQGET line, you will see that in every case the size of the buffer given to MQ (BufferLength field) is 256000 bytes.

    I have a simple C# .NET program called MQTest62.cs that can be run in either bindings mode or managed (client) mode. You can download the source code from here. The structure of the .NET program is very similar to amqsbcg: it loops, getting all messages, until the queue is empty (it does not wait for more messages).
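
    Since the MQTest62.cs source is linked rather than shown, here is a minimal, hedged sketch of what such a get-everything loop looks like using the IBM.WMQ .NET classes. The queue manager and queue names mirror the tests below; the rest is illustrative and is not the actual MQTest62 code.

      using System;
      using IBM.WMQ;

      class GetLoopSketch
      {
          static void Main()
          {
              // No connection properties are supplied, so this connects in
              // server bindings mode (as in Test #2 below).
              MQQueueManager qMgr = new MQQueueManager("MQWT2");
              MQQueue queue = qMgr.AccessQueue("TEST.Q1",
                  MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING);

              MQGetMessageOptions gmo = new MQGetMessageOptions();
              gmo.Options = MQC.MQGMO_NO_WAIT + MQC.MQGMO_FAIL_IF_QUIESCING;

              while (true)
              {
                  MQMessage msg = new MQMessage();
                  try
                  {
                      // The .NET classes choose the underlying MQGET buffer
                      // size themselves - that behavior is what is under test.
                      queue.Get(msg, gmo);
                      Console.WriteLine("Message length: " + msg.MessageLength);
                  }
                  catch (MQException mqe)
                  {
                      if (mqe.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE)
                          break; // queue is empty - stop, do not wait
                      throw;
                  }
              }

              queue.Close();
              qMgr.Disconnect();
          }
      }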

    Test #2 .NET bindings mode:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTest62 in bindings mode against the same queue
  • MQTest62.exe -m MQWT2 -q TEST.Q1

    Here is the MQ Auditor audit file. You can see that there are a total of 171 MQGets:

  • 100 successful MQGets
  • 70 unsuccessful MQGets with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)

    This means that MQTest62 performed 70% more MQGet API calls than amqsbcg to accomplish the same thing: 70 of the 100 test messages are larger than 4KB, and each of those cost a failed MQGet plus a successful retry. So, let's analyze why there were 70 unsuccessful MQGets with RC of 2080.

  • The big difference between the internal JMQI routine (Java/JMS) and the internal MQI routine used by .NET is that in .NET the larger, resized buffer is NEVER reused.
  • Hence, for every MQGet of a message larger than 4KB, the internal MQI routine will ALWAYS receive an RC of 2080 (MQRC_TRUNCATED_MSG_FAILED). The internal MQI routine will allocate a new, larger buffer and then issue a 2nd MQGet API call. This newly allocated buffer is not reused for future MQGet API calls (see the sketch after this list).
  • For the client mode test, it will be the queue manager's listener (MCA) that handles the interaction with the queue manager, and it uses the MQCallBack API call rather than the MQGet API call.
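
    IBM's internal buffer-handling code is not public, so the following C# sketch only models the two strategies implied by the audit files: the resize-and-discard pattern that the .NET audit trail shows, and the resize-and-keep pattern that the JMQI routine uses. The mqGet delegate is a stand-in for the real MQGET call; nothing here is actual IBM code.

      using System;

      static class BufferStrategySketch
      {
          const int DefaultBufferSize = 4096; // the 4KB starting size

          // Pattern A (what the .NET audit trail implies): the enlarged
          // buffer is discarded, so EVERY message over 4KB costs two
          // MQGET calls - one failing with RC 2080, then one succeeding.
          public static byte[] GetDiscarding(
              Func<byte[], (bool Truncated, int DataLength)> mqGet)
          {
              byte[] buffer = new byte[DefaultBufferSize]; // always restart at 4KB
              var result = mqGet(buffer);                  // RC 2080 if msg > 4KB
              if (result.Truncated)
              {
                  buffer = new byte[result.DataLength];    // grow to reported length
                  mqGet(buffer);                           // 2nd MQGET succeeds
              }
              return buffer;                               // enlarged buffer NOT kept
          }

          // Pattern B (what JMQI does): the enlarged buffer is kept, so
          // later large messages succeed on the first MQGET.
          static byte[] reusableBuffer = new byte[DefaultBufferSize];

          public static byte[] GetReusing(
              Func<byte[], (bool Truncated, int DataLength)> mqGet)
          {
              var result = mqGet(reusableBuffer);
              if (result.Truncated)
              {
                  reusableBuffer = new byte[result.DataLength]; // grow once
                  mqGet(reusableBuffer);                        // retry once
              }
              return reusableBuffer;                            // reused next call
          }
      }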

    Test #3 .NET managed mode:

  • Load the 100 MQRFH2 messages into a queue
  • Run MQTest62 in managed (client) mode against the same queue
  • MQTest62.exe -m MQWT2 -q TEST.Q1 -h 127.0.0.1 -p 1416 -c TEST.CHL
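
    For reference, here is a hedged sketch of how a fully managed client connection with those same parameters is made with the IBM.WMQ classes. The property constants are the real ones; whether MQTest62 does exactly this is an assumption.

      using System.Collections;
      using IBM.WMQ;

      class ManagedConnectSketch
      {
          static void Main()
          {
              // TRANSPORT_MQSERIES_MANAGED selects the fully managed .NET
              // client, so no local MQ client installation is involved.
              Hashtable props = new Hashtable();
              props.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
              props.Add(MQC.HOST_NAME_PROPERTY, "127.0.0.1");
              props.Add(MQC.PORT_PROPERTY, 1416);
              props.Add(MQC.CHANNEL_PROPERTY, "TEST.CHL");

              MQQueueManager qMgr = new MQQueueManager("MQWT2", props);
              // ... same get loop as the bindings-mode sketch above ...
              qMgr.Disconnect();
          }
      }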

    Here is the MQ Auditor audit file. You can see that there are a total of 170 MQCallBacks and 1 MQGet:

  • 100 successful MQCallBacks
  • 70 unsuccessful MQCallBacks with RC of 2080 (MQRC_TRUNCATED_MSG_FAILED)
  • 1 unsuccessful MQGet with RC of 2033 (MQRC_NO_MSG_AVAILABLE)

    This means that MQTest62 performed 70% more MQCallBack API calls than amqsbcg to accomplish the same thing. So, let's analyze why there were 70 unsuccessful MQCallBacks with RC of 2080.

  • This is truly a funny one, and it is completely different from both Test #2 and what the internal JMQI routine (Java/JMS) does.
  • Before every MQCallBack API call, you will see that there is an MQCB API call. In most cases, the MQCB API call sets the MaxMsgLength field to 4KB; it rarely reuses any reallocated buffer. Hence, most of the time, for a message larger than 4KB, the internal MQI routine will receive an RC of 2080 (MQRC_TRUNCATED_MSG_FAILED), allocate a new, larger buffer and then issue a 2nd MQCallBack API call.
  • And then there are some really weird things: on line # 79, the MQCB API call sets the MaxMsgLength field to 4096. On line # 80, the MQCallBack is issued but it fails with an RC of 2080. If you look a little to the right, you will see “CBC_BufferLength=110592, CBC_DataLength=6587”. The buffer is larger than the actual length of the message data, but because the MQCB API call set the MaxMsgLength field to 4096, the MQCallBack API call failed anyway. Very, very strange.
  • IBM claims that the internal MQI routine that auto-adjusts the MQGet/MQCallBack buffer size up and down is working well and that performance is not an issue. Clearly, this is not true.

    I would strongly suggest that someone open a PMR with IBM to get the .NET internal MQI routine for auto-adjusting the MQGet/MQCallBack buffer size fixed.

    Also, I cannot find any environment variables that control either the buffer size or the threshold value for the auto-adjusting routine. I would also ask IBM to add the same 2 environment variables that are used by the internal JMQI routine for Java/JMS:

  • com.ibm.mq.jmqi.defaultMaxMsgSize
  • com.ibm.mq.jmqi.smallMsgBufferReductionThreshold
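
    For example, on the Java side these values can be supplied as system properties on the java command line (the 1 MB value, the <threshold> placeholder and the application name are illustrative only):

      java -Dcom.ibm.mq.jmqi.defaultMaxMsgSize=1048576
           -Dcom.ibm.mq.jmqi.smallMsgBufferReductionThreshold=<threshold>
           MyJmsApp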

    Regards,
    Roger Lacroix
    Capitalware Inc.
