
Thursday, 5 January 2012

Steps to analyze JVM heap usage for java.lang.OutOfMemoryError




Analyzing Java virtual machine performance in WebSphere Process Server

Level: Intermediate
Performance Analyzer is a Java-based graphical tool that you can use to analyze the performance of WebSphere Process Server V6.1. This article introduces Performance Analyzer and shows you how to use the simple Java-based tool to check memory stability and predict server crashes.

Garbage collection is the virtual machine (VM) process of deallocating unused Java™ objects in the Java heap. The Java heap is where the objects of a Java program live. It is a repository for live objects, dead objects, and free memory. When an object can no longer be reached from any pointer in the running program, the object is garbage.
The Java virtual machine (JVM) heap size determines how often and how long the VM spends collecting garbage. An acceptable rate for garbage collection is application-specific and you should adjust it after analyzing the actual time and frequency of garbage collections.
If you set a large heap size, full garbage collection is slower, but it occurs less frequently. If you set your heap size in accordance with your memory needs, full garbage collection is faster, but occurs more frequently.
The goal of tuning your heap size is to minimize the time that you spend doing garbage collection while maximizing the number of clients that you can handle at a given time.
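On most JVMs the heap limits are controlled with the standard -Xms (initial heap) and -Xmx (maximum heap) options. As a minimal, illustrative sketch (the application jar name is hypothetical; the sizes match the sample values used later in this article):

java -Xms512m -Xmx1536m -jar MyApplication.jar

In WebSphere Process Server you normally set these values through the admin console fields described later in this article rather than on the command line.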
You might see the following Java error if you are running out of heap space:
java.lang.OutOfMemoryError <<no stack trace available>>


IBM® WebSphere® Process Server V6.1 Performance Analyzer is a Java-based tool that dynamically shows the heap usage (used and free heap) for WebSphere Process Server V6.1.
You can use Performance Analyzer to:
  • Determine if the memory usage is stable.
  • Predict if JVM is going to run out of memory and crash eventually.



The Performance graph is a visual aid that shows you how the JVM is performing based on the load in the system. The graph updates itself dynamically.
The lines in the graph are color coded to differentiate between current JVM heap size, heap in use, and free heap. The color codes are described as follows in Table 1:
Table 1. Color codes in the Performance graph
RED: Shows the current JVM heap size. The maximum value specified in WebSphere Process Server for this sample run is 1.5 GB, as seen on the y axis.
GREEN: Shows the heap in use. If this line continuously increases towards the red line, there is a memory leak and, eventually, performance degradation.
BLUE: Shows the free heap. If your WebSphere Process Server has performance issues, you will see this line decreasing continuously.
'X': Shows the allocation failures at various GC cycles.

A stable JVM is one in which the heap size becomes constant after a certain period of time, see Figure 1.
An unstable JVM is one in which the heap size continuously increases over many GC cycles, see Figure 2.


Performance Analyzer provides these benefits:
  • Performance Analyzer helps you predict if the JVM is going to run out of memory and crash eventually.
  • Performance Analyzer does not require that you have a great deal of knowledge about the JVM heap.
  • The performance graph provides just enough data and visual content to easily demonstrate and report a performance issue.
  • Performance Analyzer is Java-based, hence it runs on any platform.
  • The Performance Analyzer graph updates itself dynamically.
  • Performance Analyzer provides a report of the heap analysis.
  • Performance Analyzer also has a command line interface.



Performance Analyzer feeds off the data in the verbose GC logs; however, the default settings in WebSphere Process Server do not create verbose GC logs, so verbose GC needs to be turned on manually. Use the following steps to turn it on.
  1. Log in to the WebSphere Process Server admin console.
  2. Go to Application servers => server name => Java and Process Management => Process definition => Java Virtual Machine.
  3. Enable the Verbose Garbage collection check box, see Figure 3.


    Figure 3. Enabling Verbose GC
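Selecting this check box typically corresponds to passing the standard -verbose:gc flag to the JVM. If you prefer, you could set it yourself (a sketch; not required if you use the check box) under Generic JVM arguments on the same panel:

-verbose:gc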
In addition to turning on Verbose GC collection, you might want to tune your JVM for optimum performance. Although many articles are available on the Web for JVM tuning, use the steps below for this article:
  1. From the WebSphere Process Server admin console, select Servers => Application Servers.
  2. Click server name.
  3. Under server infrastructure expand “Java and Process Management” by clicking on the “+” icon.
  4. Select link Process definition.
  5. Select Process definition => Additional properties => Java Virtual Machine.
  6. Increase the maximum heap size to 1024 MB or above, depending on your configuration.
  7. Paging (that is, page faults) can cause severe performance problems for your application and result in long garbage collection pauses. To avoid paging, ensure that -Xmx is not set to more than 75 percent of the physical memory of the system.
  8. Fragmentation refers to the existence of free chunks of memory in the heap that are too small for object allocation.
The following recommendations will help you avoid fragmentation:
  • The easiest way to avoid fragmentation is to increase the heap size within its natural limits.

-XXcompactratio:nn : Sets the percentage (nn) of the heap that should be compacted at each old collection.
-XXfullcompaction : Compacts the entire heap at each old collection.


  • Normally, a partial compaction occurs with each old collection. If you think this default compaction ratio is either insufficient or overkill, you can tune it (for example, turn it down to reduce pause times) by using some combination of the compaction start-up options listed above.
  • Use a generational garbage collector. During a young collection (nursery garbage collection) the objects that are found live in the nursery are moved to the old generation. This has the positive side-effect of compacting the objects while they are moved.
  • For Generic JVM arguments:
    Set "-Dibm.dg.trc.print=st_verify -Xgcpolicy:optavgpause -Xcompactgc -Xnopartialcompactgc"
  • For the heap size settings on the same panel:
    Set Initial heap size to 512 MB
    Set Maximum heap size to 1536 MB
  • A very useful guide for tuning JVM parameters can be found at:
    WebSphere z/OS Information Center: Tuning Java virtual machines
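Putting the values above together, the JVM arguments used for the sample runs in this article roughly correspond to the following (a sketch; -Xms and -Xmx are the command-line equivalents of the Initial and Maximum heap size fields):

-Xms512m -Xmx1536m -verbose:gc -Dibm.dg.trc.print=st_verify -Xgcpolicy:optavgpause -Xcompactgc -Xnopartialcompactgc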



To run Performance Analyzer:
  1. Enable Verbose GC via WebSphere Process Server Admin Console (with or without tuning parameters - jvm arguments).
  2. Perform the WebSphere Process Server activities that you suspect cause performance degradation (for example, install/uninstall applications, run your scenario for a certain time period, and so forth). After performing this step, you will see a native_stderr.log generated in your logs directory.
  3. Download the Performance Analyzer toolkit (see the Download section), and run it against the native_stderr.log as follows to find the valid arguments (see Figure 4).


    Figure 4. Command line arguments
  4. Run the provided toolkit against the native_stderr.log as follows to get a summary and the total GC cycles used, see Figure 5 (This will also show a graph).


    Figure 5. Summary of GC cycles used




    Figure 6. Performance Graph showing an unstable JVM that will eventually run out of memory
For an explanation of color codes and legends in the Graph, see Table 1.
  1. Use the '-g' option if you do not want the graph to be displayed.
    Example: java -jar hsa_jdk15.jar -f C:\HeapSpaceAnalyzer\sample_jdk15.log -g
  2. Use the '-a' option if you do not want the graph to display the allocation failures.
    Example: java -jar hsa_jdk15.jar -f c:\logs\native_stderr.log -g -a
  3. With the '-d' option, the graph dynamically updates every 60 seconds (the default refresh rate).
    Example: java -jar hsa_jdk15.jar -f c:\logs\native_stderr.log -d
  4. Use the '-t' option to change the refresh rate (in milliseconds) at which the graph dynamically updates.
    Example: java -jar hsa_jdk15.jar -f c:\logs\native_stderr.log -d -t 30000
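These options can also be combined. For example, to watch the log dynamically with a 30-second refresh while hiding the allocation failure markers (a sketch reusing the same hypothetical log path as the examples above):

java -jar hsa_jdk15.jar -f c:\logs\native_stderr.log -d -t 30000 -a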



Hopefully, the ease of use of Performance Analyzer, the dynamic nature of the Performance graph, and the platform independence provided by Java make it a useful tool for testers, support teams, and consultants to keep an eye on the performance of WebSphere Process Server V6.1 on any platform.


WebSphere MQ Questions


1. Path to find the MQ logs on HP-UX for a specific queue manager

/var/mqm/log/UFISMQ/active

Errors

/var/mqm/errors

2. How to alter LogPrimaryFiles, LogFilePages, and LogPath for an already created queue manager?


Ans: Using amqhlctl.lfh (the log control file). By replacing this file as in the procedure below, you can change the primary logs, secondary logs, and log file pages.


Ex:
1.) Create a dummy qmgr with the appropriate log file size; let's call the qmgr DUMMY.
$ crtmqm -lf 8192 -lp 10 -ls 5 DUMMY
NOTE: crtmqm with large log files may take a few minutes.

2.) Next, stop the qmgr for which you are reallocating the logs.
$ endmqm -i queue-manager-name

3.) Then change directory ( cd ) to the current qmgr log directory and
delete ( rm ) file amqhlctl.lfh.
$ cd /var/mqm/log/queue-manager-name
$ rm amqhlctl.lfh

4.) Then change directory to the active directory and delete the old
logs.
$ cd active
$ rm *.LOG

5.) Copy ( cp ) from the DUMMY qmgr directory the amqhlctl.lfh file and the newly created logs to the directory of the queue manager for which you are reallocating the logs.
$ cd /var/mqm/log/DUMMY
$ cp -pr * /var/mqm/log/Old_QMgr

6.) Update the current qmgr qm.ini Log: stanza with the updated logfile
sizes.

Log:
LogPrimaryFiles=10
LogSecondaryFiles=5
LogFilePages=8192
LogType=CIRCULAR
LogBufferPages=17
LogPath=/var/mqm/log/QMMQIP01/
LogWriteIntegrity=TripleWrite

7.) Delete the Dummy qmgr and restart the production qmgr.
$ dltmqm DUMMY
$ strmqm queue-manager-name


NOTE: You're better off just backing up the QM definitions with MS03, deleting the QM, and recreating it with the correct size logs.

In fact only LogFilePages requires that you recreate the queue manager.

For all other changes the qmgr should be stopped.
Modifying the number of primary/secondary log files, or modifying the LogPath and moving the files accordingly, does not (IIRC) require you to recreate the queue manager. Just be sure you have it stopped while you move the logs and change the Log stanza. Restart the qmgr and it should now conform to the new settings.

However, as previously stated, changing the log file size will mandate a delete and recreate of the qmgr.


……………………………………………………………………………………………………………………………………………………………………………………………………………..


·  To stop any listeners associated with the queue manager, use the command:

endmqlsr -m QMgrName
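For example, for the queue manager used in the log path above (name is illustrative):

endmqlsr -m UFISMQ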


Path for sample MQ programs on HP_UX system

/opt/mqm/samp
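The compiled sample programs are typically under /opt/mqm/samp/bin. For instance, amqsput and amqsget can be used for a quick put/get test (a sketch; the queue name is hypothetical and the queue manager name is taken from the notes above):

/opt/mqm/samp/bin/amqsput TEST.QUEUE UFISMQ
/opt/mqm/samp/bin/amqsget TEST.QUEUE UFISMQ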

Installation:

The WebSphere® MQ product code is installed in /opt/mqm. If you cannot install the product code in the /opt/mqm file system because the file system is too small to contain the product, you can do one of the following:
  1. Create a new file system and mount it as /opt/mqm. If you choose this option, the new file system must be created and mounted before installing the product code.
  2. Create a new directory anywhere on your machine, and create a symbolic link from /opt/mqm to this new directory. For example:
  mkdir /bigdisk/mqm
  ln -s /bigdisk/mqm /opt/mqm
If you choose this option, the new directory must be created, and the link created, before installing the product code.

/var/mqm - WebSphere MQ working data.
Path to find the channels, qmstatus.ini, and the queues for a specific queue manager (for example, UFISMQ):
/var/mqm/qmgrs/UFISMQ
Creating the user ID and group
Create the required user ID and group ID before you install WebSphere® MQ. Both user ID and group ID must be set to mqm. For stand-alone machines, you can create the new user ID and group IDs locally; for machines administered in a network information services (NIS) domain, an administrator must create the IDs on the NIS master server machine.
It is also suggested that you set the mqm user's home directory to /var/mqm.
You can use the System Administration Manager (SAM) to work with user IDs.
If you want to run administration commands, for example crtmqm (create queue manager) or strmqm (start queue manager), your user ID must be a member of the mqm group.
Users do not need mqm group authority to run applications that use the queue manager; it is needed only for the administration commands.
Note: No symbolic links are required for the 64-bit WebSphere MQ libraries required by WebSphere MQ commands.
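On a stand-alone HP-UX machine, the IDs can also be created from the command line instead of SAM; a minimal sketch (group and user must both be named mqm, home directory as suggested above):

groupadd mqm
useradd -g mqm -d /var/mqm mqm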

There are two configuration files available in IBM MQ:

1. mqs.ini: This file gives information about all queue managers and the default queue manager.
Path: /var/mqm/mqs.ini
2. qm.ini: This is the configuration file defined for each specific queue manager. It is created automatically when the queue manager is created.
Path: /var/mqm/qmgrs/QNAME
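For reference, a minimal mqs.ini might look like the following (a sketch; the queue manager name is taken from the earlier examples, and the generated Directory value can differ because of name transformation):

DefaultQueueManager:
   Name=UFISMQ

QueueManager:
   Name=UFISMQ
   Prefix=/var/mqm
   Directory=UFISMQ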

Queue manager configuration files, qm.ini
A queue manager configuration file, qm.ini, contains information relevant to a specific queue manager. There is one queue manager configuration file for each queue manager. The qm.ini file is automatically created when the queue manager with which it is associated is created.
A qm.ini file is held in the root of the directory tree occupied by the queue manager. For example, the path and the name for a configuration file for a queue manager called QMNAME is:
/var/mqm/qmgrs/QMNAME/qm.ini
The queue manager name can be up to 48 characters in length. However, this does not guarantee that the name is valid or unique. Therefore, a directory name is generated based on the queue manager name. This process is known as name transformation. For a description, see Understanding WebSphere MQ file names.
Figure 1 shows how groups of attributes might be arranged in a queue manager configuration file in WebSphere® MQ for UNIX® systems.
Figure 1. Example queue manager configuration file for WebSphere MQ for UNIX systems
#* Module Name: qm.ini                                             *#
#* Type       : WebSphere MQ queue manager configuration file      *#
#  Function   : Define the configuration of a single queue manager *#
#*                                                                 *#
#*******************************************************************#
#* Notes      :                                                    *#
#* 1) This file defines the configuration of the queue manager     *#
#*                                                                 *#
#*******************************************************************#

ExitPath:
   ExitsDefaultPath=/var/mqm/exits
   ExitsDefaultPath64=/var/mqm/exits64

Service:
   Name=AuthorizationService
   EntryPoints=13

ServiceComponent:
   Service=AuthorizationService
   Name=MQSeries.UNIX.auth.service
   Module=/opt/mqm/bin/amqzfu    (see Note 1)
   ComponentDataSize=0

Log:
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=1024
   LogType=CIRCULAR
   LogBufferPages=0
   LogPath=/var/mqm/log/saturn!queue!manager/

XAResourceManager:
   Name=DB2 Resource Manager Bank
   SwitchFile=/usr/bin/db2swit
   XAOpenString=MQBankDB
   XACloseString=
   ThreadOfControl=THREAD

Channels:    (see Note 2)
   MaxChannels=20
   MaxActiveChannels=100
   MQIBindType=STANDARD

TCP:
   KeepAlive = Yes

QMErrorLog:
   ErrorLogSize=262144
   ExcludeMessage=7234
   SuppressMessage=9001,9002,9202
   SuppressInterval=30

ApiExitLocal:
   Name=ClientApplicationAPIchecker
   Sequence=3
   Function=EntryPoint
   Module=/usr/Dev/ClientAppChecker
   Data=9.20.176.20
Notes for Figure 1:
  1. /usr/mqm/bin/amqzfu on AIX®
  2. For more information on the Channel stanza, see the WebSphere MQ Intercommunications manual.
…………………………………………………………………………………………………..

MQ Manager Stops Responding To JMS Requests
 Technote (FAQ)

Problem
MQ Manager stops responding to JMS requests.

SystemOut.log:
FreePool E J2CA0046E: Method createManagedConnctionWithMCWrapper caught an exception during creation of the ManagedConnection for resource JMS$cftestcf$JMSManagedConnection@1373738090, throwing ResourceAllocationException. Original exception: javax.resource.spi.ResourceAdapterInternalException: Failed to create session
at com.ibm.ejs.jms.JMSCMUtils.mapToResourceException(JMSCMUtils.java:125)
at com.ibm.ejs.jms.JMSManagedSession.<init>(JMSManagedSession.java:213)
. . .
javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager for 'xldn0384abc:XYZ123'
at com.ibm.mq.jms.services.ConfigEnvironment.newException(ConfigEnvironment.java:546)
at com.ibm.mq.jms.MQConnection.createQM(MQConnection.java:1450)
at com.ibm.mq.jms.MQConnection.createQMNonXA(MQConnection.java:960)
at
. . .
com.ibm.mq.MQException: MQJE001: Completion Code 2, Reason 2009
at com.ibm.mq.MQManagedConnectionJ11.<init>(MQManagedConnectionJ11.java:172)
at com.ibm.mq.MQClientManagedConnectionFactoryJ11._createManagedConnection(MQClientManagedConnectionFactoryJ11.java:270)
at com.ibm.mq.MQClientManagedConnectionFactoryJ11.createManagedConnection(MQClientManagedConnectionFactoryJ11.java:290)

AMQERR01.LOG
AMQ9513: Maximum number of channels reached.

Cause
The maximum number of channels that can be in use simultaneously has been reached. The number of permitted channels is a configurable parameter in the queue manager configuration file.
When this application connects to MQ a channel is started on the MQ side. If the application, for any reason, is unexpectedly disconnected (no proper disconnection takes place) then the channel will NOT get cleaned up on the MQ side. It will become 'orphaned' from its original parent connection. When the application reconnects it will get a new instance of the channel, so now there will be 2 instances of the channel, the new one and the old, orphaned instance.

MQ only allows a certain number of channels. If you build up enough channels you will get the MaxChannels error occurring here.

Channels may also be getting orphaned due to TCP/IP interruptions rather than an application disconnecting improperly from MQ.

Solution
How do we manage these orphaned channels?
If you can get these orphaned channels to clean up you will go a long way towards avoiding this issue.
Wait for some of the operating channels to close. Retry the operation when some channels are available.

The answer is TCP/IP KeepAlive.
You must enable KeepAlive at operating system (TCP/IP) level. How this is done depends entirely on the operating system you are using but your networking or System Admin people will probably know how to do this.

KeepAlive has a timeout option that is usually set to 2 hours. It is recommended to set this to a much shorter interval, such as 10 minutes. Once this change has been made, the OS will need to be rebooted for it to take effect.
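On HP-UX (the platform this technote applies to), the keepalive interval is usually inspected and changed with ndd. A sketch, assuming the tcp_keepalive_interval tunable (value in milliseconds; 600000 ms is 10 minutes); verify the tunable name and how to make it persistent for your OS level:

ndd -get /dev/tcp tcp_keepalive_interval
ndd -set /dev/tcp tcp_keepalive_interval 600000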

In addition to enabling KeepAlive at the OS level, MQ must also be configured to use KeepAlive. This is done by adding the following stanza to the QM.INI file for this queue manager, as follows:
TCP:
KeepAlive=yes

Once this stanza has been added the queue manager must be restarted for this to take effect.

Lastly, it is highly recommended to change the MaxChannels value (also in the QM.INI file) to 3 times what you think may be needed, for instance from 100 to 300. This will ensure that you have some 'elbow room' in the event a contingency occurs.
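For example, the corresponding stanza in qm.ini might then look like this (a sketch based on the Channels stanza shown earlier; adjust the numbers to your own sizing):

Channels:
   MaxChannels=300
   MaxActiveChannels=300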





Document Information

Product categories: Software > Application Servers > Distributed Application & Web Servers > WebSphere Application Server > Java Message Service (JMS)
Operating system(s): HP-UX
Software version: 4.0
Software edition:
Reference #: 1177012
IBM Group: Software Group
Modified date: Jun 28, 2005
(C) Copyright IBM Corporation 2000, 2006. All Rights Reserved.


javax.jms.JMSException: MQJMS2005: failed to create MQQueueManager
Posted: Feb 27, 2006 03:37:31 AM, in response to bonevich's post
You can just start the listener on a different port:

runmqlsr -t tcp -p <port number goes here>

Or, if you prefer to start your listeners from runmqsc scripts:

DEFINE LISTENER('listener_name') TRPTYPE(TCP) PORT(<port number goes here>)
START LISTENER('listener_name')

Or you can change your existing listener with runmqsc:

ALTER LISTENER('existing_listener') PORT(<port number goes here>)

You probably need to restart the listener to see the changes:

STOP LISTENER('existing_listener')
START LISTENER('existing_listener')

Or, you can use the MQ v6 explorer GUI; the listeners are in the listeners
folder which is under the advanced folder of the queue manager you want to change.

Hope this helps,

Phil
·  To start the sender channels as background tasks using WebSphere MQ Explorer, expand the queue manager, expand Advanced, and select Channels.
·  If you prefer, you can start listeners and channels as foreground tasks:
  1. To start a listener, enter the following command on the command line:
runmqlsr -t tcp -p 1414 -m WBRK_CONFIG_QM
  2. To start channels, enter the following commands:
runmqchl -m WBRK_UNS_QM -c WBRK_UN_TO_BR
runmqchl -m WBRK_QM -c WBRK_BR_TO_UN

UNIX systems

  1. To start a listener, enter the following command in a shell window:
runmqlsr -t tcp -p 1414 -m WBRK_QM
  2. To start a sender channel, enter the following command in a shell window:
runmqchl -c BROKER.CONFIG -m WBRK_QM
 
Commands to stop and start MQ listener 
 
  STOP LISTENER(listener_name)
STOP LISTENER(SYSTEM.DEFAULT.LISTENER.TCP)
     2 : STOP LISTENER(SYSTEM.DEFAULT.LISTENER.TCP)
AMQ8706: Request to stop WebSphere MQ Listener accepted.
start listener(SYSTEM.DEFAULT.LISTENER.TCP)
     3 : start listener(SYSTEM.DEFAULT.LISTENER.TCP)
AMQ8021: Request to start WebSphere MQ Listener accepted.
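To confirm the listener state afterwards, the status can be displayed from the same runmqsc session (a sketch using the default listener from the example above; requires MQ v6 or later):

DISPLAY LSSTATUS(SYSTEM.DEFAULT.LISTENER.TCP)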


WebSphere Process Server Production Topologies


WebSphere Process Server (WPS) consists of three key functional components: user applications, the messaging infrastructure, and the CEI (Common Event Infrastructure) support infrastructure. WebSphere Process Server can be deployed in various topologies depending on how heavily each of these infrastructure components is used. The topologies popularly called "bronze", "silver", and "gold" address these different usage requirements.

The "Bronze" or Single Cluster topology is used when WPS is deployed with all three infrastructure components in a single cluster.

The "Silver" or Remote Messaging topology is used when the user applications and CEI share a single cluster and the messaging infrastructure is deployed in its own cluster. This is the best option when the CEI infrastructure is not used heavily.

The "Gold" or Remote Messaging and Remote Support topology is used when each of the three functional infrastructure pieces of WPS has its own cluster.