In this second part of the blog series dedicated to heap dump analysis with Memory Analyzer (see previous post here) I will have a detailed look at the shallow size of objects in the heap dump. In Memory Analyzer, the term shallow size means the size of an object itself, excluding the sizes of any other objects it references. For example, the shallow size of an instance of java.lang.String does not include the memory needed for the underlying char[].
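For readers who want to experiment outside of a heap dump, the java.lang.instrument API exposes Instrumentation.getObjectSize(), which returns an implementation-specific approximation of exactly this shallow size. Here is a minimal sketch (the class name ShallowSizeAgent is made up; the agent class must be packaged in a jar with a Premain-Class manifest entry and loaded with -javaagent):

    import java.lang.instrument.Instrumentation;

    public class ShallowSizeAgent {
        private static volatile Instrumentation inst;

        // Called by the JVM before main() when started with -javaagent:agent.jar
        public static void premain(String args, Instrumentation i) {
            inst = i;
        }

        // Implementation-specific approximation of the shallow size of o
        public static long shallowSize(Object o) {
            return inst.getObjectSize(o);
        }
    }

Querying shallowSize("some string") and shallowSize("some string".toCharArray()) separately makes the point above visible: the String and its char[] are two distinct objects, each with its own shallow size.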
At first glance this seems like a clear definition and a relatively boring topic to read about. So why did I decide to write about it? Because despite the simple definition, it is not always straightforward for the tool developers to calculate the shallow size, or for a user to understand how the size was calculated. The reasons? Different JVM vendors, different pointer sizes (32 / 64 bit), different dump formats, insufficient data in some heap dumps, etc. These factors can lead to small differences in the shallow sizes displayed for objects of the same type, and thus to questions.
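To make the pointer-size factor concrete, here is a rough back-of-the-envelope comparison for a small hypothetical class, assuming HotSpot-style headers and 8-byte object alignment (other vendors lay out objects differently, which is precisely the point):

    class Entry {        // two references and an int
        Object key;
        Object value;
        int hash;
    }

    // 32-bit JVM:               8 (header) + 4 + 4 + 4 = 20 -> padded to 24 bytes
    // 64-bit, compressed oops: 12 (header) + 4 + 4 + 4 = 24 bytes
    // 64-bit, plain:           16 (header) + 8 + 8 + 4 = 36 -> padded to 40 bytes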
Is it really important to know the precise size? Not necessarily. If you get a heap dump from an OutOfMemoryError in your production system and MAT helps you easily find the leak suspect there – let’s say it is some 500 MB big object – then the shallow size of each individual object accumulated in the suspect’s size doesn’t really matter. The suspect is clear, and you can go on and try to fix the problem.
On the other hand, if you are trying to understand the impact of adding some fields to your “base” classes, then the size of an individual instance can be of interest, as the sketch below shows.
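As an illustration, assuming a 64-bit HotSpot JVM with compressed oops (12-byte header, 8-byte object alignment; the exact field layout is an implementation detail):

    class Base {          // 12 (header) + 4 (int)               = 16 bytes
        int id;
    }

    class BaseWithFlag {  // 12 (header) + 4 (int) + 1 (boolean) = 17,
        int id;           // padded to the next multiple of 8    = 24 bytes
        boolean dirty;
    }

Here the one-byte flag costs eight bytes per instance, because it pushes the object over the alignment boundary – with a million such instances, that is roughly 8 MB of additional heap.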
In the rest of the post I will look at the information available (or missing) in the different snapshot formats, explain what MAT displays as shallow size in each case, and try to answer some of the questions about shallow size that we usually get. If you are interested, read on.