90% of the time, I just inspect the Java GC logs (which I always have turned on). I'll also use jvisualVM, but it's mostly the same type of information. My typical workloads are such that what causes a memory leak is pretty reproducible, so I'll create a unit test, or use apache-benchmark, to initiate an expensive operation with deterministic memory pressure. If the memory grows proportionally to the number of requests, then this should be fixable.

If not, maybe it's cache related and I need to produce a more diversified parameter set. I'll often use jcmd $(pidof java) GC.class_histogram to identify the classes causing trouble. If I see a particular class increase by K instances for K requests, I know I've got a winner. This helps me pinpoint my problem (in production) in a matter of minutes. I've not found it possible to use profilers in production (not to mention they make the system unusably slow).

Generally a memory leak has a couple of companion objects, so seeing something as innocent as a Map key class increasing lets me know I've got some unfreed map growing. I'll also use netstat -ntp | grep $(pidof java), and I'll typically break out the ports (DB/resource connections vs. inbound connections). If connection counts are growing, there's the problem.

You might be creating lots of temp files (which should be easy to trace down) or just opening too many data files, presumably each of which has lots of native and Java buffers. (On Linux you also have that default 1024 file handle limit, but you'd see explicit errors of that type.) Another component is system RAM used by co-processes: if, say, you're launching ImageMagick to do image filtering, maybe you're launching too many. Simple ps counts can be useful: ps -ef | grep $(pidof java) is a cheesy way to find all processes started by your Java process.
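
For the GC-log plus load-generation step, a minimal sketch is below. The flags differ by JDK version, and the jar name, log path, and URL are placeholders for whatever your service actually exposes.

```sh
# Hypothetical service and endpoint; GC-log flags depend on JDK version.

# JDK 8-style GC logging:
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -Xloggc:/tmp/gc.log -jar app.jar &

# JDK 9+ unified-logging equivalent:
#   java -Xlog:gc*:file=/tmp/gc.log -jar app.jar &

# Drive K identical expensive requests with apache-benchmark:
ab -n 10000 -c 10 http://localhost:8080/expensive-operation

# If the heap occupancy after each full GC in gc.log climbs roughly in step
# with the request count, the leak is request-driven and usually fixable.
```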
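
One way to apply the class-histogram trick is to snapshot it before and after a burst of K requests and compare the counts. This is only a sketch (assuming bash for the process substitution); the file paths and request counts are arbitrary.

```sh
# Snapshot the class histogram, apply load, snapshot again, and compare.
jcmd $(pidof java) GC.class_histogram > /tmp/histo-before.txt
ab -n 1000 -c 10 http://localhost:8080/expensive-operation    # or run the unit test
jcmd $(pidof java) GC.class_histogram > /tmp/histo-after.txt

# A class whose instance count grows by roughly K for K requests -- along with
# its companion objects such as map entries -- is the likely culprit.
diff <(head -40 /tmp/histo-before.txt) <(head -40 /tmp/histo-after.txt)
```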
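
Breaking the netstat output out by port can be done with a quick pipeline like the one below. The -p column needs enough privileges to show the owning process, and the port numbers in the comment are only a hypothetical reading.

```sh
# Count the JVM's sockets per remote port (DB/resource vs. inbound traffic).
netstat -ntp | grep $(pidof java) \
  | awk '{n = split($5, a, ":"); print a[n]}' \
  | sort | uniq -c | sort -rn

# Hypothetical output:
#   412 5432   <- e.g. PostgreSQL connections growing without bound: pool leak
#    37 8080   <- inbound requests, looks normal
```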
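
For the co-process count, the cheesy grep works, but on Linux procps you can also ask ps for the JVM's direct children. The ImageMagick process names mentioned in the comments are just an example.

```sh
# Cheesy count of everything mentioning the JVM's pid (includes the grep itself):
ps -ef | grep $(pidof java) | wc -l

# Tidier on Linux: list direct children of the JVM with their age and RSS.
ps --ppid $(pidof java) -o pid,etime,rss,cmd

# A steadily climbing number of long-lived convert/magick children suggests the
# memory growth is native, outside the Java heap.
```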