Linux Diagnostic Information
When the Mango process is stuck or performing poorly, JDK diagnostic tools allow you to inspect the JVM internals -- thread states, memory usage, object counts, and execution profiles -- without stopping the application. This page covers the essential diagnostic commands for troubleshooting Mango on Linux.
Some JDK commands may not be on your system PATH; look for them in your JDK's bin/ directory. You may also need to run commands as the Mango user, e.g. sudo -u mango <cmd>.
Finding the Mango Process ID (PID)
Before running any diagnostic command, you need the Mango JVM's process ID.
Search for Java processes:
ps aux | grep java
Get the PID directly (if only one Java process is running):
pidof java
Best method when using systemd:
systemctl show --property MainPID --value mango
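The approaches above can be combined into a small sketch that stores the PID in a shell variable for reuse in later diagnostic commands (the MANGO_PID variable name is our own):

```shell
# Capture the Mango PID once so later diagnostic commands can reuse it
MANGO_PID=$(systemctl show --property MainPID --value mango 2>/dev/null) || MANGO_PID=""
# Fall back to pidof when Mango is not managed by systemd
if [ -z "$MANGO_PID" ] || [ "$MANGO_PID" = "0" ]; then
  MANGO_PID=$(pidof java 2>/dev/null) || MANGO_PID=""
fi
echo "Mango PID: ${MANGO_PID:-not found}"
```

You can then run, for example, jstack -l "$MANGO_PID" without repeating the lookup.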
Memory Histogram
A class histogram counts the instances and total size of each type of object in Mango's heap memory. The output is sorted by total size in descending order, so the biggest memory consumers appear first:
jmap -histo $(systemctl show --property MainPID --value mango) > mangoMemMap.txt
This is useful for identifying memory-hungry objects without the overhead of a full heap dump.
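Because the histogram is pre-sorted, a quick look at the first entries of the saved file is often enough. A small helper sketch (the function name is our own; the "+ 3" skips jmap's few header lines):

```shell
# Show the top N entries of a saved class histogram file.
# jmap -histo output is sorted by total size, so the largest consumers
# appear right after the header lines.
top_histo() {
  head -n "$(( ${2:-20} + 3 ))" "$1"
}
# Example (run after saving the histogram as above):
# top_histo mangoMemMap.txt 20
```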
Heap Dump
A heap dump captures a complete snapshot of JVM memory, including all objects and their references. This is the most detailed tool for diagnosing memory leaks and understanding memory consumption patterns.
Full heap dump (includes objects eligible for garbage collection -- larger file):
jmap -dump:format=b,file=mangoHeapFull.hprof $(systemctl show --property MainPID --value mango)
Live heap dump (only reachable objects -- smaller file, usually preferred):
jmap -dump:live,format=b,file=mangoHeapLive.hprof $(systemctl show --property MainPID --value mango)
Analyze heap dumps using Eclipse MAT or JVisualVM for detailed object relationship analysis.
Thread Dump / Stack Trace
A thread dump captures every thread in the JVM, its state, and its stack trace, showing exactly what Mango was doing at the moment the dump was taken:
jstack -l $(systemctl show --property MainPID --value mango) > mangoThreads.txt
Mango also provides a built-in thread viewer under Administration > System Status > Threads. See Threads.
Thread dumps are essential for diagnosing:
- Deadlocks -- Two or more threads waiting for locks held by each other.
- Thread starvation -- All threads in a pool are busy, preventing new tasks from executing.
- Long-running operations -- Threads stuck in the same method across multiple dumps.
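Because a single dump is only a snapshot, long-running operations are easiest to spot by comparing several dumps taken a short interval apart. A sketch of that workflow (function and file names are our own; assumes the systemd setup used elsewhere on this page):

```shell
# Take several thread dumps a fixed interval apart; a thread stuck in the
# same frame in every dump points at a long-running or blocked operation.
take_dumps() {
  pid=$1; count=${2:-3}; interval=${3:-10}
  i=1
  while [ "$i" -le "$count" ]; do
    jstack -l "$pid" > "mangoThreads_$i.txt"
    if [ "$i" -lt "$count" ]; then
      sleep "$interval"
    fi
    i=$((i + 1))
  done
}
# take_dumps "$(systemctl show --property MainPID --value mango)" 3 10
```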
Flight Recording (Profiling)
Java Flight Recorder (JFR) captures detailed runtime metrics including CPU usage, memory allocation, thread activity, and I/O operations over a period of time. Radix IoT support may request a flight recording to diagnose complex issues.
Start a 60-second recording:
jcmd $(systemctl show --property MainPID --value mango) JFR.start duration=60s filename=mango.jfr
Check recording status:
jcmd $(systemctl show --property MainPID --value mango) JFR.check
Stop an active recording:
jcmd $(systemctl show --property MainPID --value mango) JFR.stop
After capturing the recording, you can send the .jfr file to Radix IoT support or analyze it yourself using JDK Mission Control.
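For intermittent problems, JFR also supports a continuous "black box" recording that is bounded by age and size and dumped on demand when an incident occurs (JDK 11+; the recording name "blackbox" and file name below are illustrative). The helper here only echoes the jcmd invocation so you can review it first; remove the echo to execute it for real:

```shell
# Build and display the jcmd invocation for the Mango PID (dry run).
jfr_cmd() {
  echo jcmd "$(systemctl show --property MainPID --value mango 2>/dev/null)" "$@"
}
# Start a continuous recording bounded to the last hour / 250 MB of data
jfr_cmd JFR.start name=blackbox maxage=1h maxsize=250M
# Later, dump the buffered data when an incident occurs
jfr_cmd JFR.dump name=blackbox filename=mango_incident.jfr
```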
Number of Open Files
The Mango NoSQL time-series database uses individual files for each data point and shard. This means Mango may attempt to open a very large number of files simultaneously. The db.nosql.maxOpenFiles property in mango.properties should be set to at least 2x the number of data points.
If Mango exceeds the OS file handle limit, you will see errors like:
ERROR - Should never happen, data loss for unknown reason
java.lang.RuntimeException: java.io.FileNotFoundException:
/data/mango/databases/mangoTSDB/74/12010/759.data.rev (Too many open files)
If you use the supplied systemd service file (mango.service), the limit is set to LimitNOFILE=1048576, which is sufficient for most installations. To check or modify the limit:
# Check the current limit for the Mango process
cat /proc/$(systemctl show --property MainPID --value mango)/limits | grep "Max open files"
# Check the system-wide limit
ulimit -n
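To see how close Mango is to the limit, you can count the file descriptors the process currently holds via /proc (helper name is our own; reading another user's /proc/<pid>/fd requires root or that user, per the note at the top of this page):

```shell
# Count the file descriptors a process currently has open
count_fds() {
  ls "/proc/$1/fd" | wc -l
}
count_fds $$   # example: the current shell
# count_fds "$(systemctl show --property MainPID --value mango)"
```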
Number of Memory-Mapped Files
The NoSQL database uses memory-mapped files for improved read performance. Linux limits the number of memory-mapped regions a process can create. If this limit is exceeded, the OS will kill the Java process, leaving a hs_err_pidXXX.log file in the Mango home directory with a message like:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate X bytes for AllocateHeap
Check the system limit:
cat /proc/sys/vm/max_map_count
Check how many memory maps Mango is using:
cat /proc/$(systemctl show --property MainPID --value mango)/maps | wc -l
Count memory maps from a crash log:
grep /opt/mango/databases hs_err_pid*.log | wc -l
Increase the limit temporarily:
sysctl -w vm.max_map_count=262144
Increase permanently (survives reboot):
Add vm.max_map_count=262144 to /etc/sysctl.conf and run sysctl -p.
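The two /proc reads above can be combined into a quick usage-versus-limit check (helper name is our own):

```shell
# Compare a process's current memory-map count against the system-wide limit
map_usage() {
  used=$(wc -l < "/proc/$1/maps")
  limit=$(cat /proc/sys/vm/max_map_count)
  echo "$used of $limit memory maps in use"
}
map_usage $$   # example: the current shell
# map_usage "$(systemctl show --property MainPID --value mango)"
```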
For more information, see the MapDB article on memory-mapped files.
MapDB Point Value Cache Disk Space Crash
If the system runs out of disk space while the MapDB point value cache is active, the JVM may crash with a hs_err_pidXXX.log containing:
# A fatal error has been detected by the Java Runtime Environment:
# SIGBUS (0x7) at pc=0x00007f6d80937086, pid=3534205, tid=3535458
# Problematic frame:
# v ~StubRoutines::jshort_disjoint_arraycopy
This occurs because the memory-mapped file backing the cache cannot be flushed to disk. To prevent this, ensure adequate disk space monitoring is in place and configure alerts well before the disk is full. See Managing Disk Space.
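A minimal sketch of such a check, suitable for a cron job or monitoring hook (the path and 90% threshold are illustrative; GNU df is assumed):

```shell
# Warn when the filesystem holding Mango data passes a usage threshold
check_disk() {
  pct=$(df --output=pcent "$1" | tail -n 1 | tr -dc '0-9')
  if [ "$pct" -ge "${2:-90}" ]; then
    echo "WARNING: $1 is ${pct}% full"
  else
    echo "OK: $1 is ${pct}% full"
  fi
}
check_disk / 90
# check_disk /opt/mango 90
```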
Related Pages
- Threads — View thread states and stack traces from the Mango UI
- Internal Metrics — Monitor JVM memory, thread counts, and database write throughput
- Managing Disk Space — Prevent JVM crashes caused by disk space exhaustion
- Persistent Point Value Cache — Diagnose MapDB-related crashes from memory-mapped file issues
- Debug Log Settings — Enable detailed logging to correlate with JVM diagnostic output