Java crashes with message 'Native memory allocation (mmap) failed to map nnnnnnn bytes for committing reserved memory.'
Applies To
Java SE JDK and JRE - Version 8 and later
Information in this document applies to any platform.
1 Symptoms
A JVM crashes when trying to allocate native memory. The following
HotSpot Error log (hs_err_pid<PID>.log) snippet shows the error:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 246939648 bytes for committing reserved memory.
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (os_linux.cpp:2640), pid=<PID>, tid=<TID>
#
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b31) (build 1.8.0_131-b31)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b31 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
--------------- T H R E A D ---------------
Current thread (0x00007f7060142000): VMThread [stack: 0x00007f7047f00000,0x00007f7048000000] [id=19959]
Stack: [0x00007f7047f00000,0x00007f7048000000], sp=0x00007f7047ffdfc0, free space=1015k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V [libjvm.so+0xac8eda] VMError::report_and_die()+0x2ba
V [libjvm.so+0x4fd59b] report_vm_out_of_memory(char const*, int, unsigned long, VMErrorType, char const*)+0x8b
V [libjvm.so+0x925833] os::Linux::commit_memory_impl(char*, unsigned long, bool)+0x103
V [libjvm.so+0x925d89] os::pd_commit_memory(char*, unsigned long, unsigned long, bool)+0x29
V [libjvm.so+0x91ffaa] os::commit_memory(char*, unsigned long, unsigned long, bool)+0x2a
V [libjvm.so+0x994b63] PSVirtualSpace::expand_by(unsigned long)+0x53
V [libjvm.so+0x985420] PSOldGen::expand(unsigned long)+0x170
V [libjvm.so+0x98562b] PSOldGen::resize(unsigned long)+0x1cb
V [libjvm.so+0x98d331] PSParallelCompact::invoke_no_policy(bool)+0x991
V [libjvm.so+0x98d623] PSParallelCompact::invoke(bool)+0x63
V [libjvm.so+0x483fd4] CollectedHeap::collect_as_vm_thread(GCCause::Cause)+0x114
V [libjvm.so+0xaca351] VM_CollectForMetadataAllocation::doit()+0x161
V [libjvm.so+0xad23d5] VM_Operation::evaluate()+0x55
V [libjvm.so+0xad07aa] VMThread::evaluate_operation(VM_Operation*)+0xba
V [libjvm.so+0xad0b2e] VMThread::loop()+0x1ce
V [libjvm.so+0xad0fa0] VMThread::run()+0x70
V [libjvm.so+0x927e58] java_start(Thread*)+0x108
VM_Operation (0x00007f703d9f4a70): CollectForMetadataAllocation, mode: safepoint, requested by thread 0x00007f6ff4900800
2 Causes
This type of OutOfMemoryError crash can have a few causes:
- Insufficient physical memory to satisfy the memory allocation request, while retaining enough physical memory for the OS kernel
- Insufficient swap file memory
- Insufficient address space (also known as virtual process memory)
2.1 Insufficient Physical or Swap File Memory
To confirm which cause may apply to your situation, first check for insufficient physical or swap file memory. Check the end of the hs_err log for the memory summary at the time of the crash:
Memory: 4k page, physical 36459362k(6342014k free), swap 8912892k(3124978k free)
If the requested allocation is close to the free amount of either memory type, assume that the allocation plus the kernel reserve exceeds what is available. Even if the system reports more free memory than the amount being requested, it may still not be enough if the request comes within a few hundred Mb of the limit. What counts as "close to the limit" depends on the particulars of your system, so don't try to be too precise: if the margin is small enough to question, assume you don't have enough physical memory or swap, and that this is most likely your cause.
In this case, however, ~6Gb of free physical memory and ~3Gb of free swap are plenty to satisfy the ~235Mb requested and any kernel reserve.
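You can also check physical and swap memory on a live Linux system before a crash occurs. The following commands are a minimal sketch assuming standard Linux procps tools; exact output formats vary by distribution:
$ free -m
$ grep -i -E 'memtotal|memfree|swaptotal|swapfree' /proc/meminfo
$ vmstat 5 3
Compare the free physical and swap values against the allocation size reported in the hs_err log.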
2.2 Insufficient Address Space
Next, check for insufficient address space. A process map (pmap) shows the virtual memory layout of the process. Collect the pmap at the time of the crash. The best way to do this is with a core file that is dumped during the crash:
$ pmap <COREFILE>
If the system has core files enabled, a core file will be dumped before the process aborts, and its location will be printed near the beginning of the hs_err log, just after the JRE version. If core files are not enabled, the following line will be printed instead:
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b31) (build 1.8.0_131-b31)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b31 mixed mode linux-amd64 compressed oops)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
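If the crash is reproducible, you can enable core files before restarting Java. The following is a sketch assuming a typical Linux shell and that the kernel's core_pattern permits writing a core file; YourApp.jar is a placeholder for your own application:
$ ulimit -c unlimited
$ cat /proc/sys/kernel/core_pattern
$ java -jar YourApp.jar
The first command removes the core file size limit for the current shell, the second shows where the kernel will write core files, and the JVM must be started from that same shell for the new limit to apply.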
If you cannot enable core files and you suspect an address space issue, try taking pmaps at intervals during runtime:
$ pmap <PID>
Then, you can examine the one taken just before the crash.
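One simple way to collect these snapshots is a loop that records a pmap every few seconds until the process exits. This is a sketch assuming a Linux shell, where <PID> is the Java process ID and the 10-second interval is arbitrary:
$ while kill -0 <PID> 2>/dev/null; do pmap -x <PID> > pmap.$(date +%H%M%S).txt; sleep 10; done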
For 32-bit applications, the address space is limited to 4Gb. Because of this, the most common cause of insufficient address space is using nearly all of the 4Gb allowed. You can verify whether that is the case by looking at the total mapped memory reported at the end of the pmap output. If the total is close to the 4Gb limit, assume there are no unmapped contiguous regions large enough to satisfy the mapping request.
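On Linux, the total appears on the last line of pmap -x output, so a quick check against a live 32-bit process is (a sketch; the output format varies by platform and pmap version):
$ pmap -x <PID> | tail -n 1
A total close to 4194304 Kb (4Gb) suggests the address space is nearly exhausted.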
With 64-bit applications, the address space is, for practical purposes, unlimited, so exhausting the total address space is not the issue. Instead, the most common cause of insufficient address space for 64-bit applications is the native (C) heap being blocked from growing. The native (C) heap is a contiguous virtual space that grows on demand during process runtime; if another mapping is placed too low in the address space, it can block the native (C) heap from growing. For Java applications, this is most commonly seen with small heap sizes (4Gb or less) when Compressed Oops are enabled. This combination of settings can result in the Java heap being assigned a low address, which can cause the native OutOfMemoryError (OOME) shown in the hs_err log snippet above.
Note that Compressed Oops are enabled by default in 64-bit applications; the flag is called UseCompressedOops. The JRE version output in the hs_err log will show whether UseCompressedOops is true ("compressed oops"):
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b31) (build 1.8.0_131-b31)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b31 mixed mode linux-amd64 compressed oops)
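You can also check the flag directly. The following is a sketch assuming the JDK tools are on the PATH and <PID> is the running Java process:
$ java -XX:+PrintFlagsFinal -version | grep UseCompressedOops
$ jinfo -flag UseCompressedOops <PID>
The first command shows the default for this JVM (pass your usual heap options, such as -Xmx, to see the value your application would get); the second shows the value in a running process.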
To verify if the Java heap is mapped at a low address, check the pmap. It will usually be mapped at the next highest address above the native (C) heap. In the below example, notice the Java heap is mapped at 0x1FB000000 and the native heap at 0x100110000.
0000000100000000 8K r-x-- java
0000000100100000 8K rwx-- java
0000000100102000 56K rwx-- [ heap ]
0000000100110000 2624K rwx-- [ heap ] <--- native (C) heap
00000001FB000000 24576K rw--- [ anon ] <--- Java heap starts here
0000000200000000 1396736K rw--- [ anon ]
0000000600000000 700416K rw--- [ anon ]
The “heap” section of the hs_err log also shows the Java heap’s starting virtual memory address, which is the starting address for the Young Generation:
Heap
PSYoungGen total 174464K, used 2827K [0x00000000f5560000, 0x0000000100000000, 0x0000000100000000)
You can also verify this cause by disabling Compressed Oops and checking whether the problem is resolved. Note that disabling Compressed Oops may have a performance effect. Always test this change in a test environment with production-equivalent loads before implementing it in a production environment.
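A sketch of such a test run, where the jar name and heap size are placeholders for your own application and settings:
$ java -XX:-UseCompressedOops -Xmx2g -jar YourApp.jar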
For more information on these topics, see:
- JDK-8187709 - Native memory allocation (mmap) failed to map N bytes for committing reserved memory
- Understanding the Java Heap Versus the Native (C) Heap
3 Solutions
The solution to the crash depends on the cause.
3.1 Insufficient Physical Memory or Swap File
Increase the physical and/or swap file memory on the system, or reduce the memory load on the system. Common ways to reduce memory load include the following (an example command line follows the list):
- Decreasing the Java heap size (-Xmx/-Xms)
- Decreasing the number of Java threads
- Decreasing the Java thread stack sizes (-Xss)
- Decreasing the number of other processes running on the system
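For illustration, a command line combining several of these reductions might look like the following sketch; the jar name and the specific values are placeholders to be tuned and tested for your workload:
$ java -Xms512m -Xmx512m -Xss256k -jar YourApp.jar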
3.2 Insufficient Address Space
In the case of a 32-bit application that has reached the 4Gb address space limit, migrate the application to 64-bit or reduce memory load on the application by reducing:
- The Java heap size
- The number of threads
- The thread stack sizes
In the case of a 64-bit application with the Java heap based at a low address:
- Rebase the Java heap to a higher address, such as above 32Gb, by adding the following java command line option and restarting the process (see the example after this list):
  -XX:HeapBaseMinAddress=n (where n is the desired virtual memory address)
- As a workaround, disable Compressed Oops by adding the following java command line option and restarting the process:
  -XX:-UseCompressedOops
  This option instructs the JVM to run without Compressed Oops, which means all object pointers will be 64-bit; the larger footprint also means the Java heap will be mapped higher in the address space. Because 32-bit pointers are no longer used, this option can have a performance effect. Always test this change in a test environment with production-equivalent loads before implementing it in a production environment.
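For example, to request a heap base at or above 32Gb (a sketch; the jar name and sizes are placeholders, and the JVM may still choose a different base if the request cannot be honored):
$ java -XX:HeapBaseMinAddress=32g -Xmx2g -jar YourApp.jar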
Last reviewed on January 1, 2025