I am allocating memory using huge pages (1 MB size) with mmap. After allocating 4 GB of memory, mmap fails:

mmap(NULL, memsize, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE | MAP_HUGETLB, -1, 0);

Here memsize = 1 GB. I call the statement above in a loop. Up to 4 iterations it is fine; on the 5th iteration mmap fails.

mmap(NULL, memsize, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);

The statement above (without huge pages) works for any number of iterations. Am I missing something related to huge pages? I also tried the MAP_NORESERVE flag, as mentioned in "mmap fail after 4GB".

Any sort of information would be greatly appreciated. Thank you.
Best answer
Change the allocated "number of huge pages" in the file
/proc/sys/vm/nr_hugepages
according to the amount of memory you want to allocate. Earlier it said:

> cat /proc/meminfo | grep HugePages
HugePages_Total:    2500

and 4 GB (2048 * 2 MB) had already been mapped, i.e. 2048 huge pages were already consumed.

One more GB of memory needs 1 GB / 2 MB = 512 more huge pages, but only 2500 - 2048 = 452 were left, while 512 were needed. That is why mmap failed. If you edit the contents of the file mentioned above (/proc/sys/vm/nr_hugepages) to 2560, it allows 5 GB. Change it according to the amount of memory you need. Thanks to @Klas Lindbäck. I went back to the link; a little digging revealed how it works.