I'm running npm inside a Docker container and every so often it aborts because it cannot allocate enough memory. I see flags like --memory (How do I set resources allocated to a container using docker?) for the docker run command that seem to cap the maximum amount of memory a container can consume, but I haven't seen anything yet that would let me reserve an amount of memory for the container and abort immediately if it cannot be allocated.
Accepted answer:
This is not how memory management works under Linux.
If you run full virtualization, like QEMU, then all of the memory can be allocated up front and passed down into the VM. That VM then boots its own kernel, and the memory is managed by the kernel inside the VM.
In Docker, or any other container/namespace system, the memory is managed by the kernel that runs Docker and the "containers". A process running in a container is still an ordinary process, just placed in a different cgroup. Each cgroup has limits, such as how much memory the kernel will hand out to userland or which network interfaces it sees, but everything still runs on the same kernel.
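You can see this for yourself: the --memory cap is just a cgroup attribute maintained by the host kernel, visible from inside the container. A minimal sketch, assuming a cgroup v2 host and the alpine image:

```shell
# The 256 MB cap set on the host shows up as a plain cgroup file
# inside the container; no memory was "given" to the container up front.
docker run --rm --memory=256m alpine cat /sys/fs/cgroup/memory.max
# Prints 268435456 (256 MB). On a cgroup v1 host the equivalent file is
# /sys/fs/cgroup/memory/memory.limit_in_bytes.
```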
An analogy: Docker is a "glorified ulimit". Processes under this limit still behave like normal Linux processes.
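The analogy can be demonstrated without Docker at all. A crude sketch (assumes a Linux shell with python3 available): cap a subshell's virtual memory with ulimit and watch an allocation fail, just as a process exceeding a cgroup memory limit is killed rather than blocked from starting.

```shell
# Limit the subshell's address space to 64 MB, then try to allocate 200 MB.
# The process starts normally; it only fails at the moment of allocation.
( ulimit -v 65536; python3 -c "bytearray(200 * 1024 * 1024)" ) 2>/dev/null \
  && echo "allocation succeeded" || echo "allocation failed"
```

Note that the limit is enforced lazily at allocation time, which is exactly why no "reserve up front or abort immediately" mode exists: the kernel has nothing to reserve until the process actually asks.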
And just like you can't pre-allocate memory for Firefox, you can't pre-allocate memory for a Docker container.
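What docker run does offer, for reference, is a hard cap plus a soft hint; a sketch (the flags are real, but the image and values here are just examples):

```shell
# --memory             hard cap: the OOM killer aborts the process if it exceeds 1 GB
# --memory-reservation soft limit, only enforced when the host is under memory
#                      pressure; it is a scheduling hint, not a guaranteed reservation
docker run --memory=1g --memory-reservation=512m node:lts npm install
```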