Limiting memory usage of programs

This question appeared on a closed grype issue, so I thought I’d move it here rather than lose it among the comments there.

“Is there a way to limit the GPU and RAM usage when scanning?”

I was curious, so I went digging. On Linux the answer is yes, but probably not in the way you’re hoping: constraining memory like this may simply get the process killed sooner.

You can use systemd-run with the CPUWeight and MemoryMax properties (among others). The documentation is on freedesktop.org.

With 500M of RAM allotted, grype runs fine in this case:

$ systemd-run --user --scope -p CPUWeight=10 -p MemoryMax=500M -p MemorySwapMax=0M grype ubuntu:latest -o json=vuln.json
Running as unit: run-r93d052608d454d6bad2f9afe29268dc1.scope; invocation ID: f1d3bfeafb4f4cac8a62d4fa928bee58
 ✔ Loaded image ubuntu:latest
 ✔ Parsed image sha256:a04dc4851cbcbb42b54d1f52a41f5f9eca6a5fd03748c3f6eb2cbeb238ca99bd
 ✔ Cataloged contents c90c8d81b78ffb4169f01971c24d39c4acd85405c05545fa03a8b8c0f030572e
   ├── ✔ Packages                        [92 packages]
   ├── ✔ File digests                    [2,266 files]
   ├── ✔ File metadata                   [2,266 locations]
   └── ✔ Executables                     [722 executables]
 ✔ Scanned for vulnerabilities     [23 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 15 medium, 6 low, 2 negligible
   └── by status:   7 fixed, 16 not-fixed, 0 ignored

If I give grype only 100M of RAM, it gets killed:

$ systemd-run --user --scope -p CPUWeight=10 -p MemoryMax=100M -p MemorySwapMax=0M grype ubuntu:latest -o json=vuln.json
Running as unit: run-rb15059f4c75a401dbdf672c8b53344ee.scope; invocation ID: 9c9e58f949164d199d04614167e51a6c
 ✔ Loaded image ubuntu:latest
 ✔ Parsed image sha256:a04dc4851cbcbb42b54d1f52a41f5f9eca6a5fd03748c3f6eb2cbeb238ca99bd
Killed

Over in the syslog we see:

[Sat Mar  8 06:00:43 2025] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=user.slice,mems_allowed=0,oom_memcg=/user.slice/user-1001.slice/user@1001.service/app.slice/run-rb15059f4c75a401dbdf672c8b53344ee.scope,task_memcg=/user.slice/user-1001.slice/user@1001.service/app.slice/run-rb15059f4c75a401dbdf672c8b53344ee.scope,task=grype,pid=1221708,uid=1001
[Sat Mar  8 06:00:43 2025] Memory cgroup out of memory: Killed process 1221708 (grype) total-vm:1378388kB, anon-rss:89900kB, file-rss:50944kB, shmem-rss:0kB, UID:1001 pgtables:440kB oom_score_adj:0

There is one option to limit memory: the GOMEMLIMIT environment variable sets a “soft” memory limit for the Go runtime. As the heap approaches the limit, the garbage collector runs more aggressively, but it is not a hard cap, so the process can still exceed it. The docs are in the Go runtime package documentation on pkg.go.dev.
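For example (the values here are illustrative, not recommendations), you can set it directly on the command line, or combine it with systemd-run by putting GOMEMLIMIT a bit below MemoryMax so the garbage collector kicks in before the OOM killer does:

$ GOMEMLIMIT=400MiB grype ubuntu:latest -o json=vuln.json
$ systemd-run --user --scope -p MemoryMax=500M -p MemorySwapMax=0M \
    --setenv=GOMEMLIMIT=450MiB grype ubuntu:latest -o json=vuln.json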

Other than that, there is currently no way in the syft/grype configuration to limit the amount of memory the programs can use, as far as I’m aware. One of the biggest challenges is that they pull in a lot of different libraries that allocate buffers we don’t really have control over.

However, we do know that certain operations use a fair amount of memory, and we could potentially implement some sort of blocking buffer pool with a size limit for the things we do control. That could help cap the memory used while reading file contents in catalogers, for example; a sketch of the pattern is below.
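To illustrate the idea (this is just a sketch of the general pattern, not code from syft or grype), a bounded pool can be built from a buffered channel: Get blocks when all buffers are checked out, so concurrent readers can never hold more than count × size bytes from the pool at once.

package main

import "fmt"

// boundedPool hands out fixed-size buffers and blocks callers once
// all buffers are in use, capping the total memory held via the pool.
// Sketch only: names and sizes here are illustrative.
type boundedPool struct {
	buffers chan []byte
}

func newBoundedPool(count, size int) *boundedPool {
	p := &boundedPool{buffers: make(chan []byte, count)}
	for i := 0; i < count; i++ {
		p.buffers <- make([]byte, size)
	}
	return p
}

// Get blocks until a buffer is free; Put returns it for reuse.
func (p *boundedPool) Get() []byte  { return <-p.buffers }
func (p *boundedPool) Put(b []byte) { p.buffers <- b }

func main() {
	pool := newBoundedPool(4, 1<<20) // at most 4 MiB outstanding
	buf := pool.Get()
	defer pool.Put(buf)
	fmt.Println("got buffer of", len(buf), "bytes")
}

Unlike sync.Pool, which neither blocks nor enforces a maximum, this makes the back-pressure explicit: a reader that can’t get a buffer waits instead of allocating more memory.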

I don’t think our apps use GPU resources today (I could be wrong; you never know what optimizations libraries do!), so I assume this was meant to be CPU. Today the apps are mostly single-threaded, with the exception of being able to run multiple catalogers in parallel. But if you look at the CPU usage profile, even with very high concurrency, you will find that catalogers are mostly I/O bound and the CPU doesn’t get taxed very much.

I have an experimental PR that parallelizes a lot more and looks very promising for increasing CPU utilization and decreasing scan times. We have some details to work out before it can be merged, but we will probably be able to use a similar technique to speed up some things on the stereoscope side, too. That said, I doubt we would ever offer much more than “only use 1 core” if you need to constrain CPU usage; the examples below show two ways to do that today.
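For instance (again, illustrative values): GOMAXPROCS caps how many OS threads execute Go code simultaneously, and systemd’s CPUQuota caps total CPU time, with 100% meaning one core’s worth:

$ GOMAXPROCS=1 grype ubuntu:latest -o json=vuln.json
$ systemd-run --user --scope -p CPUQuota=100% grype ubuntu:latest -o json=vuln.json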
