xbps-remove memory leakage #621

@xeroxslayer

Description of the bug

xbps-remove has a memory leak when certain conditions are met (it eats up CPU and memory). I think a certain package update triggers the issue, or it may just be random; I honestly don't know. It has happened on enough occasions now that I can't dismiss it as a random bit flip or something similar, and it has been going on for about a year.

To be honest, I wouldn't have noticed it if I weren't running old hardware with spinning drives and little RAM (4GB to 8GB), but since I am, the memory leak became fairly evident. CPU usage goes to 100% on all cores and memory usage keeps rising very fast (all memory is eaten up in maybe 10 to 20 seconds), and the system then becomes unresponsive. This might not be the case if you're running Void on an SSD and newer hardware, since it'll push memory into swap fairly fast, so the system will most probably stay responsive, just a bit sluggish.

I suspect a kernel update might be the event that triggers this behavior, but as I said, I'm not certain; I'll have to investigate further.

Also, if you stop the process (Ctrl + C) and run the command again, the leak doesn't happen. Basically, what I am certain of is that this happens only on the first run of xbps-remove after an update.

These are the two hardware setups on which I've noticed this memory leak:

CPU: Intel Core i7 930
MB: Intel DX58SO
RAM: 6GB (Triple channel, 3 x 2GB DDR3 DIMMs)
HDD: 500GB (Spinning drive)

CPU: Intel Core2 Quad Q9550
MB: JW Technology JW-IG41M-HD
RAM: 4GB (Dual channel, 2 x 2GB DDR2 DIMMs)
HDD: 250GB (Spinning drive)

Steps to reproduce

  1. Run an update (xbps-install -Suv); preferably one that also updates the kernel (see above for the reason).
  2. Reboot.
  3. Trigger the cleanup (xbps-remove -ROov). See the sketch after this list for the steps as a shell session.
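
A minimal sketch of the reproduction as a root shell session; the commands and flags are exactly the ones from the steps above, the comments and ordering are just my framing:

    # Step 1: sync repositories and update all installed packages
    # (-S sync, -u update, -v verbose); ideally this pulls in a new kernel.
    xbps-install -Suv

    # Step 2: reboot into the updated system.
    reboot

    # Step 3, after the reboot: the cleanup run that leaks.
    # -R recursively removes unneeded dependencies, -O cleans obsolete
    # packages from the cache, -o removes orphans, -v is verbose.
    xbps-remove -ROov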

CPU usage should hit the roof for about a second or so, then drop, and HDD/SSD activity should go up after that (doing the actual cleanup). That's the normal behavior. When the memory leak happens, CPU usage never drops and memory usage keeps rising until the system becomes unresponsive... and then you just have to restart the rig manually.
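
If you want to capture the leak without losing the machine, one option (my suggestion, not part of the report above) is to cap the process's address space so the kernel kills it instead of letting it hang the box, and record its peak memory with GNU time, assuming the time package is installed; the 2 GiB cap is an arbitrary choice:

    # Cap the virtual address space of child processes at 2 GiB
    # (ulimit -v takes KiB), so a runaway xbps-remove gets killed
    # instead of rendering the system unresponsive.
    ulimit -v 2097152

    # GNU time with -v prints "Maximum resident set size" on exit.
    /usr/bin/time -v xbps-remove -ROov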
