
Job queue memory usage
Closed, ResolvedPublic

Description

I haven't been able to pinpoint this exactly, but something is wrong with the job queue. I've experienced this, for example, while it's processing edit jobs for ReplaceText, but also when changing a template that's used a few hundred times, as well as when using the Translate extension script 'fuzzy.php'.

In the cases of Translate and ReplaceText, actual edits are made; for a template change, other stuff happens that I'm not really up to speed with (link table updates?).

Generally, at some point the script will exit with "Allowed memory size exhausted" (at translatewiki.net, at about 150M), even though, as far as I know, command line scripts shouldn't have a default memory limit (?).

I've looked into this once and found that the memory usage of the job queue keeps growing with each job it handles. I forget the actual number, but it was many megabytes per job at the time (12M comes to mind; it probably depends on the exact wiki properties). All attempts at "garbage collection" by setting references to null to reduce memory usage failed.
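
Not from the original report, but for illustration, a minimal way to observe the growth would be to log memory between jobs. popNextJob() below is a hypothetical stand-in for however the runner fetches jobs; memory_get_usage() is standard PHP:

    // Hedged sketch: log process memory after each job to spot a leak.
    $last = memory_get_usage( true );
    while ( ( $job = popNextJob() ) !== null ) { // popNextJob() is hypothetical
        $job->run();
        $now = memory_get_usage( true );
        printf( "after job: %.1fM (%+.2fM)\n",
            $now / 1048576, ( $now - $last ) / 1048576 );
        $last = $now;
    }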



Details

Reference
bz24647

Event Timeline

bzimport raised the priority of this task to Low. (Nov 21 2014, 11:00 PM)
bzimport set Reference to bz24647.
bzimport added a subscriber: Unknown Object (MLST).

The 150M memory limit is hard-coded in runJobs.php, but that's not really the issue, obviously. It's the growing memory usage.
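
For context, a sketch of where such a limit would live: MediaWiki maintenance scripts can override memoryLimit() from the Maintenance base class, and per the comment above runJobs.php pinned it at the time. The exact body below is an assumption reconstructed from that comment, not quoted from the repository:

    // Assumed shape of the hard-coded limit in maintenance/runJobs.php;
    // memoryLimit() is a real override point in the Maintenance base class.
    public function memoryLimit() {
        return '150M';
    }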

Is this still an issue? As far as I remember, TWN used PHP 5.2 back then, while PHP 5.3 should have much better memory management.

(In reply to comment #0)

Generally at some point in time the script will exit with "Allowed memory size exhausted" (at translatewiki.net about 150M)

Heh, good old times. Nowadays even 550M is not enough: see bug 58969 and bug 60844. No idea what to do with this bug; it seems superseded now that MediaWiki's memory hogging is standard?

Aklapper lowered the priority of this task from Low to Lowest. (Apr 9 2015, 12:35 PM)
Krinkle removed a subscriber: wikibugs-l-list.
Krinkle added subscribers: Nikerabbit, Krinkle.

With no other reports about memory usage in MediaWiki-Core-JobQueue newer than 2015 apart from this one, I'm leaning towards declining this.

@Nikerabbit @siebrand Do you still find that the job runner process is increasing memory usage after each job it runs?

I think there used to be quite a few places where instance caches would store info with no limit, but most of those have been fixed due to issues with long-running maintenance scripts like the one in this task (e.g. rMW32d1017e7d77: Don't let LinkCache grow indefinitely).
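
To illustrate the pattern (a hedged sketch: MapCacheLRU is the real bounded-cache class in MediaWiki core, but the class context and the computeInfo() helper here are invented):

    // Leaky pattern: a static instance cache that grows for the life of
    // the process, i.e. across every job a long-running runner handles:
    //     self::$cache[$key] = self::computeInfo( $key ); // never evicted
    //
    // Bounded alternative: MapCacheLRU evicts the least-recently-used
    // entry once the configured size is exceeded.
    private static $cache = null;

    public static function getInfo( $key ) {
        if ( self::$cache === null ) {
            self::$cache = new MapCacheLRU( 1000 ); // cap at 1000 entries
        }
        if ( !self::$cache->has( $key ) ) {
            self::$cache->set( $key, self::computeInfo( $key ) ); // hypothetical helper
        }
        return self::$cache->get( $key );
    }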

We are running runJobs.php with --maxjobs 1000 under HHVM, so I cannot comment.

Okay, I'll close this for now then.

I don't claim we have zero memory leaks in the software as a whole, but around the JobQueue in particular we haven't had any recent reports, and personally I can't think of any obvious places either.

As @Legoktm points out, in general for MediaWiki we do have a fair number of singletons with instance caching and process caching – which, if re-used between job runs, may lead to leaks if they have no upper bound. We've fixed a few of those over the years, and new ones can be addressed on a case-by-case basis (without this task).

In addition, it's not recommended to run runJobs.php indefinitely without any limits. Typical usage involves periodic runs with at least one of the restrictions set, so that it won't run too long in a single stretch; see the example below. Fresh starts also ensure that the process uses the latest configuration.
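
For illustration, a bounded periodic invocation could look like the following. --maxjobs, --maxtime, and --memory-limit are real runJobs.php/Maintenance options, but the specific values are assumptions:

    php maintenance/runJobs.php --maxjobs 1000 --maxtime 300 --memory-limit 150M

Run from cron or a similar scheduler, each pass exits after at most 1000 jobs or 300 seconds, the memory limit bounds worst-case usage, and the next pass starts with a fresh process.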