
MySQL caching is consuming more disk space on 1.18 wikis due to a change in parser cache keys
Closed, ResolvedPublic

Description

We're running out of disk space and this needs to be fixed in the next few hours. See http://ganglia.wikimedia.org/graph.php?c=MySQL&h=db40.pmtpa.wmnet&v=406.493&m=disk_free&r=day&z=medium&jr=&js=&st=1317836378&vl=GB&ti=Disk%20Space%20Available

More details may be added, but Roan is currently running a query to see how much space can be freed by deleting entries left over from an old iOS hack.


Version: unspecified
Severity: normal

Details

Reference
bz31388

Event Timeline

bzimport raised the priority of this task to Medium. · Nov 21 2014, 11:47 PM
bzimport set Reference to bz31388.

Domas has a short term workaround, which involves periodically truncating one of the 256 tables in the parser cache. This has had a negative effect on our hit rate:
http://noc.wikimedia.org/cgi-bin/pcache-hit-rate.py?period=8d
(note: yellow is bad)
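The workaround amounts to a rotation schedule: with the parser cache sharded across 256 tables, truncating one table per interval cycles through the entire cache over 256 intervals. A minimal sketch of that schedule (the `pc000`–`pc255` table names and the day-based interval are assumptions for illustration, not the actual script that was run):

```python
# Hypothetical sketch: pick which parser cache shard table to truncate
# on a given day, cycling through all 256 shards before repeating.

NUM_SHARDS = 256

def shard_to_truncate(day_index: int) -> str:
    """Return the shard table name (pc000..pc255) to truncate on this day."""
    return "pc%03d" % (day_index % NUM_SHARDS)

# Build the SQL statement for day 0; day 256 wraps back to the same table.
sql = "TRUNCATE TABLE %s;" % shard_to_truncate(0)
```

The trade-off noted above follows directly: every truncation discards live entries along with stale ones, which is what dents the hit rate.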

...but, as you can see from the disk utilization graph above, we're not chewing through disk anymore. I'm assigning this to Tim since he's responsible for the long-term fix, but Domas has addressed the most urgent problem.

It isn't "periodically truncating one"; it is "truncating all of them over an extended period", since the old data will not be referenced anyway.

Turning down priority/severity due to the temporary fix.

The immediate crisis is averted. Will create a separate bug to make sure we've got regular purging going on.
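The follow-up bug points at expiry-driven cleanup rather than wholesale truncation. A minimal sketch of what batched expiry purging could look like, using SQLite for illustration; the objectcache-style `exptime` column and the batching approach are assumptions, not the actual Wikimedia schema or script:

```python
import sqlite3

def purge_expired(conn: sqlite3.Connection, table: str, now: int,
                  batch: int = 100) -> int:
    """Delete rows whose exptime has passed, in small batches to avoid
    holding long locks; return the total number of rows removed."""
    removed = 0
    while True:
        cur = conn.execute(
            f"DELETE FROM {table} WHERE rowid IN "
            f"(SELECT rowid FROM {table} WHERE exptime < ? LIMIT ?)",
            (now, batch),
        )
        conn.commit()
        if cur.rowcount == 0:
            return removed
        removed += cur.rowcount
```

Run against each shard table in turn, this removes only entries that have actually expired, so the hit rate on live entries is unaffected.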