
be able to restart history dump after breakage, from where it was interrupted
Closed, Declined · Public

Description

Dumping the page-meta-history file is the phase that takes the longest; when some external factor causes the dumps to fail (a code push that breaks them, or network/db/power/space/other issues), they currently must be restarted from the beginning. Even when they complete in 2 weeks instead of 6, the odds of something going wrong in that time are quite high. Being able to restart from the point of interruption would let us produce them on a reasonable schedule.

Code available: find the last page id in the file from an interrupted run (works only for bz2 files) by seeking to the end and walking back through the compressed blocks.
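The block-hunting part of that idea can be sketched like this (illustrative only, not the actual dump code): bzip2 blocks each start with a fixed 48-bit magic number, but blocks are bit-packed, so the magic is generally not byte-aligned and has to be searched for at the bit level. This toy version scans a whole buffer; the real tool would seek to the end and scan a window backwards.

```python
import bz2

BLOCK_MAGIC = 0x314159265359  # 48-bit bzip2 block-header magic ("pi" digits)

def block_bit_offsets(data: bytes) -> list[int]:
    """Return the bit offset of every bzip2 block header in data.

    Blocks are bit-packed, so the magic is usually not byte-aligned.
    A false positive inside compressed payload is possible but has
    probability about 2**-48 per bit position. The big-int shifting
    below is O(n^2) and fine for a demo, not for multi-GB dumps.
    """
    big = int.from_bytes(data, "big")
    total_bits = len(data) * 8
    mask = (1 << 48) - 1
    return [bit for bit in range(total_bits - 47)
            if (big >> (total_bits - 48 - bit)) & mask == BLOCK_MAGIC]

# At compresslevel=1 bzip2 uses 100,000-byte blocks, so 250,000 bytes
# of input compress into three blocks; the first block header sits
# immediately after the 4-byte "BZh1" stream header, at bit offset 32.
compressed = bz2.compress(b"wiki " * 50_000, 1)
offsets = block_bit_offsets(compressed)
```

Once the last block boundary is known, decompressing just that tail region yields the last pageID seen before the interruption.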

Code needed: stream this file through a filter that writes out the MediaWiki header, then everything up to but excluding the last pageID, then the MediaWiki footer; this output can be piped to bzip2 to produce an intact bzip2 file.
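A rough sketch of such a filter (a line-oriented toy, assuming one tag per line as the XML dumps emit; the function name and regex are illustrative, not the real code): it copies lines through, buffers each `<page>` until its first `<id>` is seen, drops the possibly-truncated page matching the cutoff id, and closes the stream with the MediaWiki footer.

```python
import re

FOOTER = "</mediawiki>\n"

def filter_truncated_dump(lines, last_page_id):
    """Return dump lines up to but excluding the page whose <id> equals
    last_page_id (the page that may be truncated), plus a closing footer."""
    out = []
    buf = []            # lines of the page currently being buffered
    in_page = False
    for line in lines:
        if "<page>" in line:
            buf = [line]
            in_page = True
            continue
        if in_page:
            buf.append(line)
            m = re.search(r"<id>(\d+)</id>", line)
            if m:
                if int(m.group(1)) == last_page_id:
                    break           # drop the truncated final page
                out.extend(buf)     # earlier pages are complete; keep them
                in_page = False
            continue
        out.append(line)            # header lines and page bodies pass through
    out.append(FOOTER)
    return out
```

The real filter would read from the decompression stream and write to a pipe feeding bzip2, but the cut-before-last-pageID logic is the same.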

We can then run the dump from that pageID to the end, take the two bzip2 files, recombine them, and be done.
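Recombining is cheap because complete bzip2 streams can simply be byte-concatenated: the result is a valid multi-stream .bz2 file, which bunzip2 (and Python's bz2 module) decompress end to end. A minimal sketch, with a hypothetical helper name:

```python
import bz2

def concat_bz2_files(part_paths, dest_path):
    """Byte-concatenate complete bzip2 files into one multi-stream .bz2.

    Each part keeps its own stream header, blocks, and end-of-stream CRC,
    so no recompression or CRC fixup is needed.
    """
    with open(dest_path, "wb") as out:
        for path in part_paths:
            with open(path, "rb") as part:
                while chunk := part.read(1 << 20):  # 1 MiB at a time
                    out.write(chunk)
```

Note this yields multiple bzip2 streams in one file, not one stream with extra blocks appended; that is exactly what sidesteps the cumulative-CRC problem described below.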

Why can't we just find the truncated bzip2 block, toss it, and start appending from there? Because the end of a bzip2 stream carries a cumulative CRC computed over all blocks, which means rereading all the text the minute we want to add blocks at the end.
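For reference, the bzip2 format folds every block's CRC32 into the end-of-stream value with a rotate-and-XOR update (a sketch of the format rule, not code from the dumps):

```python
def combine_block_crc(stream_crc: int, block_crc: int) -> int:
    """Fold one block's CRC32 into the running end-of-stream CRC.

    Per the bzip2 format: rotate the running value left by one bit
    (in 32 bits), then XOR in the block's CRC. Every earlier block
    feeds this value, so a replacement final block can't be appended
    to an existing stream without reprocessing what came before.
    """
    stream_crc = ((stream_crc << 1) | (stream_crc >> 31)) & 0xFFFFFFFF
    return stream_crc ^ block_crc
```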


Version: unspecified
Severity: enhancement

Details

Reference
bz27113

Event Timeline

bzimport raised the priority of this task to Medium. Nov 21 2014, 11:23 PM
bzimport set Reference to bz27113.

Well, this ticket is certainly out of date. While we still need to be able to complete broken history runs, the title and description are now wrong. We produce a pile of checkpoint files for en wiki; if any one or several of them are missing, all we want is to be able to rerun those. That's a very different mechanism. Declining this ticket and opening a new one for that task.