
Fatal error when uploading a file to Commons (UploadStashFileNotFoundException)
Closed, Resolved · Public · PRODUCTION ERROR

Details

Reference
bz36587

Event Timeline


(In reply to comment #41)

Is three minutes sufficient time?

Not really; it should be increased, say to 5 minutes (and more as needed, though at some point it will get kind of unreasonable without more UI feedback).

Are there other things we should do to speed up the concatenation step?

Increasing the chunk size would help somewhat. Perhaps pipelining the chunks would help (though the DB layout does not support that; they must come in order). Disabling multiwrite would speed up chunk storage and final file stashing by 1.5x or so.

For both me (Chromium) and odder (?), the upload of https://archive.org/download/Plan_9_from_Outer_Space_1959/Plan_9_from_Outer_Space_1959.ogv (372 MiB) is failing at the "Upload" step with «Unknown error: "unknown".»

Gah, same here. :-( It's now aborting immediately at the first API request that returns the "queued" result. (Chrome 22.)

jgerber wrote:

If you run it with the console open, what's the last response from the server in the network tab?

Second try: it fails as before, with the last API requests all returning result:Poll, stage:queued (including the final response), until UploadWizard reports the "unknown" error, presumably due to triggering the aforementioned timeout. This is with a 125 MB file.

Not sure it really behaved differently before, will do some more testing.

Whatever is going wrong in the assembly stage, the slowdown doesn't look linear. With a 22 MB file the assembly succeeds almost instantaneously after the first API poll. With a 30 MB file, it's two poll requests. With a 125 MB file, I saw more than 50 polls before it finally timed out.
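
For context on the client behavior being described: after the final chunk is submitted with async=1, the client repeatedly calls the API's checkstatus mode until assembly completes or its own timeout fires. Below is a minimal sketch of such a poll loop in Python with the requests library; the endpoint, interval, and timeout values are illustrative assumptions, not UploadWizard's actual ones.

import time
import requests

API = "https://commons.wikimedia.org/w/api.php"  # illustrative endpoint

def poll_assembly(session: requests.Session, filekey: str, csrf_token: str,
                  timeout_s: float = 300, interval_s: float = 5) -> dict:
    """Poll upload status until the backend finishes assembling the chunks."""
    deadline = time.time() + timeout_s
    upload = {}
    while time.time() < deadline:
        r = session.post(API, data={
            "action": "upload",
            "format": "json",
            "checkstatus": "1",
            "filekey": filekey,
            "token": csrf_token,  # action=upload is a POST module and needs a CSRF token
        })
        upload = r.json().get("upload", {})
        # While the job is queued or assembling, the API answers result=Poll.
        if upload.get("result") != "Poll":
            return upload  # e.g. Success, Warning, or an error payload
        time.sleep(interval_s)
    raise TimeoutError("assembly still '%s' after %ss"
                       % (upload.get("stage"), timeout_s))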

The last uploads I tried all succeeded. Could others following this please try again and see if you can successfully upload >100MB files through Upload Wizard with the chunked upload preference enabled?

Your file was 120 MB; I've tried a 370 MiB video and it failed again (I'm now retrying). Too bad, because it also seemed fast enough, averaging around 300-400 KiB/s and oscillating in the 100-800 KiB/s range.

Trying with a 344M file, I get the good old Unknown error: "internal_api_error_UploadStashFileNotFoundException" again. Note that it doesn't appear to be doing the asynchronous polling any more: the final chunk is uploaded and fails with an error 504 (gateway timeout) response.

It looks like increasing the chunk size to 5 MB may have helped somewhat, but not sufficiently for very large files.

We (Jan/RobLa/Aaron/myself) connected about this earlier today. It looks like part of the problem is preserving the request context (user/IP) in a sane manner when shelling out for asynchronous assembly of the chunks / uploading the file from stash. Jan wants to take a first crack at resolving this w/ Aaron's help. In addition the server-side thumbnail generation for Ogg files currently doesn't scale for large files and needs to be re-implemented using range requests. (Jump in if I got any of that wrong.)

Hopefully we can make some further progress on this in the next couple of weeks.

M8R-udfkkf wrote:

I'm getting this error roughly once every several thousand files (~10-20 MB each) that are chunk-uploaded via the Commons API in 2-3 MB chunks:

{"servedby":"mw1138","error":{"code":"internal_api_error_UploadChunkFileException","info":"Exception Caught: error storing file in '\/tmp\/php2BDowP': backend-fail-internal; local-swift","*":""}}

Looks like something isn't being allocated/locked properly; possibly a rare race condition. It's annoying.
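
For reference, a rough sketch of the client-side loop that produces requests like the one above, in Python with the requests library. This is not the reporter's actual code; the endpoint, 2 MiB chunk size, and error handling are illustrative, and obtaining the CSRF token is elided.

import os
import requests

API = "https://commons.wikimedia.org/w/api.php"  # illustrative endpoint
CHUNK = 2 * 1024 * 1024  # 2 MiB, the low end of the chunk sizes mentioned above

def upload_chunked(session: requests.Session, path: str, csrf_token: str) -> str:
    """Stream a file into the upload stash in chunks; return the stash filekey."""
    size = os.path.getsize(path)
    filekey, offset = None, 0
    with open(path, "rb") as f:
        while offset < size:
            data = {
                "action": "upload",
                "format": "json",
                "filename": os.path.basename(path),
                "filesize": str(size),
                "offset": str(offset),
                "stash": "1",
                "token": csrf_token,
            }
            if filekey:
                data["filekey"] = filekey
            r = session.post(API, data=data,
                             files={"chunk": ("chunk.bin", f.read(CHUNK))}).json()
            if "error" in r:
                # e.g. the internal_api_error_UploadChunkFileException above.
                # Note: naively retrying after a 504 is risky; the chunk may
                # already have been appended, yielding "Invalid chunk offset".
                raise RuntimeError(r["error"])
            upload = r["upload"]
            filekey = upload["filekey"]
            # "Continue" responses report the next expected offset.
            offset = upload.get("offset", offset + CHUNK)
    return filekey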

Am I right that this is mainly waiting for this changeset to be merged, or are there other dependencies at this point?

https://gerrit.wikimedia.org/r/#/c/48940/

Changeset merged, is this fixed now?

(In reply to comment #39)

then upon "retry failed uploads" an api-error-internal_api_error_UploadStashFileNotFoundException error message, but the file was actually uploaded.

This happened again with http://commons.wikimedia.org/wiki/File:Scrooge_1935.ogv uploaded by Beria (300 MB in 12 min).

I'm having mixed success with the latest code. A 459M file seemed to work fine (I didn't go past stage 1). A 491M file I just tried resulted in the following API request sequence:

5MB chunk->ok
5MB chunk->ok
5MB chunk->ok
...
lots of chunks later
...
~500K (final) chunk->Error 504
Retry of ~500K final chunk->API error.

The final API error was:

{"servedby":"mw1194","error":{"code":"stashfailed","info":"Invalid chunk offset"}}

Surfaced to the user as "Internal error: Server failed to store temporary file".

(In reply to comment #57)

No async upload was enabled at that time (it is behind a feature flag). Since all wikis were on wmf11, I deployed the new redis queue aggregator on Thursday, which worked fine. Async uploads were enabled again then. The existing high-priority job loop, set up via puppet config changes, was already in place and appears to work as desired. The new code to fix the IP logging issue was broken by CentralAuth, which caused the upload job to fail. This was fixed in https://gerrit.wikimedia.org/r/#/c/54084/. It can be tested at test2wiki (jobs on testwiki are broken due to srv193 being in pmtpa, so don't use that).

(In reply to comment #58)

No async upload was enabled at that time (it is behind a feature flag).

Obviously meant "disabled".

I don't know whether it was caused by the exact same API error as in this bug, but I just got the above UploadWizard error message when trying to upload a 231 MB file (twice: on Chromium and Firefox):

"Internal error: Server failed to store temporary file."

Clicking "Retry failed uploads" in Chromium resulted in "Unknown error: 'unknown'", but on Firefox it succeeded in completing the upload.

(In reply to comment #60)

Confirmed, this is totally broken again. Why is this broken again?

Fastily: If you can confirm it, providing some basic info would be very welcome (file size, browser, etc.). Thanks!

Certainly. Every few big uploads, I get a generic HTTP 500 error. Also, I'm not sure if it's related, but I also get the occasional error in which the server claims it can't reassemble the chunks. Neither of these errors really occurs when I'm editing on a corporate network with 60+ Mbps upload speeds, but at home, where I average about 5 Mbps up, they do. That said, I suspect something is timing out server-side.

I used a variety of test files, ranging from 152 to 450 MB, using my Java library to upload the files via the MediaWiki API.

When I tested on test2.wikipedia.org, I was able to upload a small file fine. However, a large file (in the 200 MB range; I don't remember the exact size) split into about 400 chunks ended up with me just getting result: poll; stage: queued forever and ever (well, actually I gave up after about two and a half hours of waiting).

I get a generic HTTP 500 error

Just for reference, Wikimedia's 500 errors usually contain debugging information near the bottom (unless they've changed).

Is anything being done to resolve this issue at the moment?

I'd suggest debugging with a 350 MB file but throttling upload speed to ~0.5 Mbps. Each time I did this, it failed without exception.
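
One way to approximate this slow-uplink reproduction without changing networks is to throttle the client itself. Below is a minimal sketch (a hypothetical helper, not part of any existing tool) that caps the average rate at which chunks are handed to an uploader; feeding it into a chunked-upload loop simulates a slow connection. For ~0.5 Mbps, rate_bps would be about 62,500 bytes/second.

import time

def throttled_chunks(f, chunk_size: int, rate_bps: float):
    """Yield file chunks no faster than rate_bps (bytes/second) on average."""
    while True:
        start = time.time()
        chunk = f.read(chunk_size)
        if not chunk:
            return
        yield chunk
        # Sleep long enough that this chunk took at least len/rate seconds.
        min_duration = len(chunk) / rate_bps
        elapsed = time.time() - start
        if elapsed < min_duration:
            time.sleep(min_duration - elapsed)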

I used a variety of test files, ranging from 152 to 450 MB, using my Java library to upload the files via the MediaWiki API.

Is your Java library using the async option when uploading these files?

What stage does the 500 error usually occur at? (While uploading a chunk, at some point during the "assembling" stage, or at some point during the "publish" stage? Or does it vary?)

(In reply to comment #66)

I believe we are using the async option when uploading.

The 500 error typically occurs at the publishing stage. I've had similar, but infrequent, 500 errors at the assembling stage as well, but I'm not sure how related this is.

Now that we have increased the UploadWizard limit to 1 GB, the frequency of this error will probably increase. The last report I read was about two consecutive uploads of an 800 MB video (with Firefox and Chrome), which both failed with a "stasherror": https://bugzilla.wikimedia.org/show_bug.cgi?id=52593#c9

New update: it looks like big files which "failed to upload" are visible at [[Special:UploadStash]]. I'm unable to download & verify the contents of those files, however, because the system "Cannot serve a file larger than 1048576 bytes." Given this, it's hard to say what kind of issue this is (e.g. maybe the uploaded file is corrupt, i.e. the file was not assembled properly server-side?)

(In reply to comment #70)

Yes, we currently don't let people download things that are in the upload "stash" if they are bigger than 1 MB. If it's of interest, the reason given in the code for this is:

// Since we are directly writing the file to STDOUT,
// we should not be reading in really big files and serving them out.
//
// We also don't want people using this as a file drop, even if they
// share credentials.
//
// This service is really for thumbnails and other such previews while
// uploading.

You should be able to verify whether the upload worked by requesting a thumbnail that would be smaller than 1 MB. If it was a JPEG file with a stash name of 11oedl0sn7e4.aggjsr.1.jpg, then a URL of Special:UploadStash/thumb/11oedl0sn7e4.aggjsr.1.jpg/120px-11oedl0sn7e4.aggjsr.1.jpg should work. If it's a video file named 11oedl0sn7e4.aggjsr.1.webm, then Special:UploadStash/thumb/11oedl0sn7e4.aggjsr.1.webm/100px--11oedl0sn7e4.aggjsr.1.webm.jpg would get you a thumbnail if the file is not corrupt (I think; I haven't tested that for a video).

Given this, it's hard to say what kind of issue this is (e.g. maybe the uploaded file is corrupt, i.e. the file was not assembled properly server-side?)

I wonder if some sort of timeout/race condition happened with the screwy way we store data in the session, and maybe the file was uploaded fine but the publish step (i.e. the step moving the file from stash to actually on-wiki) never really happened due to a timeout. If that was the case, it may be possible to do a further API request after the fact to finish the upload.

Meh, it looks like the individual chunks get listed too, so it's hard to tell what that means.

Also, it looks like the thumbnailing infrastructure around stashed uploads is totally broken on WMF wikis. Presumably it was forgotten about in the Swift migration(?) Not that surprising, since I'm not sure if anyone…

Because Special:Upload is kind of useless... I made some (very hacky) JS that adds some additional links: a (broken) link to a thumbnail, a link to metadata, and a publish link to take a file out of the stash and onto the wiki.

In particular, the metadata link includes the file size in bytes, which you can use to verify that all the parts of the file made it. If you want to be more paranoid, it also returns an SHA-1 sum of the file, so you can be sure it's really the right file on the server.

If that matches up, try the publish link and see what happens...

Anyhow, to sum up, add
importScript( 'User:Bawolff/stash.js' );
to [[commons:Special:MyPage/common.js]], and you should have the extra links on [[commons:Special:UploadStash]], which you can use to verify what file is in the stash.
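
To do the comparison suggested above, compute the local file's SHA-1 and check it against what the metadata link reports. A small sketch; note that MediaWiki stores SHA-1 digests internally in a zero-padded base-36 form, so both renderings are shown (which form the metadata link reports is an assumption to check):

import hashlib

def sha1_of(path: str, bufsize: int = 1 << 20) -> str:
    """Hex SHA-1 of a local file, read in 1 MiB buffers."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while buf := f.read(bufsize):
            h.update(buf)
    return h.hexdigest()

def to_base36(hexdigest: str) -> str:
    """MediaWiki-style base-36 rendering of the digest (assumed 31 chars, zero-padded)."""
    n = int(hexdigest, 16)
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = ""
    while n:
        n, rem = divmod(n, 36)
        out = digits[rem] + out
    return out.rjust(31, "0")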

You should be able to verify if the upload worked by requesting a thumbnail that would be smaller than 1 MB. If it was a jpeg file, with a stash name of [...]

Or you can simply try

https://commons.wikimedia.org/wiki/Special:UploadStash?withJS=MediaWiki:EnhancedStash.js

(In reply to comment #72)

importScript( 'User:Bawolff/stash.js' );

Ha! Didn't notice that. I'd always wanted to write something like that, and now we have two of them.

(In reply to comment #71)

I think, haven't tested that for a video

Video works *but* the generated "thumbnail" (for me https://commons.wikimedia.org/wiki/Special:UploadStash/thumb/11vmdxqgjy9o.2239xy.1173692.webm/120px--11vmdxqgjy9o.2239xy.1173692.webm.jpg) is at full video size (here 1920x1080 px).

(In reply to comment #74)

Cool. Yours is about a billion times better than my hack.

Interesting. When I tried I was getting squid 503 errors all over the place (both for videos and normal images)

(In reply to comment #67)

I believe we are using the async option when uploading.

The 500 error typically occurs at the publishing stage. I've had similar, but infrequent, 500 errors at the assembling stage as well, but I'm not sure how related this is.

I'd like to clarify this a bit. Your main issue is a 500 that happens at the publishing stage, is that correct? I think that this ticket has actually talked about several different bugs over time, which makes things more confusing than they need to be. I'd like to treat the assembly-stage errors separately; I'm more interested in the one that's causing you issues the most frequently.

Are there any more specific errors in the header or body of the 500 response?

As an aside, splitting comment 64 to bug 59917

Gilles: I'm resetting the assignee for now. Should the priority be lowered as well? (There hasn't been any movement or communication, in either direction, since January.)

Fastily, do you have a reply to Gilles's question below?

Gilles, you asked (in Gilles Dubuc's comment #76):

I'd like to clarify this a bit. Your main issue is a 500 that happens at the publishing stage, is that correct? I think that this ticket has actually talked about several different bugs over time, which makes things more confusing than they need to be. I'd like to treat the assembly-stage errors separately; I'm more interested in the one that's causing you issues the most frequently.

Are there any more specific errors in the header or body of the 500 response?

Using bigChunkedUpload.js to upload a new version (345,142 KB) of https://commons.wikimedia.org/wiki/File:Clusiodes_-_2014-07-06kl.AVI.webm I got the message: FAILED: {"servedby":"mw1190","error":{"code":"stasherror","info":"UploadStashFileNotFoundException: key '12fc8few9krk.lhij8x.957461.webm' not found in stash"}}. This error occurred after uploading 82 of 85 chunks. I have to use a 384 kbit/s connection, so bigger uploads need several hours.

Whatever "bigChunkedUpload.js" is, this bug report is about UploadWizard instead...

(In reply to Andre Klapper from comment #80)

Whatever "bigChunkedUpload.js" is, this bug report is about UploadWizard
instead...

This bug is about an issue with chunked uploading and thus belongs to either Wikimedia or MediaWiki file management.

bigChunkedUpload.js is a standard-compliant script written by me, and the error message is what it got back from the API.

(In reply to Sisa from comment #79)
Sisa, do you remember

  1. how long it took uploading the 82 chunks
  2. when you were attempting to upload (date+time+timezone or just in UTC)

Did you retry?

(In reply to Rainer Rillke @commons.wikimedia from comment #82)

Sorry, I cannot answer your questions exactly. I started the upload yesterday at about 17:00 here in Germany (UTC+2), and I went to bed at about 1:30 this morning. As far as I remember, about 70 chunks had been uploaded (without any error) by that time. I will try again tonight...

(In reply to Rainer Rillke @commons.wikimedia from comment #82)

The retry also ended unsuccessfully. I started it at 0:43 (UTC+2) and all chunks were uploaded (87 of 87, chunk size: 4096 KiB, duration: 36558 s). However, the server-side rebuilding of the new file is hanging ("44552: finalize/87> Still waiting for server to rebuild uploaded file" and so on...).

(In reply to Sisa from comment #84)
If this particular file matters to you, you can try publishing it from your upload stash, if it's still there: https://commons.wikimedia.org/w/index.php?title=Special:UploadStash&withJS=MediaWiki:EnhancedStash.js

I notice that files are removed from stash quite frequently now ... could this cause any harm?

How can I do this? Opening the file in a new tab of my browser (SeaMonkey 2.26.1) brings up the message "Internal Server Error: Cannot serve a file larger than 1048576 bytes."

(In reply to Sisa from comment #86)
Is there a "publish" button? Try that. If it doesn't let you use the desired destination file name, we can move it later to where it should go.

See also T200820#4826332. I can't publish the files either; I get the same API error message.

I tried again without "stash and async" (as recommended at https://commons.wikimedia.org/wiki/User_talk:Rillke/bigChunkedUpload.js#Troubleshooting), and I got the following:

03054: 287/287> Server error 503 after uploading chunk: 
Response: <!DOCTYPE html>
<html lang="en" dir="ltr">
<meta charset="utf-8">
<title>Wikimedia Error</title>
[...]
03054: 287/287> upload in progress Upload: 100%
03078: FAILED: internal_api_error_DBQueryError: [XBZngQpAAEwAACGSPNYAAACO] Caught exception of type Wikimedia\Rdbms\DBQueryError

Is this the same issue?
File in stash is 1698l1a6s3kw.c4ox7h.1.pdf

New tasks for new bugs, please. You don't get an UploadStashFileNotFoundException so it's not the same error. (It's a lock wait timeout on the uploadstash row which suggests maybe your client is doing something weird.)

OK, see T212101

Krinkle renamed this task from "Chunked upload fails with internal_api_error_UploadStashFileNotFoundException" to "Fatal error when uploading a file to Commons (UploadStashFileNotFoundException)". (Feb 12 2019, 8:35 PM)
Krinkle updated the task description. (Show Details)
Krinkle subscribed.

Recent sample from Logstash:

Req ID: XGGYdgpAICEAAF97@JEAAAAK
Req URL: https://commons.m.wikimedia.org/wiki/Special:Upload (HTTP Method: POST)
Req Date: 2019-02-11

UploadStashFileNotFoundException: Key "###.###.#######.jpg" not found in stash

#0 /srv/mediawiki/php-1.33.0-wmf.16/includes/upload/UploadStash.php(183): UploadStash->getFile(string)
#1 /srv/mediawiki/php-1.33.0-wmf.16/includes/upload/UploadFromStash.php(102): UploadStash->getMetadata(string)
#2 /srv/mediawiki/php-1.33.0-wmf.16/includes/upload/UploadFromStash.php(128): UploadFromStash->initialize(string, string)
#3 /srv/mediawiki/php-1.33.0-wmf.16/includes/upload/UploadBase.php(192): UploadFromStash->initializeFromRequest(WebRequest)
#4 /srv/mediawiki/php-1.33.0-wmf.16/includes/specials/SpecialUpload.php(97): UploadBase::createFromRequest(WebRequest)
#5 /srv/mediawiki/php-1.33.0-wmf.16/includes/specials/SpecialUpload.php(192): SpecialUpload->loadRequest()
#6 /srv/mediawiki/php-1.33.0-wmf.16/includes/specialpage/SpecialPage.php(569): SpecialUpload->execute(NULL)
#7 /srv/mediawiki/php-1.33.0-wmf.16/includes/specialpage/SpecialPageFactory.php(558): SpecialPage->run(NULL)
#8 /srv/mediawiki/php-1.33.0-wmf.16/includes/MediaWiki.php(288): MediaWiki\Special\SpecialPageFactory->executePath(Title, RequestContext)
#9 /srv/mediawiki/php-1.33.0-wmf.16/includes/MediaWiki.php(862): MediaWiki->performRequest()
#10 /srv/mediawiki/php-1.33.0-wmf.16/includes/MediaWiki.php(517): MediaWiki->main()
#11 /srv/mediawiki/php-1.33.0-wmf.16/index.php(42): MediaWiki->run()

(Still seen on 1.34.0-wmf.19)

mmodell changed the subtype of this task from "Task" to "Production Error". (Aug 28 2019, 11:12 PM)

This is a ticket with a long history...
The current occurrences of UploadStashFileNotFoundException (not many anymore) are no longer those originally described: nearly all seem to come from POST Special:Upload requests, which all appear to be resubmissions of an earlier failure (in which case the file got stashed, a warning was displayed, and the stashed file is re-used when the form is resubmitted with the other data fixed).

The exception has become very rare (2-5 per day across all wikis), and there's not much detail to go on.
I've tried to reason my way through the code, but can't find anything amiss (though I can't rule it out completely; the exceptions are there...).

My least implausible theories for actual failures are:

  • the DB insert in UploadStash::stashFile fails silently and the record is never actually inserted (unlikely; that would indicate an issue upstream)
  • the DB insert is rolled back after rendering the upload form (which includes the insertid) for some reason (though I can't find much in the logs to support this)

However, the stashed files could also have gone missing because they got cleared, either manually via Special:UploadStash, or via automated cleanupUploadStash.php script runs.
If I were a betting man, I'd put my money on this last scenario: I suspect Commons receives significantly more uploads than all other wikis, yet it's underrepresented in terms of UploadStashFileNotFoundException exceptions. It is also the only wiki where cleanupUploadStash.php cleans out files older than 48 hours rather than the default 6 hours on other wikis.

I propose closing this task as resolved: there have not been any recent user reports of this occurring unexpectedly AFAICT.
The exceptions have become rare and have plausible explanations (there are patterns in the exceptions that look like odd user-specific behavior, e.g. not coming back to a failed upload for hours, rather than a systemic issue).

matthiasmullie claimed this task.

Closing per rationale in above comment.

Re-opening, as this is tracked in our mediawiki-new-errors dashboard and is still logging as an exception as of 2021-09-14:

Stack trace:

from /srv/mediawiki/php-1.37.0-wmf.21/includes/upload/UploadStash.php(131)
#0 /srv/mediawiki/php-1.37.0-wmf.21/includes/upload/UploadStash.php(173): UploadStash->getFile(string)
#1 /srv/mediawiki/php-1.37.0-wmf.21/includes/upload/UploadFromStash.php(102): UploadStash->getMetadata(string)
#2 /srv/mediawiki/php-1.37.0-wmf.21/includes/upload/UploadFromStash.php(128): UploadFromStash->initialize(string, string)
#3 /srv/mediawiki/php-1.37.0-wmf.21/includes/upload/UploadBase.php(222): UploadFromStash->initializeFromRequest(WebRequest)
#4 /srv/mediawiki/php-1.37.0-wmf.21/includes/specials/SpecialUpload.php(127): UploadBase::createFromRequest(WebRequest)
#5 /srv/mediawiki/php-1.37.0-wmf.21/includes/specials/SpecialUpload.php(231): SpecialUpload->loadRequest()
#6 /srv/mediawiki/php-1.37.0-wmf.21/includes/specialpage/SpecialPage.php(646): SpecialUpload->execute(NULL)
#7 /srv/mediawiki/php-1.37.0-wmf.21/includes/specialpage/SpecialPageFactory.php(1366): SpecialPage->run(NULL)
#8 /srv/mediawiki/php-1.37.0-wmf.21/includes/MediaWiki.php(314): MediaWiki\SpecialPage\SpecialPageFactory->executePath(string, RequestContext)
#9 /srv/mediawiki/php-1.37.0-wmf.21/includes/MediaWiki.php(925): MediaWiki->performRequest()
#10 /srv/mediawiki/php-1.37.0-wmf.21/includes/MediaWiki.php(559): MediaWiki->main()
#11 /srv/mediawiki/php-1.37.0-wmf.21/index.php(53): MediaWiki->run()
#12 /srv/mediawiki/php-1.37.0-wmf.21/index.php(46): wfIndexMain()
#13 /srv/mediawiki/w/index.php(3): require(string)
#14 {main}

It's fine to have this as a low priority, but RelEng uses this task to track errors that we see in Logstash, and if this task is closed we'll probably just accidentally open a new one.

If it's still happening but expected, it would be nice to catch the exception and either ignore it or raise a user error as appropriate; then RelEng won't pester you about it <3

thcipriani lowered the priority of this task from High to Low. (Sep 15 2021, 6:17 PM)

Setting to low per above.

Re-closing and continuing at T204827 per Matthias.