
Key performance indicator: Gerrit review queue
Closed, ResolvedPublic

Description

We need some sort of statistics for the review backlog.

CodeReview had some basic stats that gave these sorts of numbers. [1]

The simplest useful statistic would be the number of unmerged (and not abandoned) revisions per Git repository.

[1] https://www.mediawiki.org/wiki/Special:Code/MediaWiki/stats


Version: unspecified
Severity: enhancement
See Also:
https://bugzilla.wikimedia.org/show_bug.cgi?id=53485

Details

Reference
bz37463

Event Timeline

bzimport raised the priority of this task to High. Nov 22 2014, 12:25 AM
bzimport set Reference to bz37463.

Number of open changes per project:

import os, json, operator

username = 'krenair'

# Fetch all open changes from Gerrit over SSH; the query prints one JSON object per line.
lines = os.popen('ssh ' + username + '@gerrit.wikimedia.org -p 29418 gerrit query status:open --format json').read().splitlines()

projects = {}
for line in lines:
    data = json.loads(line)
    # The last line of output is a stats object with no 'project' field; skip it.
    if 'project' not in data:
        continue
    elif data['project'] not in projects:
        projects[data['project']] = 1
    else:
        projects[data['project']] += 1

# Print projects sorted by their number of open (unmerged, not abandoned) changes.
for project, unmergedcount in sorted(projects.iteritems(), key=operator.itemgetter(1), reverse=True):
    print project, unmergedcount
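For comparison, the same count can also be obtained through Gerrit's REST API instead of the SSH query interface. A minimal Python 3 sketch (the /changes/ endpoint and the ")]}'" security prefix are standard Gerrit REST behaviour; the limit of 500 and the lack of pagination handling are simplifications):

import json
import urllib.request
from collections import Counter

# Fetch open changes over the REST API; Gerrit prepends ")]}'" to all JSON responses.
url = 'https://gerrit.wikimedia.org/r/changes/?q=status:open&n=500'
raw = urllib.request.urlopen(url).read().decode('utf-8')
changes = json.loads(raw.split('\n', 1)[1])  # drop the ")]}'" line

# Count open changes per project; only the first page of results is counted here.
# A complete count would need to follow the _more_changes flag with an S= offset.
counts = Counter(change['project'] for change in changes)
for project, open_count in counts.most_common():
    print(project, open_count)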

(In reply to comment #4)

How about Gitblit?

For the record: depends on bug 38383.

We are planning to have Gerrit statistics soon-ish. Álvaro del Castillo (CCed) knows the details, and if you want you can assign this report to him. See

http://www.mediawiki.org/wiki/Community_metrics#Metrics_dashboard

http://blog.bitergia.com/2013/06/06/first-steps-mining-gerrit/

See the first iteration:

http://korma.wmflabs.org/browser/scr.html
http://korma.wmflabs.org/browser/scr-repos.html

https://www.mediawiki.org/wiki/Gerrit/Navigation#Reports has also been around for some time. Hopefully Korma will match that data at some point so that we have everything in one place.

PS: could this report be assigned to acs@bitergia in order to reflect the current situation?

Time to work on Gerrit review queue metrics.

See https://www.mediawiki.org/wiki/Community_metrics#Gerrit_review_queue

These are the questions we want to answer in this iteration:

How long is the Gerrit review queue over time? How long does it take to review code contributions? Are we improving? Are Wikimedia's staff and non-staff contributions processed equally? Who is doing better and worse?

  • Number of Wikimedia staff / non-Wikimedia-staff commits reviewed in <2 days, <1 week, <1 month, <3 months, >3 months or unreviewed (values may be fine tuned based on actual data).
  • Same as above, per project. Ranked from slowest to fastest.

How to calculate the average time? A few old open changes might distort the picture. What if we compare the pure average of 100% of the commits with the average after removing the 10% slowest and 10% fastest?
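As an illustration of that comparison, a minimal sketch (review_times is a hypothetical list of days-to-review values; the 10% trim on each side follows the proposal above):

review_times = [0.1, 0.5, 1, 2, 3, 4, 6, 9, 30, 200]  # hypothetical days-to-review values

def trimmed_average(values, trim_fraction=0.10):
    # Drop the slowest and fastest trim_fraction of reviews before averaging.
    ordered = sorted(values)
    cut = int(len(ordered) * trim_fraction)
    kept = ordered[cut:len(ordered) - cut] or ordered
    return sum(kept) / len(kept)

pure_average = sum(review_times) / len(review_times)   # 25.56, dragged up by the 200-day outlier
print(pure_average, trimmed_average(review_times))     # the trimmed value is far lower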

One chart with extra data in a table? What kind of chart?

Great! Next week time to start working on it!

Ok, we have some results now. The Gerrit backend for Bicho has been improved in order to get the real MERGED and ABANDONED event dates.

We have added some new visualizations to the Gerrit panel in the Korma dashboard:

http://korma.wmflabs.org/browser/scr.html

The last one is the accumulated number of pending reviews over time.

We need to find out why so many reviews were merged in 2013-04. It seems that the merged date was not correct before this date. So probably, for this kind of analysis, we only have good data after 2013-04.

In order to get the tables for this data using just SQL:

select new - closed as increment, new, closed, year_new, month_new
FROM
  (select count(issue_id) as new, YEAR(changed_on) as year_new, MONTH(changed_on) as month_new
   FROM changes WHERE (new_value = 'new')
   GROUP BY YEAR(changed_on), MONTH(changed_on)) t,
  (select count(issue_id) as closed, YEAR(changed_on) as year_closed, MONTH(changed_on) as month_closed
   FROM changes WHERE (new_value = 'merged' OR new_value = 'abandoned')
   GROUP BY YEAR(changed_on), MONTH(changed_on)) t1
WHERE year_new = year_closed and month_new = month_closed;
+-----------+------+--------+----------+-----------+
| increment | new  | closed | year_new | month_new |
+-----------+------+--------+----------+-----------+
|       184 |  206 |     22 |     2011 |         9 |
|       339 |  367 |     28 |     2011 |        10 |
|       596 |  619 |     23 |     2011 |        11 |
|       495 |  543 |     48 |     2011 |        12 |
|       364 |  384 |     20 |     2012 |         1 |
|       661 |  699 |     38 |     2012 |         2 |
|       940 | 1058 |    118 |     2012 |         3 |
|      1795 | 1954 |    159 |     2012 |         4 |
|      2681 | 2812 |    131 |     2012 |         5 |
|      3036 | 3192 |    156 |     2012 |         6 |
|      2459 | 2611 |    152 |     2012 |         7 |
|      3519 | 3694 |    175 |     2012 |         8 |
|      2658 | 2797 |    139 |     2012 |         9 |
|      3888 | 4098 |    210 |     2012 |        10 |
|      3891 | 4121 |    230 |     2012 |        11 |
|      3778 | 3961 |    183 |     2012 |        12 |
|      3887 | 4101 |    214 |     2013 |         1 |
|      3202 | 3393 |    191 |     2013 |         2 |
|    -37432 | 4118 |  41550 |     2013 |         3 |
|       -21 | 3922 |   3943 |     2013 |         4 |
|       150 | 3846 |   3696 |     2013 |         5 |
|        22 | 3919 |   3897 |     2013 |         6 |
|        31 | 4321 |   4290 |     2013 |         7 |
|       103 | 3785 |   3682 |     2013 |         8 |
|       -12 | 3375 |   3387 |     2013 |         9 |
|        75 | 4394 |   4319 |     2013 |        10 |
|        70 | 3502 |   3432 |     2013 |        11 |
+-----------+------+--------+----------+-----------+

Is this the kind of information you are searching for?

The other important analysis is the backlog: the distribution in time of the pending reviews (when they were opened). We have already done that, but it is not yet included in the Korma dashboard for Gerrit.

We can build this analysis per repository, company and person!

The next goal is to work on the review time analysis.

Added info about time to review to the gerrit panel in korma:

http://korma.wmflabs.org/browser/scr.html

The average time to review is: 3.7333.

You can also see the evolution over time.

The data is from 2013-05 to now because, before that date, there are problems with the change dates in the Gerrit data.

Summary of a chat with Alvaro reviewing https://www.mediawiki.org/wiki/Community_metrics#Gerrit_review_queue :

  • How long is the Gerrit review queue over time?

Answered in the "Pending" graph at http://korma.wmflabs.org/browser/scr.html -- the data is wrong as reported in Bug 58349.

  • How long does it take to review code contributions? Are we improving?

Answered in the "Average time to review" graph at http://korma.wmflabs.org/browser/scr.html . The data might also be wrong based on Bug 58349. Also, currently the pure average is computed, but we want to calculate the median instead. This will help avoid the distortion caused by super-fast intra-team reviews and reviews stalled for months.

  • Are Wikimedia's staff and non-staff contributions processed equally?

Pending. We need to cross-reference the "Average time to review" data with the affiliation data from http://korma.wmflabs.org/browser/contributors.html -- bug 58424

  • Who is doing better and worse?

Pending. We need to display the "Average time to review" by repository. -- bug 58426

We didn't have time to discuss how to visualize more details about the review queues of each repository beyond the median -- bug 58428

As per bug 53485 comment 35 and 39, let's not count self-merges (reviews merged by the author of the patch) in any "average time to review" metrics.

From the above and bug 57038, it's not clear to me whether you're currently relying on the database or the API. Anyway.

(In reply to comment #13)

As per bug 53485 comment 35 and 39, let's not count self-merges (reviews merged by the author of the patch) in any "average time to review" metrics.

PleaseStand noted that we now have better searches by label: https://gerrit.wikimedia.org/r/Documentation/user-search.html#labels (2.7 was https://gerrit-documentation.storage.googleapis.com/Documentation/2.7/user-search.html ).

For instance.

It doesn't always work reliably if you don't specify a project. It should work in the same way for the API, e.g. https://gerrit.wikimedia.org/r/changes/?q=-ownerin:wmf+label:Code-Review=2,wmf+project:mediawiki/core&n=25 (https://gerrit.wikimedia.org/r/Documentation/rest-api-changes.html#list-changes).

Status update (Nemo, I will leave your comment to Alvaro since I don't know better)

We have some graphs here:

http://korma.wmflabs.org/browser/gerrit_review_queue.html

"How long is the Gerrit review queue over time?" and "Backlog of pending reviews (when reviews were submitted)" could be put together, both starting when Gerrit was set up (march 2012?).

Maybe they could even be combined in a single graph? (optional)

Small known bug: all graphs should have the Y axis starting at 0.

"How long does it take to review code contributions?" and "Are Wikimedia's staff and non-staff contributions processed equally?" also belong to each other. At the end the second question equals "How long does it take to review code contributions... from WMF, independents, WMDE, Wikia...?"

We are still discussing what to count exactly here. We started the discussion in our weekly meeting and we will continue here. The last assumption (which is not reflected in the graphs now) is:

For each measurement timespot, median of the ages of all reviews still open at
that timespot, understanding as "age" the time period since the review
was open to the measurement timespot.

Note that a review can have multiple patchsets. I'm proposing to count the age of the last patchset instead of the age since creation of the review. What we are most interested in seeing is lack of activity: languishing reviews. Counting the age since creation is also useful, as a secondary value (a second graph?).
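A minimal sketch of the rule quoted above, assuming the opened/closed timestamps for each review have already been extracted (whichever "opened" date is chosen: review creation or last patchset); the dates are hypothetical and simply mirror the Babel example discussed later in this thread:

import datetime
from statistics import median

def median_age_days(reviews, timespot):
    # Median age, in days, of the reviews still open at the given timespot;
    # "age" is the time from when the review was opened to the timespot.
    ages = [(timespot - opened).days
            for opened, closed in reviews
            if opened <= timespot and (closed is None or closed > timespot)]
    return median(ages) if ages else 0

# Hypothetical data: a single review opened on 2013-05-07 and never closed.
reviews = [(datetime.datetime(2013, 5, 7), None)]
print(median_age_days(reviews, datetime.datetime(2013, 5, 31)))  # 24
print(median_age_days(reviews, datetime.datetime(2014, 2, 28)))  # 297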

"Who is doing better and worse?" should be sorted by median age of open reviews, starting by those repos with oldest patches awaiting review.


In our discussion there were more ideas about data we could count. I'm documenting them here, although this might be relegated to a second iteration (since we have other KPIs that we want to complete in the following weeks):

  • median time between uploading the changeset and receiving a first review (a sketch of this computation follows the list)
  • average of iterations vs time since creation (some changesets may take long to review but they were nailed in a couple of revisions as soon as someone answered, while others required many iterations)
  • normalization of data for big projects, calculating number of revisions within a period compared with the previous period.
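A minimal sketch of the first idea, the median time to first review (the change records and field names here are hypothetical placeholders, not Bicho's actual schema; bot reviews are assumed to be filtered out beforehand):

import datetime
from statistics import median

def median_days_to_first_review(changes):
    # Median days between a changeset's first upload and its first human review event.
    deltas = []
    for change in changes:
        uploaded = change['uploaded']        # datetime of the first patchset upload
        reviews = sorted(change['reviews'])  # datetimes of review events, bots excluded
        if reviews:
            deltas.append((reviews[0] - uploaded).total_seconds() / 86400)
    return median(deltas) if deltas else None

example = [{'uploaded': datetime.datetime(2014, 3, 1, 10, 0),
            'reviews': [datetime.datetime(2014, 3, 3, 9, 0)]}]
print(median_days_to_first_review(example))  # roughly 1.96 days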

The graphs could use a set of legends or a reference manual. For instance, I have no idea at all what "Reviews submitted" and "Reviews new" could mean.

Yes, labels and descriptions will be reviewed before closing this task. The "?" icons next to the tables contain some information.

Improvements to the text of the page can be proposed directly at https://github.com/Bitergia/mediawiki-dashboard/blob/master/browser/gerrit_review_queue.html (Alvaro, where are the "?" legends located?)

I agree "Reviews submitted" and "Reviews new" are unclear labels.

We have a new graph at http://korma.wmflabs.org/browser/gerrit_review_queue.html labeled

"How long does it take to review code contributions?" "For pending reviews" (final labels pending)

It calculates the average and median age of the changes open at a given time. Currently that age is based on the date the changesets were initially uploaded, but I requested to calculate the age based on the date of the last patchset uploaded, in order to reflect more accurately the age of changesets that are stuck.

The list of repos at the end of the page will be sorted by this median age in the past month, in order to get at the top those repos that currently accumulate the oldest stuck patchsets.

Update:

The list of repos is now sorted by the median age of the changes open at a given time (month), and the evolution in time of this metric is shown.

This metric has also been added for affiliations.

Next step is to try the change: to calculate the age based on the date of the last patchset uploaded.

About "to calculate the age based on the date of the last patchset uploaded" analyzing bicho gerrit database, the upload event is not stored, but the actions against patchsets (Code-Review, Verified).

So we can use now the last "Code-Review" event in the review. And in a next iteration, improve gerrit backend for bicho so all upload changes are recorded.

All patchsets receives some kind of review so if we are not reviewing the time to review (right now the goal is to detect patchsets not reviewed, not time to review) I think this is a reasonable approach.

What do you think?

(In reply to Alvaro from comment #20)

About "to calculate the age based on the date of the last patchset uploaded"
analyzing bicho gerrit database, the upload event is not stored, but the
actions against patchsets (Code-Review, Verified).

So we can use now the last "Code-Review" event in the review. And in a next
iteration, improve gerrit backend for bicho so all upload changes are
recorded.

All patchsets receives some kind of review so if we are not reviewing the
time to review (right now the goal is to detect patchsets not reviewed, not
time to review) I think this is a reasonable approach.

What do you think?

In order to better understand the expected results, I have added another line to the accumulated pending time graph showing the time-from-last-update metric. The time from last upload will be between these two lines.

http://korma.wmflabs.org/browser/gerrit_review_queue.html

VERY COOL! These new graphs are very useful.

Calculating "last code review" is not bad, and in fact it a useful measure per se. Jenkins reviews every new patchset within seconds, so at least we know that no patchset will be missed because of lack of reviews. Let's go with is, without removing the current graphs based on age of the first upload because they are also useful. This is good enough for now, and maybe all we need.

Proposed steps:

  • in "Backlog of pending reviews", add the total amount of changesets submitted every month (optional, we can leave this for later)
  • the first graph at "Are Wikimedia's staff..." can be removed; it is a bit confusing and the second one explains better what we are looking for
  • in "Are Wikimedia's staff...", add "last code review median age at a given time"
  • in "Who is doing better and worse?", add a line for "last code review median age at a given time", without removing review_time_pending_days_acc_median, and sort repos by "last code review median age at a given time"
  • remember the fix to make all Y axis start with zero
  • also please have one graph per row (except for the list of repos, of course)
  • and a small detail: currently the page has some horizontal scroll, and some graphs have no margin to the left, which I guess could be solved by wrapping everything in a <div> with width:95% or something.

After this I believe your part will be ready, and then we can work on better descriptions.

I note that all graphs are still completely meaningless because they still don't exclude self merges.

Babel (http://korma.wmflabs.org/browser/repository.html?repository=gerrit.wikimedia.org_mediawiki_extensions_Babel), which for some reason is ranked second in http://korma.wmflabs.org/browser/gerrit_review_queue.html despite having only one open change (about 6 months old, but -2'd), is depicted as some sort of heaven where in every single month the same amount of commits are opened and merged. The reality is that it's a very quiet extension with only 9 changes in the last year, the rest being localisation updates. https://gerrit.wikimedia.org/r/#/q/project:mediawiki/extensions/Babel+-owner:l10n-bot,n,z

At the very least, l10n-bot should have its own line in graphs; it doesn't make sense to mix it with "Independent" or "Unknown", whatever those lines in "Are we improving?" mean.

(In reply to Nemo from comment #23)

I note that all graphs are still completely meaningless because they still
don't exclude self merges.

Could you point to a Gerrit changeset with an example of a self merge, please? I'm still not sure I understand what you mean.

Babel (http://korma.wmflabs.org/browser/repository.html?repository=gerrit.wikimedia.org_mediawiki_extensions_Babel), which for some reason is ranked second in http://korma.wmflabs.org/browser/gerrit_review_queue.html despite having only one open change (about 6 months old, but -2'd), is depicted as some sort of heaven where in every single month the same amount of commits are opened and merged. The reality is that it's a very quiet extension with only 9 changes in the last year, the rest being localisation updates. https://gerrit.wikimedia.org/r/#/q/project:mediawiki/extensions/Babel+-owner:l10n-bot,n,z

That list aims to be a ranking of shame. The projects at the top are the ones with the oldest median age of unresolved changesets (now counted by date of first upload, soon by date of last review).

Looking at https://gerrit.wikimedia.org/r/#/q/project:mediawiki/extensions/Babel+-owner:l10n-bot,n,z , the fact that Babel appears there shows that there might be a bug somewhere. That table shows a couple of changesets with "Label not applicable", but clicking them one can see that they have been abandoned. Does Babel have any review open currently? It looks like it does not.

Alvaro, can you explain the results of Babel based on the data you have gathered from Gerrit?

At the very least, l10n-bot should have its own line in graphs; it doesn't make sense to mix it with "Independent" or "Unknown", whatever those lines in "Are we improving?" mean.

We removed l10n-bot from http://korma.wmflabs.org/browser/who_contributes_code.html but I'm not sure whether it is being counted here.

(In reply to Quim Gil from comment #24)

At the very least, l10n-bot should have its own line in graphs; it doesn't make sense to mix it with "Independent" or "Unknown", whatever those lines in "Are we improving?" mean.

We removed l10n-bot from http://korma.wmflabs.org/browser/who_contributes_code.html but I'm not sure whether it is being counted here.

It would be nice to have a list of bots to be excluded from the calculations. From what I read in this ticket, it is clear that l10n-bot should be one of them. Are there any more?

In addition, I understand that we should remove them from *all* the charts related to performance or time-to-x, not only from the lists of people. Am I right? I guess right now they are only excluded from the lists, but not from this kind of chart. Alvaro can surely clarify this.

(In reply to Quim Gil from comment #24)

(In reply to Nemo from comment #23)

I note that all graphs are still completely meaningless because they still don't exclude self merges.

Could you point to a Gerrit changeset with an example of a self merge, please? I'm still not sure I understand what you mean.

Nemo, are you referring to code review processes where the uploader of the changeset and the person accepting the merge are the same person?

(In reply to Jesus M. Gonzalez-Barahona from comment #26)

Nemo, are you referring to code review processes where the uploader of the changeset and the person accepting the merge are the same person?

Yes. I provided queries to exclude them in comment 14.

Ok, I see:

https://gerrit.wikimedia.org/r/#/c/114180/

Owner Mark Bergsma
Uploaded Feb 19, 2014 10:48 AM
Mark Bergsma Patch Set 1: Code-Review+2 Feb 19 10:50 AM

Self-merge in 2 minutes...

Sorry Nemo if I sounded stupid or deaf. I *really* didn't know that merging your own patches was a) a practice in our project and even b) something that Gerrit would allow someone to do unless you buy "him" a server or something. :)

So yes, if the uploader and the +2 reviewer are the same user, then that changeset should be ignored when calculating times to review. Thank you Nemo.
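For illustration, a rough sketch of that exclusion against the JSON emitted by gerrit query --format JSON --all-approvals (field names such as owner.username and approvals[].by.username should be double-checked against the real output; the input file name is just a placeholder):

import json

def is_self_merge(change):
    # True if the change owner also cast a Code-Review +2 on any patchset.
    owner = change.get('owner', {}).get('username')
    for patchset in change.get('patchSets', []):
        for approval in patchset.get('approvals', []):
            if (approval.get('type') in ('Code-Review', 'CRVW')
                    and approval.get('value') == '2'
                    and approval.get('by', {}).get('username') == owner):
                return True
    return False

# merged_changes.json: one JSON object per line, e.g. saved from
#   ssh gerrit.wikimedia.org -p 29418 gerrit query status:merged --all-approvals --format JSON
with open('merged_changes.json') as f:
    for line in f:
        change = json.loads(line)
        if 'project' in change and is_self_merge(change):
            print(change['project'], change.get('url'))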

Thanks to both of you. Definitely, we will look into that to fix our query.

Hi guys, about the auto-merges and "instant reviews": we are already dealing with this in the latest metrics:

https://github.com/VizGrimoire/VizGrimoireR/blob/mediawiki/vizgrimoire/SCR.py#L728

This is the basic query for getting the time to review and we are filtering:

filters += " AND i.submitted_by<>changes.changed_by "

and also

min_days_for_review = 0.042 # one hour

so in these metrics, no auto-reviews and no <1h reviews are analyzed.

If you feel comfortable with that we can apply the same filtering to all data when getting closed submissions.

We will also filter out "i18n" submissions.

Opinions?

(In reply to Alvaro from comment #30)

This is the basic query for getting the time to review and we are filtering:

filters += " AND i.submitted_by<>changes.changed_by "

Can't review the query syntax but the principle is ok.

and also

min_days_for_review = 0.042 # one hour

This I don't like. Speedy reviews are still reviews: sure, they may be mostly typo fixes or changes coming from pair programming or whatever, but they're still valid. Don't draw an arbitrary line; instead use appropriate percentiles. If the median is still too skewed, use the 75th percentile, as the WMF is doing... I don't remember where.

We will also filter out "i18n" submissions.

This is ok. Ideally you'd leave them in *one* graph, i.e. the page on the l10n-bot account itself, so that we have one place to monitor its activity. But it's not necessary; I can also get such data from Ohloh once I convince Nikerabbit to reply to its emails. :P

(In reply to Nemo from comment #31)

(In reply to Alvaro from comment #30)

min_days_for_review = 0.042 # one hour

This I don't like. Speedy reviews are still reviews

Agreed, speedy reviews happen when there are at least two developers working together fast. It is a factor to be considered in a project. It is also a factor that might increase the differences between WMF results and the rest, since it is easier to get two WMF employees in the same team working in such conditions than, say, an independent developer waiting for the review of someone with +2.

If the median is still too skewed, use the 75th percentile, as the WMF is doing... I don't remember where.

I think the pure median is good until someone comes up with a better argument. The graphs should be good enough to show trends over time, and differences between affiliations and repositories.
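For reference, both values are one line each once the review times are in a list (hypothetical numbers; statistics.quantiles needs Python 3.8+):

from statistics import median, quantiles

review_days = [0.1, 0.2, 1, 2, 3, 5, 8, 13, 40, 120]  # hypothetical review times in days
print(median(review_days))             # 4.0
print(quantiles(review_days, n=4)[2])  # third quartile, i.e. the 75th percentile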

We will also filter out "i18n" submissions.

One question, then. Do we need to explicitly filter out changesets uploaded by bots, or will they be automatically filtered if/when they are merged by themselves? Or are the localization patches still merged manually?

But yes, in case of doubt let's remove bots from the equation. The purpose of these community metrics is to analyze the performance of humans.

Data updated, removing l10n and including speedy reviews:

http://korma.wmflabs.org/browser/gerrit_review_queue.html

Also, the pending metric has been improved.

Thank you! Please try to address Comment #22 and "the paradox of Babel" :) explained in Comment #24 before our meeting tomorrow. If we want to have all KPIs solved this month, we need to close this one asap. Thank you!

(In reply to Quim Gil from comment #34)

Thank you! Please try to address Comment #22 and "the paradox of Babel" :)
explained in Comment #24 before our meeting tomorrow. If we want to have all
KPIs solved this month, we need to close this one asap. Thank you!

Ok, let's go with that. Now filtering out l10n-bot, the activity in Babel has decreased a lot:

http://korma.wmflabs.org/browser/repository.html?repository=gerrit.wikimedia.org_mediawiki_extensions_Babel

26 total submitted, 23 merged, 2 abandoned and 1 pending.

In gerrit data there are 26 submissions:

https://gerrit.wikimedia.org/r/#/q/project:mediawiki/extensions/Babel+-owner:l10n-bot,n,z

So the data is correct.

About the high time to review: we are using the median, so if there is only one open review:

https://gerrit.wikimedia.org/r/#/c/62601/

from "May 7, 2013 3:52 PM", the median review time is this review time: 301 days. This is why it is the second worst project using this metric.

Cheers

Ah, ok, I guess I had confused that "-2" with rejected/abandoned. So these -2 are still open. Understood, thank you.

What I don't understand is the flat line that repositories like Babel have. If there is only one changeset and the median age at the end of February was 301 days, it cannot be 301 days a month earlier as well, right?

That changeset was uploaded on May 7, 2013, and if this was the only patch around I would expect a line starting with 24 days at the end of May, 54 at the end of June... until the current 301. Or am I missing something?

(In reply to Quim Gil from comment #36)

Ah, ok, I guess I had confused that "-2" with rejected/abandoned. So these -2 are still open. Understood, thank you.

Ok!

What I don't understand is the flat line that repositories like Babel have. If there is only one changeset and the median age at the end of February was 301 days, it cannot be 301 days a month earlier as well, right?

Let me see... the flat line for Babel starts in May 2013 according to

http://korma.wmflabs.org/browser/gerrit_review_queue.html

And this is consistent with the review opened on 7 May and still pending.

No other open reviews are pending, so each month the median review time for pending reviews is the same: the review time of this single pending review.

Remember that the review time is computed for all submissions using the NOW date.

That changeset was uploaded on May 7, 2013, and if this was the only patch around I would expect a line starting with 24 days at the end of May, 54 at the end of June... until the current 301. Or am I missing something?

The way we are computing this metric is a bit different.

What do you think? Hmmm, what you are expecting makes sense, so maybe we need to rework this metric. Computing the metric is more difficult using your approach.

(In reply to Alvaro from comment #37)

Remember that the review time is computed for all submissions using the NOW date.

That changeset was uploaded on May 7, 2013, and if this was the only patch around I would expect a line starting with 24 days at the end of May, 54 at the end of June... until the current 301. Or am I missing something?

The way we are computing this metric is a bit different.

What do you think? Hmmm, what you are expecting makes sense, so maybe we need to rework this metric. Computing the metric is more difficult using your approach.

Since comment #15 I'm assuming that the graphs showing age are being counted based on this rule:

"For each measurement timespot, median of the ages of all reviews still open at
that timespot, understanding as "age" the time period since the review
was open to the measurement timespot."

Which is exactly the way I'm counting the example of Babel above.

Isn't this the rule used already to draw "Review time for open submissions: Upload and Updated times (median)"?

Quim, metric logic fixed following the above definition.

You have the new data and viz in korma:

http://korma.wmflabs.org/browser/gerrit_review_queue.html

Oops, I am sorry about the first version of the metric. It was showing incorrect data. Now the "Review time for pending in days" is much more real ;)

(In reply to Alvaro from comment #39)

Quim, metric logic fixed following the above definition.

You have the new data and viz in korma:

http://korma.wmflabs.org/browser/gerrit_review_queue.html

Oops, I am sorry about the first version of the metric. It was showing incorrect data. Now the "Review time for pending in days" is much more real ;)

still open at that timespot ... the timespot is the last day of each month.

https://github.com/VizGrimoire/VizGrimoireR/commit/a400e000663a65aa001f828375c7f0ce84074074#diff-830de9be97e5b0e8825c98e7be544f04R854

What remains to close this report:

(In reply to Quim Gil from comment #22)

  • the first graph at "Are Wikimedia's staff..." can be removed; it is a bit confusing and the second one explains better what we are looking for
  • in "Are Wikimedia's staff...", add "last code review median age at a given time"
  • in "Who is doing better and worse?", add a line for "last code review median age at a given time", without removing review_time_pending_days_acc_median, and sort repos by "last code review median age at a given time"
  • remember the fix to make all Y axis start with zero

"Number of pending submissions" still needs to be fixed.

  • also please have one graph per row (except for the list of repos, of course)
  • and a small detail: currently the page has some horizontal scroll, and some graphs have no margin to the left, which I guess could be solved by wrapping everything in a <div> with width:95% or something.

Still happening, just shrink the window a bit to see what I mean.

We agreed to refresh this page at the beginning of every month. This will also solve the problem of the graphs showing confusing results in the current month.

We left this task as optional:

  • in "Backlog of pending reviews", add the total amount of changesets

submitted every month (optional, we can leave this for later)

The graphs are now considered ready:

http://korma.wmflabs.org/browser/gerrit_review_queue.html

We are still missing better strings, but this is a task for me. I'm taking the bug. Help / patches welcome.

https://www.mediawiki.org/wiki/Community_metrics#Global_Pending_List still mentions some nice-to-have improvements:

  • Add median age of pending reviews by submission date in the list of repos (now it lists median age of last patches)
  • Add columns of total number of submissions in Pending backlog
  • Add Organizations graph with median age of pending reviews by submission date

If these items are still open when we close this bug, we will open new bug reports for them.

Finally, I have started providing better strings in the form of pull requests:

https://github.com/Bitergia/mediawiki-dashboard/pulls

In the meantime, http://korma.wmflabs.org/browser/gerrit_review_queue.html has become a very useful page, provided that you understand the graphs.

Qgil lowered the priority of this task from High to Low. Nov 23 2014, 11:05 PM
Qgil set Security to None.
Qgil added a subscriber: Dicortazar.
Qgil raised the priority of this task from Low to Medium. Mar 13 2015, 1:41 PM

Alright, I finally went through http://korma.wmflabs.org/browser/gerrit_review_queue.html and I think the page is good overall.

There are some problems specific to the graph of median review time by affiliation. Since there is a task open and scheduled for the current sprint about this graph, I have left my comments at T72600#1319849.

Resolving here.