
[EPIC] Support language variant conversion in Parsoid
Open, Low, Public

Description

This is the top-level tracker bug for LanguageConverter support in Parsoid.

Plan of record, roughly:

  • Phase 1: Parse all LC constructs into DOM (and round-trip them).

    This is sufficient to allow VE to edit LC wikis in the same fashion as the wikitext editor, with the mix of variants displayed during editing.
  • Phase 2: Implement variant conversion on the Parsoid DOM, so that Parsoid can itself emit HTML in a specific variant.
  • Phase 3 (speculative): Use selective serialization to allow VE to operate on the converted text.

    This allows "single variant" editing, without the chaotic mix of variants shown in wikitext editing, and uses selective serialization to preserve the original variant of unedited text.
  • Phase 4 (speculative): Introduce new LC syntax or Glossary features which are a better match for future plans.

    This would avoid the "from this point forward" behavior of LC rules, which complicates incremental update, as well as avoiding the use of templates as a workaround for per-page glossaries. We might also introduce more pervasive language tagging in the source, to better match LC uses where the character set can't be used to distinguish variants (toy example: Pig Latin vs. English).

Related Objects

This task is connected to more than 200 other tasks; only direct parents and subtasks are shown in Phabricator (use View Standalone Graph to see more of the graph).

Event Timeline

There are a very large number of changes, so older changes are hidden.

Now that one of the transformations has been merged, we actually need a way to access it and to transform the HTML stored in RESTBase. What do you think about the following API in Parsoid?

POST /transform/html/to/html
Params:
 - Accept-Language in headers
 - html in body

@Arlolra Is that ^ consistent with the other html2html endpoints we've implemented?
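
In other words, something like this (a sketch; the domain prefix in the path and the exact body field name are assumptions on my part):

POST /en.wikipedia.org/v3/transform/html/to/html
Accept-Language: en-x-piglatin
Content-Type: application/json

{ "html": "<html>...</html>" }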

Unfortunately, that endpoint is a bit of a mess; see https://github.com/wikimedia/parsoid/blob/master/lib/api/routes.js#L568-L570

But I'd expect something more along the lines of the following to work:

POST /transform/pagebundle/to/pagebundle
Params:
 - Accept-Language in headers
 - original.html in body

Hm, we would be sending only the HTML, though, and would require only the HTML back. Supplying the data-parsoid as well would increase the load and latency. Do you expect /html/to/html to be revised soon?

Note that pb2pb is that endpoint. Depending on the specific conversion operation, only some parts of the pagebundle might actually be required. So, you don't have to post data-parsoid in this case.

Consider the case where we split data-mw into a separate bucket. Then data-mw would be a separate posted param, included only when it is required for the conversion. So, pb2pb is the correct generic endpoint.
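
Schematically (a sketch just to illustrate the shape; the exact param name is an assumption):

original: {
  html: { ... },
  'data-mw': { ... }   // posted as its own param, and only when the requested conversion needs it
}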

In T114413#2365456, I indicated that for all pb2pb endpoints, we should introduce an additional parameter that explicitly specifies the required conversion, to eliminate complexity (the mess that Arlo refers to above). So, we will likely add that to this pb2pb endpoint.

Note to self: I probably should make sure LanguageConverter doesn't require access to data-parsoid.

After some discussion on IRC (and review of T114413) I'm proposing the following API:

POST /transform/pagebundle/to/pagebundle
Request:

original: {
  html: {
    headers: {
      'content-type': 'text/html; charset=utf-8; profile="https://mediawiki.org/wiki/Specs/DOM/1.7.0"'
    },
    body: '<html>...</html>'
  }
},
updates: {
  variant: { source: 'en', target: 'en-x-piglatin' }
}

The variant.source property can be omitted (i.e., left undefined), in which case Parsoid will attempt to guess the source variant in order to support round-tripping. Setting source to null will disable round-trip support (useful for display-only use cases). Setting target to the special value 'x-roundtrip' will use embedded round-trip metadata to attempt to convert the HTML back to the original source variant.
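
Schematically, the possible source values (a summary of the semantics just described):

updates: { variant: { target: 'zh-tw' } }                         // source omitted: Parsoid guesses it; round-tripping supported
updates: { variant: { source: null, target: 'zh-tw' } }           // round-tripping disabled; display-only
updates: { variant: { source: 'zh-tw', target: 'x-roundtrip' } }  // use embedded metadata to restore the original variant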

For example, Visual Editor might use variant: { source: 'en', target: 'en-x-piglatin' } on English Wikipedia, where it is known that all articles are stored in English, not Pig Latin. (Some other wikis have similar "we always write in one specific variant" conventions.) When saving the edited document, it would use variant: { source: 'en-x-piglatin', target: 'x-roundtrip' } to convert it back to the original English text.

If an editor were to shift VE from zh-cn to zh-tw in the middle of an edit, two requests would probably have to be made: variant: { source: 'zh-cn', target: 'x-roundtrip' } to restore the original wikitext, then variant: { target: 'zh-tw' } on the result in order to convert to the user's new variant preference. At the moment we don't support combining these requests, but we might do so in the future.

MCS would use variant: { source: null, target: '...' } when localizing summaries or Wikidata text for display.

At the moment we don't support combining a variant update with another sort of update (redlinks, etc), but we might do so in the future.
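
For completeness, I'd expect a successful response to carry the converted HTML back in pagebundle form, along these lines (a sketch; the content-language response header in particular is an assumption):

Response:

html: {
  headers: {
    'content-type': 'text/html; charset=utf-8; profile="https://mediawiki.org/wiki/Specs/DOM/1.7.0"',
    'content-language': 'en-x-piglatin'
  },
  body: '<html>...converted to Pig Latin...</html>'
}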

EDIT: updated with Arlo's correction below.

For consistency,

Request:

original: { html: { ... } },
updates: { ... }

This API strikes me as complicated for an HTML-to-HTML transliteration. Namely, RB would need to completely reconstruct every request made to it for any variant other than the default one, instead of simply getting the HTML and adding the A-L header to it. For round-tripping, do I understand correctly that two requests would need to be made: the first to tell Parsoid we want round-tripping, and the other to specify the actual target? Wouldn't something like { source: 'zh-cn', target: 'zh-tw', roundtrip: true } work?

No, round-tripping is the default. Specify source: null to explicitly disable it; but since the only reason to do so is to slim down the HTML a bit, maybe I don't even need to complicate the API for that.

The two requests example above is just for on-the-fly variant switching *while editing*. In that case you need to do a little dance instead of trying to convert directly from one variant to the other in order to ensure the round-trip information is preserved.

In most cases, you'd take the HTML from Parsoid, stuff it into a JSON blob as original.html, then add updates.variant.target = 'my-target-variant' and send it to the pb2pb endpoint.

Or, you know, just ask for the variant you want directly from wt2html using the Accept-Language header...
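
(That is, something along these lines; the title in the path is illustrative:)

GET /en.wikipedia.org/v3/page/html/Main_Page
Accept-Language: en-x-piglatin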

I'm inclined to implement the pb2pb endpoint compliant with T114413 first; then, if we find that there's a significant efficiency loss from JSON-encoding the HTML, we can talk about adding a new specialized endpoint.

We had a meeting with @cscott yesterday, and here are a couple of notes from our discussion worth mentioning:

  1. By default, we will return the "natural" variant: the HTML corresponding to the mixed-variant wikitext stored in the database. The default will also be returned if no accept-language header is provided, or if accept-language has a value that isn't supported for the particular page language.
  2. Looking at the domain in RESTBase is not enough for splitting the Varnish cache, or for deciding whether to even look at accept-language and go to Parsoid for transformation: LanguageConverter is actually enabled on all wikis, so even on English Wikipedia certain pages can have a different page language that supports conversion. This is especially relevant for multi-language wikis like mediawiki.org. For cache-splitting, Parsoid could record the page language in some meta tag (as sketched below); however, that is not very convenient for deciding whether to go to Parsoid for transformation, and it is not easy to bootstrap, because all pages would have to be re-rendered and re-stored for it to work reliably. I'm evaluating the possibility of including this info in the title_revision table so that RESTBase can decide on its own.
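
A sketch of that idea (the meta property name below is hypothetical, not an existing Parsoid attribute):

<!-- in the stored Parsoid render: record the page's language -->
<meta property="mw:pageLanguage" content="zh"/>

<!-- on responses for convertible pages: a Vary header so Varnish splits the cache per requested variant -->
Vary: Accept-Language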
ssastry raised the priority of this task from Low to Needs Triage. Sep 20 2018, 4:01 PM
ssastry triaged this task as High priority.

In my opinion, it would be best to dump the whole LanguageConverter -{ }- markup, which is used to define a specific variant translation for one term, and use data from Wikidata instead. Wikidata can store language-variant info and can be used across all Wikimedia projects, rather than volunteers manually maintaining the same CGroup across multiple projects.

LGoto lowered the priority of this task from High to Medium. Mar 13 2020, 4:19 PM
LGoto moved this task from Missing Functionality to Future Ideas on the Parsoid board.

See also the Glossary RFC (T484). Unfortunately, glossaries tend to be topic-specific (the dictionary you'd use for a pop-culture article about movies may not be appropriate for a science article), but the glossary could certainly source the variants from Wikidata. It would be useful to be able to reference glossaries in a global manner as well, so that pages in zh hosted outside zhwiki (i.e., commons or mediawiki.org or wikimania.org, etc.) can use the constructed glossaries.

ssastry renamed this task from Support language variant conversion in Parsoid to [EPIC] Support language variant conversion in Parsoid. Jul 15 2020, 5:55 PM
ssastry added a project: Parsoid-Rendering.

@cscott: Hi, I'm resetting the task assignee due to inactivity. Please feel free to reclaim this task if you plan to work on this - it would be welcome! Also see https://www.mediawiki.org/wiki/Bug_management/Assignee_cleanup for more information - thanks!

This ticket is important for the openZIM/Kiwix community, and in particular its Chinese audience; see https://github.com/openzim/mwoffliner/issues/840

MSantos lowered the priority of this task from Medium to Low. Jun 26 2023, 3:16 PM