Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Abstract
We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate
between multiple languages. Our solution requires no changes to the model architecture from a standard
NMT system but instead introduces an artificial token at the beginning of the input sentence to specify
the required target language. The rest of the model, which includes an encoder, decoder and attention
module, remains unchanged and is shared across all languages. Using a shared wordpiece vocabulary, our
approach enables Multilingual NMT using a single model without any increase in parameters, which is
significantly simpler than previous proposals for Multilingual NMT. On the WMT’14 benchmarks, a single
multilingual model achieves comparable performance for English→French and surpasses state-of-the-art
results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results
for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On
production corpora, multilingual models of up to twelve language pairs allow for better translation of
many individual pairs. In addition to improving the translation quality of language pairs that the model
was trained with, our models can also learn to perform implicit bridging between language pairs never
seen explicitly during training, showing that transfer learning and zero-shot translation is possible for
neural translation. Finally, we show analyses that hint at a universal interlingua representation in our models and present some interesting examples obtained when mixing languages.
1 Introduction
End-to-end Neural Machine Translation (NMT) [27, 2, 5] is an approach to machine translation that has
rapidly gained adoption in many large-scale settings [31, 29, 6]. Almost all such systems are built for a single
language pair — so far there has not been a sufficiently simple and efficient way to handle multiple language
pairs using a single model without making significant changes to the basic NMT architecture.
In this paper we introduce a simple method to translate between multiple languages using a single model,
taking advantage of multilingual data to improve NMT for all languages involved. Our method requires no
change to the traditional NMT model architecture. Instead, we add an artificial token to the input sequence
to indicate the required target language, a simple amendment to the data only. All other parts of the system
— encoder, decoder, attention, and shared wordpiece vocabulary as described in [29] — stay exactly the same.
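To make the data-side change concrete, here is a minimal Python sketch (our own illustration, not the production preprocessing code) of how a training example is modified; the function name and example sentences are ours:

```python
def add_target_token(source_sentence, target_lang):
    """Prepend the artificial token that tells the model which language
    to translate into. This is the only change made to the data; the
    model architecture itself stays untouched."""
    return "<2{}> {}".format(target_lang, source_sentence)

# An English->Spanish training (or inference) example becomes:
source = add_target_token("How are you?", "es")   # "<2es> How are you?"
target = "¿Cómo estás?"                           # target side is unchanged
```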
This method has several attractive benefits:
• Simplicity: Since no changes are made to the architecture of the model, scaling to more languages is
trivial — any new data is simply added, possibly with over- or under-sampling such that all languages
are appropriately represented, and used with a new token if the target language changes. Since no
changes are made to the training procedure, the mini-batches for training are just sampled from the
overall mixed-language training data just like for the single-language case. Since no a-priori decisions
about how to allocate parameters for different languages are made, the system adapts automatically
to use the total number of parameters efficiently to minimize the global loss. A multilingual model
architecture of this type also simplifies production deployment significantly since it can cut down the
total number of models necessary when dealing with multiple languages. Note that at Google, we
support a total of over 100 languages as source and target, so theoretically 100² models would be
necessary for the best possible translations between all pairs, if each model could only support a single
language pair. Clearly this would be problematic in a production environment. Even when limiting
to translating to/from English only, we still need over 200 models. Finally, batching together many
requests from potentially different source and target languages can significantly improve efficiency of
the serving system. In comparison, an alternative system that requires language-dependent encoders,
decoders or attention modules does not have any of the above advantages.
• Low-resource language improvements: In a multilingual NMT model, all parameters are implicitly
shared by all the language pairs being modeled. This forces the model to generalize across language
boundaries during training. It is observed that when language pairs with little available data and
language pairs with abundant data are mixed into a single model, translation quality on the low resource
language pair is significantly improved.
• Zero-shot translation: A surprising benefit of modeling several language pairs in a single model
is that the model can learn to translate between language pairs it has never seen in this combina-
tion during training (zero-shot translation) — a working example of transfer learning within neural
translation models. For example, a multilingual NMT model trained with Portuguese→English and
English→Spanish examples can generate reasonable translations for Portuguese→Spanish although it
has not seen any data for that language pair. We show that the quality of zero-shot language pairs can
easily be improved with little additional data of the language pair in question (a fact that has been
previously confirmed for a related approach which is discussed in more detail in the next section).
In the remaining sections of this paper we first discuss related work and explain our multilingual system
architecture in more detail. Then, we go through the different ways of merging languages on the source and
target side in increasing difficulty (many-to-one, one-to-many, many-to-many), and discuss the results of a
number of experiments on WMT benchmarks, as well as on some of Google’s large-scale production datasets.
We present results from transfer learning experiments and show how implicitly-learned bridging (zero-shot
translation) performs in comparison to explicit bridging (i.e., first translating to a common language like
English and then translating from that common language into the desired target language) as typically used
in machine translation systems. We describe visualizations of the new system in action, which provide early
evidence of shared semantic representations (interlingua) between languages. Finally we also show some
interesting applications of mixing languages with examples: code-switching on the source side and weighted
target language mixing, and suggest possible avenues for further exploration.
2 Related Work
Interlingual translation is a classic method in machine translation [21, 14]. Despite its distinguished history,
most practical applications of machine translation have focused on individual language pairs because it was
simply too difficult to build a single system that translates reliably from and to several languages.
Neural Machine Translation (NMT) [15] was shown to be a promising end-to-end learning approach in
[27, 2, 5] and was quickly extended to multilingual machine translation in various ways.
An early attempt is the work in [7], where the authors modify an attention-based encoder-decoder approach
to perform multilingual NMT by adding a separate decoder and attention mechanism for each target language.
In [17] multilingual training in a multitask learning setting is described. This model is also an encoder-decoder
network, in this case without an attention mechanism. To make proper use of multilingual data, they extend
their model with multiple encoders and decoders, one for each supported source and target language. In [3]
the authors incorporate multiple modalities other than text into the encoder-decoder framework.
Several other approaches have been proposed for multilingual training, especially for low-resource language
pairs. For instance, in [32] a form of multi-source translation was proposed where the model has multiple
different encoders and different attention mechanisms for each source language. However, this work requires
the presence of a multi-way parallel corpus between all the languages involved, which is difficult to obtain in
practice. Most closely related to our approach is [8] in which the authors propose multi-way multilingual
NMT using a single shared attention mechanism but multiple encoders/decoders for each source/target
language. Recently in [16] a CNN-based character-level encoder was proposed which is shared across multiple
source languages. However, this approach can only perform translations into a single target language.
Our approach is related to the multitask learning framework [4]. Despite its promise, this framework
has seen limited practical success in real world applications. In speech recognition, there have been many
successful reports of modeling multiple languages using a single model (see [22] for an extensive overview and references therein). Multilingual language processing has also been shown to be successful in domains other than
translation [13, 28].
There have been other approaches similar to ours in spirit, but used for very different purposes. In [25],
the NMT framework has been extended to control the politeness level of the target translation by adding a
special token to the source sentence. The same idea was used in [30] to add the distinction between ‘active’ and ‘passive’ voice to the generated target sentence.
Our method has an additional benefit not seen in other systems: It gives the system the ability to perform
zero-shot translation, meaning the system can translate from a source language to a target language without
having seen explicit examples from this specific language pair during training. Zero-shot translation was the
direct goal of [10]. Although they were not able to achieve this direct goal, they were able to do what they call
“zero-resource” translation by using their pre-trained multi-way multilingual model and later fine-tuning it
with pseudo-parallel data generated by the model. It should be noted that the difference between “zero-shot”
and “zero-resource” translation is the additional fine-tuning step which is required in the latter approach.
To the best of our knowledge, our work is the first to validate the use of true multilingual translation
using a single encoder-decoder model, which is incidentally already used in a production setting. It is also
the first work to demonstrate the possibility of zero-shot translation, a successful example of transfer learning
in machine translation, without any additional steps.
Figure 1: The model architecture of the Multilingual GNMT system. In addition to what is described in [29],
our input has an artificial token to indicate the required target language. In this example, the token “<2es>”
indicates that the target sentence is in Spanish, and the source sentence is reversed as a processing step. For
most of our experiments we also used direct connections between the encoder and decoder, although we later found that the effect of these connections is negligible (however, once a model is trained with them, they must also be present at inference time). The rest of the model architecture is the same as in [29].
As already discussed in Section 2, other models have been used to explore some of these cases, but for completeness we apply our technique to these interesting use cases as well, to give a full picture of the effectiveness of our approach.
We will also show results and discuss benefits of bringing together many (un)related languages in a single
large-scale model trained on production data. Finally, we will present our findings on zero-shot translation
where the model learns to translate between pairs of languages for which no explicit parallel examples existed
in the training data, and show results of experiments where adding additional data improves zero-shot
translation quality further.
In addition to WMT, we also evaluate the multilingual approach on some Google-internal large-scale
production datasets representing a wide spectrum of languages with very distinct linguistic properties:
English↔Japanese (Ja), English↔Korean (Ko), English↔Spanish (Es), and English↔Portuguese (Pt). These
datasets are two to three orders of magnitude larger than the WMT datasets.
Our training protocols are mostly identical to those described in [29] and we refer the reader to the
detailed description in that paper. We find that some multilingual models take a little more time to train
than single language pair models, likely because each language pair is seen only for a fraction of the training
process. Depending on the number of languages, a full training can take up to 10M steps and 3 weeks to
converge (on roughly 100 GPUs). We use larger batch sizes with a slightly higher initial learning rate to
speed up the convergence of these models.
We evaluate our models using the standard BLEU score metric. To make our results comparable to [27, 19, 31, 29], we report tokenized BLEU scores as computed by the multi-bleu.pl script, which can be
downloaded from the public implementation of Moses.1
To test the influence of varying amounts of training data per language pair we explore two strategies when
building multilingual models: a) where we oversample the data from all language pairs to be of the same
size as the largest language pair, and b) where we mix the data as is without any change. The wordpiece
model training is done after the optional oversampling, taking into account all the changed data ratios. For
the WMT models we report results using both of these strategies. For the production models, we always
balance the data such that the ratios are equal.
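To illustrate strategy (a), the following Python sketch (ours, with toy corpus sizes; the real corpora contain millions of sentence pairs) oversamples every language pair to the size of the largest one and shuffles the result, so that mini-batches drawn from it are the random mixed-language samples described in the next paragraph; strategy (b) would simply concatenate the corpora as they are:

```python
import random

def oversample_to_largest(corpora):
    """Strategy (a): repeat (and partially sample) each corpus so that every
    language pair contributes as many examples as the largest one."""
    target_size = max(len(examples) for examples in corpora.values())
    mixed = []
    for pair, examples in corpora.items():
        repeats, remainder = divmod(target_size, len(examples))
        mixed.extend(examples * repeats + random.sample(examples, remainder))
    random.shuffle(mixed)  # mini-batches are then plain slices of this mix
    return mixed

# Hypothetical toy corpora of (source-with-token, target) pairs.
corpora = {
    "en->fr": [("<2fr> hello", "bonjour")] * 1000,
    "en->de": [("<2de> hello", "hallo")] * 100,
}
mixed = oversample_to_largest(corpora)  # 2000 examples, half per language pair
```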
One benefit of sharing all the components of the model is that the mini-batches can contain data from different language pairs during training and inference; they are typically just random samples from the final training and test data distributions. This is a simple way of preventing “catastrophic forgetting”: the tendency for knowledge of previously learnt tasks (e.g., language pair A) to be abruptly forgotten as information relevant to the current task (e.g., language pair B) is incorporated [11]. Other approaches to
multilingual translation require complex update scheduling mechanisms to prevent this effect [9].
• The second set of experiments is on production data where we combine Japanese→English and
Korean→English, with oversampling. The baselines are two single language pair models: Japanese→English
and Korean→English trained independently.
• Finally, the third set of experiments is on production data where we combine Spanish→English and
Portuguese→English, with oversampling. The baselines are again two single language pair models
trained independently.
All of the multilingual and single language pair models have the same total number of parameters as the
baseline NMT models trained on a single language pair (using 1024 nodes, 8 LSTM layers and a shared
wordpiece model vocabulary of 32k, a total of 255M parameters per model). A side effect of this equal choice
of parameters is that it is presumably unfair to the multilingual models as the number of parameters available
per language pair is reduced by a factor of N compared to the single language pair models, where N is the
number of language pairs combined in the multilingual model. The multilingual model also has to handle the
combined vocabulary of all the single models. We chose to keep the number of parameters constant for all
models to simplify experimentation. We relax this constraint for some of the large-scale experiments shown
further below.
1 http://www.statmt.org/moses/
Table 1: Many to One: BLEU scores on various data sets for single language pair and multilingual models.
Model Single Multi Diff
WMT German→English (oversampling) 30.43 30.59 +0.16
WMT French→English (oversampling) 35.50 35.73 +0.23
WMT German→English (no oversampling) 30.43 30.54 +0.11
WMT French→English (no oversampling) 35.50 36.77 +1.27
Prod Japanese→English 23.41 23.87 +0.46
Prod Korean→English 25.42 25.47 +0.05
Prod Spanish→English 38.00 38.73 +0.73
Prod Portuguese→English 44.40 45.19 +0.79
The results are presented in Table 1. For all experiments the multilingual models outperform the baseline
single systems despite the above-mentioned disadvantage with respect to the number of parameters available
per language pair. One possible hypothesis explaining the gains is that the model has been shown more
English data on the target side, and that the source languages belong to the same language families, so the
model has learned useful generalizations.
For the WMT experiments, we obtain a maximum gain of +1.27 BLEU for French→English. Note that
the results on both the WMT test sets are better than other published state-of-the-art results for a single
model, to the best of our knowledge. On the production experiments, we see that the multilingual models
outperform the baseline single systems by as much as +0.8 BLEU.
Table 2: One to Many: BLEU scores on various data sets for single language pair and multilingual models.
Model Single Multi Diff
WMT English→German (oversampling) 24.67 24.97 +0.30
WMT English→French (oversampling) 38.95 36.84 -2.11
WMT English→German (no oversampling) 24.67 22.61 -2.06
WMT English→French (no oversampling) 38.95 38.16 -0.79
Prod English→Japanese 23.66 23.73 +0.07
Prod English→Korean 19.75 19.58 -0.17
Prod English→Spanish 34.50 35.40 +0.90
Prod English→Portuguese 38.40 38.63 +0.23
We observe that oversampling helps the smaller language pair (En→De) at the cost of lower quality for
the larger language pair (En→Fr). The model without oversampling achieves better results on the larger
language compared to the smaller one as expected. We also find that this effect is more prominent on smaller
datasets (WMT) and much less so on our much larger production datasets.
Table 3: Many to Many: BLEU scores on various data sets for single language pair and multilingual models.
Model Single Multi Diff
WMT English→German (oversampling) 24.67 24.49 -0.18
WMT English→French (oversampling) 38.95 36.23 -2.72
WMT German→English (oversampling) 30.43 29.84 -0.59
WMT French→English (oversampling) 35.50 34.89 -0.61
WMT English→German (no oversampling) 24.67 21.92 -2.75
WMT English→French (no oversampling) 38.95 37.45 -1.50
WMT German→English (no oversampling) 30.43 29.22 -1.21
WMT French→English (no oversampling) 35.50 35.93 +0.43
Prod English→Japanese 23.66 23.12 -0.54
Prod English→Korean 19.75 19.73 -0.02
Prod Japanese→English 23.41 22.86 -0.55
Prod Korean→English 25.42 24.76 -0.66
Prod English→Spanish 34.50 34.69 +0.19
Prod English→Portuguese 38.40 37.25 -1.15
Prod Spanish→English 38.00 37.65 -0.35
Prod Portuguese→English 44.40 44.02 -0.38
On the WMT datasets, we once again explore the impact of oversampling the smaller language pairs. We
notice a similar trend to the previous section in which oversampling helps the smaller language pairs at the
expense of the larger ones, while not oversampling seems to have the reverse effect.
Although there are some significant losses in quality from training many languages jointly using a model
with the same total number of parameters as the single language pair models, these models reduce the total
complexity involved in training and productionization. Additionally, these multilingual models have further interesting advantages, as will be discussed in more detail in the sections below.
order of weeks). Another important point is that since we only train for a little longer than a standard single
model, the individual language pairs can see as little as 1/12-th of the data in comparison to their single
language pair models but still produce satisfactory results.
Table 4: Large-scale experiments: BLEU scores for single language pair and multilingual models.
Model Single Multi Multi Multi Multi
#nodes 1024 1024 1280 1536 1792
#params 3B 255M 367M 499M 650M
Prod English→Japanese 23.66 21.10 21.17 21.72 21.70
Prod English→Korean 19.75 18.41 18.36 18.30 18.28
Prod Japanese→English 23.41 21.62 22.03 22.51 23.18
Prod Korean→English 25.42 22.87 23.46 24.00 24.67
Prod English→Spanish 34.50 34.25 34.40 34.77 34.70
Prod English→Portuguese 38.40 37.35 37.42 37.80 37.92
Prod Spanish→English 38.00 36.04 36.50 37.26 37.45
Prod Portuguese→English 44.40 42.53 42.82 43.64 43.87
Prod English→German 26.43 23.15 23.77 23.63 24.01
Prod English→French 35.37 34.00 34.19 34.91 34.81
Prod German→English 31.77 31.17 31.65 32.24 32.32
Prod French→English 36.47 34.40 34.56 35.35 35.52
ave diff - -1.72 -1.43 -0.95 -0.76
vs single - -5.6% -4.7% -3.1% -2.5%
The results are summarized in Table 4. We find that the multilingual model is reasonably close to the best
single models and in some cases even achieves comparable quality. It is remarkable that a single model with
255M parameters can do what 12 models with a total of 3B parameters would have done. The multilingual
model also requires one twelfth of the training time and computing resources to converge. Another important
point is that since we only train for a little longer than the single models, the individual language pairs can
see as low as one twelfth of the data in comparison to their single language pair models. Again we note that
this comparison is somewhat unfair to the multilingual model, and we expect that a larger model trained on all available data would likely achieve comparable or better quality than the baselines.
In summary, multilingual NMT enables us to group languages with little or no loss in quality while having
the benefits of better training efficiency, a smaller number of models, and easier productionization.
models. Note that besides the pleasant fact that zero-shot translation works at all, it also has the advantage of halving the decoding time, as no explicit bridging through a third language is necessary when translating from Portuguese to Spanish.
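The difference between the two inference modes can be sketched as follows (our own illustration; `model.decode` is a hypothetical stand-in for running a full decoding pass of an NMT model):

```python
def translate(model, source_sentence, target_lang):
    # Hypothetical helper: one decoding pass toward `target_lang`,
    # requested via the artificial target-language token.
    return model.decode("<2{}> {}".format(target_lang, source_sentence))

def bridged_pt_to_es(pt_en_model, en_es_model, pt_sentence):
    # Explicit bridging: two decoding passes through English.
    english = translate(pt_en_model, pt_sentence, "en")
    return translate(en_es_model, english, "es")

def zero_shot_pt_to_es(multilingual_model, pt_sentence):
    # Implicit bridging: a single decoding pass, no intermediate English text.
    return translate(multilingual_model, pt_sentence, "es")
```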
Table 5 summarizes our results for the Portuguese→Spanish translation experiments. Rows (a) and
(b) report the performance of the phrase-based machine translation (PBMT) system and the NMT system
through bridging (translating from Portuguese to English and then translating the resulting English sentence into Spanish). It can be seen that the NMT system outperforms the PBMT system by close to 2 BLEU
points. Note that Model 1 and Model 2 can be bridged within themselves to perform Portuguese→Spanish
translation. We do not report these numbers since they are similar to the performance of bridging with two
individual single language pair NMT models. For comparison, we built a single NMT model on all available
Portuguese→Spanish parallel sentences (see (c) in Table 5).
The most interesting observation is that both Model 1 and Model 2 can perform zero-shot translation
with reasonable quality (see (d) and (e)) compared to the initial expectation that this would not work at
all. Note that Model 2 outperforms Model 1 by close to 3 BLEU points although Model 2 was trained with
four language pairs as opposed to only two for Model 1 (with both models having the same number of
total parameters). In this case the addition of Spanish on the source side and Portuguese on the target side
helps Pt→Es zero-shot translation (which is the opposite direction of where we would expect it to help). We
believe that this unexpected effect is only possible because our shared architecture enables the model to learn
a form of interlingua between all these languages. We explore this hypothesis in more detail in Section 5.
Finally we incrementally train zero-shot Model 2 with a small amount of true Pt→Es parallel data (an
order of magnitude less than Table 5 (c)) and obtain the best quality and half the decoding time compared
to explicit bridging (Table 5 (b)). The resulting model cannot be called zero-shot anymore since some true
parallel data has been used to improve it. Overall this shows that the proposed approach of implicit bridging
using zero-shot translation via multilingual models can serve as a good baseline for further incremental
training with relatively small amounts of true parallel data of the zero-shot direction. This result is especially
significant for non-English low-resource language pairs where it might be easier to obtain parallel data with
English but much harder to obtain parallel data for language pairs where neither the source nor the target
language is English. We explore the effect of using parallel data in more detail in Section 4.7.
Since Portuguese and Spanish are of the same language family, an interesting question is how well zero-shot
translation works for less related languages. Table 6 shows the results for explicit and implicit bridging from
Spanish to Japanese using the large-scale model from Table 4 – Spanish and Japanese can be regarded as
quite unrelated. As expected, zero-shot translation works worse than explicit bridging, and the quality drops relatively more (a roughly 50% drop in BLEU score) than for the case of more related languages shown
above. Despite the quality drop, this proves that our approach enables zero-shot translation even between
unrelated languages.
Table 6: Spanish→Japanese BLEU scores for explicit and implicit bridging using the 12-language pair
large-scale model from Table 4.
Model BLEU
NMT Es→Ja explicitly bridged 18.00
NMT Es→Ja implicitly bridged 9.14
4.7 Effect of Direct Parallel Data
In this section, we explore two ways of leveraging available parallel data to improve zero-shot translation
quality, similar in spirit to what was reported in [10]. For our multilingual architecture we consider:
• Incrementally training the multilingual model on the additional parallel data for the zero-shot directions.
• Training a new multilingual model with all available parallel data mixed equally.
For our experiments, we use a baseline model which we call “Zero-Shot” trained on a combined parallel corpus
of English↔{Belarusian(Be), Russian(Ru), Ukrainian(Uk)}. We trained a second model on the above corpus
together with additional Ru↔{Be, Uk} data. We call this model “From-Scratch”. Both models support four
target languages, and are evaluated on our standard test sets. As done previously we oversample the data
such that all language pairs are represented equally. Finally, we take the best checkpoint of the “Zero-Shot”
model, and run incremental training on a small portion of the data used to train the “From-Scratch” model
for a short period of time until convergence (in this case 3% of the “Zero-Shot” model's total training time). We
call this model “Incremental”.
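Schematically, the three models differ only in their training corpora and initialization, as summarized in the following sketch (our own; the corpus labels are placeholders rather than real dataset names):

```python
english_centric = ["en<->be", "en<->ru", "en<->uk"]   # parallel data through English
direct_pairs    = ["ru<->be", "ru<->uk"]              # direct non-English parallel data

setups = {
    # Trained only on English-centric data; Ru<->{Be, Uk} is zero-shot.
    "Zero-Shot":    {"corpora": english_centric, "init": "random"},
    # English-centric data plus the direct pairs, trained from scratch.
    "From-Scratch": {"corpora": english_centric + direct_pairs, "init": "random"},
    # Starts from the best Zero-Shot checkpoint and trains briefly (about 3%
    # of the Zero-Shot training time) on a small portion of the combined data.
    "Incremental":  {"corpora": english_centric + direct_pairs,  # small sample of each
                     "init": "best Zero-Shot checkpoint"},
}
```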
As can be seen from Table 7, for the English↔X directions, all three models show comparable scores.
On the Russian↔{Belarusian, Ukrainian} directions, the “Zero-Shot” model already achieves relatively high
BLEU scores for all directions except one, without any explicit parallel data. This could be because these
languages are linguistically related. In the “From-Scratch” column, we see that training a new model from
scratch improves the zero-shot translation directions further. However, this strategy has a slightly negative
effect on the English↔X directions because our oversampling strategy will reduce the frequency of the data
from these directions. In the final column, we see that incremental training with direct parallel data recovers
most of the BLEU score difference between the first two columns on the zero-shot language pairs. In summary,
our shared architecture models the zero-shot language pairs quite well and hence enables us to easily improve
their quality with a small amount of additional parallel data.
5 Visual Analysis
The results of this paper — that training a model across multiple languages can enhance performance at the
individual language level, and that zero-shot translation can be effective — raise a number of questions about
how these tasks are handled inside the model, for example:
• Is the network learning some sort of shared representation, in which sentences with the same meaning
are represented in similar ways regardless of language?
• Does the model operate on zero-shot translations in the same way as it treats language pairs it has
been trained on?
One way to study the representations used by the network is to look at the activations of the network
during translation. A starting point for investigation is the set of context vectors, i.e., the sum of internal
encoder states weighted by their attention probabilities per step (Eq. (5) in [2]).
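As a small numerical illustration (ours; the shapes are arbitrary and not tied to the actual model), each context vector is an attention-weighted sum of the encoder states:

```python
import numpy as np

def context_vectors(encoder_states, attention_probs):
    """One context vector per decoding step: the encoder states summed with
    that step's attention probabilities as weights (cf. Eq. (5) in [2]).

    encoder_states:  (source_len, d)
    attention_probs: (target_len, source_len), each row sums to 1
    returns:         (target_len, d)
    """
    return attention_probs @ encoder_states

# Illustrative shapes: a 7-wordpiece source, 5 decoding steps, d = 1024.
enc = np.random.randn(7, 1024)
att = np.random.rand(5, 7)
att /= att.sum(axis=1, keepdims=True)   # normalize rows to valid distributions
ctx = context_vectors(enc, att)         # shape (5, 1024)
```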
A translation of a single sentence generates a sequence of context vectors. In this context, our original
questions about shared representation can be studied by looking at how the vector sequences of different
sentences relate. We could then ask for example:
Figure 2: A t-SNE projection of the embedding of 74 semantically identical sentences translated across
all 6 possible directions, yielding a total of 9,978 steps (dots in the image), from the model trained on
English↔Japanese and English↔Korean examples. (a) A bird’s-eye view of the embedding, coloring by the
index of the semantic sentence. Well-defined clusters each having a single color are apparent. (b) A zoomed
in view of one of the clusters with the same coloring. All of the sentences within this cluster are translations
of “The stratosphere extends from about 10km to about 50km in altitude.” (c) The same cluster colored by
source language. All three source languages can be seen within this cluster.
Figure 3: (a) A bird’s-eye view of a t-SNE projection of an embedding of the model trained on
Portuguese→English (blue) and English→Spanish (yellow) examples with a Portuguese→Spanish zero-
shot bridge (red). The large red region on the left primarily contains the zero-shot Portuguese→Spanish
translations. (b) A scatter plot of BLEU scores of zero-shot translations versus the average point-wise distance
between the zero-shot translation and a non-bridged translation. The Pearson correlation coefficient is −0.42.
for embeddings of different sentences, accounting for the fact that two sentences might consist of different numbers of wordpieces. To do so, for a sentence of $n$ wordpieces $w_0, w_1, \ldots, w_{n-1}$, where the $i$-th wordpiece has been embedded at $y_i \in \mathbb{R}^{1024}$, we defined a curve $\gamma : [0, 1] \to \mathbb{R}^{1024}$ at “control points” of the form $\frac{i}{n-1}$ by
$$\gamma\left(\frac{i}{n-1}\right) = y_i$$
and use linear interpolation to define $\gamma$ between these points. The dissimilarity between two curves $\gamma_1$ and $\gamma_2$, where $m$ is the maximum number of wordpieces in both sentences, is defined by
$$\mathrm{dissimilarity}(\gamma_1, \gamma_2) = \frac{1}{m} \sum_{i=0}^{m-1} d\!\left(\gamma_1\!\left(\frac{i}{m-1}\right), \gamma_2\!\left(\frac{i}{m-1}\right)\right)$$
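A direct transcription of these definitions into numpy (our own sketch; it assumes sentences of at least two wordpieces and takes $d$ to be the Euclidean distance, which the text does not specify) looks as follows:

```python
import numpy as np

def curve(embeddings):
    """Piecewise-linear curve gamma: [0, 1] -> R^d through the wordpiece
    embeddings, with gamma(i / (n - 1)) = y_i at the control points.
    Assumes the sentence has n >= 2 wordpieces."""
    y = np.asarray(embeddings)            # shape (n, d)
    n = len(y)

    def gamma(t):
        x = t * (n - 1)                   # position along the control points
        i = min(int(np.floor(x)), n - 2)  # index of the left control point
        frac = x - i
        return (1 - frac) * y[i] + frac * y[i + 1]

    return gamma

def dissimilarity(gamma1, gamma2, m):
    """Average distance between the two curves at m evenly spaced control
    points; Euclidean distance is our assumption for d."""
    return np.mean([np.linalg.norm(gamma1(i / (m - 1)) - gamma2(i / (m - 1)))
                    for i in range(m)])
```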
Figure 3b shows a plot of BLEU scores of a zero-shot translation versus the average pointwise distance
between it and the same translation from a trained language pair. We can see that the value of this
dissimilarity score is correlated with the quality of the zero-shot translation with a Pearson correlation
coefficient of −0.42, indicating moderate correlation. An interesting area for future research is to find a more
reliable correspondence between embedding geometry and model performance to predict the quality of a
zero-shot translation during decoding by comparing it to the embedding of the translation through a trained
language pair.
6 Mixing Languages
Having a mechanism to translate from a random source language to a single chosen target language using
an additional source token made us think about what happens when languages are mixed on the source or
target side. In particular, we were interested in the following two experiments:
1. Can a multilingual model successfully handle multi-language input (code-switching), when it happens
in the middle of the sentence?
2. What happens when a multilingual model is triggered not with a single target language token but with two tokens weighted such that their weights add up to one (the equivalent of merging the weighted embeddings of these tokens)?
The following two sections discuss these experiments.
6.2 Weighted Target Language Selection
In this section we test what happens when we mix target languages. We take a multilingual model trained
with multiple target languages, for example, English→{Japanese, Korean}. Then instead of feeding the
embedding vector for “<2ja>” to the bottom layer of the encoder LSTM, we feed a linear combination
(1 − w)<2ja> + w<2ko>. Clearly, for w = 0 the model should produce Japanese, for w = 1 it should produce
Korean, but what happens in between?
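In terms of the token embeddings, this amounts to nothing more than a linear interpolation, as in the sketch below (ours; the embedding vectors are random stand-ins for the learned <2ja> and <2ko> embeddings):

```python
import numpy as np

def mixed_target_token_embedding(emb_2ja, emb_2ko, w):
    """Blend the <2ja> and <2ko> token embeddings: w = 0.0 requests pure
    Japanese, w = 1.0 pure Korean; intermediate values feed a mixed vector
    to the bottom layer of the encoder LSTM."""
    return (1.0 - w) * emb_2ja + w * emb_2ko

# Illustrative 1024-dimensional stand-ins for the learned token embeddings.
emb_2ja = np.random.randn(1024)
emb_2ko = np.random.randn(1024)
blend = mixed_target_token_embedding(emb_2ja, emb_2ko, 0.58)  # mostly Korean
```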
One expectation could be that the model will output some sort of intermediate language (“Japarean”),
but the results turn out to be less surprising. Most of the time the output just switches from one language
to another around w = 0.5. In some cases, for intermediate values of w the model switches languages
mid-sentence.
A possible explanation for this behavior is that the target language model, implicitly learned by the
decoder LSTM, may make it very hard to mix words from different languages, especially when these languages
use different scripts. In addition, since the token which defines the requested target language is placed at the
beginning of the sentence, the further the decoder progresses, the less likely it is to put attention on this
token, and instead the choice of language is determined by previously generated target words.
Table 8 shows examples of mixed target language using three different multilingual models. It is interesting
that in the first example (Russian/Belarusian) the model first switches from Russian to Ukrainian (underlined) as the target language before finally switching to Belarusian. In the second example (Japanese/Korean), we
observe an even more interesting transition from Japanese to Korean, where the model gradually changes the
grammar from Japanese to Korean. At wko = 0.58, the model translates the source sentence into a mix of
Japanese and Korean at the beginning of the target sentence. At wko = 0.60, the source sentence is translated
into full Korean, where all of the source words are captured; however, the ordering of the words does not look natural. Interestingly, when wko is increased to 0.7, the model starts to translate the source sentence
into a Korean sentence that sounds more natural.3
7 Conclusion
We present a simple solution to multilingual NMT. We show that we can train multilingual NMT models that
can be used to translate between a number of different languages using a single model where all parameters
are shared, which as a positive side-effect also improves the translation quality of low-resource languages in
the mix. We also show that zero-shot translation without explicit bridging is possible, which is the first time
to our knowledge that a form of true transfer learning has been shown to work for machine translation. To
explicitly improve the zero-shot translation quality, we explore two ways of adding available parallel data
and find that small additional amounts are sufficient to reach satisfactory results. In our largest experiment
we merge 12 language pairs into a single model and achieve only slightly lower translation quality than for
the single language pair baselines despite the drastically reduced amount of modeling capacity per language
in the multilingual model. Visual interpretation of the results shows that these models learn a form of
interlingua representation between all involved language pairs. The simple architecture makes it possible to
mix languages on the source or target side to yield some interesting translation examples. Our approach has
been shown to work reliably in a Google-scale production setting and enables us to scale to a large number of
languages quickly.
Acknowledgements
We would like to thank the entire Google Brain Team and Google Translate Team for their foundational
contributions to this project. In particular, we thank Junyoung Chung for his insights on the topic and Alex
Rudnick and Otavio Good for helpful suggestions.
References
[1] Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S.,
Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G.,
Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zheng, X.
Tensorflow: A system for large-scale machine learning. arXiv preprint arXiv:1605.08695 (2016).
[2] Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align
and translate. In International Conference on Learning Representations (2015).
[3] Caglayan, O., Aransa, W., Wang, Y., Masana, M., García-Martínez, M., Bougares, F.,
Barrault, L., and van de Weijer, J. Does multimodality help human and machine for translation
and image captioning? In Proceedings of the First Conference on Machine Translation (Berlin, Germany,
August 2016), Association for Computational Linguistics, pp. 627–633.
[4] Caruana, R. Multitask learning. In Learning to learn. Springer, 1998, pp. 95–133.
[5] Cho, K., van Merrienboer, B., Gülçehre, Ç., Bougares, F., Schwenk, H., and Bengio,
Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In
Conference on Empirical Methods in Natural Language Processing (2014).
[6] Crego, J., Kim, J., Klein, G., Rebollo, A., Yang, K., Senellart, J., Akhanov, E., Brunelle,
P., Coquard, A., Deng, Y., Enoue, S., Geiss, C., Johanson, J., Khalsa, A., Khiari, R., Ko,
B., Kobus, C., Lorieux, J., Martins, L., Nguyen, D.-C., Priori, A., Riccardi, T., Segal, N.,
Servan, C., Tiquet, C., Wang, B., Yang, J., Zhang, D., Zhou, J., and Zoldan, P. Systran’s
pure neural machine translation systems. arXiv preprint arXiv:1610.05540 (2016).
3 The Korean translation does not contain spaces and uses ‘。’ as punctuation symbol; these are artifacts of applying a Japanese postprocessor.
[7] Dong, D., Wu, H., He, W., Yu, D., and Wang, H. Multi-task learning for multiple language
translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics
(2015), pp. 1723–1732.
[8] Firat, O., Cho, K., and Bengio, Y. Multi-way, multilingual neural machine translation with a shared
attention mechanism. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of
the Association for Computational Linguistics: Human Language Technologies, San Diego California,
USA, June 12-17, 2016 (2016), pp. 866–875.
[9] Firat, O., Cho, K., Sankaran, B., Yarman Vural, F., and Bengio, Y. Multi-way, multilingual
neural machine translation. Computer Speech and Language (4 2016).
[10] Firat, O., Sankaran, B., Al-Onaizan, Y., Yarman-Vural, F. T., and Cho, K. Zero-resource
translation with multi-lingual neural machine translation. In EMNLP (2016).
[11] French, R. M. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 3, 4
(1999), 128–135.
[12] Gage, P. A new algorithm for data compression. C Users J. 12, 2 (Feb. 1994), 23–38.
[13] Gillick, D., Brunk, C., Vinyals, O., and Subramanya, A. Multilingual language processing
from bytes. In Proceedings of the 2016 Conference of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies (San Diego, California, June 2016),
Association for Computational Linguistics, pp. 1296–1306.
[14] Hutchins, W. J., and Somers, H. L. An introduction to machine translation, vol. 362. Academic
Press London, 1992.
[15] Kalchbrenner, N., and Blunsom, P. Recurrent continuous translation models. In Conference on
Empirical Methods in Natural Language Processing (2013).
[16] Lee, J., Cho, K., and Hofmann, T. Fully character-level neural machine translation without explicit
segmentation. arXiv preprint arXiv:1610.03017 (2016).
[17] Luong, M.-T., Le, Q. V., Sutskever, I., Vinyals, O., and Kaiser, L. Multi-task sequence to
sequence learning. In International Conference on Learning Representations (2015).
[18] Luong, M.-T., Pham, H., and Manning, C. D. Effective approaches to attention-based neural
machine translation. In Conference on Empirical Methods in Natural Language Processing (2015).
[19] Luong, M.-T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. Addressing the rare word
problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for
Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(2015).
[20] Maaten, L. V. D., and Hinton, G. Visualizing Data using t-SNE. Journal of Machine Learning
Research 9 (2008).
[21] Richens, R. H. Interlingual machine translation. The Computer Journal 1, 3 (1958), 144–147.
[22] Schultz, T., and Kirchhoff, K. Multilingual speech processing. Elsevier Academic Press, Amsterdam,
Boston, Paris, 2006.
[23] Schuster, M., and Nakajima, K. Japanese and Korean voice search. 2012 IEEE International
Conference on Acoustics, Speech and Signal Processing (2012).
[24] Jean, S., Cho, K., Memisevic, R., and Bengio, Y. On using very large target
vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association
for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing
(2015).
[25] Sennrich, R., Haddow, B., and Birch, A. Controlling politeness in neural machine translation via
side constraints. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the
Association for Computational Linguistics: Human Language Technologies, San Diego California, USA,
June 12-17, 2016 (2016), pp. 35–40.
[26] Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword
units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (2016).
[27] Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In
Advances in Neural Information Processing Systems (2014), pp. 3104–3112.
[28] Tsvetkov, Y., Sitaram, S., Faruqui, M., Lample, G., Littell, P., Mortensen, D., Black,
A. W., Levin, L., and Dyer, C. Polyglot neural language models: A case study in cross-lingual
phonetic representation learning. In Proceedings of the 2016 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies (San Diego, California,
June 2016), Association for Computational Linguistics, pp. 1357–1366.
[29] Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao,
Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Łukasz Kaiser,
Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W.,
Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and
Dean, J. Google’s neural machine translation system: Bridging the gap between human and machine
translation. arXiv preprint arXiv:1609.08144 (2016).
[30] Yamagishi, H., Kanouchi, S., and Komachi, M. Controlling the voice of a sentence in japanese-to-
english neural machine translation. In Proceedings of the 3rd Workshop on Asian Translation (Osaka,
Japan, December 2016), pp. 203–210.
[31] Zhou, J., Cao, Y., Wang, X., Li, P., and Xu, W. Deep recurrent models with fast-forward
connections for neural machine translation. Transactions of the Association for Computational Linguistics
4 (2016), 371–383.
[32] Zoph, B., and Knight, K. Multi-source neural translation. In NAACL HLT 2016, The 2016 Conference
of the North American Chapter of the Association for Computational Linguistics: Human Language
Technologies, San Diego California, USA, June 12-17, 2016 (2016), pp. 30–34.