In a paper she has recently revived on academia.edu (see References), Elisabet Tiselius, citing Grbic, reminds us that there are other ways of evaluating interpretation quality than the traditional one.
So what's the traditional one? It's to chunk the source text and its translation into short segments (translation units), compare them one by one, assign each a score of 'good' or 'bad' (a procedure that is inevitably subjective to some degree), and then either add up the good scores or subtract the bad ones from 100. Modern alignment software makes this easy.
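For the computationally minded, the two arithmetic variants just described can be sketched in a few lines of Python. The segment texts and judgments below are invented examples, and the five-point penalty is an arbitrary illustration, not an established weighting:

```python
# A sketch of traditional segment-by-segment scoring. Each tuple is
# (source segment, translated segment, evaluator's judgment).
segments = [
    ("Winnipeg, Man.", "Winnipeg, Homme.", "bad"),
    ("Apply in person.", "Se présenter en personne.", "good"),
    ("Salary to be negotiated.", "Salaire à négocier.", "good"),
]

# Variant 1: add up the scores (percentage of units judged 'good').
good = sum(1 for _, _, judgment in segments if judgment == "good")
score_additive = 100 * good / len(segments)

# Variant 2: subtract errors from 100 (a flat penalty per 'bad' unit;
# the penalty size here is invented for illustration).
PENALTY = 5
bad = sum(1 for _, _, judgment in segments if judgment == "bad")
score_subtractive = 100 - PENALTY * bad

print(round(score_additive, 1))  # 66.7
print(score_subtractive)         # 95
```

Either way, the final number depends entirely on those one-by-one subjective judgments, which is the weakness the FFP approach sidesteps.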
"Grbić explores the concept of quality as it has been treated in interpreting literature and interpreting studies. She divides the construct of interpreting into different dimensions according to how it is perceived…. Third, quality is defined as something that is fit for a certain purpose, with a premium placed on customer satisfaction and value for money."

What Grbić says about interpreting applies also to written translation.
Here are two examples to illustrate "that is fit for a certain purpose", or for short fit for purpose (FFP). The first has already been recounted on this blog, but that was back in April 2010 – Gosh! This blog has been going for five years already! – so I may be forgiven for repeating myself. Back around 1990, a branch of the Canadian government was using machine translation (MT) to translate into French the English notices of job vacancies that were posted up each day in government employment centres. It was a requirement of the Canadian laws on bilingualism. The MT system contracted to do it was, to say the least, rather primitive, and at one point it became notorious for a classic mistake. The French for man is homme. But Man. (with the initial capital and the dot) is also a standard abbreviation for Manitoba, one of the Canadian provinces. So whenever there was a job vacancy in a town in that province, the location would come out, for example, as Winnipeg, Homme. One day I found myself sitting next to a senior official from the ministry at a conference and I couldn't resist asking him about this. Piqued, he retorted:
“The only people who complain about our translations are professional translators and university professors like you. Our clients are happy with them because they get them the same day. If they had to wait even 24 hours while we sent them to the government translation bureau, the chances are that the vacancy would already be filled. And as for Manitoba / Homme, well they soon learn that Homme means Manitoba. For them, our translations serve their purpose.”

The second example is much more recent, in fact from last week. A Spanish student wanted to find articles about how expert translators use their dictionaries. Among the many that Google found for her was one in German, and she doesn't know any German. So she ran the title through Google Translate, which gave her:
"Para fundamento técnico-acción de uso del diccionario de investigación."

Apart from the bad grammar of the first part, the latter part of the Spanish is an outright mistranslation. If we compare it with the German
– Zur handlungstheoretischen Grundlegung der Wörterbuchbenutzungsforschung (roughly, 'On the action-theoretical foundation of research into dictionary use') – we see that the translation ought not to be uso del diccionario de investigación (use of the research dictionary) but investigación del uso del diccionario (research on the use of dictionaries). Complex syntax is still a stumbling block for Google Translate and software like it. Never mind. All she wanted at that stage was confirmation that the article dealt with the use of dictionaries, and she got it. The translation was screwed up, yet it was nevertheless fit for purpose.
Incidentally, this use of MT to search for relevant publications like needles in a haystack, known as scanning, was one of the earliest applications of MT. From the late 1950s, alarmed by the unexpected success of the Russian Sputnik, the United States Air Force used it at their Wright-Patterson base to scan Soviet technical publications. Usefulness for scanning may still be MT developers' best defence against critics of their systems' output.
However, the admission of FFP as a standard of translation leads to another problem. How can we score it and scale it? Unlike the traditional approach, it has no established procedures. The government official implied his own simplistic answer: no user complaints means 100% FFP. But below 100%? Perhaps a solution lies in sounding out customer satisfaction with a one-line questionnaire to be attached to each translation:
"On a scale of 1 to 5 (useless to very satisfactory), indicate whether this translation has met your needs."

Other suggestions welcome.
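One way the questionnaire responses might be turned into a percentage figure comparable to the traditional scores is sketched below. The linear mapping from the 1-5 scale onto 0-100% is my own assumption for illustration, not an established procedure, and the ratings are invented:

```python
# Hypothetical sketch: converting 1-5 questionnaire responses into an
# FFP percentage. The linear rescaling (1 -> 0%, 5 -> 100%) is an
# assumption, not an established metric.
responses = [5, 4, 5, 3, 4]  # invented customer ratings

def ffp_score(ratings):
    """Average the ratings, then rescale from the 1-5 range to 0-100."""
    mean = sum(ratings) / len(ratings)
    return 100 * (mean - 1) / 4

print(round(ffp_score(responses), 1))  # 80.0
```

On the government official's simplistic rule, of course, any batch without complaints would simply score 100%.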
Elisabet Tiselius. The development of expertise – or not: three simultaneous interpreters' development over time. 2013. Available for download from academia.edu (https://su-se.academia.edu/ElisabetTiselius).
Nadja Grbić. Constructing interpreting quality. Interpreting: International Journal of Research and Practice in Interpreting, vol. 10, no. 2, pp. 232-257, 2008.
Herbert Ernst Wiegand. Zur handlungstheoretischen Grundlegung der Wörterbuchbenutzungsforschung. Lexicographica No. 3, pp. 178-227, 1987.