The question of MT post-editing for professional translation work is pretty much unavoidable nowadays. It’s also a powder keg for some people. For some, MT is a godsend with the potential to speed up translations like nothing before. For others, it’s a crude hack that should be avoided at all costs. I think most would agree that, just like MT in general, it really depends on the language pair and the subject matter in question. The debates surrounding this question get quite heated in many cases, but what I’ve noticed over the past couple of years is a surprising lack of data to back up any claims. Sure, there’s anecdotal evidence all over – “I use it and my translations are better than ever,” vs. “MT slows me down and messes me up ALWAYS.” But where’s the data?
In search of an answer to this question, I stumbled upon this article:
http://www.mt-archive.info/AMTA-2010-VanEss-Dykema.pdf. It appears that someone is actually looking into this empirically. The difficulty in this empirical approach, of course, lies largely in translation metrics – how do you really determine translation quality? There’s no question that using MT and doing post-editing will give you different results from those you would get if a translator did it the old-fashioned way, but how do you determine the quality of one versus the other? And when does a speed increase trump linguistic accuracy?
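To give a flavor of what automatic quality measurement even looks like: metrics like BLEU score a machine output against a human reference translation using clipped n-gram precision. Here’s a minimal toy sketch of that idea (my own simplified illustration – real BLEU also combines several n-gram orders, applies a brevity penalty, and supports multiple references):

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Clipped n-gram precision: the fraction of the candidate's n-grams
    that also appear in the reference, with per-n-gram counts clipped so
    repeating a word can't inflate the score."""
    cand_tokens = candidate.split()
    ref_tokens = reference.split()
    cand_ngrams = Counter(tuple(cand_tokens[i:i + n])
                          for i in range(len(cand_tokens) - n + 1))
    ref_ngrams = Counter(tuple(ref_tokens[i:i + n])
                         for i in range(len(ref_tokens) - n + 1))
    overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

# 5 of the 6 candidate unigrams ("the" x2, "cat", "on", "mat") match → 5/6
score = ngram_precision("the cat sat on the mat", "the cat is on the mat")
```

The obvious limitation – and part of why translators are skeptical – is that a fluent, accurate translation that happens to use different wording than the reference scores poorly, which is exactly the gap between surface overlap and real quality.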
The article mentioned above is a case study proposal from last year that doesn’t seem to have had results published yet. I’ll see if I can get hold of any preliminary findings, but it’s encouraging to me that in the near future this question won’t just be he-said-she-said. Without the data, I can see both sides of the argument – if post-editing of MT gives you quick, intelligible results, then I can see many cases where it would be very useful. On the other hand, MT doesn’t think. If you are familiar with the Vauquois Triangle (an often-used graphic that shows the different levels at which meaning transfer in MT can occur), anything that doesn’t use an interlingua to translate is not working at the level of a human and thus probably misses nuances that get lost in an MT post-editing process. And while MT can stay true to word-level, syntactic, and some semantic constructs through statistical leveraging of enough data and some rules, translators understandably balk at numbers and rules governing something both innately artistic and cerebral.
So until we have the data, live and let live. If professional translators are using MT to increase their speed without compromising quality, then great. And if you are one of those who believe, for whatever reason, that MT is never a viable solution for professional translation, then ponder and paint away.