Yes, yes, the robots are getting good: artificial intelligence can chew up words, sentences, whole books even, faster than I, lowly human, can read them. The AI then reiterates it all as a series of numbers and decodes that output into the data set of another language. It’s quick, and, barring the kind of sirena/mermaid/alarm hilarity that memes are made of, and software’s infuriating inability to understand where to stick a comma, it can be reasonably accurate. Which is fine, if you like that sort of thing.
Accuracy is the enemy of art. Even in its mechanics, through its trajectory—the way it parcels language into pieces—machine translation misses the point: segmentation undermines meaning. I don’t just mean the nuances and subtle shifts that dance around with syntax, emphasis, and inflection. Meaning itself is inherently the result of its own evolution. You may have read a sentence or a paragraph five times, but when you sit down to translate it, you are learning its meaning anew; that is, its meaning is being created anew. Every word teaches you where you are going, and where you have come from, shaping the target text in a way that would have been impossible to predict from the mere sum of its parts. The translator, too, changes as they translate: not only do they come to understand the meaning of what they’re translating as they go, they simultaneously allow that meaning to mutate, generating the meaning that will become the work as they themselves change. We are cyborgs already, made of flesh and language.
Humans manage semantic information through complex, extensive pathways, including seemingly superficial categorization (homonyms, for example). But this process also borrows more deeply from our memories, our immediate surroundings, all the other scraps of knowledge stored hither and yon and linked more or less randomly to other knowledge. Association is not random at all, of course: the word I began with up there, two lines ago, which became “borrows,” was “delve,” but that wasn’t quite right. (How did I know it wasn’t right? What is that instinct, that can’t-put-my-finger-on-it? Our almost is an organ the machines do not possess.) What I was after was the French word “puiser,” which brought me over to a well, and from there it was a skip-hop to dig, mine, and finally burrow, which was elegantly typoed into borrow. Association is wrought by tension, from one almost-right to another, and stretching from the literal to the figurative, a tension that oscillates, constantly adjusting. That tension is energy, and it feeds the text, the art.
We also ascribe meaning based on everything that has come before the moment of translation. A word is never just a word, it is also a childhood, it is its first acquisition, it is all the times you pronounced it wrong because you’d only seen it written down; it is a monument, a love letter, a song, a lullaby, a decree, a sermon, an elegy. Translators choose their words carefully, ceding space to the author, and trying to uphold the primacy of the original, but we cannot erase ourselves completely, nor should we want to: what sets a translation apart as literature is not its faithfulness, but its polyphony. Artificial intelligence does not sing with the author’s human instrument, it is a tinny reproduction of the score, an electronic facsimile, lacking the scritch of the vocal cords, the biological necessity of drawing a breath to be able to keep going, the nervous anticipation of where we’re going next. Most of all, it lacks the slips and misses.
AI errs without trial.
To err is human, yadda yadda. I just mistyped “to err us human,” then corrected it as “to err as human,” before finally coming to the intended verb. But on the way there, I stopped in at the first-person plural, steeped in collectivity, and then tripped over a chewy little comparative adverb. Error is intrinsic to creation, not just because it can’t be avoided, but because of the detours it forces. (Amish women quilting make a deliberate mistake in their work to avoid hubris, so as not to aspire to the perfection of the divine. I’m pretty resolutely atheistic, but it’s nice to tell myself that’s why I fuck up.) If accuracy is the enemy of art, then perhaps adjacency is its impish house elf. Where neural networks lay little nuggets of content side by side as ones and zeros, the human mind is more easily distracted, squirreling left and right on its way to its destination, scanning the trees along the path to see what might jump out, and sometimes going down the wrong path entirely. Compare even the act of looking up a word electronically versus in a print dictionary. Online, for instance, I get “illuminate |ɪˈluː.mɪ.neɪt|, verb: To make something visible or bright . . . ” Clear enough, and not unpoetic. But flipping to that page in my OED gives me an eyeful, and a brainful—not only “illuminate,” but also “illegitimate,” “illusory,” “ill-fated,” “illustrious,” and—lo!—a word I didn’t know, “illumine.” If I am a good translator, I will not let that adjacency hijack accuracy; if I am a better (and a human) translator, tiny particles of those infinite proximate possibilities will seep into my work, changing it, imperceptibly, perhaps, but inevitably injecting some pressure. Although sometimes (most times?) the actual word we choose as the equivalent of another may well be the same one an AI would have spit out, all the associations—distractions, surprises, missteps—we accumulate on the way to that word, that sentence, that text, cannot help but change us, the meaning we contain, and the meaning we create.
AI makes its bed, and we have to sleep in it.
Machine translation makes mistakes, the best of all time being perhaps “polissez la saucisse” for “Polish sausage.” Those mistakes are yet another turn of the screw in the shackles that bind the unluckiest among us in the technocratic pyramid. Beyond the crushing environmental cost, already our robot overlords are forcing us into servitude, exploiting AI-model trainers in digital sweatshops or as content moderators—actual people, mostly in the global south, who are paid peanuts to sit there and triage content that is at best tedious and at worst abhorrent. And already the next generation of technical translators is doomed to a professional fate of correcting the machines’ oversights as underpaid post-editors. Good thing three unimaginably wealthy guys are saving a few bucks, eh. The biases embricked into the machine—the worst of our worst, apparently—add another flavour of evil, too.
My argument is that AI makes the wrong kind of mistakes. Even the most advanced large language models haphazardly invent prophecies, skip over complex sentences, and jumble pronouns. AI flops on context, too: it is Anglo-centric, racist, misogynistic. It distills the worst of us, in terms of both abilities and biases. But what AI doesn’t do is operate within risk and error the way humans do. Mistakes are crucial to human development and learning; as the (Slavic, obviously) saying goes, burn yourself with hot milk and next time you’ll blow on cold water. In terms of making art, the water won’t burn you, but you’ll have learned to blow, and, best of all, the little ripples in the surface of the water might be shaped like something that reminds you of something else that leads to a metaphor that solves the problem. Mistakes lead to unexpectedly good outcomes.
Mistakes breed resilience, and, most importantly, humility. I don’t know about you, but I write best when I feel like I have forgotten how to write entirely, when I have no idea what I’m doing, when watching my thoughts take shape on paper or seeing my process unfurl on the screen teaches me where I’m going, what I’m thinking, what I have to say.
Mistakes are a reminder to pay closer attention, and attention is the leavening agent of translation. Attention is how we care for the source text; attention, and its kissing cousin, focus, also frees up our peripheral vision, allowing it to notice-without-noticing all the sidelong chaff that sneaks in. In this regard, using AI at all wears down the supposedly parallel competency in humans: just as resorting to generative AI to write for us is making us stupider, and romancing interpersonal AI is making us psychotic, translating with AI narrows our multilingual—and multisensory, and multi-everything-else—periphery.
Human translation is fed by adjacency and by latency. A work of literary translation is palimpsestic in so many ways: it contains every draft the author overwrote or threw out; it recalls the plodding days we spent circling around a single word for hours, and the rush of those moments when we plowed through entire passages at once; and it holds, fraught and buried, but enriching the final art, all the times we went sideways, either from bewilderment or blunder, and all the growth engendered.
