“Translation—with a Little Help from Our Computers,” Ensign, Apr. 1979, 30
Fact: The Book of Mormon is currently published in less than one percent of the world’s languages.
Fact: According to one estimate, it takes somewhere between four and ten years to translate the Book of Mormon, and about five years to translate fifteen tracts and the missionary discussions.
Fact: If we translate the Book of Mormon into four new languages a year, it will take nearly forty years to cover only those languages spoken by more than one million people, and there are more than 3,000 spoken languages and major dialects.
A key to making it happen faster may be the Translation Sciences Institute at Brigham Young University, last described in the Ensign in July 1974. (Justus Ernst, “Every Man … In His Own Language … ,” pp. 23–27.) Since then, the Institute has come several steps closer to making its dream a reality by working on two different problems simultaneously: (1) computer word processing and translation aids, and (2) interactive computer translation.
Getting the words from one page in English to another page in Spanish is not a simple process. A translator has to convert the text from one language to another; his draft is typed; he proofreads and corrects; it’s typed again; it’s reviewed and retyped, reviewed again and retyped, proofread, typeset, proofread again, and printed. At any stage, errors can be introduced. And all of these steps take time.
At BYU’s Translation Sciences Institute, another process is under study—word processing. That means that a translator takes the original document and types his translation directly into the computer, following his progress on a screen above the keyboard. He can proofread, edit, and correct as he goes along. If a word doesn’t need to be changed, it will never be typed again and, hence, can never be misspelled. He can add, delete, or change letters, words, phrases, sentences, paragraphs, or even pages.
When he’s finished, he simply records the translated document on a cassette tape or “floppy disk.” His reviewers, using the same kind of terminal, can make all their own changes without altering the parts that should stay the same. The computer will also typeset it. TSI did not invent word processing (newspapers use it too) but TSI pioneered its use for translation in conjunction with the Church’s Translation Services.
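The key advantage described above, that unchanged text is never retyped and so can never pick up new errors, can be suggested with a toy revision cycle. The function name, the data layout, and the sample sentences below are illustrative inventions, not part of any TSI system.

```python
import difflib

def revise(document, edits):
    """Apply reviewer edits to a stored draft. Each edit replaces one
    numbered line; every untouched line is carried over character for
    character, so it cannot be mistyped during the revision."""
    lines = document.splitlines()
    for line_number, new_text in edits.items():
        lines[line_number] = new_text
    return "\n".join(lines)

draft = "In the beginning was the Word.\nAnd the Wrod was with God."
second_draft = revise(draft, {1: "And the Word was with God."})

# Only the corrected line differs between the two drafts; the first
# line survives untouched, exactly as the article describes.
changed = [d for d in difflib.ndiff(draft.splitlines(),
                                    second_draft.splitlines())
           if d.startswith(("-", "+"))]
print(changed)
```

Each reviewer works against the same stored copy, so successive review passes accumulate corrections without re-entering, and so without re-risking, the parts that were already right.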
Results? Actual translation still takes about the same amount of time, but the other steps can be done so quickly that the total time is cut almost exactly in half. And since the computer works so fast, the savings in money are equally dramatic.
Can anything be done to cut down the translation time itself? Yes. One of the current projects is an instantaneous retrieval system that will call to the video screen the dictionaries and a bank of all translated scriptures, which now have to be laboriously found one at a time.
Some helps now operational include a system that represents the accent marks of most languages, the ability to print 10,000 Chinese characters, and analytical concordances for the standard works. Under development is a system for editing Chinese text, to the Institute’s knowledge one of the few such projects in the world.
The second major focus is the one that sounds like science fiction—interactive computer translation. This means that a human operator “guides” the computer through the translation process, and BYU’s Translation Sciences Institute is the only major center in the country where it’s working; the Institute has received international recognition for its pioneering efforts. Traditionally, one problem that has always hampered computer translation is that the machine doesn’t know what to do with ambiguities, colloquial phrases, and grammatical peculiarities. Dr. Eldon Lytle, associate professor of linguistics and director of the Institute, developed a language “model” called Junction Grammar, which “instructs” the computer on the relationships that exist between language elements and “tells” it what the equivalent would be in the target language. English uses more passive constructions than most languages, for instance, but most languages have equivalent structures; Junction Grammar “tells” the computer to make one of those equivalent transfers whenever the target language requires it. The computer “builds” a structural representation of the English sentence and generates a corresponding structure in the target language.
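The transfer idea, build a structure for the English sentence and then generate a corresponding structure in the target language, can be sketched with a toy rule. The tree shape, the rule, and the Spanish preference shown here are illustrative assumptions for this sketch, not Junction Grammar itself.

```python
# Toy structural representation of the passive sentence
# "The book was written by Mormon": a voice label plus role slots.
english_structure = ("passive", {"verb": "write",
                                 "agent": "Mormon",
                                 "patient": "book"})

def transfer(structure, target_rules):
    """Map a source-language structure to the target language's
    equivalent structure by looking up a rule for its top node."""
    voice, roles = structure
    return target_rules[voice](roles)

# Hypothetical rule set for a target language that, like many
# languages, prefers an active construction where English uses
# a passive one.
spanish_rules = {
    "passive": lambda r: ("active", {"verb": r["verb"],
                                     "subject": r["agent"],
                                     "object": r["patient"]}),
}

target_structure = transfer(english_structure, spanish_rules)
print(target_structure)
# prints: ('active', {'verb': 'write', 'subject': 'Mormon', 'object': 'book'})
```

The point of working on structures rather than word strings is that one rule covers every passive sentence, instead of the machine guessing word by word.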
And when an ambiguity comes up that the machine can’t handle, it stops and asks the operator a question. (The human operator working with the computer at this stage is highly trained in English, the language of origin.) For instance, when the computer encounters the word pen, as a noun, it discovers three meanings: a writing instrument, a pig’s domicile, and a prison. The translator specifies which meaning the word should be given. Another potentially ambiguous construction is “I found the boy in my car.” The computer wants to know: who was in the car? I? or the boy? Or both? When the ambiguity is resolved, the computer finishes its processing and produces a draft translation.
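The stop-and-ask step can be suggested in miniature. The three senses of pen come from the article; the sense table, function, and scripted operator are illustrative inventions standing in for the interactive terminal.

```python
# Toy sense inventory; "pen" carries the three senses the article lists.
SENSES = {
    "pen": ["writing instrument", "enclosure for pigs", "prison"],
}

def disambiguate(word, ask):
    """When a word has more than one sense, stop and ask the human
    operator which is meant; unambiguous words pass straight through."""
    senses = SENSES.get(word, [word])
    if len(senses) == 1:
        return senses[0]
    return ask(word, senses)

# A scripted "operator" standing in for the person at the terminal,
# who here always picks the first listed sense.
operator = lambda word, senses: senses[0]
print(disambiguate("pen", operator))
# prints: writing instrument
```

Because the machine only interrupts when it genuinely cannot decide, the operator's time goes to the hard cases while routine words flow through untouched.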
Then the translation is reviewed by a native speaker of the target language at a video terminal that shows both English and the target language. Thus, a human operator has both the first and last word, and each is primarily responsible for the subtleties of his own language. This double expertise produces more accurate translations.
Just how accurate are these translations? An independent evaluation by Sperry Univac Corporation in June 1977 showed about 96 percent technical accuracy in grammar, word selection, and completeness. When all of the systems are operative—and the target date for linking word processing with the interactive computer translation system is September 1979—accuracy will rise even higher. They call this system their “Model T,” to be refined over the next several months. TSI will start with five languages: English to German, French, Portuguese, Spanish, and Chinese. Since all five translations can be made simultaneously and since much less editing will be required after translation, a project that used to take five hours should be done in two. TSI’s goal is eventually to use 100 percent of the capacity of its IBM 370/138 computer, a gift of BYU’s former president Ernest L. Wilkinson and his family.
And after the goal is met in the future, what’s on the drawing boards? Perfecting the process for other languages, of course.
One long-range plan is machine interpretation of conference addresses. For instance, the text of a conference talk to be given in English will be translated—into Spanish, say. This text will then be put into a machine synthesizer to produce spoken Spanish, a system that even has the potential of altering pitch, speed, and intonation so that it will sound like the same speaker—or a different speaker. At first this won’t happen at the same time the English speech is given, but someday it may be very nearly simultaneous.
There are a lot of potential spin-offs from speech processing: in what may be the ultimate application, one native speaker could read all of the parts on a filmstrip script and the computer could transform the voices into those of the different characters.
Speech processing also may help students learn foreign languages faster, and it may be a real aid for the severely deaf. A patent has already been granted on a low-frequency auditory code.