Meta's Omnilingual MT for 1,600 Languages
104 points - last Wednesday at 5:00 AM
stingraycharles
today at 1:42 PM
I find that Meta's translations are very poor compared to others, at least for relatively obscure languages, which I figured was relevant considering the article.
Google Translate is a good default, but LLMs are really good at translation, as they're better at understanding context and providing culturally appropriate translations.
(I live in Cambodia where they speak Khmer)
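For what it's worth, here's roughly how I prompt an LLM for context-aware translation. This is a minimal sketch, not any particular product's API usage; the model name, prompt wording, and the choice of the OpenAI client are just illustrative assumptions:

    # Minimal sketch: context-aware translation via an LLM API.
    # The model name and prompt wording are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def translate(text: str, target_lang: str, context: str = "") -> str:
        # Supplying surrounding context is what lets the model pick the
        # right register and culturally appropriate phrasing.
        prompt = (
            f"Translate the following into {target_lang}. "
            f"Context: {context or 'casual conversation'}.\n\n{text}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable multilingual model works
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(translate("Where does this bus go?", "Khmer",
                    context="asking a driver at a market in Phnom Penh"))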
djsamseng
today at 3:07 PM
Hello from Siem Reap, Cambodia! Awesome to see a fellow tech enthusiast from Cambodia.
I actually found Facebook's translations pretty good (better than Google Translate for anything longer than a sentence). From my understanding, Khmer is a bit more verbose and context-dependent, so LLMs would be a big help in understanding those nuances.
In the inverse case (LLMs generating Khmer from English), I heard from locals that it sounds formal and "robotic", which I found quite interesting.
pseudocomposer
today at 3:18 PM
Kagi Translate is fantastic. Multilingual support is honestly one of the best things about LLMs, imo.
So, LLMs are noticeably better in Khmer than Google Translate? I wonder why Google Translate doesn't use Gemini under the hood. Perhaps it's more prone to hallucinations.
I'm interested in finding some thorough testing of translations across different LLMs vs. translation APIs.
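If anyone wants to run their own comparison, a simple starting point is scoring each system's output against human reference translations with chrF via sacrebleu (chrF tends to hold up better than BLEU for morphologically rich languages). The sentences below are made-up placeholders, not real benchmark data:

    # Sketch: compare MT systems against reference translations with chrF.
    # The sentences are placeholders; swap in a real test set.
    import sacrebleu

    references = [[
        "Reference translation of sentence one.",
        "Reference translation of sentence two.",
    ]]  # one inner list per reference set

    systems = {
        "google_translate": ["System A output one.", "System A output two."],
        "llm": ["System B output one.", "System B output two."],
    }

    for name, hypotheses in systems.items():
        score = sacrebleu.corpus_chrf(hypotheses, references)
        print(f"{name}: chrF = {score.score:.1f}")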
pattilupone
today at 5:10 PM
There's a dropdown on Google Translate that lets you choose "Advanced" mode or "Classic" mode. Advanced mode uses Gemini but it's only available for select languages.
yellow_lead
today at 3:37 PM
It's not even good for Chinese
smallerize
today at 1:56 PM
*they're
(Sorry I had to)
stingraycharles
today at 2:05 PM
I could have sworn I edited it! I did notice it myself as well, but thanks for the correction.
I'll be looking at this in detail. I've started a company to do similar things, https://6k.ai
I'm currently concentrating on better data gathering for low-resource languages.
When you look in detail at data like Common Crawl, finepdfs, and fineweb, (1) they are really missing quality data sources that exist if you know where to look, and (2) the sources they do have are not processed "finely" enough (e.g. finepdfs classifies each page of a PDF as having a single language, whereas many language-learning sources contain language pairs; see the sketch below).
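To make that second point concrete, here's a rough sketch of line-level language ID using fastText's public lid.176.bin model. The bilingual sample lines are made up, and real parallel material obviously needs more careful alignment than this:

    # Sketch: classify language per line instead of per page, so that
    # bilingual material (e.g. vocabulary lists) isn't collapsed into
    # a single page-level language label.
    import fasttext

    model = fasttext.load_model("lid.176.bin")  # fastText language-ID model

    def label_lines(page_text: str):
        labeled = []
        for line in page_text.splitlines():
            line = line.strip()
            if not line:
                continue
            labels, probs = model.predict(line)
            lang = labels[0].replace("__label__", "")
            labeled.append((lang, float(probs[0]), line))
        return labeled

    # A page-level classifier would tag this whole page as one language
    # and throw away half of the language pairs.
    sample = "la casa\nthe house\nel perro\nthe dog"
    for lang, conf, line in label_lines(sample):
        print(f"{lang} ({conf:.2f}): {line}")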
Common Crawl has been running a low-resource language project for 1.5 years now -- it's a hard problem.
There are many nation states working on this; have you looked into the availability of those datasets?
What languages are you prioritizing?
Yes, there are government datasets, language "academies" (or "regulators", i.e. organizations focused on preserving and teaching the language), and often smaller local publishers that publish material in their local language.
I'm living in Guatemala, so I have been focusing on the Mayan languages here (22 languages, millions of speakers).
As an aside, I remember visiting Guatemala (in the border area near Chiapas) in the early 90s and discovering that “Mayan” was not the monolith that I had been led to believe by my culturally narrow American education, but was a diverse collection of related cultures with multiple languages.
In one of the villages we visited, there was a language school where foreigners were learning Jacalteco. One student was from Israel and where most of the students had vocabulary lists in three columns (Jacalteco - Spanish - English), his had four columns where he did one more step of translation to Hebrew.
Just spent a long time trying to find where you can download any of these weights.
Is it open weight? If so, why isn't there just a straight link to the models?
I haven't seen anywhere claiming they are open weight (although their last similar model, NLLB, was).
They say their leaderboard and evaluation datasets are freely available. The closest statement I've seen in the paper is: "Our translation models are built on top of freely available models."
garyclarke27
today at 3:39 PM
They can translate 1,600 languages, but they can't do basic text formatting; where are the paragraphs?
canjobear
today at 5:33 PM
It's an abstract for a paper, so it's officially supposed to be one paragraph.
BalinKing
today at 6:35 PM
In the paper itself, the abstract actually does have a paragraph break, so it's probably just an autoformatting issue or something.
psychoslave
today at 1:44 PM
That's a high count, but still a bit away from "Omni". The usual count is between 4k and 8k, depending on the source. But the first 1k might be the hardest, certainly.
simultsop
today at 3:24 PM
When you market, you use frontier and edge terms, so it sounds "pro max".
Didn't research show that models get worse at translation the more languages are added? The curse of multilinguality? Lauscher 2020?
It looks like Meta found a way forward.
Reading Meta's abstract, it seems they have found ways to improve the quality of the training data, and also built new evaluation tools?
They also say that OMT-LLaMA does a better job at text generation than other baseline models.
Off topic, but since the AI craze, MS's documentation translation has had ridiculous errors on German pages, like translating the try/catch keywords to "versuchen" and "fangen" (the literal German verbs for "to try" and "to catch").
Yes, their translations add negative value, which is annoying because at work you usually can't choose your locale settings.
And the errors are really basic, like translating "shortly" as "short" - not the same thing at all!