- cross-posted to:
- Aii@programming.dev
Not really surprising that they’re good at analyzing language, since they are Large Language Models after all. Still neat to see, though. Here’s the most interesting bit:
In the phonology task, the group made up 30 new mini-languages, as Beguš called them, to find out whether the LLMs could correctly infer the phonological rules without any prior knowledge. Each language consisted of 40 made-up words. Here are some example words from one of the languages:
- θalp
- ʃebre
- ði̤zṳ
- ga̤rbo̤nda̤
- ʒi̤zṳðe̤jo
They then asked the language models to analyze the phonological processes of each language. For this language, o1 correctly wrote that “a vowel becomes a breathy vowel when it is immediately preceded by a consonant that is both voiced and an obstruent” — a sound formed by restricting airflow, like the “t” in “top.”
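For fun, here's a quick sketch (mine, not from the study) that mechanically checks o1's rule against the five sample words. I'm assuming breathy voice is marked with U+0324 (combining diaeresis below), and the voiced-obstruent set is just a guess covering the consonants that appear in these words:

```python
import unicodedata

# U+0324 COMBINING DIAERESIS BELOW marks breathy voice in the examples.
BREATHY = "\u0324"
VOWELS = set("aeiou")
# Assumed voiced-obstruent inventory, inferred from the five sample words.
VOICED_OBSTRUENTS = set("bdgzʒð")

def follows_rule(word: str) -> bool:
    """True if every vowel is breathy exactly when a voiced obstruent precedes it."""
    chars = list(unicodedata.normalize("NFD", word))  # split off combining marks
    for i, ch in enumerate(chars):
        if ch not in VOWELS:
            continue
        is_breathy = i + 1 < len(chars) and chars[i + 1] == BREATHY
        after_voiced_obstruent = i > 0 and chars[i - 1] in VOICED_OBSTRUENTS
        if is_breathy != after_voiced_obstruent:
            return False
    return True

words = ["θalp", "ʃebre", "ði̤zṳ", "ga̤rbo̤nda̤", "ʒi̤zṳðe̤jo"]
print(all(follows_rule(w) for w in words))  # True
```

All five words pass, which is consistent with the rule o1 stated: breathy vowels only ever show up right after b, d, g, z, ʒ, or ð.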
The languages were newly invented, so there’s no way that o1 could have been exposed to them during its training. “I was not expecting the results to be as strong or as impressive as they were,” Mortensen said.
I’ve also tried out various LLMs on daily puzzles they couldn’t have been trained on, like Connections, and they do a really good job. I don’t think the end of humanity is nigh or anything dramatic like that, but IMO this undercuts the people who really want to hate AI and claim it has zero intelligence.


