
AI models have favorite numbers, because they think they’re people | TechCrunch

AI models always surprise us, not only in what they can do, but also in what they cannot do, and why. An interesting new behavior of these systems is both superficial and revealing: they pick random numbers the way humans do, which is to say, badly.

But first, what does that mean? Can’t people just pick numbers at random? And how can you tell if someone is doing that successfully? This is actually a very old and well-known limitation that we humans have: we overthink randomness and misunderstand it.

Ask someone to predict 100 coin flips, then compare that to 100 real coin flips – you can almost always tell them apart because, counter-intuitively, the real flips look less random. There will often be, for example, six or seven heads or tails in a row, something almost no human predictor includes when writing out their 100.
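That streak claim is easy to check with a quick simulation (a minimal sketch, not from the article itself): flip 100 fair coins many times and count how often a run of six or more identical outcomes appears.

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical outcomes in a sequence."""
    best = cur = 1
    for prev, nxt in zip(flips, flips[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

random.seed(0)  # fixed seed so the simulation is repeatable
TRIALS = 10_000
hits = sum(
    longest_run([random.choice("HT") for _ in range(100)]) >= 6
    for _ in range(TRIALS)
)
print(f"{hits / TRIALS:.0%} of 100-flip sequences contain a streak of 6+")
```

The simulation shows streaks of six or more show up in the large majority of real 100-flip sequences, which is exactly the feature human "random" guesses tend to leave out.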

The same thing happens when you ask someone to choose a number between 0 and 100. People almost never choose 1 or 100. Multiples of 5 are rare, as are numbers with repeating digits, such as 66 and 99. These don’t seem like “random” choices to us, because they embody some quality: small, large, special. Instead, we often choose numbers ending in 7, usually from somewhere in the middle.

There are countless examples of this kind of predictability in psychology. But it is no less strange when AI does the same thing.

Well, some curious engineers at Gramener ran an informal but still interesting experiment in which they asked several major LLM chatbots to pick a random number between 0 and 100.

Reader, the results were not random.

Image Credit: Gramener

All three models tested had a "favorite" number that was always their answer when placed in the most deterministic mode, but which also appeared most often at higher "temperatures," a setting models often have that increases the variability of their results.
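The mechanism behind that pattern can be sketched without any model API. The toy logits below are made-up numbers, skewed toward one "favorite" answer purely for illustration; the point is how temperature-scaled sampling behaves: at temperature 0 the argmax wins every time, and at higher temperatures the favorite still dominates the tally.

```python
import math
import random
from collections import Counter

# Hypothetical next-token scores for number answers, skewed toward "42".
# These values are invented for illustration, not taken from any model.
logits = {"42": 5.0, "37": 3.5, "57": 3.2, "73": 3.0, "100": 0.5}

def sample(logits, temperature, rng):
    """Sample one token; temperature 0 means deterministic argmax."""
    if temperature == 0:
        return max(logits, key=logits.get)  # the "favorite" every time
    scaled = {t: score / temperature for t, score in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

rng = random.Random(1)
greedy = [sample(logits, 0, rng) for _ in range(5)]
warm = Counter(sample(logits, 1.0, rng) for _ in range(1000))
print(greedy)                 # the same answer five times
print(warm.most_common(3))    # varied, but the favorite leads the count
```

This matches the observed behavior: determinism pins the favorite, and raising the temperature adds variety around it without dethroning it.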

OpenAI’s GPT-3.5 Turbo really likes 47. Previously, it liked 42 – a number made famous, of course, by Douglas Adams in The Hitchhiker’s Guide to the Galaxy as the answer to life, the universe, and everything.

Anthropic’s Claude 3 Haiku went with 42. And Gemini likes 72.

More interestingly, all three models also displayed human-like bias in the other selected numbers, even at high temperatures.

All of them avoided low and high numbers; Claude never went above 87 or below 27, and even those were outliers. Repeating digits were studiously avoided: no 33, 55, or 66 appeared, but 77 showed up (it ends in 7). Almost no round numbers – although Gemini did once, at the highest temperature, wildly choose 0.

Why should that be? AIs are not humans! Why would they care about what “seems” random? Have they finally achieved consciousness and they show it like this?!

No. The answer, as is usually the case with these things, is that we are anthropomorphizing one step too far. These models don’t care about what is and isn’t random. They don’t know what “randomness” is! They answer this question the same way they answer all the others: by looking at their training data and repeating what was most often written after a question that looked like “pick a random number.” The more often an answer appears there, the more often the model repeats it.
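Stripped to its essence, that mechanism is a frequency lookup. The counts below are a made-up stand-in for training data, chosen only to illustrate the idea: if you always emit the most frequent continuation, you get the same "random" number on every call.

```python
# Toy stand-in for training data: how often each answer followed the
# prompt "pick a random number between 0 and 100" (invented counts).
answer_counts = {"37": 120, "47": 90, "73": 60, "57": 45, "100": 1}

def complete(counts):
    """Greedy 'stochastic parrot': emit the most frequent continuation."""
    return max(counts, key=counts.get)

print(complete(answer_counts))  # "37", and "37" again on every call
```

No notion of randomness is involved anywhere: the output is determined entirely by which answer humans wrote down most often.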

Where in their training data would they ever see 100, if almost no one answers that way? As far as the AI model knows, 100 is not an acceptable answer to that question. With no actual reasoning capability, and no understanding of numbers whatsoever, it can only answer like a stochastic parrot. (Similarly, they have tended to fail at simple arithmetic, like multiplying a few numbers together; after all, how likely is it that the phrase “112*894*32=3,204,096” appears somewhere in their training data? Newer models, though, will recognize that a math problem is present and hand it off to a subroutine.)

This is an object lesson in the habits of LLMs and the humanity they can appear to display. In every interaction with these systems, one must keep in mind that they have been trained to act the way people act, even if that was not the intention. That is why pseudanthropy is so difficult to avoid or prevent.

I wrote in the headline that these models “think they’re people,” but that is a bit misleading. As we often have occasion to point out, they are not people. They don’t think at all. But in their responses, they are, at all times, imitating people, without needing to know or think anything. Whether you’re asking for a chickpea salad recipe, investment advice, or a random number, the process is the same. The results feel human because they are human, drawn directly from human-produced content and remixed – for your convenience, and of course for the bottom line of big AI.
