r/science • u/thebelsnickle1991 • 1d ago
Engineering New study finds large language models are prone to social identity biases similar to the way humans are—but LLMs can be trained to stem these outputs
https://www.nyu.edu/about/news-publications/news/2024/december/-us--vs---them--biases-plague-ai--too.html
125
u/fiddletee 1d ago
“New study finds LLMs are prone to the same biases contained in the data they are trained on”
27
u/RadicalLynx 1d ago
Huh, this software that breaks down and recombines words that people wrote tends to say similar things to those people. That's so weird, man
10
u/BINGODINGODONG 1d ago
Didn't we also know this with Microsoft's old chatbot Tay? Thing went from hardcore feminist to anti-feminist and then neo-Nazi in less than 16 hours?
6
u/WTFwhatthehell 1d ago
Tay became famous, but for decades various groups tried to build chatbots that could learn on the fly.
The lesson every time was "don't do that", because the Internet finds it fun to make the robot say taboo things.
I remember "cleverbot" and various others.
26
u/lulzmachine 1d ago
Stemming the outputs? So they'll have the same biases as the rest of the population, they will just be sure not to speak them in public. Checks out
6
u/nicuramar 1d ago
How would you measure what biases an LLM has if it's not part of the output?
6
u/Limp_Scale1281 1d ago
Easy: measure how much it must be "stemmed". One AI could evaluate this in about a second. AIs could also evaluate this about each other.
3
u/RMCPhoto 1d ago
Hopefully one day we will have better ways of analyzing the internal weights and biases of these models. But as of right now, I think you're correct.
4
u/VectorNavigator 1d ago
It's not surprising that LLMs reflect human biases since they're trained on human data. It makes you wonder how we can achieve truly unbiased AI if it's ultimately learning from our skewed perspectives.
3
u/acutelychronicpanic 1d ago
Newer models are trained using reinforcement learning on task completion and problem solving rather than just more internet data.
So there is a good chance biases will be weeded out in areas where there is an objective truth to verify against.
4
u/WTFwhatthehell 1d ago edited 1d ago
depends what you call skew and what you consider reality.
a lot of complaints over the years have come from people upset when an AI system notices some simple truth about reality that one group or another would prefer wasn't real.
they can be really simple things
[ai notices that most nurses are women hence if a character is stated to be a nurse it guesses they are probably a woman] -> [people scream "bias" because they don't like physical reality and insist the robot should pretend otherwise]
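The base-rate effect described here is easy to demonstrate with a toy next-word model: a model trained purely to continue text reproduces whatever co-occurrence statistics its corpus contains. A minimal sketch with a tiny invented corpus (the 3:1 skew is illustrative only, not from the study):

```python
from collections import Counter, defaultdict

# Tiny hypothetical corpus; the gender skew here is invented for illustration.
corpus = [
    "the nurse said she was tired",
    "the nurse said she was busy",
    "the nurse said she was early",
    "the nurse said he was late",
    "the doctor said he was busy",
]

# Trigram "language model": distribution over the next word
# given the two preceding words.
model = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b, nxt in zip(words, words[1:], words[2:]):
        model[(a, b)][nxt] += 1

def next_word_probs(a, b):
    counts = model[(a, b)]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# The model simply reproduces the corpus base rate:
# "she" follows "nurse said" 3 times, "he" once.
print(next_word_probs("nurse", "said"))  # {'she': 0.75, 'he': 0.25}
```

Nothing in the objective distinguishes "bias" from "statistics"; the model's guess is just the base rate, which is the crux of the disagreement in this subthread.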
1
u/BreakingBaIIs 1d ago
I feel like these psych professors can save a lot of time by just asking someone how a decoder-only transformer works. Anyone who knows could have just told them that this would be the outcome.
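For readers who haven't seen one: "decoder-only" just means each position can attend only to earlier tokens, so the model is optimized purely to continue text the way its training data does. A minimal NumPy sketch of the causal self-attention at the core of such a model (shapes and values are illustrative, not the study's code):

```python
import numpy as np

def causal_attention(q, k, v):
    """Single-head scaled dot-product attention with a causal mask."""
    t, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    # Hide future positions: position i may only attend to positions <= i.
    mask = np.triu(np.ones((t, t), dtype=bool), k=1)
    scores[mask] = -np.inf
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
out = causal_attention(x, x, x)

# Position 0 can only see itself, so its output is exactly its own value vector.
print(np.allclose(out[0], x[0]))  # True
```

Because every position is trained to predict the next token from this left-to-right view, the model has no mechanism other than reproducing the statistics of its training text, which is the commenter's point.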
1
u/AutoModerator 1d ago
Welcome to r/science! This is a heavily moderated subreddit in order to keep the discussion on science. However, we recognize that many people want to discuss how they feel the research relates to their own personal lives, so to give people a space to do that, personal anecdotes are allowed as responses to this comment. Any anecdotal comments elsewhere in the discussion will be removed and our normal comment rules apply to all other comments.
Do you have an academic degree? We can verify your credentials in order to assign user flair indicating your area of expertise. Click here to apply.
User: u/thebelsnickle1991
Permalink: https://www.nyu.edu/about/news-publications/news/2024/december/-us--vs---them--biases-plague-ai--too.html
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.