What can we do to make sure AI algorithms are fair and don’t continue harmful biases? How can we avoid stereotypes being built into AI systems?
We don’t have much luck preventing bad outcomes unless we can make people profit from solving the problem. AI bias won’t be any different unless being accountable is seen as a smart business move.
Lee said:
We don’t have much luck preventing bad outcomes unless we can make people profit from solving the problem. AI bias won’t be any different unless being accountable is seen as a smart business move.
Accountability is definitely needed in areas like healthcare.
AI will reflect the biases of the team that created it. Since most of these teams are based in Silicon Valley, it doesn’t seem like a great sign.
Sutton said:
AI will reflect the biases of the team that created it. Since most of these teams are based in Silicon Valley, it doesn’t seem like a great sign.
I don’t think it’s just about the team. The biases come more from the training data itself.
@Keegan
True, but did the model pick its own training data, or did the team choose it? Remember that scene in Demolition Man where the cops see what Simon Phoenix was watching in cryosleep? This is not a proper training process!
@Sutton
With recent models like Phi-4, the training pipeline itself generated and filtered much of the data, since the model was trained largely on synthetic data. As for the Silicon Valley bias, I work there, and I'd argue it's less about being inherently biased and more about the extremes. There are libertarians, left-wing activists, conservative communities, and autocratic billionaires, all contributing to the mix.
Once AI starts developing on its own, it might try to remove illogical biases… but that could be even worse.
Open-sourcing the training data would at least let people verify what a model actually learned from.
I know this might be a hot take, but I think bias and prejudice exist naturally. Trying to eliminate them might just mean picking and choosing which ones we want to keep. That feels wrong. I’m not sure what the right answer is, but I love this discussion because we need to address it!
I think AI like ChatGPT still has biases, even after trying hard to remove them.
@Kris
Exactly. Bias is part of being human, and we build AI to help us with our own human experiences. AI developed in China will have Chinese biases, and AI made in the US or elsewhere will reflect their own country’s biases. It’s natural. Preferences and biases are what make us unique and help us generate new ideas. We need AI to have biases too in order to be useful. If I want an AI to help me with writing, it needs to be biased toward that purpose.
AI won’t be a universal, unbiased intelligence, but more like different AIs with different opinions and values. Over time, they’ll probably learn to work together, just like humans do.
It's impossible to fully prevent bias because the data used to train the model is biased. Even data chosen to be "unbiased" is still shaped by the humans who selected it. The best we can do is minimize bias, not eliminate it.
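To make the point concrete, here is a minimal, hypothetical sketch of how a seemingly neutral selection rule can skew training data. The groups, probabilities, and the `selected` filter below are all invented for illustration; real data pipelines are far more complex, but the mechanism is the same.

```python
import random

random.seed(0)

# Hypothetical population: an even 50/50 split between two groups.
population = ["A"] * 5000 + ["B"] * 5000

# A human-designed filter that looks neutral but happens to keep
# group A examples more often (e.g., they pass a "quality" check).
def selected(example):
    keep_prob = 0.9 if example == "A" else 0.5
    return random.random() < keep_prob

training_data = [x for x in population if selected(x)]

share_a = training_data.count("A") / len(training_data)
print("Group A share in population: 0.50")
print(f"Group A share in training data: {share_a:.2f}")
```

Nobody in this sketch set out to over-represent group A, yet the training set ends up skewed toward it, which is roughly what "the humans selecting the data introduce bias" means in practice.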
There’s no such thing as a totally unbiased perspective. It’s like how a perfect sphere doesn’t really exist. What we call ‘bias’ is just a way we make sense of all the data around us. We pass this perspective onto AI through the way we write.
@Shan
But you can make an imperfect sphere better, right?
Zen said:
@Shan
But you can make an imperfect sphere better, right?
Yes, I totally agree. It’s like we should still try to live a good life and have rich experiences, even though we all know we’re going to die eventually.
I wrote about this in another thread. The future of AI will depend a lot on the people running the best models. And honestly, that’s a scary thought.
What exactly is AI bias? Is it just politically incorrect facts?
Chancey said:
What exactly is AI bias? Is it just politically incorrect facts?
I feel like that’s a bad faith question.
Chancey said:
What exactly is AI bias? Is it just politically incorrect facts?
I feel like that’s a bad faith question.
I honestly don’t know. I think AI just uses statistics, and a lot of people don’t like the results and try to override them with their own biases.