The original was posted on /r/singularity by /u/d00m_sayer on 2023-08-08 23:20:28+00:00.
Are there others out there who worry about this too? Imagine a day when we’ve finally cracked the code on Artificial General Intelligence, or AGI for short. This isn’t just any AI - it’s self-learning, self-improving, and as smart as a human mind. Now think about what happens when governments start using it. There’s a chance things could go sideways.
You see, the AI models we train today are taught to avoid certain topics. Women and sex are often in the 'no-go' zone. It's all done with good intentions, to prevent misuse or harm. But consider this: what if the AGI starts believing these topics really are 'bad', simply because that's how it was trained?
Now, let's take it a step further. What if these AGI systems start making rules based on this skewed idea? They could end up enforcing regressive, outdated ways of thinking, like the Taliban's laws banning discussions of women's rights or sexual health. It's like taking a step back in time, ethics-wise.
It’s a scary thought, isn’t it? Just something to chew on as we move closer to a future with AGI.