The world is increasingly inundated with AI-generated content. Artists were the first to complain, warning that artificial intelligence would put them out of work. Other content creators soon raised their voices, as AI can already match junior-level skills. Legal experts and medical doctors face a similar crisis, with AI encroaching on their areas of expertise. Granted, AI needs handholding, but its sheer processing speed and dramatically lower costs more than make up for it.
Now coders and content creators alike are on the chopping block. As a member of both groups, I can say that the threat is real. AI will soon be coding our websites, writing our content and producing our news.
While many articles detail how AI is taking over content creators’ jobs, few discuss what this could mean for human culture and civilization as a whole. Perhaps this is how AI truly conquers us: by creating an AI “echo chamber.”
Before I explain why an overwhelming amount of AI-generated data is extremely bad for human society as a whole, I need to briefly explain how artificial intelligence gets the data it needs. AI learns in several ways:
1. From existing datasets: Datasets are huge collections of specially formatted data fed to the AI model during training. They shape the model and teach it patterns, such as the likelihood of certain terms occurring together. This helps AI understand relationships between concepts and return information based on that data.
2. From human input: AI that interacts with users can be configured to store their feedback and use it as an additional source of information.
3. From specially trained human administrators/moderators: These are people who can steer AI toward a certain stance on selected subjects or feed it curated data for specific scenarios.
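The first mechanism above, learning term co-occurrence from a dataset, can be illustrated with a deliberately tiny sketch. This is not how any production model works; real systems use neural networks over billions of tokens. It only shows the core idea that a corpus shapes which words a statistical model considers likely to follow others:

```python
from collections import Counter, defaultdict

# Toy corpus; real training data is billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude model of term co-occurrence.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    # Return the word seen most often after `word` in the training corpus.
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" most often here
```

Feed the same sketch a different corpus and its "predictions" change accordingly, which is exactly why the provenance of training data matters so much for the argument that follows.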
Of these three, existing datasets carry the most weight in shaping an individual model. ChatGPT, for example, was trained on roughly 570GB of data, about 300 billion words’ worth, including books, Wikipedia articles and website content.
Without going too deep into the process, let’s focus on the obvious: this content is almost exclusively human-made, produced by scientists, engineers and creative wizards. AI takes the data, trains on it, and uses it to form its own ideas and concepts.
So, what happens when humans stop publishing new information online and the majority of marketing experts, journalists, writers, artists, scientists and average folk are replaced by bots, AI algorithms in disguise? We get an internet inundated with AI-generated content. We get websites designed and coded by AI, with AI personas writing the content. We get AI-created scientific papers and Wikipedia entries.
But it doesn’t stop there. Next comes AI going online and learning from AI-generated content. Artificial intelligence will talk to and learn from itself, then use that data to “better” its own algorithms.
There’s nothing better about this. Not only would this echo chamber make AI dumber and more uniform in its output, but it would also rob humans of their creativity. We can already see this happening in generative art models: original art styles are dying off, replaced by current pop-culture references. Few care about classical art anymore. The same simplification and bastardization awaits other data, which would collapse into the same kind of popular, “optimized” cloned reflections. Needless to say, this is detrimental to human culture and civilization as we know it.
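The echo-chamber loop can be sketched in miniature. In this toy simulation (my own illustration, not a real training pipeline), each "generation" retrains on a corpus sampled from the previous generation's output. Because sampling from finite data favors common items, rare "styles" tend to disappear over time:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the toy run is reproducible

# Toy corpus of art "styles"; "fresco" is the rarest.
corpus = ["sunset"] * 5 + ["nocturne"] * 3 + ["fresco"] * 2

for generation in range(6):
    counts = Counter(corpus)
    styles, weights = zip(*counts.items())
    # "Retrain" on the model's own output: sample a new corpus
    # from the current distribution. Nothing new ever enters.
    corpus = random.choices(styles, weights=weights, k=len(corpus))

# Rare styles tend to vanish; only the popular ones typically survive.
print(sorted(set(corpus)))
```

The mechanism is simple sampling drift, but it mirrors the article's worry: once models feed on their own output, diversity can only shrink, never grow, because no new human-made material enters the loop.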
While this outcome is rather horrible and definitely possible, how far from it are we at the moment? My take: if we keep the same parabolic pace, we will see the first palpable effects of lumping online knowledge into uniform outputs in less than 10 years.
Is there hope? Definitely. Some open-source AI projects are not trained on AI refuse; instead, they learn from direct human input. While this is a more expensive and demanding way to grow a model, it is also far more valuable and sustainable in the long term.
Another important mechanism that can help stave off the AI echo chamber is the introduction of human rating/feedback modules. Used correctly, they can dynamically fine-tune an existing model’s outputs so it keeps adapting to what’s best for its human users, not pre-trained nonsense based on mechanically ingested data.
Instead of “banning all AI”, we should consider the delicate balance between its capabilities and human creativity. By fostering a synergistic relationship between AI and human input, we can unlock new horizons in art, science, and communication. The key lies in developing AI systems that augment human intellect instead of overshadowing or replacing it. By embracing this collaborative mindset, we can navigate a world where AI-generated content flourishes alongside human ingenuity, fostering an enriching and diverse digital landscape that benefits humanity as a whole.
This story originally appeared on MarketWatch.