
Artificial intelligence models can’t be built on synthetic data alone. Human-sourced data is needed to prevent AI model failure.


How quickly the tech world has changed. Two years ago, AI was hailed as "the transformational technology that would rule them all." Now, ironically, AI is degrading.

Once heralding a new age of intelligence, AI now stumbles over its own code and struggles to live up to the brilliance it promised. Why? To feed these data-hungry models, researchers have turned increasingly to synthetic data. This practice has been used in AI for years, but it has become a dangerous one, as it leads to a gradual degradation of AI. And this isn't just a minor concern about ChatGPT producing sub-par results; the consequences are far more dangerous.

When AI models are trained on outputs generated by previous iterations, they tend to propagate errors and introduce noise, leading to a decline in output quality. This recursive cycle turns the old saying “garbage-in, garbage-out” into an issue that is self-perpetuating, reducing the effectiveness and efficiency of the system. As AI drifts further from human-like understanding and accuracy, it not only undermines performance but also raises critical concerns about the long-term viability of relying on self-generated data for continued AI development.

But this isn’t just a degradation of technology; it’s a degradation of reality, identity and data authenticity, posing serious risks to humanity and society. As these models lose accuracy and reliability, the consequences could be dire: think medical misdiagnosis, financial losses and even life-threatening accidents.

Another major implication is that AI development could completely stall, leaving AI systems unable to ingest new data and essentially becoming “stuck in time.” This stagnation would not only hinder progress but also trap AI in a cycle of diminishing returns, with potentially catastrophic effects on technology and society.

But, practically speaking, what can enterprises do to ensure the safety of their customers and users? To answer that, we first need to understand the mechanics. Model collapse occurs when AI is trained recursively on content it generated, and it is happening faster than ever, which makes it harder for developers to filter out anything that isn’t pure, human-created data. This can lead to a number of issues:

  • Loss of nuance: Models begin to forget outlier data or less-represented information, crucial for a comprehensive understanding of any dataset.
  • Reduced diversity: There is a noticeable decrease in the diversity and quality of the outputs produced by the models.
  • Amplification of biases: Existing biases, particularly against marginalized groups, may be exacerbated as the model overlooks the nuanced data that could mitigate them.
  • Generation of nonsensical outputs: Over time, models may start producing outputs that are completely unrelated or nonsensical.
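The collapse dynamic can be illustrated with a toy simulation (a deliberately simplified sketch, not any real model or production pipeline): fit a one-dimensional Gaussian "model" to data, then train each new generation only on samples drawn from the previous generation's fit. The fitted spread shrinks over generations, mirroring the loss of diversity described above.

```python
import random
import statistics

def fit(data):
    # "Train" a toy model: estimate the mean and spread of the data.
    return statistics.mean(data), statistics.pstdev(data)

def generate(mean, std, n, rng):
    # Produce "synthetic data": samples from the fitted model.
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(50)]  # human-sourced data
stds = []
for _ in range(1000):
    mean, std = fit(data)
    stds.append(std)
    # Next generation trains only on the previous model's output.
    data = generate(mean, std, 50, rng)

print(f"generation 0 spread: {stds[0]:.4f}")
print(f"generation 999 spread: {stds[-1]:.4f}")
```

Each refitting step loses a little statistical information, so the estimated spread drifts toward zero and rare (outlier) values stop appearing at all, a small-scale analogue of the nuance and diversity loss above.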

A case in point: A study published in Nature highlighted the rapid degeneration of language models trained recursively on AI-generated text. By the ninth iteration, these models were found to be producing entirely irrelevant and nonsensical content, demonstrating the rapid decline in data quality and model utility.

Safeguarding AI’s future: Steps enterprises can take today

Enterprise organizations are in a unique position to shape the future of AI responsibly, and there are clear, actionable steps they can take to keep AI systems accurate and trustworthy:

  • Invest in data provenance tools: Tools that trace where each piece of data comes from and how it changes over time give companies confidence in their AI inputs. With clear visibility into data origins, organizations can avoid feeding models unreliable or biased information.
  • Deploy AI-powered filters to detect synthetic content: Advanced filters can catch AI-generated or low-quality content before it slips into training datasets. These filters help ensure that models are learning from authentic, human-created information rather than synthetic data that lacks real-world complexity.
  • Partner with trusted data providers: Strong relationships with vetted data providers give organizations a steady supply of authentic, high-quality data. This means AI models get real, nuanced information that reflects actual scenarios, which boosts both performance and relevance.
  • Promote digital literacy and awareness: By educating teams and customers on the importance of data authenticity, organizations can help people recognize AI-generated content and understand the risks of synthetic data. Building awareness around responsible data use fosters a culture that values accuracy and integrity in AI development.
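As a minimal illustration of the provenance idea (the record fields and source tags here are hypothetical, not any specific tool's API; real systems use standards such as C2PA content credentials), a training pipeline can carry an origin label with every record and admit only human-sourced entries into the training set:

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str  # hypothetical provenance tag, e.g. "human:support-ticket"

def human_sourced(records):
    # Admit only records whose provenance marks them as human-created,
    # excluding anything tagged as model-generated.
    return [r for r in records if r.source.startswith("human:")]

records = [
    Record("Customer complaint about billing", "human:support-ticket"),
    Record("Generated product description", "synthetic:llm-v2"),
]
train_set = human_sourced(records)  # keeps only the human-sourced record
```

In practice the provenance tag would be attached at ingestion time by a tracing tool, so the filter reflects verified origin rather than self-reported labels.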

The future of AI depends on responsible action. Businesses have the opportunity to ensure that AI is based on accuracy and integrity. Organizations can put AI on a smarter, safer path by choosing human-sourced data instead of shortcuts, prioritizing filters that detect and remove low-quality content and promoting awareness about digital authenticity.

DataDecisionMakers

Welcome to the VentureBeat community!

DataDecisionMakers is where experts, including the technical people doing data work, can share data-related insights and innovation.

If you want to read about cutting-edge ideas and up-to-date information, best practices, and the future of data and data tech, join us at DataDecisionMakers.

You might even consider contributing an article of your own!



