Introduction
For decades, search engines like Google and
Bing have played a fundamental role in "providing knowledge," offering
users vast amounts of information while leaving them free to choose among
sources and analyze them. However, with the emergence of artificial intelligence (AI) and
applications like ChatGPT, Bard, and Copilot, the paradigm has shifted from
"providing knowledge" to "shaping knowledge." Users are no
longer independent researchers but participants in a process that may
deliver pre-packaged information, often inaccurate or biased.
Providing Knowledge: The Era of Search Engines
In the past, search engines acted as relatively neutral
intermediaries, indexing pages and ranking them by algorithmic estimates of
relevance to the user's query. The key advantages of these platforms
were:
1. Relative Transparency: Users could see multiple sources and choose
what suited them.
2. Freedom of Choice: Users decided which sources to trust and
which to ignore.
3. Diversity: Different perspectives on the same topic
were presented.
This stage was based on the principle that
users could distinguish between truth and falsehood and had the necessary tools
to analyze information independently.
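To make that contrast concrete, here is a minimal sketch, in Python, of keyword-based relevance ranking in the spirit of classic TF-IDF scoring. It is an illustration under simplifying assumptions, not any real engine's actual algorithm, and the indexed pages and URLs are hypothetical.

```python
import math
from collections import Counter

# Hypothetical toy index: URL -> page text. Real engines index billions of pages.
INDEX = {
    "example.org/climate-agriculture": "climate change causes and effects on agriculture",
    "example.org/climate-history": "history of climate science and early measurements",
    "example.org/farm-policy": "agriculture subsidies policy debate",
}

def score(query: str, text: str) -> float:
    """TF-IDF-like score: query terms frequent in the page count for more,
    and terms that appear in fewer pages are weighted higher."""
    words = text.lower().split()
    tf = Counter(words)
    total = 0.0
    for term in query.lower().split():
        df = sum(1 for page in INDEX.values() if term in page.lower().split())
        if df == 0:
            continue  # term appears nowhere in the index
        idf = math.log(len(INDEX) / df) + 1.0
        total += (tf[term] / len(words)) * idf
    return total

def search(query: str) -> list[str]:
    """Return every matching page, ranked by relevance: an ordered list of
    sources for the user to inspect, not a single packaged answer."""
    ranked = sorted(INDEX, key=lambda url: score(query, INDEX[url]), reverse=True)
    return [url for url in ranked if score(query, INDEX[url]) > 0]

print(search("climate agriculture"))  # prints all three pages, most relevant first
```

The essential property is the return type: a ranked set of candidate sources rather than one synthesized response, so the decisions about trust and relevance remain with the reader.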
Shaping Knowledge: The Era of AI
With the rise of generative AI (like ChatGPT),
the equation has changed. These tools no longer just provide
information; they shape and frame it in ways that may conceal
biases or errors. Key changes include:
1. Pre-Packaged Answers: Instead of listing multiple sources, AI
delivers a single, ready-made response, limiting the diversity of viewpoints.
2. Hidden Bias: Language models are trained on potentially
biased data, meaning their answers may reflect the biases of developers or
training sources.
3. The End of Transparency: AI does not always disclose its sources,
making it harder for users to verify information.
4. Weakening Critical Thinking: Reliance on ready-made answers may reduce
users' ability to research and analyze independently.
The User: From Decision-Maker to Passive Recipient?
Previously, users made decisions based on
diverse information. Today, some AI applications present answers as if they
were the "absolute truth," making users more susceptible to
intellectual dependence. Even worse, some of these models:
· Oversimplify information, omitting crucial details.
· Blur facts and opinions without clear distinction.
· Deliver incorrect answers with high confidence (a phenomenon known as hallucination).
Challenges and Risks
This shift from "providing
knowledge" to "shaping it" poses serious challenges, including:
· Manipulation of Public Opinion: AI can be used to push certain narratives without allowing counterarguments.
· Decline in Independent Research: Users may rely solely on AI, reducing the diversity of knowledge.
· Accountability Issues: Who is responsible if AI provides false information leading to poor decisions?
How Can We Address This?
To mitigate these risks, we must:
1. Enhance Transparency: AI platforms should disclose their
information sources and limitations (a sketch of what this could look like follows this list).
2. Promote Digital Literacy: Educate users on verifying information and
avoiding over-reliance on AI.
3. Regulate AI Use: Establish ethical and legal safeguards to
prevent misinformation.
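As a minimal sketch of what source disclosure could look like in practice, the Python below models an answer object that must carry its sources and a limitations note before it can be shown. The class and field names are hypothetical illustrations, not any platform's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class SourcedAnswer:
    """Hypothetical response format: an AI answer that must carry its
    supporting sources and an explicit note about its limitations."""
    text: str
    sources: list[str] = field(default_factory=list)  # citations backing the claim
    limitations: str = ""                             # known gaps or uncertainty

def render(answer: SourcedAnswer) -> str:
    """Refuse to display any answer that discloses no sources."""
    if not answer.sources:
        return "[withheld: no verifiable sources provided]"
    cites = "\n".join(f"  [{i + 1}] {s}" for i, s in enumerate(answer.sources))
    return f"{answer.text}\nSources:\n{cites}\nLimitations: {answer.limitations}"

print(render(SourcedAnswer(
    text="Global average temperature has risen since the pre-industrial era.",
    sources=["https://www.ipcc.ch/report/ar6/"],
    limitations="One-line summary of a long report; see the source for methodology.",
)))
```

The design choice worth noting is that render refuses unsourced answers outright, turning transparency from a courtesy into a precondition for publication.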
Conclusion
AI has transformed how we access knowledge,
shifting from a model of "provision" to one of "shaping."
This makes users more vulnerable to algorithmic biases. If we do not act
cautiously, we risk transitioning from a knowledge-seeking society to one that
is fed pre-packaged information. Therefore, it is essential to develop
mechanisms that preserve intellectual independence and ensure AI remains a tool
for knowledge—not its master.