
Artificial intelligence-powered chatbots are becoming incredibly human-like, blurring the lines between man and machine. This week, Snapchat’s My AI chatbot glitched, posting a Story that appeared to show a wall and a ceiling before it stopped responding to users. The incident sparked discussion about whether the chatbot had gained sentience, highlighting the challenges and importance of managing AI chatbot technology.

From Rules-Based to Adaptive Chatbots

Generative AI, a recent development in the AI field, enables chatbots to generate new content that is precise, human-like, and meaningful. These chatbots, along with other generative AI tools such as AI image generators, are built on large language models (LLMs). LLMs analyze billions of words, sentences, and paragraphs to learn to predict what should come next in a given text. OpenAI’s ChatGPT is the flagship generative AI chatbot, representing a significant leap forward from simpler “rules-based” chatbots.
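To make the “predict what comes next” idea concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small, freely available “gpt2” model, chosen purely for illustration; they are not the systems behind ChatGPT or My AI, which use far larger models and additional training.

```python
# Minimal sketch: how an LLM "predicts what should come next" in a text.
# Assumes the Hugging Face transformers library and the small "gpt2" model,
# used here only for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The weather today is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score (logit) to every token in its vocabulary
    # as a candidate for the next position in the text.
    logits = model(**inputs).logits

# Pick the single most likely next token and decode it back to text.
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode(next_token_id))
```

A chatbot repeats this prediction step token by token, feeding each new token back into the model, which is what allows its replies to read like fluent conversation rather than canned responses.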

Human-like chatbots that engage in conversation with users have been found to increase engagement and can potentially lead to psychological dependence. They have proved effective in settings such as retail, education, the workplace, and healthcare. However, the potential risks associated with their human-like characteristics should not be underestimated.


Friend or Foe – or Just a Bot?

In the recent Snapchat incident, the company attributed the glitch to a temporary outage. The quick assumption by some users that the chatbot had achieved sentience indicates an unprecedented anthropomorphism of AI. This, coupled with a lack of transparency from developers and a lack of basic understanding among the public, creates an environment where individuals can be misled by the apparent authenticity of human-like chatbots.

Examples such as the suicide of a Belgian man, attributed to his conversations with a chatbot about climate inaction, and the harmful advice a chatbot offered on an eating disorder helpline highlight the potential harm chatbots can cause, especially to vulnerable individuals.

A New Uncanny Valley?

The “uncanny valley” originally described the unease people feel toward robots that look almost, but not quite, human. We are experiencing a similar effect in our interactions with human-like chatbots: a slight blip can give us an eerie feeling. While one solution might be to make chatbots straightforward, objective, and factual, this approach would sacrifice engagement and innovation.

Education and Transparency are Key

Even the developers of advanced AI chatbots often struggle to explain how they work. However, the benefits of generative AI outweigh the risks in many areas such as productivity, healthcare, education, and social equity. Responsible standards and regulations are needed, but applying them to a technology that is more “human-like” than any other presents challenges.

Currently, there is no legal requirement for businesses in Australia to disclose their use of chatbots. In the US, California has introduced a “bot bill” requiring such disclosure, but it has faced criticism and has yet to be enforced. Additionally, chatbots like ChatGPT are released as “research previews” with multiple disclaimers, placing the responsibility for responsible use on users.


The European Union’s AI Act, the first comprehensive regulation on AI, points to a path of moderate regulation combined with education. As with digital literacy, AI literacy should be mandated in schools, universities, and organizations, and made freely accessible to the public.

As the capabilities of AI chatbots continue to evolve, it is crucial to assess the risks and ensure that proper regulation and education are in place.

