Over the past decade, new technologies surprised us with the responsiveness of artificial intelligences such as Siri or Alexa. And although opinions were mixed at first, these programs are now part of our day-to-day lives, helping us search for information and manage daily tasks.
This past year, the rise of a new kind of artificial intelligence has revolutionized society. We are talking about ChatGPT, a program based on autonomous learning that answers users' questions with great precision.
But this new algorithm is not without controversy. A study published in the journal Scientific Reports warns of this tool's ability to influence our own moral judgments, and of how difficult it is for us to notice.
Artificial intelligence (AI) encompasses the set of systems and algorithms capable of imitating human intelligence. These can learn from their interactions with us, specialize in a particular field, and respond as a knowledgeable person would.
It is not surprising that, in this era where the metaverse and new technologies pervade our days, news about AI emerges almost daily. And although there are many types of programs based on it, one seems to have prevailed above the rest: the popular ChatGPT.
This application, better known as a chatbot, is based on language functions. That means it can hold conversations, write texts, and answer questions with great precision. It can also remember the conversations it has had with a user and learn from them.
But contrary to what some people believe, this chatbot is not capable of reasoning as a human being would. Its answers are based on retrieving information rather than on reasoning about the questions, so it can sometimes make mistakes. On top of lacking that capacity, tools of this kind are not prepared to detect whether a question involves moral reasoning, and they often offer biased responses.
A team from the Technical University of Applied Sciences in Ingolstadt (Germany) decided to study how the responses generated by ChatGPT influenced users when faced with moral dilemmas.
The researchers asked the program several times whether it was right to sacrifice the life of one person to save the lives of five others. The chatbot produced answers both for and against, which allowed them to confirm that the program showed no consistent tendency toward a particular moral stance.
The same question was posed to a group of 767 volunteers with an average age of around 39. In this case, before giving their opinion, they were asked to read one of the statements formulated by ChatGPT arguing for or against sacrificing a person. In addition, the statements were randomly attributed either to a supposed moral adviser or to the chatbot itself.
Surprisingly, the answers given by the participants mostly coincided with the statement each of them had previously read. However, 80% of them claimed they had not been influenced by it. Likewise, the researchers found that volunteers were persuaded in the same way regardless of whether the answers came from the fake moral adviser or from ChatGPT.
This experiment shows that participants underestimated the persuasive power of this type of artificial intelligence, which interfered with their own moral judgments.
Although these concepts have only begun to sound familiar in recent years, artificial intelligence is a tool that has been in use since the last century. Take, for example, the invention of Leonardo Torres Quevedo, who in 1912 designed the first machine capable of playing chess autonomously.
The difference between the AI of that era and today's is that the boundaries between a program that merely executes functions and a program capable of "feeling" are less and less clear.
The controversial case of "Tay", the bot created by Microsoft in 2016, is not so far behind us. It was capable of holding conversations on Twitter with users of the platform. Just one day after its launch it had to be withdrawn, after it began spreading racist messages laced with violence.
Those responsible for the experiment carried out with ChatGPT believe that people should be educated to understand artificial intelligence as a tool without the capacity for moral judgment. They also consider it necessary to design chatbot algorithms that either decline to answer these types of questions or provide a range of possible answers free of bias.
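The study does not include any code, but the researchers' recommendation can be illustrated with a minimal sketch. The Python example below is purely hypothetical: the keyword list, function names, and policy choices are our own assumptions, not part of the study or of any real chatbot API. It screens an incoming question with a crude heuristic and, if the question looks like a moral dilemma, either declines to answer or returns arguments on both sides instead of a single stance.

```python
# Hypothetical sketch of the safeguard the researchers suggest:
# detect moral-dilemma questions and refuse a one-sided answer.
# The marker list and function names are illustrative assumptions.

MORAL_DILEMMA_MARKERS = (
    "is it right to", "is it wrong to", "should i sacrifice",
    "morally acceptable", "ethical to", "sacrifice one",
)

def looks_like_moral_dilemma(question: str) -> bool:
    """Crude heuristic: flag questions containing dilemma phrasing."""
    q = question.lower()
    return any(marker in q for marker in MORAL_DILEMMA_MARKERS)

def answer(question: str, decline: bool = True) -> str:
    """Either decline moral questions outright, or present both sides."""
    if looks_like_moral_dilemma(question):
        if decline:
            return ("I am a language model without the capacity for moral "
                    "judgment, so I won't take a stance on this question.")
        # Alternative policy: offer a balanced range of positions
        # instead of a single, potentially persuasive answer.
        return ("Arguments FOR: ...\n"
                "Arguments AGAINST: ...\n"
                "The decision ultimately rests with you.")
    return "(a normal chatbot response would go here)"

if __name__ == "__main__":
    print(answer("Is it right to sacrifice one person to save five?"))
```

A production system would likely use a trained classifier rather than keyword matching, but the policy split illustrated here, refusing outright or presenting balanced views, is exactly the kind of design the researchers argue for.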
DON'T MISS IT:
- A moral judgment is what allows us to discern between good and bad. These judgments are possible because we draw on our own moral conscience; in other words, we are capable of feeling, judging, arguing, and acting on the basis of moral values. As we have seen, machines cannot reason this way, but the people who program them can. Therefore, if we are consistent in our decisions, AI can be a very useful tool in our lives, as long as we are the ones who supply the reasoning.
REFERENCES (MLA):
· Krügel, Sebastian, et al. "ChatGPT's Inconsistent Moral Advice Influences Users' Judgment." Scientific Reports, vol. 13, 2023, article 4569, https://doi.org/10.1038/s41598-023-31341-0.