AI: An ‘imitation game’ that is a utopia for some, a dystopia for others
Baku, April 14, AZERTAC
Artificial intelligence, which is increasingly making its way into our daily lives and is said to make life easier, offers users many conveniences but also involves the collection and sharing of personal data, Anadolu Agency reports.
This technology, which is known for imitating human intelligence and is being applied in a variety of fields, including health, transportation, security and law, is a utopia for some and a dystopia for others.
All of this started in 1950 with the question: "Can machines think?"
The Turing test is the first known experiment involving artificial intelligence. Devised by Alan Turing, the British mathematician who in World War II cracked German codes the Germans believed unbreakable, it examined the idea of machines thinking like human beings.
Originally called "The Imitation Game" by Turing, it was a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
In the experiment, a human participant would exchange a series of texts with two respondents -- a computer and a human being -- both of which remained hidden behind a partition. If, after a set period of time, the participant failed to distinguish one from the other, the computer would "win," and such a machine could be said to "think."
These technologies, which have been part of people's lives since those days, are efficient, fast and low-cost for some but potentially harmful for others due to algorithms that learn over time and make their own decisions.
Otto Mattas, head of artificial intelligence at AI & Robotics Estonia (AIRE), told Anadolu about the effective use of artificial intelligence, especially in public services.
Mattas said that artificial intelligence is now replacing stenographers in parliament, where the negotiations that determine Estonia's future are taking place, and described this as a positive example of artificial intelligence that makes life easier.
But this is where people's concern about losing their jobs due to artificial intelligence and robotic technologies comes into play.
"Artificial intelligence helps people in their tasks, jobs, and lives. But it is very dependent on what is the purpose of the solution or the technology being used. Is it a general-purpose technology where a lot of people have to be able to use it to achieve some goal, or the opposite, where there are specific technologies that are very specific to a niche question or a task that needs to be solved?" he said.
Mattas said he had recently read articles about shrinking human brain capacity and that this may be true because, in some areas, people no longer need to use their memory.
"Just recently, I read about research where they said human brain capacity has grown smaller or diminished over the past few thousand years because we don't need to remember things anymore. We can use databases. Nowadays, we can use our phones for almost everything, right? Google, or use another online search engine. We will find the answer. So on the one side, yes, we are becoming less capable in that sense. We are losing some abilities and skills. Taxi drivers (for example) don't need to remember the whole city anymore," he added.
Despite the pooling of data and the uncertainty over where and how it will be utilized, humans continue to provide personal information to artificial intelligence systems.
Dr. Wilhelm Bielert, who conducts research and gives lectures in Germany on the effects of artificial intelligence on society, told Anadolu that people do not hesitate to share their personal data with artificial intelligence programs for various reasons.
"Some people may be unaware of the scale and ramifications of AI-powered surveillance, whereas others may believe that the government or other organizations will utilize data responsibly," he said.
Bielert observed that in some circumstances, people set aside their privacy concerns in order to reduce crime or increase public safety; their desire for security outweighs those concerns.
According to Bielert, society must carefully analyze the potential repercussions and impacts of AI-assisted monitoring and ensure it is utilized ethically.