The term “artificial intelligence” causes misunderstanding / Dear Parastoo
We don’t have anything called “Artificial Intelligence”.
No one sells the future more masterfully than the technology industry. Its supporters would have us believe that in the future we will all live in the “Metaverse”, build our financial infrastructure on “Web 3.0”, and run our lives with “Artificial Intelligence”. All three propositions are illusions that, despite being at odds with reality, have generated billions of dollars; “Artificial Intelligence” above all. The term suggests a machine with the ability to think, but no machine is truly intelligent and no software is truly smart. It must be said that “Artificial Intelligence” may be one of the most successful marketing phrases in history.
A few months ago, OpenAI introduced GPT-4 (Generative Pre-trained Transformer 4), a major upgrade to the technology underlying ChatGPT. The system sounds even more human-like than its predecessors and evokes the notion of “intelligence” more than ever before. In reality, however, GPT-4 and other large language models like it are merely mirrors of vast databases of text; the previous GPT model was built on close to a trillion words, a scale that is difficult to comprehend. These models glue words together based on probability, helped along by an army of humans who correct their mistakes; that cannot be considered “intelligence.”
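To see concretely what “gluing words together based on probability” means, consider the toy sketch below: a bigram model that picks each next word in proportion to how often it followed the previous word in its training text. This is a drastic simplification offered purely for illustration, not OpenAI’s actual method; GPT-4’s machinery is incomparably larger, but generation is still, at bottom, sampling from a probability table rather than thinking.

```python
import random
from collections import Counter, defaultdict

# Toy "training" text; real models ingest close to a trillion words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to observed frequencies."""
    candidates = follows[prev]
    if not candidates:  # dead end (e.g. the final word): restart anywhere
        return random.choice(corpus)
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short, superficially plausible sequence starting from "the".
word = "the"
output = [word]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat the"
```

The output can sound fluent, yet at no point does the program understand anything; it only reproduces statistics of text written by humans.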
These systems are trained to produce believable text, yet they are marketed as new oracles of knowledge that can be plugged into search engines. That is a foolhardy approach when GPT-4 continues to make errors; it was only days ago that we saw both Microsoft and Alphabet, the parent company of Google, stumble in their initial search-engine demonstrations, with both engines getting basic facts wrong.
Terms such as “neural networks” and “deep learning” do not solve the problem; they only reinforce the idea that these programs are human-like. Neural networks are by no means a copy of the human brain; they are only loosely inspired by its workings. Previous attempts to replicate the human brain, which has approximately 85 billion neurons, have all failed. The closest scientists have come is a simulation of the brain of a worm, which has just 302 neurons.
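It helps to see how little a single artificial “neuron” actually does. The sketch below, written purely for illustration with hand-picked numbers, computes a weighted sum of its inputs and squashes the result into a value between 0 and 1; nothing about the computation resembles a living cell.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Illustrative values only: two inputs, arbitrary weights and bias.
print(neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))  # ~0.574
```

A “deep” network is simply many layers of this arithmetic stacked together, with the weights adjusted automatically during training; the biological metaphor ends at the name.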
We need different words, words that do not reproduce magical thinking about computer systems and do not absolve their designers of their responsibilities.
But what would be a better alternative? Sensible technologists have tried for years to replace the term “artificial intelligence” with “machine learning systems”, but that phrase does not roll off the tongue the way “artificial intelligence” or “AI” does.
Stefano Quintarelli, a former Italian politician and technologist, has coined another replacement: “Systemic Approaches to Learning Algorithms and Machine Inferences”, or SALAMI, an acronym that spells the name of a sausage. He introduced it to highlight the absurdity of the questions people ask about artificial intelligence: Is SALAMI sentient? Will SALAMI ever surpass humans?
The most deflating alternative is probably also the most accurate: “software”. I know what you might be thinking: “Well, what’s wrong with using a little metaphor to describe technology that seems so magical?” The answer is that attributing intelligence to machines wrongly grants them an independence from humans, and it absolves their creators of responsibility for the impact of these devices.
If we see a chatbot as “intelligent”, we are less inclined to hold the San Francisco startup that created it accountable for its mistakes and biases. This framing also encourages people who suffer from a technology’s harmful effects to accept their suffering as fate; yet it is not artificial intelligence that takes away your job or steals your artistic creations, it is other humans who do these things.
This matters more than ever now that companies like Meta (owner of Facebook and Instagram), Snap (owner of Snapchat and Bitmoji), and Morgan Stanley (a financial services company) are rushing to plug chatbots and text- and image-generating bots into their systems. In its new arms race with Google, Microsoft is embedding OpenAI’s language-model technology, still largely untested, into its most popular business programs, such as Word, Outlook, and Excel. Microsoft has said of its new capability: “Copilot will fundamentally change how people work with artificial intelligence and how artificial intelligence works with people.”
But for users, the promise of working with intelligent machines is somewhat misleading. Steven Poole, author of “Unspeak”, a book about the dangerous power of words and labels, says: “Artificial intelligence is one of those labels that expresses a kind of utopian hope rather than present reality. It’s rather like how the term ‘smart weapons’ during the first Gulf War evoked an image of endlessly precise targeting, something that is still not possible.”
Margaret Mitchell, a computer scientist who was fired by Google after publishing papers criticizing bias in large language models, has reluctantly come to describe her own recent work as based on artificial intelligence. “Before, people like me used to say we worked on ‘machine learning’,” she admitted at a conference, “which is a great way to lose people’s attention!”
Timnit Gebru, her former colleague at Google and founder of the Distributed Artificial Intelligence Research Institute, said that she too has only been using the term “artificial intelligence” since around 2013: “It was the term that caught on.” Mitchell added: “It’s terrible, but I do it too. I call everything I touch artificial intelligence, because then people listen to what I’m saying.”
Unfortunately, the term “artificial intelligence” has become so ingrained in our vocabulary that shaking it off will be almost impossible, and it is difficult, every time we use it, to remember that it is not an accurate description. At the very least, we must remind ourselves how much these systems rely on human managers, who must be held accountable for their side effects.
Steven Poole says he prefers to call chatbots like ChatGPT and image generators like Midjourney “plagiarism machines”, because these bots mainly recombine texts and images that were originally created by humans. He says, “I’m not sure that will catch on.”
For various reasons, we have well and truly fallen into the trap of “artificial intelligence”!
Footnotes:
1- The author, explaining the title, emphasized that the term “artificial intelligence” leads to misunderstanding and helps its creators avoid accountability.
2- This article is a translation of a piece written by Parmy Olson, published on March 26, 2023, at Bloomberg.