As AI becomes more powerful and pervasive, concerns about its impact on society continue to mount.
Recently, we have seen incredible advances like GPT-4, OpenAI's latest version of the language model behind ChatGPT, which learns remarkably fast and generates high-quality responses. At the same time, its release has raised many concerns about the future of our society.
Last week, an "open letter" signed by Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and representatives from various fields (including robotics, machine learning, and computer science) urged a 6-month pause on "giant AI experiments," arguing that they pose a risk to humanity.
Since then, I’ve been following some specialists’ opinions and I invite you to join me in a reflection on this scenario.
The Open Letter
The letter, "Pause Giant AI Experiments: An Open Letter," which currently has almost 6,000 signatures, asks, as a matter of urgency, that artificial intelligence laboratories pause certain projects.
It warns that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
It also paints an "apocalyptic" picture of the future: "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
What is The Real “Weight” of This Letter?
At first, it's easy to sympathize with the cause, but the global context deserves a closer look.
Despite being endorsed by leading technology figures, including Google and Meta engineers, the letter has generated controversy because some of its signatories have not been consistent about safety limits in their own technologies – including Elon Musk, who fired Twitter's "Ethical AI" team last year, as reported by Wired, Futurism, and many other news outlets at the time.
It's worth mentioning that Musk, who co-founded OpenAI and left the company in 2018, has repeatedly attacked it on Twitter with scathing criticisms of ChatGPT's advances.
Sam Altman, co-founder of OpenAI, asserted in a conversation with podcaster Lex Fridman that concerns around AGI experiments are legitimate, acknowledging that risks such as misinformation are real.
Also, in an interview with the WSJ, Altman said the company has long been concerned about the safety of its technologies and spent more than six months testing the tool before its release.
What Are Its Practical Effects?
Andrew Ng, founder and CEO of Landing AI, founder of DeepLearning.AI, and managing general partner of AI Fund, argues on LinkedIn: "The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea. I'm seeing many new applications in education, healthcare, food, … that'll help many people. Improving GPT-4 will help. Let's balance the huge value AI is creating vs. realistic risks."
He also said: "There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don't understand is anti-competitive, sets a terrible precedent, and is awful innovation policy."
Like Ng, many other technology specialists disagree with the letter's central demand for a pause on experiments. In their view, a moratorium could hold back major advances in science and healthcare, such as AI-assisted breast cancer detection, as reported in the NY Times last month.
AI Ethics And Regulation: A Real Need
Despite a race to develop increasingly intelligent LLM solutions, little progress has been made toward regulation and other necessary precautions.
If we think about it, we wouldn't even need to invoke "apocalyptic," long-term events like those mentioned in the letter to confirm the urgency. The current and very real problems generated by misinformation would suffice.
Moreover, we have recently seen how AI can create "truths" with convincing image montages, like the viral picture of the Pope wearing a puffer coat that dominated the web in recent days, among many other fake productions using celebrities' voices and faces in manipulative videos.
In this sense, AI laboratories, including OpenAI, have been working to ensure that AI-generated content (texts, images, videos, etc.) can be easily identified, as shown in this article from What's New in Publishing (WNIP) about watermarking.
Do you want to continue to be updated with Marketing best practices? I strongly suggest you subscribe to The Beat, Rock Content’s interactive newsletter. There, you’ll find all the trends that matter in the Digital Marketing landscape. See you there!