Debating whether artificial intelligence is utopian or not does not help establish safe rules for dealing with it


Viewing AI through the lens of the distant future distracts from the political action needed today to address the challenges posed by systems such as ChatGPT.

OpenAI's large language model (LLM) ChatGPT, released in November 2022, and its newer model GPT-4, released in March 2023, attracted enormous public interest, raising both speculation and concern.

In light of this, the Future of Life Institute published an open letter on March 28 calling for a six-month pause on large AI experiments, citing fears of a “superintelligence” scenario that could lead to the extinction of humanity, a scenario known as existential risk, or x-risk.

According to Future of Life Institute co-founder Jaan Tallinn, rogue artificial intelligence may pose a greater threat to humanity than the climate crisis.

The letter raised fundamental questions such as: “Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we risk losing control of our civilization?”

The letter drew considerable criticism and controversy, particularly from the Distributed AI Research Institute (DAIR), founded by Timnit Gebru, because its framing fuels the familiar narrative of hype around artificial intelligence and frightens people with talk of “extremely powerful” technologies.

The ideology behind the concerns expressed by the Future of Life Institute is known as “longtermism”: the aim of maximizing human well-being decades, if not centuries or millennia, into the future, at the expense of the present.

Former FTX CEO Sam Bankman-Fried, Twitter and SpaceX CEO Elon Musk, controversial entrepreneur Peter Thiel, and transhumanist philosopher Nick Bostrom are all known proponents of longtermism.

Lurking in the racist background of longtermism is what Abeba Birhane calls “digital colonialism,” which recreates centuries of oppression to the benefit of an elite of tech billionaires who espouse a vision of “the good of humanity” that includes colonizing space or transcending humanity itself.

Yet this technological utopia, which regards “safe AI” as a necessary condition for its much-desired singularity, distracts from the pressing issues of the moment.

The hidden costs of artificial intelligence

While these systems may appear “autonomous” and “intelligent,” they remain heavily dependent on human labor. As Kate Crawford explains, it all starts with the mining of raw materials and the manufacture of hardware. Then the data, often extracted without consent, must be labeled to give it meaning, and flagged for offensive, sexual or violent content.

Described as exploitative, psychologically harmful and underpaid, this work often takes place in the shadows. So instead of fearing “the automation of all jobs,” we should pay attention to how this labor exacerbates social inequalities and concentrates power.

Another problem relates to the idea of “superintelligence” itself, which feeds the delusion that LLMs are human-like entities that understand, feel emotions and perhaps deserve empathy. As a result, people tend to place too much trust in LLM outputs, as in the tragic story of a man who was driven to suicide after interacting with a chatbot for weeks.

In another case, a medical chatbot built on GPT-3 suggested suicide, or taking up cycling, as ways to cope with grief. Such absurd advice is unsurprising: AI models merely string together plausible-sounding words, which can produce silly, inaccurate, harmful and misleading outputs, such as an article about the benefits of eating broken glass.

Given these facts, the justification for using GPT may be questioned: what problem are large language models actually trying to solve? We must also bear in mind that these systems consume energy at astronomical rates. Training a single GPT-3 model alone consumes as much electricity as 120 American homes use in a year and produces carbon dioxide emissions equivalent to those of 110 cars over a year.

The need for transparency and accountability

So the idea of pausing further training of large language models in order to regulate and govern them seems reasonable. However, the institute's open letter does not say who would be covered by the pause, how it would be implemented, or how compliance would be guaranteed. It would be naïve to think that every company, university, research institute or individual working on alternative models would simply stop development.

The large language models already in operation will also continue to have consequences. Meanwhile, Microsoft, which has invested billions of dollars in OpenAI, and Twitter CEO Elon Musk, who donated $10 million to the Future of Life Institute and serves as one of its advisers, have both dismissed their AI ethics teams.

As an initial response, Italy banned ChatGPT a few days ago, and other European countries are considering doing the same. However, it is unclear how such bans will affect other applications that use large language models such as GPT-4.

Given the many impacts these systems already have, the nebulous, apocalyptic scenarios set in the distant future that the Future of Life Institute portrays do not produce the tangible policy measures and regulations we desperately need right now, especially within the proposed six-month time frame.

If, on the other hand, the political agenda is driven by the idea of a “superintelligence” that will one day control humanity, there is a risk that immediate harms, as well as available solutions, will be ignored. And although large language models do not pose an existential threat to our civilization, they do pose a threat to a large part of it, especially to those who are already marginalized.

Even if we want to keep the idea of “superintelligence,” it should not be the dominant narrative it has now become. Portraying these models as extremely powerful and granting them some sort of agency shifts responsibility away from the companies that develop them.

To hold companies accountable, there is a need for transparency about how AI systems are developed and what data they are trained on. Instead, OpenAI, now closed-source despite its name, states in its so-called “Technical Report” on GPT-4 that “this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

This secrecy impedes democratic decision-making, and thus regulation of the conditions under which large language models should be developed and released. There is no such thing as “one good AI model,” so we should not entrust decisions about how to build “safe” AI to a relatively small and privileged group of people who believe that “superintelligence” is inevitable (it is not).

Instead, we need to start involving diverse people, especially those affected, in the process of changing narratives and power relations.

All published articles express the opinions of their authors and do not necessarily reflect the views of TRT Arabic.

Simon Fisher, artificial intelligence writer
