Introducing human emotions to machines: does it solve artificial intelligence's problems?

As artificial intelligence makes its way into our social and consumer lives, it is, in theory, supposed to eliminate the flaws that humans introduce.

But the reality is quite different, according to an article published by the American newspaper The Washington Post: from Facebook algorithms that promote hateful content to draw more viewers, to facial recognition applications that fail to recognize people of color, AI often does not help solve humanity's problems.

But Alan Cowen, a former Google data scientist with a background in psychology who has founded a research company called Hume AI, says it can help make the messy business of AI more compassionate and humane.

By training on hundreds of thousands of facial and vocal expressions from around the world, the AI on Hume's platform can genuinely engage with how users feel and more closely address their emotional needs, Cowen said.
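Hume has not published the details of its models, but the general technique Cowen describes, training a classifier on labeled expression data, can be illustrated with a minimal sketch. Everything below is hypothetical: the feature dimensions, label set, and data are stand-ins, not Hume's actual pipeline.

```python
# Minimal sketch of emotion recognition from expression features.
# Hypothetical throughout: Hume AI's real models and data are not public.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

EMOTIONS = ["joy", "sadness", "anger", "surprise"]  # toy label set

# Stand-in for embeddings extracted from face images or voice clips,
# e.g. a 128-dimensional feature vector per expression sample.
X = rng.normal(size=(2000, 128))
y = rng.integers(len(EMOTIONS), size=2000)  # hypothetical human annotations

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A linear classifier maps each feature vector to an emotion label.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")  # near chance on random data
print("predicted emotion:", EMOTIONS[clf.predict(X_test[:1])[0]])
```

A real system would replace the random arrays with features computed from recordings by a face or speech encoder; the training step itself is the unremarkable part.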

He said he hoped the platform would eventually be integrated into consumer applications such as video apps and digital assistants.

The platform's beta launch is set for next March, with a more formal unveiling to follow. It will also be free for many researchers and developers.

"We know this is going to be a long battle," Quinn said in an interview. "But we need to start improving this area."

Not the first

The 31-year-old entrepreneur is not the first technologist to try to inject human emotions into digital spaces.

There is the "ethical AI" movement, which aims to build fairness and justice into algorithms and counts several organizations among its ranks, such as Marc Rotenberg's policy-oriented Center for AI and Digital Policy and the new Distributed AI Research Institute, founded to combat bias by AI ethicist Timnit Gebru, who previously worked in the field at Google.

Gebru was hired at Google to be an outspoken critic of unethical AI, and was then fired for exactly that. There are also many academic experts who have taken strong public positions on developing AI that eliminates social bias.

But what Cowen brings to this field is a deep grounding in psychological research to accompany those ethical goals. His previous work includes studying emotional responses across cultures (such as the similar reactions to sad songs in the United States and China) and the many nuances of vocal expression.

Allies in empathic machine learning

Cowen also arrives with an army of big names in the field. The Hume Initiative has established an ethics committee of scientists working on emotional and ethical AI, including Empathy Lab founder Danielle Krettek Cobb, "computational fairness" expert Karthik Dinakar, and University of California, Berkeley professor Dacher Keltner, who was Cowen's graduate adviser and advised Pixar on emotion during production of the animated film Inside Out.

Cowen said he has raised $5 million from the venture studio Aegis Ventures, with another round to follow. The money will go toward investigating how AI can be built not only to process data at great speed and spot unseen patterns, but also to advance its understanding of humans, an approach Cowen calls "empathic AI."

Cowen's research at Google included "affective computing," which aims to increase machines' ability to read and simulate emotions.

Pros and cons

The idea of giving machines more emotion may seem to contradict prevailing ideas about artificial intelligence, whose primary strength is often seen as making decisions without human emotion clouding them.

But many in the affective computing community say it is precisely AI's inability to read people that makes it dangerous, and that makes it important for AI to see the human side of the humans it serves.

Of course, there is no guarantee that an AI that can measure emotions will not exploit them, especially when big tech companies are trying to maximize their profits.

Another challenge in developing emotion in AI is avoiding building in the emotional judgments of its human programmers, which may themselves be biased.
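One common way practitioners check whether a model has inherited such bias, offered here as a generic illustration rather than anything the article attributes to Hume, is to compare its accuracy across demographic groups. A minimal sketch with made-up data:

```python
# Minimal bias-audit sketch: compare a model's accuracy across groups.
# All data here is simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["A", "B"], size=n)    # hypothetical demographic attribute
y_true = rng.integers(2, size=n)          # toy binary emotion label

# Simulate a model that is right 90% of the time for group A but only 70% for B.
wrong = rng.random(n) > np.where(group == "A", 0.9, 0.7)
y_pred = np.where(wrong, 1 - y_true, y_true)

for g in ("A", "B"):
    mask = group == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: accuracy {acc:.2f}")
# A large accuracy gap between groups is a red flag that the training
# data, labels, or annotators encoded a bias.
```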

Cowen's partners say they believe Hume's models avoid bias. "Hume's models are informative yet far from biased," said Arjun Nagendran, co-founder of the VR employee-training firm Mursion.

For his part, University of Maryland professor and AI expert Ben Shneiderman said initiatives like Cowen's could play a role in countering racially biased AI, but they are no guarantee.

A Pew Research Center study published last June found that more than two-thirds of AI experts do not believe AI will be used mostly for social good by 2030.

Cowen acknowledged the dangers of feeding fast-growing AI ever more emotional data. But he also said the alternative is scarier: "If we keep improving these algorithms to boost engagement without projects like empathic AI, then kids are going to spend 10 hours a day on social media, and I don't think that helps anyone."
