Characters for good, created by artificial intelligence | MIT News

As artificial intelligence makes it easier to create hyper-realistic digital characters, much of the conversation around these tools has focused on deceptive and potentially dangerous deepfake content. But the technology can also be used for positive purposes: to bring Albert Einstein back to life to teach a physics class, to talk through a career change with an older version of yourself, or to converse with people anonymously while preserving face-to-face communication.

To encourage the technology’s positive potential, researchers at the MIT Media Lab, the University of California at Santa Barbara, and Osaka University have compiled an open-source, easy-to-use character generation pipeline that combines AI models for facial expressions and voice, and can be used to quickly create a variety of audio and video outputs.

The pipeline also marks the resulting output with a traceable, human-readable watermark to distinguish it from authentic video content and to show how it was generated, an added safeguard against malicious use.
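The paper does not detail the watermarking scheme, but the idea of a traceable, human-readable marker can be sketched as a provenance tag attached to each generated clip: a visible label plus a fingerprint that makes tampering detectable. The function and field names below are hypothetical, not the authors' implementation.

```python
import hashlib
import json

# Hypothetical sketch of a traceable, human-readable provenance tag,
# loosely inspired by the watermarking idea described above (NOT the
# authors' actual scheme). A visible label plus a machine-checkable
# fingerprint travels with each generated clip.
def make_provenance_tag(model_name: str, source_id: str) -> dict:
    record = {
        "label": "AI-GENERATED CHARACTER",  # human-readable marker
        "model": model_name,                # how the clip originated
        "source": source_id,
        "created": "2021-12-01T00:00:00Z",  # fixed for illustration
    }
    # Fingerprint the record so downstream tampering is detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["fingerprint"] = hashlib.sha256(payload).hexdigest()
    return record

tag = make_provenance_tag("face-animation-v1", "einstein-lecture-042")
print(tag["label"])  # prints "AI-GENERATED CHARACTER"
```

A real system would embed such a record imperceptibly in the media itself rather than alongside it, but the same two ingredients apply: something a viewer can read, and something a verifier can check.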

By making this pipeline readily available, the researchers hope to inspire teachers, students, and health care workers to explore how such tools can help them in their respective fields. If more students, teachers, health care workers, and physicians have the chance to build and use these characters, the results could improve health and well-being and contribute to personalized learning, the researchers write in Nature Machine Intelligence.


AI-generated characters can be used for positive purposes, such as enhancing educational content, maintaining confidentiality in sensitive conversations without erasing nonverbal cues, and allowing users to interact with adaptable animated characters in potentially stressful situations. Video: Jimmy Day / MIT Media Lab

“It will be a truly strange world when AI and humans begin to share identities. This paper does an incredible job of thought leadership, mapping the space of what is possible with AI-generated characters in domains ranging from education to health to close relationships, while giving a solid roadmap on how to avoid the ethical challenges around privacy and misrepresentation,” says Jeremy Bailenson, founding director of the Stanford Virtual Human Interaction Lab, who was not involved with the study.

Although the world mostly knows the technology from deepfakes, “we see its potential as a tool for creative expression,” says the paper’s first author, Pat Pataranutaporn, a PhD student in Media Lab professor Pattie Maes’ Fluid Interfaces research group.

Other authors on the paper include Maes; Fluid Interfaces master’s student Valdemar Danry and PhD student Joanne Leong; Media Lab research scientist Dan Novy; Osaka University assistant professor Parinya Punpongsanon; and University of California at Santa Barbara assistant professor Misha Sra.

Deep truths and deep learning

Generative adversarial networks, or GANs, a combination of two neural networks that compete against each other, have made it easier to create photorealistic images, clone voices, and animate faces. Pataranutaporn, together with Danry, first explored the technology’s possibilities in a project called Machinoia, in which he generated several alternative representations of himself (as a child, as an older man, as a woman) to talk through life choices from different perspectives. The unusual deepfake experience, he says, made him aware of his “journey as a person.” “It was deeply true: using your own data to uncover something about yourself that you had never thought of before.”
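The adversarial idea behind GANs can be shown in miniature. The sketch below, a toy illustration and not the systems used in the paper, pits a one-parameter linear "generator" against a logistic-regression "discriminator" on one-dimensional data: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it, pulling its output distribution toward the real one.

```python
import numpy as np

# Toy 1-D GAN (illustrative only): the generator learns to match a
# target Gaussian by competing against a logistic discriminator.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: samples from N(4, 1)
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

gw, gb = 1.0, 0.0   # generator: x = gw * z + gb, noise z ~ N(0, 1)
dw, db = 0.1, 0.0   # discriminator: p(real) = sigmoid(dw * x + db)
lr, n = 0.01, 64

for step in range(2000):
    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    xr = real_batch(n)
    z = rng.normal(size=n)
    xf = gw * z + gb
    pr, pf = sigmoid(dw * xr + db), sigmoid(dw * xf + db)
    dw -= lr * (np.mean((pr - 1) * xr) + np.mean(pf * xf))
    db -= lr * (np.mean(pr - 1) + np.mean(pf))

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    z = rng.normal(size=n)
    xf = gw * z + gb
    pf = sigmoid(dw * xf + db)
    g_x = (pf - 1) * dw          # gradient through the discriminator
    gw -= lr * np.mean(g_x * z)
    gb -= lr * np.mean(g_x)

# The generator's mean output (gb) drifts toward the real mean of 4.
print(gb)
```

Real character-generation pipelines use deep networks for images, voice, and motion rather than these two scalar models, but the training dynamic is the same competition.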

Researchers say self-exploration is one of the positive applications of AI-generated characters. For example, experiments show that these characters can make students more enthusiastic about learning and improve performance on cognitive tasks. As a complement to traditional instruction, Pataranutaporn explains, the technology offers a way for lessons to be “personalized to your interest, your idols, your context, and can be changed over time.”

For example, the MIT researchers used their pipeline to create a synthesized version of Johann Sebastian Bach, who held a live conversation with renowned cellist Yo-Yo Ma in Media Lab Professor Tod Machover’s musical interfaces class, to the delight of both the students and Ma.

Other applications might include characters that help deliver therapy, to alleviate a growing shortage of mental health professionals and reach the estimated 44 percent of Americans with mental health issues who never receive counseling, or AI-generated content that provides exposure therapy to people with social anxiety. In a related use case, the technology can be used to anonymize faces in video while preserving facial expressions and emotions, which may be useful in sessions where people want to share personally sensitive information such as health and trauma experiences, or for whistleblower and witness accounts.

But there are also more artistic and playful use cases. In a deepfakes class this fall, led by Maes and research associate Roy Shilkrot, students used the technology to animate the figures in a historical Chinese painting and to create a dating “breakup simulator,” among other projects.

Legal and ethical challenges

The many applications of AI-generated characters raise legal and ethical issues that should be discussed as the technology develops, the researchers note in their paper. For example, how do we decide who has the rights to digitally recreate a historical figure? Who is legally liable if an AI clone of a celebrity promotes harmful behavior online? And is there a danger that we will come to prefer interacting with synthetic characters over humans?

“One of our goals with this research is to raise awareness of what is possible, to ask questions and start a public conversation about how this technology can be used ethically for societal benefit. What technical, legal, policy, and educational actions can we take to promote its positive use while minimizing the potential for harm?” says Maes.

By sharing the technology widely, while explicitly labeling its output as synthesized, Pataranutaporn says, “we hope to stimulate more creative and positive use cases, while also educating people about the technology’s potential benefits and pitfalls.”
