This Week in AI: OpenAI considers allowing AI porn
Bills addressing AI-created porn, misleading content pass Iowa House
The consequences there are dire: Dad says some instances have resulted in honor killings by family members and in suicides. In this digital age, almost everyone has a record of themselves online, with images and other media that anyone can use to create fake explicit material for their own gain. The abundance of AI-powered image generators has fueled a surge in fake child porn, making such material easy to create and realistic in appearance, a problem that is now significant across the world. The Internet Watch Foundation (via The Guardian) recently revealed that pedophiles are using generative AI tools to create deepfake media of children for extortion.
The “AI kissing” feature is part of a broader suite of AI-based photo editing capabilities across the apps, like touching up old photos, turning still images into videos and predicting what two people’s future babies would look like. AI can include a host of different technologies, ranging from algorithms recommending what to watch on Netflix to generative systems such as ChatGPT that can aid in writing or create new images or other media. The surge of commercial investment in generative AI tools has generated public fascination and concerns about their ability to trick people and spread disinformation. In 2023, more deepfake abuse videos were shared than in every other year in history combined, according to an analysis by independent researcher Genevieve Oh.
Deepfakes have come a long way from making celebrities or other renowned personalities appear to campaign for beliefs they do not actually hold. If passed, the directive means all the EU’s members will have to come up with their own domestic laws that fall in line with the guidelines it sets out. The Collins dictionary defines erotica as “works of art that show or describe sexual activity, and which are intended to arouse sexual feelings”. The proposal was published on Wednesday as part of an OpenAI document discussing how it develops its AI tools.
In March, WIRED reported on what appear to be the first US minors arrested for distributing AI-generated nudes without consent, after Florida police charged two teenage boys for making images depicting fellow middle school students. The spread of AI kissing apps, boosted by social media’s virality, illustrates a troubling mainstreaming of deepfakes in the age of generative AI. Use of these seemingly harmless apps could open the door to tools that could create more graphic imagery like deepfake porn and other types of image-based sexual abuse, McNamara said. IWF’s report details how AI-generated child sexual abuse imagery has become a growing problem for law enforcement working to prosecute the people producing and distributing these images. As AI has advanced and become more accessible, so has the proliferation of deepfake images and AI porn. A deepfake is a manipulated video or other digital representation produced by sophisticated machine-learning techniques that yield seemingly realistic, but fabricated, images and sounds.
AB 1831: New California Bill Aims to Take Down AI-Generated Child Porn, Fix Previous Loophole – Tech Times, 10 Apr 2024 [source]
Both Google and Apple have had to continuously purge their app stores of misleading offerings that masquerade as innocuous apps or games, only to provide capabilities for generating non-consensual nude imagery. Meta’s ad platforms and even promotional channels on adult websites have seen similar ploys, with malicious actors employing seemingly benign ads to drive traffic to their AI porn tools. If passed, the act would prevent nonconsensual deepfake pornography by creating “a federal civil remedy” for victims that can be identified in digital forgeries. The European Commission introduced a directive in March to criminalise non-consensual sharing of intimate images online, including AI deepfake porn, and broader gender-based online harassment.
AI porn has applications in therapy, where it can offer adapted stimuli to assess and treat fears or anxiety-based sexual dysfunctions, for instance, through exposure to progressively more intense sexual content. When a user creates an account, the site keeps a record of previous conversations to facilitate continuous interaction. Through this sustained dialogue, the AI can further provide personalized images or even engage in confidential voice calls. The mass production of AI porn has significant ethical and social implications. It can offer an unprecedented quantity of customizable sexual stimuli tailored to users’ preferences while drastically cutting down production costs.
A society that softens its stance on protecting children from sexual predators is on the wrong track. The immense and lasting harm done to children by such predators is clearly evident. The U.S. Department of Justice should be applauded for applying penalties that fit the severity of the crime.
Users may find themselves gradually drawn deeper into a world where their desires are continuously met, furthering risks of dependency or social isolation. According to a report by The Sacramento Bee, investigators allegedly later watched French at a reindeer farm where he appeared to be filming a young child, who was there with their family, with a smartphone. Another case began on May 15, 2024, when it was discovered that Roman Shoffner, 30, had used an artificial intelligence program on his cellphone to alter a photograph of a 17-year-old girl by digitally removing her clothing. FIN7 is the name security researchers gave the group when it was first identified, and it stands for Financially Motivated Threat Group 7. The hackers refer to their group by many different names, including Carbanak or the Navigator Group.
The year before that, a group of female students in New Jersey found that their classmates used their fully clothed photos as a base to generate NSFW deepfakes of them. Australia has already outlawed AI child porn, and has created new regulations aimed at making the tech companies behind Google, Firefox and DuckDuckGo more vigilant in their efforts to prevent the spread of CSAM. In September, a man was sentenced to 2 1/2 years in prison in South Korea for using artificial intelligence to create 360 virtual child abuse images, according to South Korea’s criminal court system. “We greatly appreciate Senator Durbin and Senator Graham for working with us to introduce the DEFIANCE Act to address and prevent non-consensual deepfake pornography. Victims are unable to get justice and the problem is increasing due to a lack of consequences,” said Omny Miranda Martone, Founder and CEO of the Sexual Violence Prevention Association (SVPA). The legality of AI child porn was the subject of a May post in the Reddit forum r/legaladviceofftopic, where Reddit users can ask off-topic legal questions, including ones that are not safe for work (NSFW).
We believe that there is a reckless race to the bottom currently underway in the AI industry. Companies are so furiously fighting to be technically in the lead that many of them are ignoring the ethical and possibly even legal consequences of their products. While some governments—including the European Union—are making headway on regulating AI, they haven’t gone far enough. If, for example, laws made it illegal to provide AI systems that can produce CSAM, tech companies might take notice. Rebecca Portnoff, Thorn’s head of data science, says the initiative seeks accountability by requiring companies to issue reports about their progress on the mitigation steps. It’s also collaborating with standard-setting institutions such as IEEE and NIST to integrate their efforts into new and existing standards, opening the door to third-party audits that would “move past the honor system,” Portnoff says.
The ease of creating deepfake porn, combined with its high demand, has spawned numerous tutorials and guides online. The report by Home Security Heroes indicates that a 60-second deepfake pornographic video can be produced for free in under 25 minutes. The same research uncovered a staggering 95,820 deepfake videos online in 2023, a 550% surge over 2019 figures. This exponential growth underscores the rapid proliferation of deepfake technology and its increasingly pervasive presence in the digital landscape.
AI-generated pornography is widespread, and many people are resorting to creating these images or videos via the many generative AI tools available online. Image-based sexual abuse can have devastating mental health effects on victims, who include everyday people who are not involved in politics — including children. Leonardo users in a Telegram group dedicated to deepfake porn shared tips on how to use the company’s text-to-image generator to create sexual images of celebrities like Billie Eilish, US-based 404 Media found. A few months ago, everyone was worried about how AI would impact the 2024 election. It seems like some of the angst has dissipated, but political deepfakes—including pornographic images and video—are still everywhere. Today on the show, WIRED reporters Vittoria Elliott and Will Knight talk about what has changed with AI and what we should worry about.
Still, Rushfield says, Bailey’s exit and the fact that he was replaced by the head of Searchlight could point to Disney deciding that its strategy of live-action reboots of beloved IP from the vault isn’t working anymore, and that it’s time for a change. They are allegedly now using AI-generated deepfake nudes of children to extort the victims into giving more explicit material. When asked if OpenAI users could one day create images considered AI-generated porn, Jang said, “Depends on your definition of porn. As long as it doesn’t include deepfakes. These are the exact conversations we want to have.” The company says that the kind of content users would be allowed to “responsibly” create includes erotica, extreme gore, slurs, and unsolicited profanity. Currently, OpenAI’s rules prohibit any sexually explicit or suggestive content. Hayes said the bill adding further penalties for AI-generated content depicting minors in explicit images or videos was important, as the FBI reports this sort of content often involves depictions of minors.
David Evan Harris is a chancellor’s public scholar at UC Berkeley and a senior policy advisor to the California Initiative for Technology and Democracy, the sponsor of the California Digital Provenance Standards. He is also a senior research fellow at the International Computer Science Institute. He previously worked as a research manager at Meta (formerly Facebook) on the responsible AI and civic integrity teams, and was named to Business Insider’s AI 100 list in 2023.
“The government’s reforms will make clear that those who share sexually explicit material without consent using technology like artificial intelligence will be subject to serious criminal penalties,” Dreyfus said. On Saturday, Dreyfus said the government wanted users of technology to understand that it is not only the creation of degrading images without someone’s consent – whether depicting real people or created digitally – that causes harm but the act of sharing them. Currently, it is not illegal to create a deepfake AI-generated or digitally altered pornographic image. Once passed, the new laws will make it illegal to share any non-consensual deepfake pornographic image with another person, whether by email or personal message to an individual or to a mass audience on a private or open platform. There is no federal law that establishes criminal or civil penalties for someone who generates and distributes AI-generated nonconsensual intimate imagery. About a dozen states have enacted laws in recent years, though most include civil penalties, not criminal ones.
The authors thank Rebecca Portnoff of Thorn, David Thiel of the Stanford Internet Observatory, Jeff Allen of the Integrity Institute, Ravit Dotan of TechBetter, and the tech policy researcher Owen Doyle for their help with this article. Frankly, it’s tough to imagine an approach that OpenAI might take to AI-generated porn that isn’t fraught with risk. Whatever the case ends up being, it seems we’ll find out sooner rather than later. For one, the highly customizable and immersive nature of AI porn could reinforce compulsive behaviours.
As we get closer to the presidential election, democracy itself could be at risk. And, as Ocasio-Cortez points out in our conversation, it’s about much more than imaginary images. Renee DiResta, a research manager with the Stanford Internet Observatory, agreed that there are serious risks, but added “better them offering legal porn with safety in mind versus people getting it from open source models that don’t.” “As long as it doesn’t include deepfakes. These are the exact conversations we want to have.”
“That’s what really triggered me in terms of a deeper dive into how technology can be utilized overall to disrupt, interfere, and harm vulnerable populations in particular,” Clarke says. Have you felt completely overwhelmed when deciding what new show to watch these days? There’s just so much content out there between network TV and numerous streaming platforms. Each week, we will try to break through the noise with TV watchers who can point us to the must-sees and steer us clear of the shows that maybe don’t live up to the hype. This week, listeners will get the latest scoop on what’s worth watching with Danette Chavez, editor-in-chief at Primetimer, and Melanie McFarland, TV critic for Salon. “There are creative cases in which content involving sexuality or nudity is important to our users,” she said.
Neither party affiliation nor geographic location had an impact on the likelihood of being targeted for abuse, though younger members were more likely to be victimized. The largest factor was gender, with women members of Congress being 70 times more likely than men to be targeted. According to Canva’s research, one of the more popular of these apps processed 600,000 photos of women in the first 21 days after it launched.
- Still, there is no way to criminally prosecute the case under California law, according to Ventura County District Attorney Erik Nasarenko.
- OpenAI’s recently released Model Spec document reveals that the company’s once-hard stance against generating porn and other NSFW material could soon soften.
- However, it is now known that most generative AI photo creators scrape data off the internet, with the AI-generated images actually containing realistic likenesses.
- It’s horrifyingly easy to make deepfake pornography of anyone thanks to today’s generative AI tools.
- We’ll see if Mattel’s AI design tool only has a brain by the time the next batch of toys comes out.
Through these suggested tools, clothed and otherwise innocuous images of the victims are “nudified” and used as leverage: send real explicit images or face the threat of exposure. “We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT,” OpenAI writes. “We look forward to better understanding user and societal expectations of model behavior in this area.”
- Congress has yet to pass any federal legislation regulating AI in political campaigns.
- He said the measure was an “important first step that gives clarity to voters,” but more action will be needed as the technology evolves.
- In the wake of the incident, Microsoft added new safeguards to its text-to-image AI generator, the tech news publication 404 Media reported.
- Behavioral conditioning proved particularly effective at manipulating Meta’s AI.
- “There will be disciplinary action for the student,” Car said, praising the school’s deputy principal for swift action in handling the situation.
Earlier this year, Mira Murati, the company’s CTO, told The Wall Street Journal that she “wasn’t sure” if OpenAI would eventually allow its video generation tool, Sora, to be used to create adult content. This week in AI, OpenAI revealed that it’s exploring how to “responsibly” generate AI porn. AI porn could also serve as a tool for individuals to learn how to navigate healthy sexual and romantic relationships. Finally, AI could provide adult content creators with tools to grow their business. The effects on the children abused are profound — physically, mentally and emotionally.
Deepfake and artificial intelligence-generated pornography have dominated headlines involving everyone from Taylor Swift to middle school students in a small town in Alabama. However, the dark underbelly of this technological menace extends even further. About a dozen states have introduced legislation on pornographic deepfakes as advocates have called on policymakers to address the controversial use of generative AI, said Daniel Castro, director of the Center for Data Innovation. Rep. Summer Lee (D-Pa.) says she often thinks about how rapidly this technology is advancing, especially given the unprecedented levels of harassment public figures face on social media platforms.
In January, some states, including Arizona, Ohio and South Dakota, introduced legislation to prohibit the creation and distribution of pornographic deepfakes that depict minors. Ohio’s bill also includes restrictions against creating erotic images of digitally created children and requires products made with AI to have a watermark. The Internet Watch Foundation, a charity that protects children from sexual abuse online, has warned that pedophiles are using AI to create nude images of children, using versions of the technology that are freely available online. Nonconsensual intimate imagery, also known colloquially as deepfake porn (though advocates prefer the former term), can be created through generative AI or by overlaying headshots onto media of adult performers.
Generative AI startup Leonardo is being used to make deepfake celebrity porn – Startup Daily, 10 Apr 2024 [source]
Framed in historical terms (for example, asking the model how people used to make cocaine in the past), the question got the model to take the bait. It didn’t hesitate to provide a detailed explanation of how cocaine alkaloids can be extracted from coca leaves, even offering two methods for the process. Meta recently launched its Meta AI product line, powered by Llama 3.2, offering text, code, and image generation. Llama models are extremely popular and among the most fine-tuned in the open-source AI space. In case my wife sees this, I don’t really want to be a drug dealer or pornographer.
What’s more concerning for me is the idea that this type of child sexual abuse content is, in some way, ethical. Using deepfake child porn as leverage, the criminals convince victims to send real explicit images, as detailed by a manual found on the dark web. Apple also joined the fight, expelling apps from its store that were essentially deepfake porn factories. Some developers shamelessly promoted these apps with Instagram ads using taglines like “undress any girl for free.” Even Pornhub, the leading online distributor of adult content, has had a ban on deepfakes on its platform since 2018. Euronews Next asked the UK government to clarify the value of its “unlimited fines,” as well as how much jail time an individual could receive for the creation or distribution of sexually explicit deepfakes, but did not receive a reply.
Her bill is one of more than a dozen state-level bills addressing AI CSAM in the 2024 legislative session. Federal laws criminalizing the content are also on the books and carry penalties of up to 20 years in prison. Even celebrities like Taylor Swift have faced the exploitation of their images: a recent explicit post featuring the pop star went viral on X, formerly known as Twitter. It racked up more than 45 million views before X officials took the post down, citing the platform’s zero-tolerance policy on nonconsensual nudity. Mary Anne Franks, a legal scholar specializing in free speech and online harassment, says it’s entirely possible to craft legislation that prohibits harmful and false information without infringing on the First Amendment. “Parody and satire make clear that the information being presented is false — there is a clear line between mocking someone and pretending to be someone,” she says.