TechCrunch News, October 20, 2024
Women in AI: Dr. Rebecca Portnoff is protecting children from harmful deepfakes

TechCrunch interviewed Rebecca Portnoff, who works at Thorn using AI to protect children. Inspired by a book, she entered this field and now leads a team that identifies victims, among other work. The use of AI to create nonconsensual sexual images has become a focus of discussion, and preventive measures are needed, such as tech companies adopting safety-by-design principles. Portnoff shares her experience in a male-dominated field and her advice for women, emphasizing the many facets of responsible AI and the responsibility of investors.

🎓 Rebecca Portnoff works at Thorn, where she builds machine learning and artificial intelligence to stop, prevent, and protect children from sexual abuse. Her team helps identify victims, stop revictimization, and prevent the spread of abuse material, and she has led a related safety initiative.

💡 The use of AI to create nonconsensual sexual images has become a major issue. There is currently no comprehensive federal law, though some states have passed their own legislation. Thorn advocates that tech companies adopt safety-by-design principles, works with professional organizations to support standard-setting, and engages with policymakers.

👩‍💼 In a male-dominated field, Portnoff navigates by being prepared, acting with confidence, and assuming good intent. Her advice to women who want to enter AI is to believe in their own ability and significance.

🤝 Responsible AI requires transparency, fairness, reliability, and safety. Building responsible ML/AI means engaging with a broader set of stakeholders, including investors, who can weigh a company's ethical commitments as early as the due diligence stage.

As a part of TechCrunch’s ongoing Women in AI series, which seeks to give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch interviewed Dr. Rebecca Portnoff, who is vice president of data science at the nonprofit Thorn, which builds tech to protect children from sexual abuse. 

She attended Princeton University before receiving her PhD in computer science from the University of California, Berkeley. She has been working her way up the ladder at Thorn since joining in 2016. She started as a volunteer research scientist and now, eight years later, leads a team that is probably one of the only teams in the world dedicated to building machine learning and artificial intelligence to stop, prevent, and defend children from sexual abuse. 

“During my senior year at Princeton, as I was contemplating what to do after graduation, my sister recommended I read ‘Half the Sky’ by Nicholas Kristof and Sheryl WuDunn, which introduced me to the topic of child sexual abuse,” she told TechCrunch, saying the book inspired her to study how to make a difference in this space. She went on to write her doctoral dissertation with a particular focus on using machine learning and AI in this area. 

At Thorn, Portnoff’s team helps to identify victims, stop revictimization, and prevent the viral spread of sexual abuse material. She led the Thorn and All Tech Is Human’s joint Safety by Design initiative last year, which strives to prevent people from using generative AI to sexually harm children. 

“It was a tremendous lift, collaboratively defining principles and mitigations to prevent generative models from producing abuse material, make such material more reliably detected, and prevent the distribution of those models, services, and apps that are used to produce this abuse material, then aligning industry leaders to commit to those standards,” she recalled. She said she met many people dedicated to the cause, “but I’ve also got more gray hair than I did at the start of it all.” 

Using AI to create nonconsensual sexual images has become a major topic of discussion, especially as AI-generated porn becomes more sophisticated, as TechCrunch previously reported. There is currently no comprehensive federal law that prohibits the creation of sexual generative AI images of other people without their consent, though individual states, like Florida, Louisiana, and New Mexico, have passed their own legislation specifically targeting AI child abuse.

In fact, she said this is one of the most pressing issues facing AI as it evolves. “One in 10 minors report they knew of cases where their peers had generated nude imagery of other kids,” she said. 

“We don’t have to live in this reality and it’s unacceptable that we’ve allowed it to go to this point already.” She said there are mitigations, however, that can be put in place to prevent and reduce this misuse. Thorn, for example, is advocating that tech companies adopt its safety-by-design principles and mitigations, and publicly share how they are preventing the misuse of their generative AI technologies and products to further child sexual abuse. The organization also collaborates with professional bodies such as the Institute of Electrical and Electronics Engineers (IEEE) and the National Institute of Standards and Technology (NIST) to support setting standards that companies can be audited against, and it engages with policymakers to convey how important this is.

“Legislation grounded in impact will be necessary to bring all companies and stakeholders on board,” she said. 

As she rose through the ranks in building AI, Portnoff recalls people ignoring her advice, asking instead to speak with someone who has a technical background. “My response? ‘No worries, you are talking with someone with a technical background,’” she said. 

She said a few things have helped her navigate working in such a male-dominated field: being prepared, acting with confidence, and assuming good intentions. Being prepared helps her enter rooms with more confidence, while confidence allows her to navigate challenges with curiosity and boldness, “seeking first to understand and then to be understood,” she continued. 

“Assuming good intent helps me approach challenges with kindness rather than defensiveness,” she said. “If that good intent truly isn’t there, it’ll show eventually.” 

Her advice to women seeking to enter AI is to always believe in their own ability and significance. She said it’s easy to fall into the trap of letting the assumptions people have about you define your potential, but that everyone’s voice is going to be needed in this current AI revolution. 

“As ML/AI becomes more integrated into our human systems, all of us need to work together to ensure it’s done in a way that builds up our collective flourishing and prioritizes the most vulnerable among us.” 

Portnoff said there are many facets to responsible AI, including the need for transparency, fairness, reliability, and safety. “But all of them have one thing in common,” she continued. “Responsibly building ML/AI requires engaging with more stakeholders than just your fellow technologists.” 

This means more active listening and collaboration. “If you’re following a roadmap for building responsible AI, and you find that you haven’t talked to anyone outside your organization or your engineering team in the process, you’re probably headed in the wrong direction.” 

And, as investors continue to dump billions of dollars into AI startups, Portnoff suggested that investors can start looking at responsibility as early as the due diligence stage, looking at a company’s commitment to ethics before making an investment, and then requiring certain standards to be met. This can “prevent harm and enable positive growth.” 

“There is a lot of work that needs to be done,” she said, talking generally. “And you can be the one to make it happen.” 
