The Future Is Now

OpenAI’s trust and safety lead is leaving the company

OpenAI’s trust and safety lead, Dave Willner, has left the position, as he announced in a LinkedIn post. Willner is staying on in an “advisory role” but has asked his LinkedIn followers to “reach out” about related opportunities. The former trust and safety lead says the move comes from a decision to spend more time with his family. Yes, that’s what they always say, but Willner follows it up with actual details.

“In the months following the launch of ChatGPT, I’ve found it more and more difficult to keep up my end of the bargain,” he writes. “OpenAI is going through a high-intensity phase in its development — and so are our kids. Anyone with young children and a super intense job can relate to that tension.”

He goes on to say he’s “proud of everything” the company accomplished during his tenure, noting it was “one of the coolest and most interesting jobs” in the world.

Of course, this transition comes hot on the heels of some legal hurdles facing OpenAI and its signature product, ChatGPT. The FTC recently opened an investigation into the company over concerns that it is violating consumer protection laws and engaging in “unfair or deceptive” practices that could hurt the public’s privacy and security. The investigation involves a bug that leaked users’ private data, which certainly falls under the purview of trust and safety.

Willner says his decision was actually a “pretty easy choice to make, though not one that folks in my position often make so explicitly in public.” He also states that he hopes his decision will help normalize more open discussions about work/life balance. 

Concerns over the safety of AI have grown in recent months, and OpenAI is one of the companies that agreed to place certain safeguards on its products at the behest of President Biden and the White House. These include allowing independent experts access to its code, flagging risks to society such as biases, sharing safety information with the government, and watermarking audio and visual content to let people know it’s AI-generated.

Source: Engadget
