Meet the scientist teaching AI to police human speech

There are some clues that AI researchers are now more often reckoning with what they've built. In May, when Google debuted the new language-AI system LaMDA, which it said had been trained to emulate the meandering style of human dialogue that leads most by-the-book chatbots to awkwardly fail, the company acknowledged that such systems could also learn to internalize online biases, replicate hateful speech or misleading information, or otherwise be "put to ill use." The company said it was "working to ensure we minimize such risks," though some, including Gebru, were skeptical, panning the claims as "ethics washing."
