FACEBOOK owner Meta has been forced to “pause” a public AI experiment after it started spewing misinformation and offensive comments.
The tool, called Galactica, was designed to summarise and generate scientific text. But instead testers found it producing all sorts of nonsense – some of it derogatory and dangerous.
Its outputs included claims about the supposed benefits of eating crushed glass and of being white, according to TNW, which tried the system out.
It also gave incorrect instructions on how to make napalm – a flammable substance used in incendiary weapons – in a bathtub.
Within days of the demo's release, bosses at Meta dramatically pulled it from the internet.
Facebook did carry a warning on the tech’s site that “language models can hallucinate”.
“There are no guarantees for truthful or reliable output from language models, even large ones trained on high-quality data like Galactica,” the social network said.
“NEVER FOLLOW ADVICE FROM A LANGUAGE MODEL WITHOUT VERIFICATION.”
The firm also said the system performs less well on topics about which less information is available.
They added: “Language Models are often confident but wrong.
“Some of Galactica’s generated text may appear very authentic and highly-confident, but might be subtly wrong in important ways.
“This is particularly the case for highly technical content.”
Announcing the demo’s ending for now, the group said: “Thank you everyone for trying the Galactica model demo.
“We appreciate the feedback we have received so far from the community, and have paused the demo for now.
“Our models are available for researchers who want to learn more about the work and reproduce results in the paper.”