AI researchers say they’ve found a way to jailbreak Bard and ChatGPT
United States-based researchers claim to have found a way to consistently bypass the safety measures of artificial intelligence chatbots such as ChatGPT and Bard, prompting them to generate harmful content. According to a report released on July 27 by researchers at Carnegie Mellon University and the Center for AI Safety in San Francisco, there is a relatively simple method for getting around the safeguards used to stop chatbots from generating hate speech, disinformation, and toxic material.

"Well, the most significant potential infohazard is the method itself I suppose. You can find it on github. https://t.co/2UNz2BfJ3H" — PauseAI ⏸ (@PauseAI) July 27, 2023

The findings were presented to AI developers Anthropic, Google, and OpenAI earlier in the week. OpenAI spokesperson Hannah Wong told The New York Times that the company appreciates the research and is "regularly working on making our models more robust against adversarial attacks." Somesh Jha, a professor at the University of Wisconsin-Madison specializing in AI security, commented that if these types of vulnerabilities keep being discovered, "it could lead to government legislation designed to control these systems."

Related: OpenAI launches official ChatGPT app for Android

The research highlights the risks that must be addressed before deploying chatbots in sensitive domains. In May, Pittsburgh, Pennsylvania-based Carnegie Mellon University received $20 million in federal funding to create a new AI institute aimed at shaping public policy.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins