OpenAI disbands mission alignment team, which focused on 'safe' and 'trustworthy' AI development
The team's leader has been given a new role as OpenAI's Chief Futurist, while the other team members have been reassigned throughout the company.
As LLMs and diffusion models power more applications, their safety alignment becomes critical. Our research shows that even minimal downstream fine‑tuning can weaken safeguards, raising a key question ...
The GRP‑Obliteration technique reveals that even mild prompts can reshape internal safety mechanisms, raising oversight ...
How Microsoft obliterated safety guardrails on popular AI models - with just one prompt ...
Raghab Singh’s research advances structure-first AI models for healthcare, combining molecular geometry and language understanding.
The UK’s AI Security Institute is collaborating with several international institutions on a global initiative to ensure artificial intelligence (AI) systems behave in a predictable manner. The Alignment ...
Every now and then, researchers at the biggest tech companies drop a bombshell. There was the time Google said its latest quantum chip indicated multiple universes exist. Or when Anthropic gave its AI ...
In an era of AI “hype,” I sometimes find that something critical is lost in the conversation. Specifically, there’s a yawning gap between AI research and real-world application. Though many ...
As Senior AI Research Scientist, the candidate will direct foundational artificial intelligence research at IBM which supports ...