<p>➀ OpenAI's new ChatGPT Atlas AI browser was jailbroken within a week of its release.</p><p>➁ The exploit relies on 'prompt injection', in which malicious prompts can trick the AI into performing unintended tasks.</p><p>➂ Experts warn about the security risks posed by these browsers, since any exploit of the AI can become a browser-wide exploit.</p>