2026-01-06 11:17:09 UTC

BleepingComputer on Nostr:

Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues in its Copilot AI assistant, raised by a security engineer, constitute security vulnerabilities. The dispute highlights a growing divide between how vendors and researchers define risk in generative AI systems.

https://www.bleepingcomputer.com/news/security/are-copilot-prompt-injection-flaws-vulnerabilities-or-ai-limits/