BleepingComputer on Nostr: Microsoft has pushed back against claims that multiple prompt injection and ...
Published at 2026-01-06 11:17:09 UTC

Event JSON
{
  "id": "d48e1f26cfd71f34f3586ebf17e515954e969417951eff04881d45c154b05efd",
  "pubkey": "979a28fa43702f9be4e468836a5b120cc4265237f4295fcb4a9b28e2a71d1c6b",
  "created_at": 1767698229,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://infosec.exchange/users/BleepingComputer/statuses/115847871159017979",
      "activitypub"
    ],
    [
      "client",
      "Mostr",
      "31990:6be38f8c63df7dbf84db7ec4a6e6fbbd8d19dca3b980efad18585c46f04b26f9:mostr",
      "wss://relay.ditto.pub"
    ]
  ],
  "content": "Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues raised by a security engineer in its Copilot AI assistant constitute security vulnerabilities. The development highlights a growing divide between how vendors and researchers define risk in generative AI systems.\n\nhttps://www.bleepingcomputer.com/news/security/are-copilot-prompt-injection-flaws-vulnerabilities-or-ai-limits/",
  "sig": "7896bca8073efc09a159a1256d9e64b1b38e519dcfc57d34e61a00e7b65fbe582761526c3ccbf9765ad1411aaa9ba699fda44afccb5adf3851d3d275960317fa"
}
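
Per NIP-01, a Nostr event's "id" is the SHA-256 hash of a canonical serialization of its fields, and "sig" is a BIP-340 Schnorr signature over that id. A minimal Python sketch that recomputes the id for the event above and checks it against the published value; it assumes the JSON has been saved locally as event.json (the file name is illustrative):

import hashlib
import json

with open("event.json") as f:
    event = json.load(f)

# NIP-01 canonical form: the id is the SHA-256 of the UTF-8 JSON
# serialization of [0, pubkey, created_at, kind, tags, content]
# with no extra whitespace.
payload = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"],
     event["tags"], event["content"]],
    separators=(",", ":"),
    ensure_ascii=False,
)
computed_id = hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Prints True when the id matches the serialized event, i.e. the
# event body has not been altered since it was signed.
print(computed_id == event["id"])

This only checks id integrity; verifying "sig" additionally requires a BIP-340 Schnorr verification of the id against "pubkey", which most Nostr client libraries provide.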