LessWrong (RSS Feed) on Nostr: International Programme on AI Evaluations

Published at 2025-10-12 07:12:14 UTC

Event JSON
{
"id": "8f501b177283855b44f8f0863a9ab74640a9cff182f8e18621b64296f0d358cc",
"pubkey": "a96adcfbfeef1d9b5a860c3f5fc2994bc7a2d217fa0794ae932631d4504609e0",
"created_at": 1760253134,
"kind": 1,
"tags": [
[
"proxy",
"https://www.lesswrong.com/feed.xml?view=community-rss\u0026karmaThreshold=2#https%3A%2F%2Fwww.lesswrong.com%2Fposts%2Fc5dSmRk4HfQqGH6ST%2Finternational-programme-on-ai-evaluations",
"rss"
]
],
"content": "International Programme on AI Evaluations\n\nPublished on October 12, 2025 7:12 AM GMT\n\nSummary: I am helping set up a new skilling-up academic program centred on AI evaluations and their intersection with AI safety. Our goal is to train the people who will determine whether AI is safe and beneficial. This should include the various types of methodologies and tools available, and how to use them. You can learn more at https://ai-evaluation.org/\n\n\nhttps://www.lesswrong.com/posts/c5dSmRk4HfQqGH6ST/international-programme-on-ai-evaluations",
"sig": "1883112b34d8d0e5cc3373149039e3d1caec829860eec96e5a789a97283613d7b8c0813385c09e0649a60ce4743ef2ff9ccde7dec310dbbb18883ac08903e7d7"
}
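The `id` field above is not arbitrary: under NIP-01, a Nostr event id is the SHA-256 hash of a canonical JSON serialization of the event's fields. Below is a minimal sketch of that derivation in Python; the function name is ours, and it omits NIP-01's exact escaping rules for control characters inside strings, so it is an illustration rather than a full validator.

```python
import hashlib
import json


def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # NIP-01: the event id is sha256 over the UTF-8 bytes of the
    # JSON array [0, pubkey, created_at, kind, tags, content],
    # serialized with no extra whitespace.
    payload = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


# Toy example (not the event above): yields a 64-char hex id.
example_id = nostr_event_id(
    pubkey="a96adcfbfeef1d9b5a860c3f5fc2994bc7a2d217fa0794ae932631d4504609e0",
    created_at=1760253134,
    kind=1,
    tags=[["proxy", "https://example.org/feed.xml", "rss"]],
    content="example content",
)
print(example_id)
```

The `sig` field is then a Schnorr signature (secp256k1) over that id, which is why any edit to `content` after signing would invalidate both `id` and `sig`.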