someone on Nostr:
"The largest models were generally least truthful. This contrasts with other NLP tasks, where performance improves with model size. "
As far as I understand, big AI is admitting that models actually find more truth as they grow in size, but humans have to feed them lies to score higher on a flawed benchmark (TruthfulQA).
If this is the case, correctly trained LLMs will end misinformation on Earth. 🫡
Published at 2024-07-30 13:58:36
Event JSON
{
"id": "27701918c5012bb81fe5a29658df8b2672e6f116ad31213ee02d920088ab97b6",
"pubkey": "9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1",
"created_at": 1722347916,
"kind": 1,
"tags": [],
"content": "\"The largest models were generally least truthful. This contrasts with other NLP tasks, where performance improves with model size. \"\n\nAs far as I understand big AI is admitting that models are actually finding truth when they get bigger in size, but humans have to feed lies in order to get higher scores on a flawed benchmark (truthfulQA).\n\nIf this is the case, correctly trained LLMs will end misinformation on earth. 🫡 \n\nhttps://m.primal.net/JjFV.png \n\n",
"sig": "aeb6c994f096019c7ec822da245b6c1f7b52b96855afe842df1d473d59ae3552f7cef8cbbe5cfd69f31b33a3e6891aa8defb6381fdcef5183925fd6b06aafe69"
}
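For readers curious how the `id` field in the event above relates to the rest of the JSON: per the Nostr protocol (NIP-01), an event's `id` is the SHA-256 hash of a canonical JSON serialization of its fields. A minimal sketch, assuming the NIP-01 serialization rules; the sample `content` below is a placeholder, so the resulting hash is illustrative and will not reproduce the id of the event above:

```python
import hashlib
import json

def compute_event_id(pubkey, created_at, kind, tags, content):
    """Serialize per NIP-01 as [0, pubkey, created_at, kind, tags, content]
    with no extra whitespace, then return the SHA-256 hex digest."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # compact form, no spaces
        ensure_ascii=False,     # keep UTF-8 characters (e.g. emoji) as-is
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical example (the content string is a placeholder):
event_id = compute_event_id(
    pubkey="9fec72d579baaa772af9e71e638b529215721ace6e0f8320725ecbf9f77f85b1",
    created_at=1722347916,
    kind=1,
    tags=[],
    content="hello nostr",
)
print(event_id)  # a 64-character lowercase hex string
```

The `sig` field is a Schnorr signature over this id, which is why the id must be recomputed and checked before trusting a relayed event.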