Embedding models aren’t LLMs; they use a dedicated architecture.
And it’s not an LLM reading your documents: your query and each document are converted into numeric vectors, and those vectors are compared.
So the model can’t “reason” about a document’s content relative to your query or the current context.
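To make that concrete: retrieval is just vector math, typically ranking documents by cosine similarity between the query vector and each document vector. A minimal sketch with made-up 4-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and would come from an actual embedding model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity = cosine of the angle between the two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" -- in practice these come from the embedding model.
query = np.array([0.1, 0.9, 0.2, 0.0])
doc_a = np.array([0.1, 0.8, 0.3, 0.1])  # points in a similar direction
doc_b = np.array([0.9, 0.0, 0.1, 0.8])  # points in a different direction

# doc_a scores higher, so it ranks above doc_b -- no reasoning involved,
# just geometry on the vectors.
print(cosine_similarity(query, doc_a))
print(cosine_similarity(query, doc_b))
```

The ranking only reflects how close the vectors are, which is why subtle, context-dependent relevance can be missed.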
Also, OpenAI’s embeddings are no longer the best available; many open-source models outperform them on a lot of tasks.
