... as long as ChatGPT isn't just making stuff up.

pix3l 1 month ago

If LLMs could just find a way to detect when something might not be backed by sources, that would go a long way toward improving trust. Google Search used to highlight where a source document matched your query; the same for LLM responses would be awesome.
