I’ve seen this with GPT-4. If I ask it to proofread text with errors, it consistently does a great job, but if I prompt it to proofread text without errors, it hallucinates them. It’s funny to see Microsoft having the same issue.
I’m pretty sure MS uses GPT-4 as the foundation of all their AI products, so it’s not surprising to see them hit the same issues. Funny, as you said, but not surprising.