AI can only go so far when evaluating new ideas, but the technology can convince people to surrender their good judgment and agree with incorrect conclusions.
it's become more of a confidence man than anything.
i've been testing various models for about 4 years now, and the conclusions are always the same: even when referencing online material, the accuracy is subpar compared to simply looking it up yourself.
it's good for certain tasks, but for verifying information you're better off using ai to find sources and then reading those sources yourself, rather than relying on the write-ups it offers.
an example: last night i came across a character in the game “kingdom come: deliverance 2” named the “odd trader.” after saving him from bandits, he offers to sell you various oddities, one of which is a “broken roman spear” that may or may not be the spear of destiny that pierced jesus christ.
GPT said that you could reforge the spear and that it was VERY powerful. when i asked it specifically how to do that, it said “oh sorry! turns out you can’t do that, it’s just an easter egg!”
this happens constantly, but many people who use it don’t verify the output and take it at face value, especially adamant defenders of AI, and of chatgpt in particular.
i know some fanboys will get pissy hearing this, and that’s okay. in order for a technology to improve, you have to acknowledge its flaws. that’s part of innovation.
also, i do believe AI will cause far more harm than good in the coming years if it advances the way people suggest it might, and this particular flaw is part of that harm. we can’t speed-run AI without severe risks.
The problem is that the quality of “just look it up” has seriously degraded these days (except on Wikipedia).
AI is a massive part of why search sucks now.