Why ChatGPT Is Not Reliable
I’ll start with the simple fact: ChatGPT is not a reliable answerer of questions.
Trying to explain why from scratch would be a heavy lift, but fortunately, Stephen Wolfram has already done the heavy lifting for us in his article, “What Is ChatGPT Doing… and Why Does It Work?” [1] In a PF thread discussing that article, I tried to summarize as briefly as I could the key message of Wolfram’s article. Here is what I said in my post there [2]:
ChatGPT does not make use of the meanings of words at all. All it is doing is generating text word by word based on relative word frequencies in its training data. It is using correlations between words, but that is not the same as correlations in the underlying information that the words represent (much less causation). ChatGPT literally has no idea that the words it strings together represent anything.
In other words, ChatGPT is not designed to actually answer questions or provide information. In fact, it is explicitly designed not to do those things, because, as I said in the quote above, it only works with words in themselves; it does not work with, and does not even have any concept of, the information that the words represent. And that makes it unreliable, by design.
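To make that concrete, here is a minimal toy sketch of generation driven purely by relative word frequencies. It is my own illustration, not code from Wolfram’s article or from OpenAI; a real LLM uses a neural network over long contexts rather than this tiny bigram table, but the core loop is the same: pick the next word from a probability distribution, with no representation of meaning anywhere.

```python
import random
from collections import defaultdict

# A toy "training corpus" (hypothetical, for illustration only).
training_text = "the cat sat on the mat and the cat ate the food".split()

# Count how often each word follows each other word (a bigram table).
follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(training_text, training_text[1:]):
    follow_counts[prev][nxt] += 1

# Generate text word by word. Note that at no point does this code
# consult what any word *means* -- only how often words co-occur.
word = "the"
output = [word]
for _ in range(6):
    candidates = follow_counts[word]
    if not candidates:
        break  # this toy corpus has dead ends; a real model does not
    # Sample the next word in proportion to how often it followed `word`.
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat"
```

The output can look perfectly fluent while being, in the relevant sense, contentless: the program has no access to the facts the words describe, only to the words themselves.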
So, to give some examples of misconceptions I have encountered: when you ask ChatGPT a question that you might think would be answerable by a Google search, ChatGPT is not doing that. When you ask ChatGPT a question that you might think would be answerable by looking in a database (as Wolfram Alpha, for example, does when you ask it something like “what is the distance from New York to Los Angeles?”), ChatGPT is not doing that. And so on, for any value of “question that you might think would be answerable by…”. The same is true if you substitute “looking for information in its training data” for any of the above: the fact that, for example, there is a huge body of Instagram posts in ChatGPT’s training data does not mean that, if you ask it a question about Instagram posts, it will look at those posts in its training data and analyze them in order to answer the question. It won’t. While there is, of course, voluminous information in ChatGPT’s training data for a human reader, ChatGPT does not use, or even comprehend, any of that information. Literally all it gets from its training data is relative word frequencies.
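The contrast with a lookup system can be sketched in a few lines as well. The table below is a hypothetical stand-in for the kind of curated database Wolfram Alpha consults (the mileage figure is illustrative); the point is that retrieval returns a stored fact or fails outright, whereas a pure language model has no table to consult at all and can only emit a statistically plausible word sequence.

```python
# Hypothetical stand-in for a curated fact database (figure is illustrative).
DISTANCES_MILES = {("New York", "Los Angeles"): 2451}

def lookup_distance(a: str, b: str) -> int:
    """Retrieval: return the stored fact, or fail loudly if it is absent."""
    return DISTANCES_MILES[(a, b)]

print(lookup_distance("New York", "Los Angeles"))  # 2451 -- retrieved, not generated
# lookup_distance("New York", "Atlantis") raises KeyError: no fact, no answer.
# A frequency-driven generator, by contrast, would happily produce *some*
# number either way, because it is not consulting facts in the first place.
```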
So why do ChatGPT’s responses look like they are reliable? Why do they look like they must be coming from a process that “knows” the information involved? Because our cognitive systems are built to interpret things that way. When we see text that is syntactically and grammatically correct and appears to be confidently asserting something, we assume it must have been produced, if not by an actual human, at least by an “AI” that is generating the text based on some kind of actual knowledge. In other words, ChatGPT fools our cognitive systems into attributing qualities to it that it does not actually have.
This security hole, if you will, in our cognitive systems is not a recent discovery. Human con artists have exploited much the same techniques throughout history. The only difference is that the human con artists were doing it intentionally, whereas ChatGPT has no intentions at all and does it as a side effect of its design. But the end result is much the same: let the reader beware.
[1] Stephen Wolfram, “What Is ChatGPT Doing… and Why Does It Work?”, https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/