In this episode, Thomas Domville introduces and demonstrates the official ChatGPT iOS app. Thomas walks through the app's interface, demonstrates some examples of how ChatGPT can be utilized, and shares tips to get the best results.
Note that this app is currently only available in the United States.
Your emphasis on checking answers from the AI chat is a very good idea. For example, the math solution given by the AI chat was incorrect: the number of dogs showing up to the show was incorrectly derived, making the ratio incorrect. According to the parameters given, the number of large dogs was much smaller than the number of small dogs, yet in the answer, the number of large dogs was actually larger than the number of small dogs. The number of small dogs signed up for the show is 192, and the number of large dogs is 56. After the adjustments for those not showing up and those added back in, the number of large dogs is 74 and the number of small dogs is 172. The ratio of small dogs to large dogs then becomes 2.3 to 1. It would be interesting to try regeneration on this to see whether the AI would solve it differently. This was a very interesting and informative podcast, as usual. I love listening to it. Keep up the good work! :-) (I should have put this through the AI chat. It probably would have been much better written :-))
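For what it's worth, the commenter's corrected figures can be sanity-checked with a couple of lines of Python (the 172 and 74 are taken directly from the comment above; the original word-problem parameters aren't shown here):

```python
# Figures from the comment above: after adjusting for dogs that
# dropped out or were added, 172 small dogs and 74 large dogs attend.
small_dogs = 172
large_dogs = 74

# Ratio of small dogs to large dogs, rounded to one decimal place.
ratio = small_dogs / large_dogs
print(f"{ratio:.1f} to 1")  # prints "2.3 to 1"
```

This matches the 2.3-to-1 ratio the commenter worked out by hand.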
What is the point of having it if you have to check every time!! I am curious whether the paid version is better about accuracy. If so, why?
An A.I. that is 100% accurate would be quite useful. To that end, you can ask ChatGPT to provide references, which you can check. It's also worth noting that developers are working hard to address the accuracy issue.
The web version of ChatGPT allows users to report inaccurate information, which means ChatGPT will become more accurate over time. I once asked it, "Who is Paul Martz?", and it spat out a biography that was utter make-believe, listing fictitious books I had allegedly authored and claiming I had founded several companies I had never heard of. It was a well-written biography, but it was total fiction. I reported it. Now, when I ask it "who is Paul Martz?", it says it has never heard of me. LOL.
It's worth noting that an A.I. does not have to be 100% accurate to be useful. We all use spelling and grammar checkers even though many of the issues they flag are invalid. Even an average chess computer can still teach me a few tricks. A self-driving A.I. might not be ready for the streets of Cairo, but it can still serve food at restaurants (do a web search for BellaBot). ChatGPT might not write an accurate letter of recommendation for my intern, but it gives me something to start with that I can modify.
From a more philosophical perspective, people aren't 100% accurate either, yet we interact with them all the time. The benefit comes from the interaction, which can spark new ideas and get us thinking along different lines. This is why we ask friends about tax issues, or car problems, or marital headaches - not because they are experts, but because we're looking for a sounding board. In its present state, ChatGPT fills this role very well.
Ultimately we must ask what the goal of A.I. is. If we (and by we I mean human society) are trying to create a perfect flawless intelligence, we have a ways to go. But if we're trying to reproduce human spoken interaction, then we're already quite close.
It's not about ChatGPT itself, but about whoever put that info into Google or Bing or any other search engine. Garbage in, garbage out. ChatGPT cannot tell you it's wrong unless they make it check and double-check.
Accurate information is always better.
I must admit I'm a bit confused why inaccurate information (called hallucination in the A.I. industry) is so hard for A.I. developers to eliminate. It seems that it would be fairly straightforward to have an A.I. check its own output to identify and correct inaccuracies before it displays it to the user. But, even in my software days, I never did much with A.I., so I'm sure the problem is more complex than it appears to be. Otherwise they would've solved this problem by now.
The reason for some of the inaccuracies, or "hallucinations" as they are called, is that LLMs (Large Language Models) like ChatGPT generate strings of words based on the statistics of which word or words are likely to appear next in a sentence, drawn from a large data set gathered by scouring the internet. Thus, the AI doesn't "know" what it is spouting out.
That being said, the reason this works is that if a string of text reads "five plus five equals", the data tells the AI that the most likely next word is "ten". Similarly, for sentences like "The cat sat on the", the most likely next words are things like "floor", "sofa", etc. Considering that this is the basic algorithm, it is incredible that these LLMs do as well as they do.
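The next-word statistics described above can be sketched as a toy word-pair model. This is only an illustration of the idea (the tiny corpus here is made up, and real LLMs use neural networks trained on vastly more data, not raw word counts):

```python
from collections import Counter, defaultdict

# A made-up miniature corpus for illustration only.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the mat . "
    "the dog sat on the mat ."
).split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Return the word that most often follows the given word."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("sat"))  # prints "on"
print(most_likely_next("on"))   # prints "the"
```

Picking the statistically most frequent follower is all this sketch does, yet it already "completes" sentences plausibly on its tiny corpus, which hints at why scaling the same basic idea up works as well as it does.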
They will surely continue to make their output more "accurate" as time goes by, but, as someone already pointed out in this thread, even humans aren't 100% accurate. Doing a Google search on a topic will produce lots of websites, including some that give incorrect, misleading, and/or contradictory results. As humans, it is our job to vet the results of our research no matter what tool we use.
Right now, AI seems to be able to gather up a large quantity of information and produce a good first draft of a result that summarizes the answer for the question asked. It can even clarify and expound upon the feedback when queried about the result. The human can then vet and edit the result to his/her taste and needs. So AI can be a good tool, but like any other tool, one has to know how to use it and for what purposes.