Dalija Prasnikar Posted January 5, 2023

40 minutes ago, Rollo62 said: Anyway, you have to check it out for yourself.

I already checked. It might work better in creative domains where there are no right or wrong answers, but then it is just a parrot that repeats someone else's thoughts.

40 minutes ago, Rollo62 said: AI developments and technology have been sleeping at a low level ever since, and only in the last few years have some significant, exponential changes happened in that field. I know the power of exponential growth and that it might show reasonable results soon. So many people nowadays are getting hot on AI and working on that topic; don't you agree we will see a real chat or human-like AI soon, by fixing the last 5%-10% of issues? I do, and I don't say that ChatGPT is the last step of the evolution, it's only the start.

When people talk about AI they like to focus on intelligence. But ChatGPT is just a language model with some corrective mechanisms on top. What does that mean? It means there is no intelligence involved. It is just a fancy "text completion" model that uses probability to determine which word should come next in a sentence. Even when the model grows and its probability analysis improves, it is still just dumb "text completion". It will never be able to reason about what it writes. And again, it may get better at avoiding ridiculous mistakes, but you are still stuck with the original issue: you cannot trust the information it gives you, because it may not be correct and you will not be in a position to realize that.
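To make the "probability-driven text completion" point concrete, here is a toy sketch in Python of next-word sampling. The probability table and phrases are invented purely for illustration; a real model such as ChatGPT derives its probabilities from billions of learned parameters and a much longer context, but the idea of picking each next word by likelihood alone, with no reasoning step, is the same.

import random

# Invented next-word probabilities (illustration only; a real model learns these
# from training data rather than having them written out by hand).
NEXT_WORD_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "compiled": 0.1},
    "cat sat": {"on": 0.8, "quietly": 0.2},
    "sat on": {"the": 0.9, "a": 0.1},
    "on the": {"mat": 0.7, "keyboard": 0.3},
}

def complete(prompt, max_words=4):
    words = prompt.split()
    for _ in range(max_words):
        context = " ".join(words[-2:])      # only the most recent context is considered
        dist = NEXT_WORD_PROBS.get(context)
        if dist is None:                    # no known continuation, stop
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])  # pick by probability
    return " ".join(words)

print(complete("the cat"))  # e.g. "the cat sat on the mat": plausible, not "understood"

The output reads plausibly only because probable words tend to form plausible sentences; nothing in the loop checks whether the result is true or makes sense.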
Anders Melander Posted January 5, 2023

3 hours ago, Rollo62 said: So many people nowadays are getting hot on AI and working on that topic; don't you agree we will see a real chat or human-like AI soon, by fixing the last 5%-10% of issues? I do, and I don't say that ChatGPT is the last step of the evolution, it's only the start.

I'm sorry, but I don't think you understand what ChatGPT is and what it is not. I think you see something that closely mimics a certain aspect of human behavior and interpret that as some level of intelligence. Well, it's not. It just appears that way, which is exactly what it was designed to do. Mission accomplished.
Lars Fosdal Posted January 5, 2023

I'd love an AI that would suggest improvements to the code that I write, instead of writing the code for me.
Sherlock Posted January 5, 2023

ChatGPT is way too verbose with its answers and takes forever to finally get to the point. I don't consider it fascinating, and I don't consider it to be an AI. It has had its moments, and those have been enough for one investor or another to inject money into the project or to actually assume they might be able to put it to use (Bing - imagine the loooong answers for a simple search). In short, ChatGPT is Eliza without the need to program responses; instead, the responses are stored in a DB filled by training.
Rollo62 Posted January 5, 2023

48 minutes ago, Anders Melander said: Well, it's not. It just appears that way, which is exactly what it was designed to do. Mission accomplished.

Yes, that's true, so what? I'm not an AI engineer working at OpenAI, so I am probably a complete AI noob. But I know enough about AI, neural nets, neural processors and backpropagation to form my own thoughts about it. At least I'm not alone in thinking that a kind of "AI consciousness" could possibly arise from a critical mass of "neurons" and data; such theories have, I think, existed since the 60s, put forward by renowned scientists as well. I only said that, from my point of view, ChatGPT and other AIs point clearly in the direction that this could be the case.

I might also add that of course an AI will never be able to mimic a human brain, since it works on completely different hardware and structures, but brain research and AI research have been combined heavily in the past years; there have been too many breakthroughs to count here. Would it make any difference whether an AI is perfectly "mimicking consciousness" or not, if the outcome is practically the same as that of a human brain's "consciousness"?

Moreover, I was pointing more to improving the usefulness of AI than to enforcing "AI consciousness", which is not very relevant to me if the output is good. The "AI consciousness" lies in the astonishing creativity of writing lyrics, painting images, making music; all this works astonishingly well, as you should be able to agree. Yes, it's only a clever tool; sometimes we see complete rubbish, but sometimes we can harvest some pearls. What's so wrong with that? If you can get the same output from some Delphi classes, then I will completely follow your words.

I'm off now; I see so much negative energy against this, instead of looking at the current AI possibilities and possible future optimizations. The future will tell.
Lars Fosdal Posted January 6, 2023

I think calling AI "AI" is still a misnomer. It is various facets of specialized ML. An actual self-organizing AI is very, very far in the future; a self-aware AI, even further.

16 hours ago, Rollo62 said: sometimes we see complete rubbish, but sometimes we can harvest some pearls. What's so wrong with that?

As the problem domain becomes more complex, it will be harder and harder to tell the rubbish from the pearls. That is a real problem. I like tools that I can rely on. That I can trust. Actual knowledge that reflects reality, not constructs generated by algorithms with a certain risk of failure.

Art generators like MidJourney are fun and useful, and I even subscribe for $10/month to be able to play around with it, but AI art also poses a risk as it undermines actual artists.

MidJourney prompt: "A computer programmer asks an AI to assist him in writing complex code, photography, ultrarealistic --v 4"

It looks great until you notice the glaring mistakes.
Fr0sT.Brutal Posted January 9, 2023

On 1/3/2023 at 2:29 PM, TigerLilly said: Restricting the question to living Austrians is funny enough:

So it claims Arnie is not living??? 😞
Nigel Thomas Posted January 15, 2023

Every time I've tried asking ChatGPT how to do something using ICS components, it answers with Indy samples. Has Remy paid for promoted listings on ChatGPT, so Indy results appear above ICS? <g>
Alberto Fornés Posted January 15, 2023

5 hours ago, Nigel Thomas said: Every time I've tried asking ChatGPT how to do something using ICS components, it answers with Indy samples. Has Remy paid for promoted listings on ChatGPT, so Indy results appear above ICS? <g>

If ChatGPT does that, we can say that it is the closest thing to human intelligence that we have seen so far.
FPiette Posted January 15, 2023

You can tell ChatGPT that it gave a bad answer and what is wrong in its answer. ChatGPT will then provide another answer. Sometimes you have to tell it several times that the answer is wrong before it finally gives the right one. That's funny.
dummzeuch Posted January 15, 2023

2 hours ago, FPiette said: You can tell ChatGPT that it gave a bad answer and what is wrong in its answer. ChatGPT will then provide another answer. Sometimes you have to tell it several times that the answer is wrong before it finally gives the right one. That's funny.

Ask it that very same question again and it will fall back to the same wrong answer as the first time, or possibly modify it slightly but still be wrong. I tried that several times with different questions from different domains (and within the same session, as it only "learns" within such a session).
FPiette Posted January 15, 2023

You have to tell it that the answer is wrong and why it is wrong, not ask the same question again. Each time you say it is wrong, you get a new answer, which finally is a correct one; at least that was the case when I asked for a program using ICS.
dummzeuch Posted January 15, 2023

1 hour ago, FPiette said: You have to tell it that the answer is wrong and why it is wrong, not ask the same question again. Each time you say it is wrong, you get a new answer, which finally is a correct one; at least that was the case when I asked for a program using ICS.

Yes, that's exactly what I did. And then I asked the first question again and got the same wrong answer again. So it doesn't "learn"; it simply adjusts the answer when you tell it it's wrong, and not for the actual question, but only as a reply to you telling it that the answer is wrong.

Question -> wrong answer -> "It's wrong, because ..." -> correct answer (maybe, otherwise rinse and repeat) -> original question -> original wrong answer
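To illustrate why the correction does not stick, here is a toy sketch in plain Python (no real API; ask_model is a hypothetical stand-in, not OpenAI's actual interface). The model's only "memory" is the conversation history that is sent along with each request, so a fresh session, or the original question on its own, reproduces the original wrong answer.

def ask_model(conversation):
    # Toy stand-in: returns the "wrong" answer unless a correction is present in
    # the history it was handed. A real model behaves analogously, just statistically.
    corrected = any(m["role"] == "user" and "that is wrong" in m["content"].lower()
                    for m in conversation)
    return "corrected answer" if corrected else "original (wrong) answer"

def chat_turn(history, user_text):
    history.append({"role": "user", "content": user_text})
    reply = ask_model(history)          # the resent history is the only "memory"
    history.append({"role": "assistant", "content": reply})
    return reply

session1 = []
print(chat_turn(session1, "How do I download a file with ICS?"))   # original (wrong) answer
print(chat_turn(session1, "That is wrong because ..."))            # corrected answer

session2 = []                                                        # new session, empty history
print(chat_turn(session2, "How do I download a file with ICS?"))   # original (wrong) answer again

Any improvement that persisted beyond the conversation would require changing the model itself (retraining or fine-tuning), which a chat session does not do.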