David Schwartz

TO ChatGPT: In Delphi, is there any kind of an adapter or class that takes a TList<T> and makes it look like a TDataSet?

Recommended Posts

5 hours ago, Stefan Glienke said:

I think we are down to the real issue here: one's own personality.

 

I'd rather filter out some ego-stroking to get good peer-reviewed advice than accept untested garbage, but I know people who are already offended when you say, "Sorry, but there is a slight possibility that you are not 100% correct here" (actually meaning: you could not be more wrong). People with a personality like that are usually the ones complaining about SO. What worries me is that their solution is to use completely unchecked AI-provided help, which will turn into the software that runs the world in the future.

I personally don't care. My experience is that most programmers tend to be fairly blunt about stuff anyway, which is how my mind works. 

 

But what you're saying points to the Achilles heel of ChatGPT, which is that it's egoless, and there's really nobody to blame. So you can cuss and fume and say whatever you want, and nobody is going to care. Over the years, people have used the Bible to support just about every side of every moral issue known to man, and more people have been slaughtered in the name of "god" over disputes springing from its various interpretations than over perhaps anything else in history. Nobody knows the true authors, and whoever they were, they lived 2,000 years ago and certainly aren't around to defend how their words are abused today! What's going to keep that from happening with material generated by AI tools like ChatGPT?

 

I've worked with interns over the years, and on a good day they're marginally better than ChatGPT at writing code. For demos and samples, that's fine, but most of the code "isn't ready for prime time", as they say. That doesn't mean it's not instructive.

 

For me, the greatest value I've seen in ChatGPT so far is that it can generate code and transform it (reliably) in ways that are beyond the ability of editors and regular expressions and would otherwise require some rather expert programming to implement. Said another way, it's good at generating what I call "plumbing" code, sort of what ETL covers.

 

It also seems useful for producing a reasonable starting point for implementing an algorithm -- it may not be 100% there, but it might take less time to finish than starting completely from scratch. (Of course, that's a matter of personal opinion, but it's a valid use-case.)

6 hours ago, Sherlock said:

@David Schwartz But that is exactly the point: ChatGPT just blurts out unchecked stuff that someone might naively use and, if it compiles ... ship it. No matter what happens then. And that is simply unacceptable and dangerous, no matter how fascinating (or, in my case, not) one may consider the subject.

So basically, ChatGPT and similar tools are the computer programming equivalent of Fox Entertainment Network?

 

It seems to work fine for Fox! It has made lots of people very rich telling lies, and the people who are lied to have no problem defending them with their lives. 

 

Sadly, there are programmers who cannot tell the difference any more than people who can't tell that pretty much everything talked about on Fox is a lie.

 

This only seems to argue for better and more disciplined unit testing -- unless you're working at an organization that will throw out the tests (votes?) if they don't give the results they want.
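To make that concrete: a minimal DUnitX-style sketch of the kind of gate I have in mind. The unit and function under test (CsvTools / CsvToJsonText) are purely hypothetical names used for illustration:

unit GeneratedCode.Tests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TGeneratedCodeTests = class
  public
    [Test]
    procedure EmptyInputYieldsEmptyJsonArray;
  end;

implementation

uses
  CsvTools; // hypothetical unit containing the code under test

procedure TGeneratedCodeTests.EmptyInputYieldsEmptyJsonArray;
begin
  // Whether the routine came from an intern or from ChatGPT, it has to
  // pass the same assertions before it goes anywhere near a release.
  Assert.AreEqual('[]', CsvToJsonText('')); // CsvToJsonText is hypothetical
end;

initialization
  TDUnitX.RegisterTestFixture(TGeneratedCodeTests);

end.

Whoever, or whatever, wrote the implementation, it doesn't ship until fixtures like this pass.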

14 hours ago, Lars Fosdal said:

ChatGPT is still ShitGPT, IMO.

 

It very much depends on what you want to do.
There are unlimited use cases and just because it can't do enough Delphi doesn't mean it's "shit".

For example, I often use it at the moment to optimise old JavaScript/PHP code and it saves a lot of time that way.

Edited by toms


The problem is: the more complex the question, the more complex the answer. As long as ChatGPT is only mostly right, most of the time, there is a high risk that an answer is a little wrong or a lot wrong. I will not waste time on solutions that might be partially correct or completely wrong, nor will I spend time researching the problems other people have with the shit they got from ChatGPT. When it gets better, we can discuss it again, but I suspect a better use of ML would be to find flaws or room for improvement in our own code. There is learning in the struggle to understand a new topic. Ready-made and wrong answers from an "inventive" "AI" are not good for learning.

  • Like 4


Wrong code generation or not, I don't think these discussions lead to anything :classic_rolleyes:

By now, everybody knows the current limits of ChatGPT.

Instead of poking at the failures, you should look for ways to make use of it and to break through those limits.

 

Take this fantastic use case of GPT: turning it into a "CoPilot for Delphi", programmed in just one hour.

https://www.youtube.com/watch?v=hnBKfrBHUIE
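For anyone curious about the plumbing behind such a tool, here is a rough sketch of how a single request could be sent from Delphi with THTTPClient. The endpoint, model name, and JSON layout are my assumptions about the public OpenAI chat API, not details taken from the video:

uses
  System.SysUtils, System.Classes, System.JSON,
  System.Net.URLClient, System.Net.HttpClient;

// Sends one prompt to the (assumed) chat completions endpoint and returns
// the first answer. The API key is read from an environment variable.
function AskGpt(const Prompt: string): string;
var
  Client: THTTPClient;
  Request, Msg: TJSONObject;
  Messages: TJSONArray;
  Body: TStringStream;
  Response: IHTTPResponse;
  Reply: TJSONValue;
begin
  Client := THTTPClient.Create;
  try
    // Build {"model": "...", "messages": [{"role": "user", "content": Prompt}]}
    Request := TJSONObject.Create;
    try
      Request.AddPair('model', 'gpt-3.5-turbo');
      Messages := TJSONArray.Create;
      Request.AddPair('messages', Messages);
      Msg := TJSONObject.Create;
      Msg.AddPair('role', 'user');
      Msg.AddPair('content', Prompt);
      Messages.AddElement(Msg);
      Body := TStringStream.Create(Request.ToJSON, TEncoding.UTF8);
    finally
      Request.Free;
    end;
    try
      Response := Client.Post('https://api.openai.com/v1/chat/completions', Body, nil,
        [TNetHeader.Create('Authorization',
           'Bearer ' + GetEnvironmentVariable('OPENAI_API_KEY')),
         TNetHeader.Create('Content-Type', 'application/json')]);
      Reply := TJSONObject.ParseJSONValue(Response.ContentAsString(TEncoding.UTF8));
      try
        // Pull choices[0].message.content out of the response.
        Result := Reply.GetValue<string>('choices[0].message.content');
      finally
        Reply.Free;
      end;
    finally
      Body.Free;
    end;
  finally
    Client.Free;
  end;
end;

Error handling, timeouts, and streaming are left out to keep the sketch short.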

 

That is the sort of thing I would expect from the experienced programmers in this community, instead of whining about this or that problem.

Isn't it our daily task to solve such algorithmic problems?

 

Whether you like it or not, the AIs will take over sooner than we can blink, and the coming versions will remove most of your concerns from today.

I see AI as a partner and tool for now, not as a replacement for coders, and it can help us with our routine work to keep our heads free for more challenging stuff.

From the Delphi community I see only moaning about any such new feature, just like the discussions about inline variables, instead of admitting what a fantastic achievement modern AI could soon be :classic_biggrin:

  • Confused 1


Again, I think most of you are missing the point. In any workplace scenario, most code written by inexperienced programmers is checked by more experienced programmers. Also, just because the code passes unit tests doesn't mean it's "acceptable" code, although I'm sure there are organizations that would disagree with that.

 

The point is ... ChatGPT seems to generate code somewhat on par with newbie programmers. There's a LOT of useful code it generates, and some that's wrong. It may not compile right off the bat, but that doesn't mean it's 100% useless. I can't speak for anybody else here, but while I know that there are lots of eyeballs on code posted to SO, most of it is short and STILL isn't something that can simply be copy-and-pasted into anything without spending time reviewing it and making it "fit in" with your existing code. That seems to be the "ideal" that everybody is holding up and saying, "Well, when it's as good at programming as I am, then I'll consider it worthwhile." I'd say, well bubba ... when it's THAT good, you and I will be out of work.

 

There are plenty of things on SO where nobody offered up a good solution, and ChatGPT can at least help with those.

 

Looking for stuff on SO is like going on a Treasure Hunt, and what you get is often no better than what you can get directly from ChatGPT. The thing is, ChatGPT is not just FASTER, but it will invariably get better over time.

 

I've hired supposedly "experienced" devs over the years to write relatively small and highly specific code units for me, and the quality of the code they sent me was mostly pretty low. Most, I'm assuming, couldn't even solve the problem I gave them b/c they just disappeared after a while.

 

There's also a problem I encounter a lot on SO (among other places) where I'll ask for a solution to X and end up with a bunch of suggestions to redefine my problem to solve Y instead because they don't understand X. At least ChatGPT answers the question without trying to redefine things! And know-it-alls on SO frequently downvote QUESTIONS they don't like and argue about whether they should even be asked. In fact, I find that MOST Delphi questions submitted lately have negative scores ON THE QUESTION ITSELF. This just screams out, "Don't post stupid questions here!"

 

ChatGPT is far less judgmental in that respect, which I think is a Good Thing.

 

 

  • Like 3

20 minutes ago, David Schwartz said:

I've hired supposedly "experienced" devs over the years to write relatively small and highly specific code units for me, and the quality of the code they sent me was mostly pretty low. Most, I'm assuming, couldn't even solve the problem I gave them b/c they just disappeared after a while.

Maybe because they did not want to look for the problem description within an entire novel?

 

21 minutes ago, David Schwartz said:

There's also a problem I encounter a lot on SO (among other places) where I'll ask for a solution to X and end up with a bunch of suggestions to redefine my problem to solve Y instead because they don't understand X.

https://en.wikipedia.org/wiki/XY_problem - addressing that is better than giving a solution to what has been asked. Granted the structure and the goal of SO are not well suited for that at times.

 

ChatGPT is like a device that gives you fish whenever you need some, but it won't teach you to fish. This leads to the dilemma I wrote about before: maybe the future will be one where all we know is how to ask good questions of an AI that solves the issues for us that we are no longer able to solve ourselves.

  • Like 1
  • Haha 1

10 hours ago, David Schwartz said:

There's also a problem I encounter a lot on SO (among other places) where I'll ask for a solution to X and end up with a bunch of suggestions to redefine my problem to solve Y instead because they don't understand X. At least ChatGPT answers the question without trying to redefine things! And know-it-alls on SO frequently downvote QUESTIONS they don't like and argue about whether they should even be asked. In fact, I find that MOST Delphi questions submitted lately have negative scores ON THE QUESTION ITSELF. This just screams out, "Don't post stupid questions here!"

Interesting. Could you post the link to your profile? I'm curious what I'd think about your questions.

You know, there's a rule I learned from my own experience: if you search Google for something and find nothing, you're likely trying to do something the wrong way. The same goes for questions.

12 hours ago, David Schwartz said:

There are plenty of things on SO where nobody offered up a good solution, and ChatGPT can at least help with those.

Right, ChatGPT is pretty much always immediately "on-topic" and can provide useful information in almost any case, even if your prompt is not that well designed.

Besides coding, there are many fields where this is highly useful.

You only have to sort the nonsense from the useful data, which is not that difficult if you don't look at the answers too naively.

This helps you find surrounding answers and ideas, or a direct solution. For example, I asked it to analyze the NATO alphabet: which of its terms are the most recognizable and pronounceable globally, and what it would propose to optimize them.

Playing around with it like this gives a lot of good insights, from which you can move further.

 

I never expect ChatGPT to write the perfect code to solve any sophisticated problem.

It can help a lot with routine work: convert this CSV to JSON, convert this C++ class to Delphi, summarize code pieces, scientific texts, or the like.

It is a text processor, pattern recognizer, dictionary, and more. And if you ask appropriate questions that play to these strengths, it does quite well, IMHO.
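To illustrate the first of those routine jobs, here is a naive CSV-to-JSON sketch in Delphi; it assumes the first line holds the column names and deliberately ignores quoting and escaping edge cases:

uses
  System.SysUtils, System.Classes, System.JSON;

// Naive CSV-to-JSON conversion: first line = column names, one JSON object
// per subsequent line.
function CsvFileToJson(const FileName: string): string;
var
  Lines, Header, Fields: TStringList;
  Rows: TJSONArray;
  Row: TJSONObject;
  I, J: Integer;
begin
  Lines := TStringList.Create;
  Header := TStringList.Create;
  Fields := TStringList.Create;
  Rows := TJSONArray.Create;
  try
    Header.StrictDelimiter := True;
    Header.Delimiter := ',';
    Fields.StrictDelimiter := True;
    Fields.Delimiter := ',';

    Lines.LoadFromFile(FileName);
    if Lines.Count > 0 then
      Header.DelimitedText := Lines[0];

    for I := 1 to Lines.Count - 1 do
    begin
      if Trim(Lines[I]) = '' then
        Continue;
      Fields.DelimitedText := Lines[I];
      Row := TJSONObject.Create;
      Rows.AddElement(Row);
      // Pair each header column with the corresponding field, if present.
      for J := 0 to Header.Count - 1 do
        if J < Fields.Count then
          Row.AddPair(Header[J], Fields[J]);
    end;

    Result := Rows.ToJSON;
  finally
    Rows.Free;
    Fields.Free;
    Header.Free;
    Lines.Free;
  end;
end;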


Tell ChatGPT to show you how to write a class with multiple inheritance in Delphi.

 

It will confidently show you how to do it.

 

Of course, you can't do it, and the code doesn't work.
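For reference, what Delphi actually supports is single class inheritance combined with any number of interfaces, which is the answer an experienced developer would give instead. A minimal sketch of that pattern (all names are illustrative):

type
  // Delphi classes can inherit from only one class, but may implement
  // any number of interfaces - the usual substitute for multiple inheritance.
  ISerializable = interface
    ['{8B9E2C1A-3F6D-4C2B-9A71-0D54E8F1A2B3}']
    function ToText: string;
  end;

  IValidatable = interface
    ['{A1D4F7C2-6E3B-4A9D-8C50-2B7E9F0D1C64}']
    function IsValid: Boolean;
  end;

  TCustomer = class(TInterfacedObject, ISerializable, IValidatable)
  private
    FName: string;
  public
    constructor Create(const AName: string);
    function ToText: string;
    function IsValid: Boolean;
  end;

constructor TCustomer.Create(const AName: string);
begin
  inherited Create;
  FName := AName;
end;

function TCustomer.ToText: string;
begin
  Result := 'Customer: ' + FName;
end;

function TCustomer.IsValid: Boolean;
begin
  Result := FName <> '';
end;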

 

How useful is this stuff when you have to already be an expert on the topic to make sure you aren't being fed complete BS?

 

Programmers ought to know better than to trust a massive auto-complete algorithm with any real work.

Edited by Brandon Staggs
  • Like 1

20 hours ago, David Schwartz said:

In any workplace scenario, most code written by inexperienced programmers is checked by more experienced programmers.

I don't buy this line of argument at all. ChatGPT will try its best to give you what you say you want, but it will try so hard that it will give you completely useless, time-wasting information. If I tell an intern to write a base class for me that uses multiple inheritance in Delphi, I would not expect to be given a unit full of plausible-looking code that cannot compile. I would expect to be told that what I asked for cannot be done in Delphi. ChatGPT fails that basic test. Who has time to check that kind of work product? Sticking with your workplace analogy, wouldn't you fire someone who blithely pretended to fulfil your request like that? I wouldn't put up with it.

  • Like 4


I don't know if it's just Delphi developers who are negative about ChatGPT, but it does seem fairly unreasonable.

 

Firstly, our IDE struggles to put begins and ends and other basic structures in place at present, yet a newish product in its early versions can be given an English statement about what you require and knock out code in multiple languages.

 

For example, you can ask it to write a class about a patient going to hospital, or ask it to show a Delphi example of how a function in curl might be coded.
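To give an idea of what such a prompt produces, the result is usually scaffolding along these lines (every name here is made up for illustration, not actual ChatGPT output):

uses
  System.SysUtils; // for Now

type
  TAdmissionStatus = (asWaiting, asAdmitted, asDischarged);

  // A simple hospital-patient class of the kind such a prompt tends to yield.
  TPatient = class
  private
    FName: string;
    FAdmittedOn: TDateTime;
    FStatus: TAdmissionStatus;
  public
    constructor Create(const AName: string);
    procedure Admit;
    procedure Discharge;
    property Name: string read FName;
    property Status: TAdmissionStatus read FStatus;
  end;

constructor TPatient.Create(const AName: string);
begin
  inherited Create;
  FName := AName;
  FStatus := asWaiting;
end;

procedure TPatient.Admit;
begin
  FStatus := asAdmitted;
  FAdmittedOn := Now;
end;

procedure TPatient.Discharge;
begin
  FStatus := asDischarged;
end;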

Sure, not every piece of code it outputs is perfect every time, but seriously, show me the previous system that was even close to this amazing, giant step in computing.

 

This breakthrough is at least as important as the internet and the iPhone were in changing the world.

 

If you watch this video and don't see how much this is going to impact business systems, something is wrong with you.

 

 

2 minutes ago, hsvandrew said:

If you watch this video and don't see how much this is going to impact business systems, something is wrong with you.

Oh, I see it, believe me. And I'm looking forward to making more money by fixing the stuff this so-called AI breaks. Nothing wrong there.

  • Haha 3


@hsvandrew There is no doubt "AI" (Machine Learning) will impact our work and business systems.

What's wrong with me is that I don't care for a deluge of "My AI-generated code doesn't work. Why?" posts.

  • Like 3
  • Thanks 1
  • Haha 1

56 minutes ago, Lars Fosdal said:

I don't care for a deluge of "My AI-generated code doesn't work. Why?" posts.

Especially when your own code doesn't work and you don't even know why, amirite? 😉

  • Haha 2

26 minutes ago, Stefan Glienke said:

Especially when your own code doesn't work and you don't even know why, amirite? 😉

For sure! Who needs an AI to f... things up, when I am perfectly capable of f...ing stuff up myself? 😄

  • Haha 3

3 hours ago, hsvandrew said:

I don't know if it's just Delphi developers who are negative about ChatGPT, but it does seem fairly unreasonable.

I don't see any language bias in this. The outlets I follow have lots of people trying to be rational about this and most of them probably don't know Delphi still exists, LOL.

 

I am impressed by these tools to be sure. But I also know what they are, and I also know that when I ask AI a non-trivial question that I already have expert knowledge about, I am surprised at how bad the answer is. So, I am extremely skeptical when it comes to using it for things I am NOT an expert on. Is that unreasonable?

 

Also, it's a simple fact that these are not any form of intelligence. The mainstream reporting on this issue has been mostly absurd, with far too much cheerleading or doomsaying. This is how these tools work:

 

https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work

 

TL;DR: it's autocomplete. Very complex and impressive autocomplete. When you ask ChatGPT a question, no entity is giving it consideration or thought, no abstract reasoning is occurring, no intuition is being exercised. An algorithm is scraping its database to decide which word should come next. A really complex one. Knowing how it works should make anyone skeptical about broadly generalized applications of the technology. I don't think I am being unreasonable at all.

  • Like 5


Finding tools for Delphi is not easy. 


Look at GitHub - out of some 27+ million public repositories, fewer than 3k are Delphi.

Not much material for the AIs to make patterns from.


GitHub Advanced Security has no tools that can be directly leveraged for Delphi.

 

On 3/22/2023 at 3:13 PM, Brandon Staggs said:

Knowing how it works should make anyone skeptical about broadly generalized applications of the technology. I don't think I am being unreasonable at all.

Knowing that this AI works pretty well for pattern-related tasks, not logic-related tasks, it should be pretty clear where its strengths and weaknesses are.

 

I have the impression that many people might not even be satisfied with AI if it showed abilities like the "Terminator".

They were eagerly dreaming of "T-1000" abilities instead, in order to dismiss the current achievements, while completely ignoring what's knocking at their front door :classic_biggrin:

 

[Two images: "Available now ..." vs. "Desired future ..."]

On 3/22/2023 at 9:13 AM, Brandon Staggs said:

 

Also, it's a simple fact that these are not any form of intelligence.

Reality.


@Rollo62 I think we can say without doubt that none of us wants an AI like the one in the Terminators.

 

I do want AI that is accurate and reliable for the areas that it is applied to.

On 3/24/2023 at 8:24 AM, Lars Fosdal said:

@Rollo62 I think we can say without doubt that none of us wants an AI like the one in the Terminators.

 

I do want AI that is accurate and reliable for the areas that it is applied to.

If you don't want to follow my vision of AI capabilities and their usage, maybe Stephen Wolfram can convince you.

More info here.

He seems to have integrated ChatGPT with Wolfram Alpha quite recently; I haven't checked it out yet, but I definitely will.

This may close the gap left by ChatGPT's missing logical part, if it can request hard facts from Wolfram Alpha.

I see the current capabilities and the future evolution in much the same way: these kinds of "pattern recognizers" may explore the huge and probably infinite space of algorithmic rules and patterns.

Finding patterns that have not been found yet, making use of the "unknown unknowns" for mankind, as Donald Rumsfeld would probably say.
Or, as James T. Kirk would put it: "To boldly go where no man has gone before."

The only question is whether such findings would be good or bad for mankind, but that's another story. We will see it evolve anyway.

 

At least I see this whole process as very positive, of course with the necessary respect and caution.

Edited by Rollo62

