David Schwartz

TO ChatGPT: In Delphi, is there any kind of an adapter or class that takes a TList<T> and makes it look like a TDataSet?


New technology always comes with a lot of "noise". If you're not willing to tolerate that "noise", then you can just stand by and wait while others master the underlying technology and simultaneously figure out how to reduce the "noise level".

On 3/25/2023 at 2:26 PM, David Schwartz said:

... wait while others master the underlying technology ...

Sure, but here is what I keep coming back to: isn't mastering this kind of technology part of our daily work?

Why wait, and what for?
I have already seen a lot of very interesting use cases out there, beyond asking it to solve coding problems. Of course, we can wait until others take over this job, like the Python community :classic_biggrin:

I'm just missing a bit of constructive dialog here about how to make the best use of such AI; instead, many people seem to ask the AI to solve problems it cannot solve (yet).


/off-topic


Me: I need help defusing a bomb

AI: What kind of bomb?

Me: posts a series of photos of the bomb

AI: Cut the red wire ...

*BOOM*

AI: ... after cutting the green wire



Also, on the term "AI":

Quote

Stefano Quintarelli, a former Italian politician and technologist came up with another alternative, “Systemic Approaches to Learning Algorithms and Machine Inferences” or SALAMI, to underscore the ridiculousness of the questions people have been posing about AI: Is SALAMI sentient? Will SALAMI ever have supremacy over humans?

Uncovered in https://www.bloomberg.com/opinion/articles/2023-03-26/even-with-chat-gpt-4-there-s-no-such-thing-as-artificial-intelligence

4 hours ago, Lars Fosdal said:

/on-topic

This is an application of "AI" in ways that I can like:

https://visualstudiomagazine.com/articles/2023/03/23/vs-ai.aspx

This looks like the same stuff everybody above is saying is too error-prone and therefore should be totally avoided. 

 

Most code I write seems to be what I call "plumbing code" -- moving data back and forth between object A and object B -- and it's boring, repetitive, and very mechanical. ChatGPT has done a great job writing examples of code like this for me. But like all coders, it's not 100% accurate 100% of the time. Earlier commenters in this thread assert that we shouldn't be using it at all if its work constantly needs to be checked, and that we should just crank out the same mundane code ourselves. I find this type of coding neither fun nor creative.
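To make concrete what I mean by "plumbing code", here is a made-up Delphi example (TCustomer and TCustomerDTO are hypothetical classes, invented purely to illustrate the pattern):

procedure CopyCustomerToDTO(Source: TCustomer; Dest: TCustomerDTO);
begin
  // Nothing clever here: shovel each field across, one line per property.
  Dest.ID        := Source.ID;
  Dest.FirstName := Source.FirstName;
  Dest.LastName  := Source.LastName;
  Dest.Email     := Source.Email;
  Dest.City      := Source.City;
  // ...and so on, field by field, for however many properties there are.
end;

It's exactly this kind of mechanical, easily described code that ChatGPT tends to get right, or close enough that fixing it is faster than typing it from scratch.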

 

But thanks for pointing out an article that basically reinforces what I said earlier. It's sad that so many people expect 100% perfection before they'll even consider anything like this. 

 

 

 

Edited by David Schwartz


@David Schwartz There is no such thing as 100% perfection.

 

Did you actually read the article - or did you just quote the wrong link?

 

There is nothing about AI being error prone, AI mistakes, or AI immaturity in that article in the quoted link.

It is about AI that is proven to work well for code.

 

On another note:

I also hear that Copilot X is quite a bit better than the initial Copilot.

 

Also - this discussion appears to have two legs:

- generative AI on the path to general AI (where I disagree)

- generative AI applied to code (where I see some benefits, but a lot of pitfalls - particularly for languages with a small sample base)

 

I don't like mundane code or scaffolding either, so I write generic frameworks to minimize that sort of code.
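As a rough sketch of what I mean by a generic framework (the unit and class names here are mine, invented for this post, not from any shipping library), a small RTTI-based helper can replace most of the property-by-property plumbing:

unit ShallowCopy;

interface

type
  TShallowCopy = class
  public
    // Copies every readable source property to a writable destination
    // property with the same name and type.
    class procedure CopyProperties(Source, Dest: TObject);
  end;

implementation

uses
  System.Rtti;

class procedure TShallowCopy.CopyProperties(Source, Dest: TObject);
var
  Ctx: TRttiContext;
  DstType: TRttiType;
  SrcProp, DstProp: TRttiProperty;
begin
  Ctx := TRttiContext.Create;
  try
    DstType := Ctx.GetType(Dest.ClassType);
    for SrcProp in Ctx.GetType(Source.ClassType).GetProperties do
    begin
      if not SrcProp.IsReadable then
        Continue;
      DstProp := DstType.GetProperty(SrcProp.Name);
      if (DstProp <> nil) and DstProp.IsWritable and
         (DstProp.PropertyType.Handle = SrcProp.PropertyType.Handle) then
        DstProp.SetValue(Dest, SrcProp.GetValue(Source));
    end;
  finally
    Ctx.Free;
  end;
end;

end.

With something like that in place, each hand-written copy routine collapses to a single TShallowCopy.CopyProperties(Source, Dest) call. Reference-typed properties are copied as references, hence the name.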

2 hours ago, Lars Fosdal said:

Also - this discussion appears to have two legs:

- generative AI on the path to general AI (where I disagree)

- generative AI applied to code (where I see some benefits, but a lot of pitfalls - particularly for languages with a small sample base) 

Define "to code":

Wouldn't tasks like generating SQL, summarizing a class or function, generating a simple class template, generating unit test cases, proposing possible OS APIs for a certain task, modifying CSV or JSON, and translating from C++ to Delphi fall under your definition?

None of those tasks were 100% error-free for me, but they can still be somewhat helpful and time-saving.

Edited by Rollo62

1 minute ago, Rollo62 said:

Define "to code":

More or less what you write - but as mentioned - the smaller the sample base, the lower the chance of actual working code.

32 minutes ago, Lars Fosdal said:

More or less what you write - but as mentioned - the smaller the sample base, the lower the chance of actual working code.

That's why I use widely adopted programming languages instead of Delphi, as I get good outputs.

Edited by toms

4 hours ago, Lars Fosdal said:

@David Schwartz There is no such thing as 100% perfection.

 

Did you actually read the article - or did you just quote the wrong link?

 

There is nothing about AI being error prone, AI mistakes, or AI immaturity in that article in the quoted link.

It is about AI that is proven to work well for code.

 

Yes, I did read the article. I was referring to all the posts made earlier in this thread about how ChatGPT isn't 100% perfect and therefore it's a waste of time to even think about using it -- at least, that's my interpretation.

 

If this tool uses ChatGPT, then I assume the complaints made earlier about how error-prone ChatGPT is (and AI in general) will apply equally to any tool that uses it.

 

I started this discussion by showing some code that, while not impeccable, had a lot of useful stuff in it. I suspect it's possible to get the same nonsense out of the tool mentioned in that article unless the inputs are highly constrained.

 

ChatGPT happily accepts unconstrained input, and seems to make unfounded inferences that it uses to create nonsense output. Kinda like what most people I know do from time to time.

 

But if the argument is that AI-based systems should guard against stupid questions in order to avoid spitting out nonsense answers, then they will never give accurate answers 100% of the time. Nor will people ... but that's another story.

 

I didn't write ChatGPT; I just entered some queries, got back some stuff I found interesting, and shared it here. The folks here say it's worthless nonsense and that they don't want to have to worry about double-checking the quality of its responses.

 

I'd like to remind everybody that it's possible to write LOTS of code in Delphi that will compile, and might even appear to make sense, but will give random and often unpredictable results, and sometimes no results at all. Not to mention the long list of documented bugs in Delphi itself, which sit there like land mines that periodically produce unexpected results at best and run-time errors at worst.
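A trivial made-up example of the kind of thing I mean (written by me just to illustrate, not taken from ChatGPT): this compiles with nothing worse than a compiler warning, looks plausible, and returns whatever garbage happened to be in an uninitialized local:

function SumFirstN(const Values: array of Integer; N: Integer): Integer;
var
  Total: Integer;
  I: Integer;
begin
  // Total is never initialized, so the sum starts from stack garbage,
  // and nothing guards against N being larger than Length(Values).
  for I := 0 to N - 1 do
    Total := Total + Values[I];
  Result := Total;
end;

Humans write this kind of thing all the time; the compiler accepts it, and the results are random.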

 

 

On 3/27/2023 at 4:29 PM, Lars Fosdal said:

More or less what you write - but as mentioned - the smaller the sample base, the lower the chance of actual working code.

Yes and no. I think it also depends a lot on the "pattern-ness" of the structures; JSON and XML, for example, have pretty clear rules.

Those should be easy for an AI to analyze and generate, even with a smaller training base, whereas anything involving real "logic", like a programming language or mathematics, I don't see as really solvable by AI yet.

That said, I think the training base for such files is huge as well.

 

I recently had a very good example of where ChatGPT can excel: I had to translate official EU terms into different languages, and I wanted to keep the meaning and wording as close as possible to the official national sources.

The translations from Google/DeepL were not that convincing, even after many tries; they always fell back to the same 2-3 variants that I DIDN'T want.


This process worked out pretty well with ChatGPT, although it took me about 20 cycles to get clear and final on every aspect of the translation.

I could pretty much change parts of the sentences and re-analyze.

ChatGPT was able to follow my intention very well, even across 2-3 cycles, and it helped to analyze, translate, and explain the results, knowing the differences and nuances between official language and everyday language.

 

I really think that I was able to find the best possible translations with the help of ChatGPT.

You can still say that AI is stupid if it doesn't deliver a result within 1-3 cycles, but I also know how this process works with traditional human translation offices.

Human translators can never follow my intention 100% either, especially when the technical topics are outside their usual repertoire.

There I clearly see the advantage of the huge AI repository: from Shakespeare and official legal terms to slang and dialects, ChatGPT is at home in all of them.

 

This is why I'm pretty sure that simpler structures with only limited logic, like SQL, JSON, XML, CSV, etc., will work out very well too.

I never expect a perfect result in the first cycle, nor do I when asking the same of a colleague.

Only after a few cycles of explaining the background and goals might a colleague produce the desired result (at least the colleagues I know 🙂).

Edited by Rollo62


 

ChatGPT is basically an "infant" at the moment. It'll only get better, and there will probably be systems trained specifically to do particular tasks and be very good at them. I think it's always been pitched as augmentation, not a blind "Here, I'll do everything for you". Plus, by using it and generating code, we're actually helping to train the thing.

 

Oh, what happened to Unit Testing? Wouldn't you know what you're expecting from ANY code and write unit tests to verify it? I read the first 15 or so responses (maybe I missed it) and no one mentioned unit tests.

 

But I get it: if you don't like it, don't use it. I'm guessing at some point we all will, though (or another company's version of it).

 

Same argument as for Stack Overflow: if someone hurts your feelings, don't use it.

 

All news sources are biased. 

 

 

 

 

 

Edited by Curt Krueger

12 hours ago, Curt Krueger said:

Oh, what happened to Unit Testing? Wouldn't you know what you're expecting from ANY code and write unit tests to verify it? I read the first 15 or so responses (maybe I missed it) and no one mentioned unit tests.

I personally think that ChatGPT is not great at complex coding, but it can probably be a very helpful tool for creating unit tests, because that is usually more routine work with several test cases.

Many programmers dislike writing unit tests because they think it is too much work and not worth the effort.

I think ChatGPT could be a great help for creating unit tests too, because such code is typically not too complex, and it would probably convince more and more people to make use of unit tests in the first place :classic_wink:.
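Just to illustrate the kind of routine code I mean, here is roughly the shape such a generated test could take with DUnitX (TCalc and its Add/Divide methods are hypothetical, invented only for this example):

unit Calc.Tests;

interface

uses
  DUnitX.TestFramework;

type
  [TestFixture]
  TCalcTests = class
  public
    [Test]
    procedure Add_TwoPositives_ReturnsSum;
    [Test]
    procedure Divide_ByZero_RaisesException;
  end;

implementation

uses
  System.SysUtils,
  Calc; // hypothetical unit under test

procedure TCalcTests.Add_TwoPositives_ReturnsSum;
begin
  Assert.AreEqual(5, TCalc.Add(2, 3));
end;

procedure TCalcTests.Divide_ByZero_RaisesException;
begin
  Assert.WillRaise(
    procedure
    begin
      TCalc.Divide(1, 0);
    end,
    EDivByZero);
end;

initialization
  TDUnitX.RegisterTestFixture(TCalcTests);

end.

It's mostly naming, attributes, and assertions: exactly the routine, low-risk part that an AI assistant can draft and a human can review at a glance.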

Edited by Rollo62

5 hours ago, Rollo62 said:

I personally think that ChatGPT is not great at complex coding, but it can probably be a very helpful tool for creating unit tests, because that is usually more routine work with several test cases.

Many programmers dislike writing unit tests because they think it is too much work and not worth the effort.

I think ChatGPT could be a great help for creating unit tests too, because such code is typically not too complex, and it would probably convince more and more people to make use of unit tests in the first place :classic_wink:.

I have no experience with ChatGPT writing unit tests, but there have been other approaches aimed explicitly at unit tests that are based on static code analysis, because then the tool knows exactly what code to write to exercise all possible paths.

For Delphi land, however, that is utopia, since the tools in these areas typically don't know anything about Delphi/Pascal.


OK, just for fun I tried to create some unit tests for a random file I found on the web.

Since this is getting a bit off-topic, I opened a new thread:

https://en.delphipraxis.net/topic/8688-try-chatgpt-for-creating-unit-tests/

 

The result after only 2 cycles is quite complete and would save some time, I think.

I think there are not many tools out there at the moment that can do that, and this even works for Delphi 🙂

 

 

