Maxidonkey

[Info & Feedback] DelphiGenAI v1.1.0 — Sharing Updates and an Educational Approach


Posted (edited)

Hi everyone,

I wanted to share the main updates in version 1.1.0 of DelphiGenAI (an OpenAI wrapper for Delphi), mainly to offer what I can to the community, and maybe get some feedback if anyone feels like weighing in on my approach.

 

Main Updates
Compatibility with the Latest OpenAI Features (including Remote MCP & Code Interpreter)
Ready-to-Use Templates

  •  Two archives (TestGenAI_VCL.zip and TestGenAI_FMX.zip) let you copy and paste any snippet from the documentation and test it out, with no complicated setup.

Variety of Code Snippets

  •  For each feature: synchronous, asynchronous (event/callback), and asynchronous with async/await.
  •  The idea is to allow everyone to compare the approaches and pick the one that fits them best.
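
For readers who have not seen the three styles side by side, here is a rough sketch of what they might look like for a simple chat call. The method and type names (`Client.Chat.Create`, `AsyncCreate`, `AsyncAwaitCreate`, `TChatParams`, `TAsyncChat`) are illustrative approximations of the wrapper's style, not its exact surface; the ready-to-use templates and documentation contain the real snippets.

```delphi
// Illustrative only - names approximate the wrapper's conventions;
// check the DelphiGenAI documentation for the real signatures.

// 1) Synchronous: blocks the calling thread until the answer arrives.
var Chat := Client.Chat.Create(
  procedure (Params: TChatParams)
  begin
    Params.Messages([TMessagePayload.User('Hello')]);
  end);
try
  Memo1.Lines.Add(Chat.Choices[0].Message.Content);
finally
  Chat.Free;
end;

// 2) Asynchronous (event/callback): returns immediately; the UI stays
//    responsive and the result arrives later in OnSuccess.
Client.Chat.AsyncCreate(
  procedure (Params: TChatParams)
  begin
    Params.Messages([TMessagePayload.User('Hello')]);
  end,
  function : TAsyncChat
  begin
    Result.OnSuccess :=
      procedure (Sender: TObject; Chat: TChat)
      begin
        Memo1.Lines.Add(Chat.Choices[0].Message.Content);
      end;
  end);

// 3) Async/await (promise-based): reads top-to-bottom like the
//    synchronous form, but never blocks the UI thread.
Client.Chat.AsyncAwaitCreate(
  procedure (Params: TChatParams)
  begin
    Params.Messages([TMessagePayload.User('Hello')]);
  end)
  .&Then<string>(
    function (Chat: TChat): string
    begin
      Result := Chat.Choices[0].Message.Content;
      Memo1.Lines.Add(Result);
    end);
```

The trade-off in a nutshell: the synchronous form is the simplest to read but freezes the UI, the callback form keeps the UI alive at the cost of inverted control flow, and the promise form tries to give you both.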

Documentation That’s Directly Usable

  •  All markdown examples can be tested as-is within the supplied templates.

A Gradual Learning Path

  •  The aim: make it easier to learn about AI in Delphi, whatever your starting level, and to keep things as accessible as possible.

 

About This Approach

I’m not an experienced “Delphi old-timer” and I don’t know the community very well yet; that’s why I chose to offer several approaches (synchronous, asynchronous, and async/await) for each example.
But maybe that’s not really necessary—maybe some would prefer strictly asynchronous, or even just synchronous?
I’d be interested in your thoughts or experiences on this (no pressure—just curious, and trying to do what's most helpful).

 

Also Tried on Other Wrappers

I’ve used the same principle in a Delphi wrapper for DeepSeek. Depending on the feedback I get here, I’ll decide whether to take this approach with a few other wrappers I’ve put up on GitHub.

 

Thanks in advance for reading, and best wishes to all.

Edited by Maxidonkey
Hi. Two comments here:
 

1. Library description

Many libraries use big words to describe what they do. In the end you don't get it unless you invest some time digging into the code or documentation.
So maybe you can "dumb down" the project description a bit. Something like: "if you use this library in your Delphi project, you will be able to xyz", where xyz is the most important feature of your library. It's like when you go to the store and see a can and you instantly know that inside is tomato soup.

Then add the rest of the "complicated" description 🙂 

 

PS: I read the documentation and now I think I get it.  But it took a while.

 

2.  asynchronous

In general, when I use the AI, I ask a question and then wait for the answer.
Is there something useful you can do in asynchronous mode while waiting for the answer from the AI? Maybe starting a second request?
If not, then there is no point in using asynchronous mode.

 

Great library!
Thanks for sharing.

Posted (edited)

Thank you very much for your detailed feedback and suggestions.

 

  • Simplified description of the wrapper:

 

I understand your point about the clarity of the project description. It’s not easy to explain simply, especially since my main background is in mathematics, and both coding and documentation are not my core activities. The DelphiGenAI project provides a “wrapper” to simplify working with OpenAI’s APIs in Delphi—in other words, it lets you easily integrate features like ChatGPT or DALL-E into your Delphi projects without having to deal with the technical complexity of API calls yourself.
I’m definitely open to any ideas or help that could make this explanation even clearer or more engaging for the community.

 

  • Asynchronous: Basic and advanced use cases 

 

Regarding the asynchronous aspect, there are two main uses:

– The “simple” asynchronous mode is for launching a single task that doesn’t require immediate interaction or feedback from the user. This avoids blocking the application, but it’s somewhat limited in scope.
– For more advanced scenarios, the use of promises, also provided in the library, enables true orchestration: chaining multiple requests or operations smoothly and without blocking, as illustrated in the File2knowledge project, where several operations are chained together during file registration for RAG.
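
As a hedged sketch of what that orchestration looks like (the identifiers below are illustrative; the two linked units contain the real code), a promise chain lets the second request start only once the first has resolved, without ever blocking the UI thread:

```delphi
// Illustrative promise chain: upload a file, then register it in a
// vector store for RAG. Names are approximations of the wrapper's
// style, not the exact API of the linked units.
Client.Files.AsyncAwaitUpload(UploadParams)
  .&Then<TVectorStoreFile>(
    function (UploadedFile: TFile): TPromise<TVectorStoreFile>
    begin
      // This second request is issued only after the upload resolves.
      Result := Client.VectorStoreFiles.AsyncAwaitCreate(StoreId, UploadedFile.Id);
    end)
  .&Catch(
    procedure (E: Exception)
    begin
      ShowMessage('Registration failed: ' + E.Message);
    end);
```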

 

Refer to these units: https://github.com/MaxiDonkey/file2knowledge/blob/main/providers/Provider.OpenAI.FileStore.pas

and https://github.com/MaxiDonkey/file2knowledge/blob/main/providers/Provider.OpenAI.VectorStore.pas

 

Thanks again for your feedback. Any help with wording or contributions to the clarity of the project are most welcome.

Edited by Maxidonkey


I have just updated the DelphiGenAI wrapper to support the latest OpenAI APIs. Below is the list of changes made compared to version 1.1.0.
These enhancements are designed to fully leverage the capabilities of the GPT-5 model while ensuring optimal compatibility with previous models.

You can find the complete repository here: https://github.com/MaxiDonkey/DelphiGenAI

 

 

Version 1.2.0

 

JSON Normalization Before Deserialization

  • New GenAI.API.Normalizer module (TJSONNormalizer, TWrapKind, TNormalizationRule) to unify polymorphic fields (e.g., string vs. object).

  • Direct integration into the HTTP layer: new Get(..., Path) | Post(..., Path) overloads enabling targeted normalization of a JSON subtree before object mapping.
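
To illustrate the kind of payload this targets (only `TJSONNormalizer` and the `Path` overloads are named above; everything else in this sketch is an assumption): some API fields arrive either as a bare string or as an object, and normalizing the string form into the object form before deserialization lets a single Delphi class map both shapes.

```delphi
// Conceptual sketch - the JSON shapes being unified, not the module's
// real rule syntax.
//
//   before normalization:  { "content": "hello" }
//   after normalization:   { "content": { "type": "text", "text": "hello" } }
//
// With the new HTTP overloads, normalization can be aimed at one subtree
// instead of the whole document (path value is illustrative):
//
//   Response := API.Post<TResponse>('responses', Params, 'output.content');
```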

 

Canceling Background Requests

  • New Responses.AsyncAwaitCancel(response_id) method to cancel an asynchronous response (background = true), with full callback support (OnStart, OnSuccess, OnError).
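
A sketch of what a cancellation call might look like in practice. The method name and the `OnStart`/`OnSuccess`/`OnError` callbacks come from the changelog entry above; the callback-holder type and the surrounding plumbing are illustrative assumptions.

```delphi
// Cancel a background response (background = true) by its id.
Client.Responses.AsyncAwaitCancel(ResponseId,
  function : TAsyncResponse   // hypothetical callback-holder type
  begin
    Result.OnStart :=
      procedure (Sender: TObject)
      begin
        StatusBar1.SimpleText := 'Cancelling...';
      end;
    Result.OnSuccess :=
      procedure (Sender: TObject; Response: TResponse)
      begin
        StatusBar1.SimpleText := 'Cancelled: ' + Response.Id;
      end;
    Result.OnError :=
      procedure (Sender: TObject; const Error: string)
      begin
        StatusBar1.SimpleText := 'Cancel failed: ' + Error;
      end;
  end);
```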

 

Streaming Enhancements

  • Extended typed coverage for streaming events and outputs (MCP, Code Interpreter, Image Generation, etc.) via new Responses.OutputParams classes (TResponseOutput*, TResponseImageGenerationTool, TResponseCodeInterpreter, etc.).

 

New Types and Parameters

  • InputParams: full coverage for computer interactions, local shell, MCP, web search, code, image generation, reasoning, text/JSON formats, tool choice/hosted tool, and file search filters.

  • OutputParams: states (Created, InProgress, etc.), events (Added, Delta), usage metrics, and statistics.

  • New enums: TOutputIncluding, TReasoningGenerateSummary, TFidelityType, etc.

 

API v1/chat/completions

  • New parameters:

    • prompt_cache_key (prompt caching)

    • safety_identifier (stable ID for safety monitoring)

    • verbosity (low / medium / high)
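
The JSON field names above are the ones listed in the changelog; the Delphi-side setters in this sketch are assumptions about how they would surface in `TChatParams`:

```delphi
// Hypothetical setter names - the underlying JSON fields
// (prompt_cache_key, safety_identifier, verbosity) are from the changelog.
Client.Chat.Create(
  procedure (Params: TChatParams)
  begin
    Params.Model('gpt-5');
    Params.Messages([TMessagePayload.User('Summarize this report.')]);
    Params.PromptCacheKey('my-app:summaries');   // reuse cached prompt prefixes
    Params.SafetyIdentifier('user-12345-hash');  // stable ID for safety monitoring
    Params.Verbosity('low');                     // low / medium / high
  end);
```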

 

API v1/responses

  • New parameters:

    • max_tool_calls

    • prompt (template reference via TPromptParams)

    • prompt_cache_key, safety_identifier

    • stream_options, top_logprobs, verbosity

 

Structured System and Developer Messages

  • New overloads:

    • TMessagePayload.Developer(const Content: TArray; const Name: string = '')

    • TMessagePayload.System(const Content: TArray; const Name: string = '')

  • Improves parity between plain text and structured content flows.

 

