Leaderboard


Popular Content

Showing content with the highest reputation on 05/06/24 in all areas

  1. Uwe Raabe

    Slow rendering with SKIA on Windows

    Looks similar to this report: https://quality.embarcadero.com/browse/RSP-42714
  2. Dllama, a simple and easy-to-use library for doing local LLM inference directly from Delphi (any language with bindings). It can load GGUF-formatted LLMs into CPU or GPU memory. Uses a Vulkan back end for acceleration.

     Simple example:

     uses
       System.SysUtils,
       Dllama,
       Dllama.Ext;
     var
       LResponse: string;
       LTokenInputSpeed: Single;
       LTokenOutputSpeed: Single;
       LInputTokens: Integer;
       LOutputTokens: Integer;
       LTotalTokens: Integer;
     begin
       // init config
       Dllama_InitConfig('C:\LLM\gguf', -1, False, VK_ESCAPE);

       // add model
       Dllama_AddModel('Meta-Llama-3-8B-Instruct-Q6_K', 'llama3', 1024*8,
         '<|start_header_id|>%s %s<|end_header_id|>', '\n assistant:\n',
         ['<|eot_id|>', 'assistant']);

       // add messages
       Dllama_AddMessage(ROLE_SYSTEM, 'you are Dllama, a helpful AI assistant.');
       Dllama_AddMessage(ROLE_USER, 'who are you?');

       // display the user prompt
       Dllama_Console_PrintLn(Dllama_GetLastUserMessage(), [], DARKGREEN);

       // do inference
       if Dllama_Inference('llama3', LResponse) then
       begin
         // display usage
         Dllama_Console_PrintLn(CRLF, [], WHITE);
         Dllama_GetInferenceUsage(@LTokenInputSpeed, @LTokenOutputSpeed,
           @LInputTokens, @LOutputTokens, @LTotalTokens);
         Dllama_Console_PrintLn('Tokens :: Input: %d, Output: %d, Total: %d, Speed: %3.1f t/s',
           [LInputTokens, LOutputTokens, LTotalTokens, LTokenOutputSpeed],
           BRIGHTYELLOW);
       end
       else
       begin
         Dllama_Console_PrintLn('Error: %s', [Dllama_GetError()], RED);
       end;

       Dllama_UnloadModel();
     end.
  3. pyscripter

    New ChatLLM application.

    I have created a new Delphi application called ChatLLM for chatting with Large Language Models (LLMs). Its primary purpose is to act as a coding assistant.

    Features:
    - Supports both cloud-based LLM models (ChatGPT) and local models using Ollama.
    - Supports both the legacy completions and the chat/completions endpoints.
    - The chat is organized around multiple topics.
    - Can save and restore the chat history and settings.
    - Streamlined user interface.
    - Syntax highlighting of code (Python and Pascal).
    - High-DPI awareness.

    The application uses the standard HTTP client and JSON components from the Delphi RTL and can be easily integrated into other Delphi applications. You do not need an API key to use Ollama models, and usage is free. Ollama provides access to a large number of LLM models, such as codegemma from Google and codellama from Meta. The downside is that it may take a long time to get answers, depending on the question, the size of the model, and the power of your CPU and GPU.

    Chat topics: The chat is organized around topics. You can create new topics and move back and forth between them using the next/previous buttons on the toolbar. When you save the chat, all topics are saved and then restored the next time you start the application. Questions within a topic are asked in the context of that topic's previous questions and answers.

    Screenshots: Settings using gpt-3.5-turbo, which is cheaper and faster than gpt-4. UI. Further prompting: the code is not actually correct (Serialize returns a string), but it is close.

    If you want to test ChatLLM, you can download the executable.
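    For illustration, here is a minimal sketch of how a Delphi application can talk to a local Ollama server using only the RTL HTTP client and JSON components, as ChatLLM does. This is not ChatLLM's actual code; the function name, the model name, and the assumption that Ollama is listening on its default port 11434 with the model already pulled are all mine. It calls Ollama's /api/chat endpoint with streaming disabled and extracts the reply from the JSON response:

    uses
      System.SysUtils,
      System.Classes,
      System.Net.HttpClient,
      System.Net.URLClient,
      System.JSON;

    // Hypothetical helper: send one user question to a local Ollama server
    // and return the assistant's answer as plain text.
    function AskOllama(const Model, Question: string): string;
    var
      Client: THTTPClient;
      Request: TJSONObject;
      Messages: TJSONArray;
      Msg: TJSONObject;
      Body: TStringStream;
      Response: IHTTPResponse;
      Reply: TJSONObject;
    begin
      Client := THTTPClient.Create;
      try
        // Build {"model":..., "stream":false, "messages":[{"role":"user","content":...}]}
        Request := TJSONObject.Create;
        try
          Request.AddPair('model', Model);
          Request.AddPair('stream', TJSONBool.Create(False));
          Msg := TJSONObject.Create;
          Msg.AddPair('role', 'user');
          Msg.AddPair('content', Question);
          Messages := TJSONArray.Create;
          Messages.AddElement(Msg);
          Request.AddPair('messages', Messages);
          Body := TStringStream.Create(Request.ToJSON, TEncoding.UTF8);
        finally
          Request.Free;
        end;
        try
          Response := Client.Post('http://localhost:11434/api/chat', Body, nil,
            [TNetHeader.Create('Content-Type', 'application/json')]);
          // The non-streaming response carries the answer in message.content.
          Reply := TJSONObject.ParseJSONValue(Response.ContentAsString(TEncoding.UTF8))
            as TJSONObject;
          try
            Result := (Reply.GetValue('message') as TJSONObject)
              .GetValue<string>('content');
          finally
            Reply.Free;
          end;
        finally
          Body.Free;
        end;
      finally
        Client.Free;
      end;
    end;

    Because the request and response are plain JSON over HTTP, no third-party packages or API keys are needed, which is what makes this approach easy to embed in other Delphi applications.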
  4. Uwe Raabe

    Slow rendering with SKIA on Windows

    At least not short term.
  5. Thank you for your feedback 👍 We do not plan to provide another product specific to API / code documentation but will investigate your request for a possible integration in a future update of HelpNDoc. I can't promise anything at this time though.
  6. Anders Melander

    Any tool to convert C# to Delphi ?

    Nowadays you could just use one of the many LLMs. It's one of the few things they are actually good at. https://medium.com/@kapoorchinmay231/large-language-models-llms-for-code-conversion-new-age-of-ai-72ebd2c8918d
  7. Rafal.B

    Slow rendering with SKIA on Windows

    I want to confirm slow graphics rendering with GlobalUseSkia=True. I create SCADA applications, which involve a lot of graphics, and to make them 'nicer' I use animations. Unfortunately, with GlobalUseSkia=True it runs terribly slowly (2-3 fps). With GlobalUseSkia=False it is much better, although I would still expect much smoother operation. This problem also occurred in D11.3. My hardware: HP ZBook G7, i9, Quadro RTX 3000. Hardware is probably not the problem here.
  8. Lars Fosdal

    Slow rendering with SKIA on Windows

    The more details you provide, the more likely it is that someone with Skia experience may be able to help. Ping @vfbb
  9. Please calm down. There is no sense in hyperventilating now in a 1.5-year-old thread. Besides, 'useless' and 'total garbage' have no objective meaning. If you want things to get better, you should clearly describe where the current implementation doesn't fit your needs. Perhaps some slight adjustments to your workflow are enough to get you on track. If there still are issues, they should be described properly so they can be evaluated and implemented properly. Pure ranting doesn't help at all. OT: This reminds me of the move to the new Galileo IDE with the docked layout, when people said they would stay with Delphi 7 forever. I still feel sorry for all of those who did.