
tomye

Members
  • Content Count

    40
  • Joined

  • Last visited

Community Reputation

1 Neutral

Technical Information

  • Delphi-Version
    Delphi 2 - 7

  1. Finally, I solved the problem. There is a file named AndroidManifest.template.xml; I modified this line:

         <uses-sdk android:minSdkVersion="%minSdkVersion%" android:targetSdkVersion="%targetSdkVersion%" />

     to:

         <uses-sdk android:minSdkVersion="%minSdkVersion%" android:targetSdkVersion="30" />

     After a rebuild and run, it now works on D12.
  2. I have found the reason: the issue comes from the Java SDK version. The newer Java version (used by D11.2 through D12) does not support the current TFLite GPU delegate. I copied the D11 PAClient to D12 and got this error message:

         [PAClient Error] Error: E7688 java.lang.UnsupportedClassVersionError: com/embarcadero/dexer/Main has been compiled by a more recent version of the Java Runtime (class file version 61.0), this version of the Java Runtime only recognizes class file versions up to 52.0

     (Class file version 61.0 corresponds to Java 17, while 52.0 corresponds to Java 8.) Each Delphi release is bundled with a matching Java SDK version, so the only way forward is to upgrade the TFLite GPU delegate (libtensorflowlite_gpu_jni.so) to a newer build; maybe that will be supported. Thank you for all your help.
  3. There is no useful message 😞 It doesn't even raise an error; it just returns a nil object.
  4. I can't get a detailed error message, because the call into the .so library only returns a nil object. This doesn't happen on D11.
  5. I am using the inference library provided by Tensorflow Lite, calling it from Delphi to implement AI inference. Tensorflow Lite supports GPU inference well on mobile phones, greatly improving the inference speed.

     CPU inference speed vs. GPU inference speed: (screenshots)

     I call the libtensorflowlite_gpu_jni.so of Tensorflow Lite:

         LibHandle := System.SysUtils.LoadLibrary(PWideChar('libtensorflowlite_gpu_jni.so'));
         InterpreterOptionsAddDelegate := GetProcAddress(LibHandle, 'TfLiteInterpreterOptionsAddDelegate');

     and add a GPU delegate during program initialization:

         InterpreterOptionsAddDelegate(InterpreterOptions, GpuDelegate);
         Interpreter := InterpreterCreate(Model, InterpreterOptions);

     The problem occurs here. When I used D11, Interpreter received an object pointer, but after upgrading to D12 it returns nil and the GPU cannot be used. I also tried D11.2 and D11.3; it is the same, they all return nil. Only D11 returns an object pointer and calls the GPU on the phone smoothly.

     I thought it might be caused by the upgrade of the Java SDK or NDK version, but after importing the Java SDK and NDK libraries used by D11, the problem remained. I am unfamiliar with Java, and in Delphi's SDK Manager there seem to be only three SDK configurations, namely sdk, ndk and jdk. I don't know what these three represent, and I vaguely feel that the versions of these three libraries are not correct and are causing my problem. But I also tried replacing all three SDKs with the ones used by D11, and it still doesn't work.

     During testing I noticed a detail. I use the following code to call Toast:

         unit Android.Toast;

         interface

         {$IFDEF ANDROID}
         uses
           FMX.Platform.Android,
           Androidapi.Helpers,
           Androidapi.JNIBridge,
           Androidapi.JNI.JavaTypes,
           Androidapi.JNI.GraphicsContentViewText;
         {$ENDIF}

         {$IFDEF ANDROID}
         type
           TToastLength = (LongToast, ShortToast);

           JToast = interface;

           JToastClass = interface(JObjectClass)
             ['{69E2D233-B9D3-4F3E-B882-474C8E1D50E9}']
             { Property methods }
             function _GetLENGTH_LONG: Integer; cdecl;
             function _GetLENGTH_SHORT: Integer; cdecl;
             { Methods }
             function init(context: JContext): JToast; cdecl; overload;
             function makeText(context: JContext; text: JCharSequence;
               duration: Integer): JToast; cdecl;
             { Properties }
             property LENGTH_LONG: Integer read _GetLENGTH_LONG;
             property LENGTH_SHORT: Integer read _GetLENGTH_SHORT;
           end;

           [JavaSignature('android/widget/Toast')]
           JToast = interface(JObject)
             ['{FD81CC32-BFBC-4838-8893-9DD01DE47B00}']
             { Methods }
             procedure cancel; cdecl;
             function getDuration: Integer; cdecl;
             function getGravity: Integer; cdecl;
             function getHorizontalMargin: Single; cdecl;
             function getVerticalMargin: Single; cdecl;
             function getView: JView; cdecl;
             function getXOffset: Integer; cdecl;
             function getYOffset: Integer; cdecl;
             procedure setDuration(value: Integer); cdecl;
             procedure setGravity(gravity, xOffset, yOffset: Integer); cdecl;
             procedure setMargin(horizontalMargin, verticalMargin: Single); cdecl;
             procedure setText(s: JCharSequence); cdecl;
             procedure setView(view: JView); cdecl;
             procedure show; cdecl;
           end;

           TJToast = class(TJavaGenericImport<JToastClass, JToast>)
           end;

         var
           PToast: JToast;

         procedure Toast(const Msg: string; duration: TToastLength = ShortToast);
         {$ENDIF}

         implementation

         {$IFDEF ANDROID}
         uses
           FMX.Helpers.Android;

         procedure Toast(const Msg: string; duration: TToastLength);
         var
           ToastLength: Integer;
         begin
           if duration = ShortToast then
             ToastLength := TJToast.JavaClass.LENGTH_SHORT
           else
             ToastLength := TJToast.JavaClass.LENGTH_LONG;
           CallInUiThread(
             procedure
             begin
               //TJToast.JavaClass.makeText(SharedActivityContext, StrToJCharSequence(Msg), ToastLength).show;
               if not Assigned(PToast) then
                 PToast := TJToast.JavaClass.makeText(SharedActivityContext,
                   StrToJCharSequence(Msg), ToastLength)
               else
               begin
                 PToast.setDuration(ToastLength);
                 PToast.setText(StrToJCharSequence(Msg));
               end;
               PToast.show;
             end);
         end;
         {$ENDIF}

         end.

     Using this Toast on D11, no icon is displayed in the Toast prompt, and the GPU can be called normally. However, on D11.2, D11.3 and D12, an icon automatically appears in the Toast prompt, and the GPU cannot be called. From this I infer that the SDK version that shows the icon is a higher one, i.e. the higher SDK version cannot call the GPU.

     So, how can I make D12 use a lower version of the SDK (I believe the no-icon version should work)? I have already tried specifying a lower SDK version in the SDK Manager, but the Toast still displays an icon and the GPU still cannot be called.
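     A minimal diagnostic sketch around the calls above, to make the failing step visible instead of ending up with a silent nil. The nil checks and the TfLiteInterpreterCreate import are assumptions based on the TensorFlow Lite C API; verify the export names against the actual .so:

         uses
           System.SysUtils;

         var
           LibHandle: HMODULE;
           InterpreterOptionsAddDelegate: procedure(Options, Delegate: Pointer); cdecl;
           InterpreterCreate: function(Model, Options: Pointer): Pointer; cdecl;

         function CreateGpuInterpreter(Model, InterpreterOptions, GpuDelegate: Pointer): Pointer;
         begin
           // Load the TFLite GPU library and fail loudly if it cannot be loaded.
           LibHandle := System.SysUtils.LoadLibrary(PWideChar('libtensorflowlite_gpu_jni.so'));
           if LibHandle = 0 then
             raise Exception.Create('LoadLibrary failed for libtensorflowlite_gpu_jni.so');

           // Resolve the entry points and fail loudly if an export is missing.
           @InterpreterOptionsAddDelegate :=
             GetProcAddress(LibHandle, 'TfLiteInterpreterOptionsAddDelegate');
           @InterpreterCreate :=
             GetProcAddress(LibHandle, 'TfLiteInterpreterCreate');
           if not Assigned(InterpreterOptionsAddDelegate) or not Assigned(InterpreterCreate) then
             raise Exception.Create('Expected TfLite export not found in the library');

           // Attach the GPU delegate and create the interpreter.
           InterpreterOptionsAddDelegate(InterpreterOptions, GpuDelegate);
           Result := InterpreterCreate(Model, InterpreterOptions);
           if Result = nil then
             raise Exception.Create('TfLiteInterpreterCreate returned nil (GPU delegate rejected?)');
         end;

     With checks like these, the failure point (library load, missing export, or a rejected delegate) at least surfaces as a readable exception rather than a silent nil.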
  6. I wrote a Python script. It runs very well in PyCharm, but it cannot be run via P4D; I always get this error:

         OSError: [WinError 182] The operating system cannot run %1。 Error loading "C:\ProgramData\Miniconda3\envs\YoloV8\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.

     I found nvfuser_codegen.dll and checked it: all of its dependencies exist. And if dependencies were really missing, why can I run it in PyCharm? I have tried every method I can think of, but they all failed. Can anyone help me?
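     One thing worth trying, purely as an assumption rather than a confirmed fix: PyCharm runs the script inside the activated conda environment, so the environment's DLL folders are on PATH, while a Delphi host process does not have them. Prepending those folders to the process PATH before Python loads torch may let nvfuser_codegen.dll resolve its dependencies. The folder names below are taken from the error message and a typical Miniconda layout; adjust them to the real environment:

         uses
           Winapi.Windows, System.SysUtils;

         procedure AddCondaEnvToPath;
         const
           // Environment root assumed from the error message; verify it on the target machine.
           EnvRoot = 'C:\ProgramData\Miniconda3\envs\YoloV8';
         var
           NewPath: string;
         begin
           // Prepend the conda environment's DLL folders, mimicking "conda activate".
           NewPath :=
             EnvRoot + ';' +
             EnvRoot + '\Library\bin;' +
             EnvRoot + '\lib\site-packages\torch\lib;' +
             GetEnvironmentVariable('PATH');
           SetEnvironmentVariable('PATH', PChar(NewPath));
         end;

     Call this before the Python engine is created, or at least before the first import of torch.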
  7. Hi, does anyone know how to make the window take the style when using a StyleBook component? It does not work on the window itself, only on the components. Is there any way to resolve this? Thanks for all. Tom
  8. I had a quick look at the forked Android project; it seems very complex. Must I use Delphi4Python? As for the iOS reference, I couldn't find anything about how to deploy Python on iOS; I just found some P4D FMX demos at https://github.com/pyscripter/python4delphi/tree/master/Demos/FMX. Can these demos be run on iOS? Thank you.
  9. I have been using P4D for a while now and it is becoming more and more convenient. I would like to ask whether P4D supports Android and iOS. If not, do you have any plans for that? Thanks for creating such good stuff. Tom
  10. Yes, Torch is much better than ONNX; I have tested it. Yesterday I dropped the thread mode and created a new project without it, but I still got this error. Very strange: if I run the ONNX script first and then the Torch script, I get the error, but if I run the Torch script first and then the ONNX script, everything is OK. I think an older version of the CUDA DLLs gets loaded when the ONNX script runs first, and Torch does not support that version, while onnxruntime does support the newer version that Torch uses; that is why everything is OK when the Torch script runs first.
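     A small sketch, assuming a Windows host, to check which copy of a CUDA/cuDNN DLL actually ended up loaded in the process after each script run; the DLL name used in the example is just the one from the earlier error message:

         uses
           Winapi.Windows, System.SysUtils;

         // Returns the full path of a module if it is loaded in this process,
         // or an empty string if it is not loaded (yet).
         function LoadedModulePath(const ModuleName: string): string;
         var
           H: HMODULE;
           Buf: array[0..MAX_PATH] of Char;
         begin
           Result := '';
           H := GetModuleHandle(PChar(ModuleName));
           if H <> 0 then
             if GetModuleFileName(H, Buf, Length(Buf)) > 0 then
               Result := Buf;
         end;

         // Example: Writeln(LoadedModulePath('cudnn_adv_infer64_8.dll'));

     If the path reported after the ONNX-first run points into the onnxruntime folder, that would confirm the load-order theory.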
  11. (in the thread "First Python + DelphiVCL Program") Dear shineworld, if you don't need a lot of deep learning capability, I recommend the Mitov VisionLab packages; they are all natively compiled and do not need Python: https://mitov.com/products/visionlab#overview
  12. I don't want it to be that complicated either, but Python's performance is too low if I don't run it in threads. The speed and GPU usage % for one, two and three threads are shown in the screenshots (not reproduced here): with three threads the per-thread FPS drops but the average FPS rises, so this is the only way I can figure out to increase the speed. 😞 Actually, I finished the threaded version of the program months ago, but recently I changed the AI engine from ONNX to Torch, and after installing PyTorch everything became unstable. If I switch the Python environment back to the old path, everything comes back and is very steady. I feel the issue comes from the CUDA APIs and the CUDA environment, but I still can't find it, so I came here for help. 😞
  13. I found that this issue comes from the P4D thread mode; I am not sure whether it is a P4D bug. I can now run the script in P4D Demo01 after doing pip install pysqlite3, but it still cannot be run in my project, where it raises this error:

          could not find cudnn_adv_infer64_8.dll or one of its dependent files

      cudnn_adv_infer64_8.dll and all of its dependent files do exist. As I said, I call TPythonThread.Py_Begin_Allow_Threads in my code and everything runs in thread mode; I don't know whether the Python path is lost in thread mode.
  14. Yes, I did, but it does not work, so I posted the question here. Does P4D not support dynamic LoadDll and UnloadDll calls? For example: I launch the exe and call LoadDll the first time; after running one script I need to set a new DLL path and call LoadDll again. How can I do that?
  15. If I write it like that, I get an error: "There is already one instance of TPythonEngine running". Can P4D not change Python DLLs at runtime?
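      A minimal sketch of what this could look like with a single shared engine instead of a second TPythonEngine. The LoadDll/UnloadDll methods and the DllPath/DllName properties are taken from P4D's TPythonEngine as I understand it; treat them as an assumption to verify against the installed P4D version:

          uses
            PythonEngine;

          procedure SwitchPythonDll(Engine: TPythonEngine;
            const NewDllPath, NewDllName: string);
          begin
            // Reuse the one existing engine; creating a second TPythonEngine
            // raises "There is already one instance of TPythonEngine running".
            Engine.UnloadDll;              // finalize and release the current Python DLL
            Engine.DllPath := NewDllPath;  // folder of the other environment
            Engine.DllName := NewDllName;  // e.g. 'python310.dll' (example name)
            Engine.LoadDll;                // load and initialize the new interpreter
          end;

      Whether a fully initialized CPython can be torn down and re-initialized cleanly in the same process is a separate question (some extension modules such as torch may not survive it), so this only sketches the P4D side.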