WRieder 0 Posted September 1, 2022 I have created a Python script which uses a webcam and beautifully displays the captured frames with cv2.imshow('camera', img). Is it possible to redirect the image to a Delphi application's TImage?
stijnsanders 37 Posted September 1, 2022 Why use Python? There are ways to read from the webcam directly from Delphi code. After a quick search I found this: https://stackoverflow.com/questions/9106706/delphi-webcam-simple-program but that's from 2012; there may be other, newer, and more multi-platform options if you search around.
shineworld 73 Posted September 1, 2022 (edited) OpenCV's video source backends (CAP_MSMF and CAP_DSHOW) are rather limited and chaotic in their CAP properties. CAP_MSMF often raises internal exceptions, although it is recommended over CAP_DSHOW. For example, there is no way, unless you get your hands into the code, to directly access the webcam driver's supported-property list and its ranges/defaults, which the backend retrieves internally anyway. If you use Python solely to capture a video stream, it is better to work directly with Windows DirectShow or the newer Media Foundation interfaces. There are examples of how to do this on GitHub. Edited September 1, 2022 by shineworld
WRieder 0 Posted September 2, 2022 All of the commercially available solutions are either not available for Delphi or prohibitively expensive. Using Python4Delphi is not only accurate, but also cost-free. The following Python script (OpenCV based) is what I am working on. The lines towards the bottom:

    cv2.imshow('camera', img)
    stream = BytesIO(img)
    stream.getvalue()

would be ideal if I could show the image in a TImage in Delphi rather than in the window created by the script.

    '''
    Real Time Face Recognition
    ==> Each face stored on dataset/ dir should have a unique numeric integer ID as 1, 2, 3, etc
    ==> LBPH computed model (trained faces) should be on trainer/ dir
    Based on original code by Anirban Kar: https://github.com/thecodacus/Face-Recognition
    Developed by Marcelo Rovai - MJRoBot.org @ 21Feb18
    '''
    import cv2
    import io
    import numpy as np
    import os
    from PIL import Image
    from io import BytesIO
    import face_recognition
    import sys

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    recognizer.read('trainer/trainer.yml')
    cascadePath = "haarcascade_frontalface_default.xml"
    faceCascade = cv2.CascadeClassifier(cascadePath)

    font = cv2.FONT_HERSHEY_SIMPLEX

    # initiate id counter
    id = 0

    # names related to ids: example ==> Marcelo: id=1, etc
    names = ['None', 'Wolfgang', 'Bruno', 'Devlyn', 'Z', 'W']

    # Initialize and start realtime video capture
    cam = cv2.VideoCapture(0)
    cam.set(3, 640)  # set video width
    cam.set(4, 480)  # set video height

    # Define min window size to be recognized as a face
    minW = 0.15 * cam.get(3)
    minH = 0.15 * cam.get(4)

    while True:
        ret, img = cam.read()
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        faces = faceCascade.detectMultiScale(
            gray,
            scaleFactor=1.2,
            minNeighbors=5,
            minSize=(int(minW), int(minH)),
        )

        for (x, y, w, h) in faces:
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
            id, confidence = recognizer.predict(gray[y:y + h, x:x + w])

            # Check if confidence is less than 100 ==> "0" is perfect match
            if confidence < 100:
                id = names[id]
                confidence = "  {0}%".format(round(100 - confidence))
            else:
                id = "unknown"
                confidence = "  {0}%".format(round(100 - confidence))

            cv2.putText(img, str(id), (x + 5, y - 5), font, 1, (255, 255, 255), 2)
            cv2.putText(img, str(confidence), (x + 5, y + h - 5), font, 1, (255, 255, 0), 1)

        cv2.imshow('camera', img)
        stream = BytesIO(img)  # wraps the raw BGR pixels, not an encoded image
        stream.getvalue()

        k = cv2.waitKey(10) & 0xff  # Press 'ESC' for exiting video
        if k == 27:
            break

    # Do a bit of cleanup
    print("\n [INFO] Exiting Program and cleanup stuff")
    cam.release()
    cv2.destroyAllWindows()
WRieder 0 Posted September 2, 2022 Can this part be displayed in a Delphi TImage component? cv2.imshow('camera', img) Is it possible?
shineworld 73 Posted September 2, 2022 (edited) We need to know the overall structure. 1] A Delphi program which calls a Python script? 2] A Python script that captures/elaborates images in an infinite loop? 3] A Python script that sends the elaborated image back? It is hard to answer your question without knowing your project structure. Edited September 2, 2022 by shineworld
WRieder 0 Posted September 3, 2022 Thank you for replying so quickly. The code is the same face-recognition script I posted on September 2, unchanged: it reads trainer/trainer.yml into an LBPH recognizer, runs the capture loop with recognizer.predict(), and ends with cv2.imshow('camera', img) followed by the BytesIO lines.
WRieder 0 Posted September 3, 2022 I started by trying to run it from the Delphi IDE with PythonEngine1.ExecStrings(SynEdit1.Lines); but I get an error: Project Project1.exe raised exception class EPyException with message 'error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv_contrib\modules\face\src\facerec.cpp:61: error: (-2:Unspecified error) File can't be opened for reading! in function 'cv::face::FaceRecognizer::read''. It runs perfectly from the command prompt as well as in PyScripter. I thought about doing something similar to Demo29.
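That "File can't be opened for reading" in cv::face::FaceRecognizer::read usually means trainer/trainer.yml was not found relative to the current working directory. When the script runs via ExecStrings from the IDE, the working directory is the Delphi project's, not the script's folder, which would explain why the command prompt and PyScripter work. A hedged sketch of resolving the files against an absolute base directory (the path here is a hypothetical placeholder to adjust to your layout):

```python
import os

# Hypothetical location of the script's data files; adjust to your setup.
BASE_DIR = r'C:\FaceRecognition'

trainer_path = os.path.join(BASE_DIR, 'trainer', 'trainer.yml')
cascade_path = os.path.join(BASE_DIR, 'haarcascade_frontalface_default.xml')

# Fail early with a readable message instead of OpenCV's internal error.
for p in (trainer_path, cascade_path):
    if not os.path.isfile(p):
        print('missing file:', p)

# recognizer.read(trainer_path) and cv2.CascadeClassifier(cascade_path)
# would then work regardless of the IDE's working directory.
```

Alternatively, SetCurrentDir on the Delphi side (or os.chdir in the script) to the script's folder before ExecStrings should have the same effect.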
WRieder 0 Posted September 3, 2022 Attached is the code for the complete project: 01_face_dataset.py 02_face_training.py 03_face_recognition.py
shineworld 73 Posted September 3, 2022 So the question is: do you want to run the script using only Python + DelphiVCL and send the image to its TImage, or run the script inside a Delphi application which "shares" a TImage object exposed as a custom Python module, so that the script places the image-elaboration result on it?
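The second structure can be sketched from the Python side. Here `delphi_bridge` and its `show_frame` function are hypothetical names for a module the Delphi host would register via Python4Delphi's TPythonModule; a stub fallback lets the same script also run standalone for testing.

```python
try:
    import delphi_bridge  # hypothetical module registered by the Delphi host
except ImportError:
    class delphi_bridge:  # standalone fallback: just counts delivered frames
        frames = 0

        @staticmethod
        def show_frame(data):
            delphi_bridge.frames += 1

# In the capture loop, instead of cv2.imshow(...), hand the encoded
# frame bytes to the host application:
delphi_bridge.show_frame(b'...encoded frame bytes...')
```

On the Delphi side, the registered `show_frame` would load the bytes into the shared TImage; this is only one way to wire it, under the assumptions above.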
WRieder 0 Posted September 3, 2022 I need to be able to get the name, date, and time of an identified person into a Delphi application for further processing, i.e. wage calculation and various reports. It can run standalone, as long as I can communicate with the app in real time.
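For that requirement the recognition script could stay standalone and push each identification to the Delphi app as a small JSON line over a local TCP socket (on the Delphi side, e.g. an Indy TIdTCPServer could receive it). A sketch; the port number and message fields are assumptions, not a fixed protocol.

```python
import json
import socket
from datetime import datetime

def make_event(name, confidence):
    """Serialize one recognition event (name + timestamp) as a JSON line."""
    event = {
        'name': name,
        'confidence': confidence,
        'timestamp': datetime.now().isoformat(timespec='seconds'),
    }
    return (json.dumps(event) + '\n').encode('utf-8')

def send_event(data, host='127.0.0.1', port=5555):
    """Push one event to the Delphi listener; port 5555 is an assumption."""
    with socket.create_connection((host, port), timeout=1) as s:
        s.sendall(data)

# In the loop, after a successful recognizer.predict():
msg = make_event('Wolfgang', 42)
# send_event(msg)   # enable once the Delphi server is listening
print(msg.decode().strip())
```

One event per line keeps the Delphi reader trivial (read until LF, parse JSON), and the app can then do the wage calculation and reporting on its own schedule.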