Source: https://iosoft.blog/2019/07/31/rpi-camera-display-pyqt-opencv/
OpenCV is an incredibly powerful image-processing tool, but it can be difficult to know where to start: how do you grab an image from a camera, and display it in a user-friendly GUI? This post describes such an application, which runs unmodified on a PC or Raspberry Pi, under Windows or Linux, with Python 2.7 or 3.x, and PyQt v4 or v5.
Installation
On Windows, the OpenCV and PyQt5 libraries can be installed using pip:
pip install numpy opencv-python PyQt5
If the pip command isn’t available, you should be able to run the pip module from the command line by invoking Python directly, e.g. for Python 3:
py -3 -m pip install numpy opencv-python PyQt5
Installing on a Raspberry Pi is potentially a lot more complicated; building from source is generally recommended, and for opencv-python this is a bit convoluted. Fortunately there is a simpler option, if you don’t mind using versions that are a few years old, namely installing the pre-built packages from the standard repository, e.g.
sudo apt update
sudo apt install python3-opencv python3-pyqt5
At the time of writing, the most recent version of Raspbian Linux is ‘buster’, and that has OpenCV 3.2, which is quite usable. The previous ‘stretch’ distribution has python-opencv version 2.4, which is a bit too old: my code isn’t compatible with it.
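To check which OpenCV version is actually installed (assuming ‘python3’ is on the path), a quick test from the command line is:
python3 -c "import cv2; print(cv2.__version__)"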
With regard to cameras, all the USB webcams I’ve tried have worked fine on Windows without needing any extra driver software; they also work on the Raspberry Pi, as does the standard Pi camera with the ribbon-cable interface.
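If you’re unsure which camera numbers the operating system has assigned, a rough check (my own test snippet, not part of the application) is to probe the first few OpenCV indices and see which ones open:
# Probe the first few camera indices; numbering and warnings vary by system
import cv2
for index in range(4):
    cap = cv2.VideoCapture(index)
    if cap.isOpened():
        print("Camera found at OpenCV index %u" % index)
    cap.release()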
PyQt main window
Remaining compatible with both PyQt versions 4 and 5 requires some boilerplate code to handle the way some functions have been moved between libraries:
import sys, time, threading, cv2
try:
    from PyQt5.QtCore import Qt
    pyqt5 = True
except:
    pyqt5 = False
if pyqt5:
    from PyQt5.QtCore import QTimer, QPoint, pyqtSignal
    from PyQt5.QtWidgets import QApplication, QMainWindow, QTextEdit, QLabel
    from PyQt5.QtWidgets import QWidget, QAction, QVBoxLayout, QHBoxLayout
    from PyQt5.QtGui import QFont, QPainter, QImage, QTextCursor
else:
    from PyQt4.QtCore import Qt, pyqtSignal, QTimer, QPoint
    from PyQt4.QtGui import QApplication, QMainWindow, QTextEdit, QLabel
    from PyQt4.QtGui import QWidget, QAction, QVBoxLayout, QHBoxLayout
    from PyQt4.QtGui import QFont, QPainter, QImage, QTextCursor
try:
    import Queue as Queue
except:
    import queue as Queue
The main window is subclassed from PyQt, with a simple arrangement of a menu bar, video image, and text box:
class MyWindow(QMainWindow):
    text_update = pyqtSignal(str)

    # Create main window
    def __init__(self, parent=None):
        QMainWindow.__init__(self, parent)
        self.central = QWidget(self)
        self.textbox = QTextEdit(self.central)
        self.textbox.setFont(TEXT_FONT)
        self.textbox.setMinimumSize(300, 100)
        self.text_update.connect(self.append_text)
        sys.stdout = self
        print("Camera number %u" % camera_num)
        print("Image size %u x %u" % IMG_SIZE)
        if DISP_SCALE > 1:
            print("Display scale %u:1" % DISP_SCALE)

        self.vlayout = QVBoxLayout()        # Window layout
        self.displays = QHBoxLayout()
        self.disp = ImageWidget(self)
        self.displays.addWidget(self.disp)
        self.vlayout.addLayout(self.displays)
        self.label = QLabel(self)
        self.vlayout.addWidget(self.label)
        self.vlayout.addWidget(self.textbox)
        self.central.setLayout(self.vlayout)
        self.setCentralWidget(self.central)

        self.mainMenu = self.menuBar()      # Menu bar
        exitAction = QAction('&Exit', self)
        exitAction.setShortcut('Ctrl+Q')
        exitAction.triggered.connect(self.close)
        self.fileMenu = self.mainMenu.addMenu('&File')
        self.fileMenu.addAction(exitAction)
There is a horizontal box layout called ‘displays’, which seems unnecessary as it only contains a single display widget. This is intentional, since much of my OpenCV experimentation requires additional displays to show the image processing in action; this can easily be done by creating more ImageWidgets and adding them to the ‘displays’ layout.
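For example, a second display could be added alongside the first with something like the following sketch; the ‘disp2’ name is purely illustrative, and the extra widget would need its own display_image() call to show anything:
self.disp2 = ImageWidget(self)          # hypothetical second display widget
self.displays.addWidget(self.disp2)     # appears to the right of self.disp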
Similarly, there is a redundant QLabel below the displays, which isn’t currently used, but is handy for displaying static text below the images.
Text display
It is convenient to redirect the ‘print’ output to the text box, rather than appearing on the Python console. This is done using the ‘text_update’ signal that was defined above:
    # Handle sys.stdout.write: update text display
    def write(self, text):
        self.text_update.emit(str(text))

    def flush(self):
        pass

    # Append to text display
    def append_text(self, text):
        cur = self.textbox.textCursor()     # Move cursor to end of text
        cur.movePosition(QTextCursor.End)
        s = str(text)
        while s:
            head, sep, s = s.partition("\n")    # Split line at LF
            cur.insertText(head)                # Insert text at cursor
            if sep:                             # New line if LF
                cur.insertBlock()
        self.textbox.setTextCursor(cur)         # Update visible cursor
The use of a signal means that print() calls can be scattered about the code, without having to worry about which thread they’re in.
Image capture
A separate thread is used to capture the camera images, and put them in a queue to be displayed. The camera may produce images faster than they can be displayed, so it is necessary to check how many images are already in the queue; if more than 1, the new image is discarded. This prevents a buildup of unwanted images.
IMG_SIZE    = 1280,720          # 640,480 or 1280,720 or 1920,1080
CAP_API     = cv2.CAP_ANY       # or cv2.CAP_DSHOW, etc...
EXPOSURE    = 0                 # Non-zero for fixed exposure

# Grab images from the camera (separate thread)
def grab_images(cam_num, queue):
    cap = cv2.VideoCapture(cam_num-1 + CAP_API)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, IMG_SIZE[0])
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, IMG_SIZE[1])
    if EXPOSURE:
        cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0)
        cap.set(cv2.CAP_PROP_EXPOSURE, EXPOSURE)
    else:
        cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)
    while capturing:
        if cap.grab():
            retval, image = cap.retrieve(0)
            if image is not None and queue.qsize() < 2:
                queue.put(image)
            else:
                time.sleep(DISP_MSEC / 1000.0)
        else:
            print("Error: can't grab camera image")
            break
    cap.release()
The choice of image size will depend on the camera used; all cameras support VGA size (640 x 480 pixels), and more modern ones also support the high-definition standards of 720p (1280 x 720) or 1080p (1920 x 1080).
The camera number refers to the position in the list of cameras collected by the operating system; I’ve defined the first camera as number 1, but the OpenCV call defines the first as 0, so the number has to be adjusted.
The same parameter is also used to define the capture API setting; by default this is ‘any’, which usually works well; my Windows 10 system defaults to the MSMF (Microsoft Media Foundation) backend, while the Raspberry Pi defaults to Video for Linux (V4L). Sometimes you may need to force a particular API to be used, for example, I have a Logitech C270 webcam that works fine on Windows 7, but fails on Windows 10 with an ‘MSMF grab error’. Forcing the software to use the DirectShow API (using the cv2.CAP_DSHOW option) fixes the problem.
If you want to check which backend is being used, try:
print("Backend '%s'" % cap.getBackendName())
Unfortunately this only works on the later revisions of OpenCV.
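As a rough sketch (not part of the original listing), forcing the DirectShow backend and reporting which backend is in use might look like this, assuming it replaces the first line of grab_images() where cam_num is available; the getBackendName() call only exists in later OpenCV releases, hence the guard:
CAP_API = cv2.CAP_DSHOW                     # force DirectShow instead of CAP_ANY
cap = cv2.VideoCapture(cam_num - 1 + CAP_API)
try:
    print("Backend '%s'" % cap.getBackendName())
except AttributeError:                      # method missing on older OpenCV
    print("Backend name not available in this OpenCV version")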
Manual exposure setting can be a bit hit-and-miss, depending on the camera and API you are using; the default is automatic operation, and setting EXPOSURE non-zero (e.g. to a value of -3) generally works. However, it can be difficult to set a webcam back to automatic operation; sometimes I’ve had to use another application to do this, so I suggest keeping auto-exposure enabled if possible.
[Supplementary note: it seems that these parameter values aren’t standardised across the backends. For example, the CAP_PROP_AUTO_EXPOSURE value in my source code is correct for the MSMF backend; a value of 1 enables automatic exposure, 0 disables it. However, the V4L backend on the Raspberry Pi uses the opposite values: automatic is 0, and manual is 1. So it looks like my code is incorrect for Linux. I haven’t yet found any detailed documentation for this, so had to fall back on reading the source code, namely the OpenCV videoio ‘cap’ files such as cap_msmf.cpp and cap_v4l.cpp.]
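A possible workaround, sketched below and only lightly tested, is to pick the values according to the backend reported by OpenCV; the ‘V4L’ name check and the value mapping follow the note above, so treat them as assumptions to verify on your own hardware:
# Sketch: choose auto-exposure values by backend (values are not standardised)
backend = ""
try:
    backend = cap.getBackendName()          # only available on newer OpenCV
except AttributeError:
    pass
if backend.startswith("V4L"):
    auto_val, manual_val = 0, 1             # V4L: 0 = automatic, 1 = manual
else:
    auto_val, manual_val = 1, 0             # MSMF: 1 = automatic, 0 = manual
if EXPOSURE:
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, manual_val)
    cap.set(cv2.CAP_PROP_EXPOSURE, EXPOSURE)
else:
    cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, auto_val)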
Image display
The camera image is displayed in a custom widget:
# Image widget
class ImageWidget(QWidget):
    def __init__(self, parent=None):
        super(ImageWidget, self).__init__(parent)
        self.image = None

    def setImage(self, image):
        self.image = image
        self.setMinimumSize(image.size())
        self.update()

    def paintEvent(self, event):
        qp = QPainter()
        qp.begin(self)
        if self.image:
            qp.drawImage(QPoint(0, 0), self.image)
        qp.end()
A timer event is used to trigger a scan of the image queue. This contains images in the camera format, which must be converted into the PyQt display format:
DISP_SCALE = 2                  # Scaling factor for display image

    # Start image capture & display
    def start(self):
        self.timer = QTimer(self)           # Timer to trigger display
        self.timer.timeout.connect(lambda:
                    self.show_image(image_queue, self.disp, DISP_SCALE))
        self.timer.start(DISP_MSEC)
        self.capture_thread = threading.Thread(target=grab_images,
                    args=(camera_num, image_queue))
        self.capture_thread.start()         # Thread to grab images

    # Fetch camera image from queue, and display it
    def show_image(self, imageq, display, scale):
        if not imageq.empty():
            image = imageq.get()
            if image is not None and len(image) > 0:
                img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                self.display_image(img, display, scale)

    # Display an image, reduce size if required
    def display_image(self, img, display, scale=1):
        disp_size = img.shape[1]//scale, img.shape[0]//scale
        disp_bpl = disp_size[0] * 3
        if scale > 1:
            img = cv2.resize(img, disp_size,
                             interpolation=cv2.INTER_CUBIC)
        qimg = QImage(img.data, disp_size[0], disp_size[1],
                      disp_bpl, IMG_FORMAT)
        display.setImage(qimg)
This demonstrates the power of OpenCV: one function call converts the image from BGR to RGB format, and another resizes it using cubic interpolation. Finally, the QImage constructor is used to convert the resulting data into PyQt format.
Running the application
Make sure you’re using the Python version that has the OpenCV and PyQt installed, e.g. for the Raspberry Pi:
python3 cam_display.py
There is an optional command-line argument to select the camera if more than one is attached; the default is number 1, the first camera in the list.
On Linux, some USB Webcams cause a constant stream of JPEG format errors to be printed on the console, complaining about extraneous bytes in the data. There is some discussion online as to the cause of the error, and the cure seems to involve rebuilding the libraries from source; I’m keen to avoid that, so used the simple workaround of suppressing the errors by redirecting STDERR to null:
python3 cam_display.py 2> /dev/null
Fortunately this workaround is only needed with some USB cameras; the standard Raspberry Pi camera with the CSI ribbon-cable interface works fine.
Source code
Full source code is available here.
# USB camera display using PyQt and OpenCV, from iosoft.blog
# Copyright (c) Jeremy P Bentham 2019
# Please credit iosoft.blog if you use the information or software in it

VERSION = "Cam_display v0.10"

import sys, time, threading, cv2
try:
    from PyQt5.QtCore import Qt
    pyqt5 = True
except:
    pyqt5 = False
if pyqt5:
    from PyQt5.QtCore import QTimer, QPoint, pyqtSignal
    from PyQt5.QtWidgets import QApplication, QMainWindow, QTextEdit, QLabel
    from PyQt5.QtWidgets import QWidget, QAction, QVBoxLayout, QHBoxLayout
    from PyQt5.QtGui import QFont, QPainter, QImage, QTextCursor
else:
    from PyQt4.QtCore import Qt, pyqtSignal, QTimer, QPoint
    from PyQt4.QtGui import QApplication, QMainWindow, QTextEdit, QLabel
    from PyQt4.QtGui import QWidget, QAction, QVBoxLayout, QHBoxLayout
    from PyQt4.QtGui import QFont, QPainter, QImage, QTextCursor
try:
    import Queue as Queue
except:
    import queue as Queue

IMG_SIZE    = 1280,720          # 640,480 or 1280,720 or 1920,1080
IMG_FORMAT  = QImage.Format_RGB888
DISP_SCALE  = 2                 # Scaling factor for display image
DISP_MSEC   = 50                # Delay between display cycles
CAP_API     = cv2.CAP_ANY       # API: CAP_ANY or CAP_DSHOW etc...
EXPOSURE    = 0                 # Zero for automatic exposure
TEXT_FONT   = QFont("Courier", 10)

camera_num  = 1                 # Default camera (first in list)
image_queue = Queue.Queue()     # Queue to hold images
capturing   = True              # Flag to indicate capturing

# Grab images from the camera (separate thread)
def grab_images(cam_num, queue):
    cap = cv2.VideoCapture(cam_num-1 + CAP_API)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, IMG_SIZE[0])
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, IMG_SIZE[1])
    if EXPOSURE:
        cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0)
        cap.set(cv2.CAP_PROP_EXPOSURE, EXPOSURE)
    else:
        cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1)
    while capturing:
        if cap.grab():
            retval, image = cap.retrieve(0)
            if image is not None and queue.qsize() < 2:
                queue.put(image)
            else:
                time.sleep(DISP_MSEC / 1000.0)
        else:
            print("Error: can't grab camera image")
            break
    cap.release()

# Image widget
class ImageWidget(QWidget):
    def __init__(self, parent=None):
        super(ImageWidget, self).__init__(parent)
        self.image = None

    def setImage(self, image):
        self.image = image
        self.setMinimumSize(image.size())
        self.update()

    def paintEvent(self, event):
        qp = QPainter()
        qp.begin(self)
        if self.image:
            qp.drawImage(QPoint(0, 0), self.image)
        qp.end()

# Main window
class MyWindow(QMainWindow):
    text_update = pyqtSignal(str)

    # Create main window
    def __init__(self, parent=None):
        QMainWindow.__init__(self, parent)
        self.central = QWidget(self)
        self.textbox = QTextEdit(self.central)
        self.textbox.setFont(TEXT_FONT)
        self.textbox.setMinimumSize(300, 100)
        self.text_update.connect(self.append_text)
        sys.stdout = self
        print("Camera number %u" % camera_num)
        print("Image size %u x %u" % IMG_SIZE)
        if DISP_SCALE > 1:
            print("Display scale %u:1" % DISP_SCALE)

        self.vlayout = QVBoxLayout()        # Window layout
        self.displays = QHBoxLayout()
        self.disp = ImageWidget(self)
        self.displays.addWidget(self.disp)
        self.vlayout.addLayout(self.displays)
        self.label = QLabel(self)
        self.vlayout.addWidget(self.label)
        self.vlayout.addWidget(self.textbox)
        self.central.setLayout(self.vlayout)
        self.setCentralWidget(self.central)

        self.mainMenu = self.menuBar()      # Menu bar
        exitAction = QAction('&Exit', self)
        exitAction.setShortcut('Ctrl+Q')
        exitAction.triggered.connect(self.close)
        self.fileMenu = self.mainMenu.addMenu('&File')
        self.fileMenu.addAction(exitAction)

    # Start image capture & display
    def start(self):
        self.timer = QTimer(self)           # Timer to trigger display
        self.timer.timeout.connect(lambda:
                    self.show_image(image_queue, self.disp, DISP_SCALE))
        self.timer.start(DISP_MSEC)
        self.capture_thread = threading.Thread(target=grab_images,
                    args=(camera_num, image_queue))
        self.capture_thread.start()         # Thread to grab images

    # Fetch camera image from queue, and display it
    def show_image(self, imageq, display, scale):
        if not imageq.empty():
            image = imageq.get()
            if image is not None and len(image) > 0:
                img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                self.display_image(img, display, scale)

    # Display an image, reduce size if required
    def display_image(self, img, display, scale=1):
        disp_size = img.shape[1]//scale, img.shape[0]//scale
        disp_bpl = disp_size[0] * 3
        if scale > 1:
            img = cv2.resize(img, disp_size,
                             interpolation=cv2.INTER_CUBIC)
        qimg = QImage(img.data, disp_size[0], disp_size[1],
                      disp_bpl, IMG_FORMAT)
        display.setImage(qimg)

    # Handle sys.stdout.write: update text display
    def write(self, text):
        self.text_update.emit(str(text))

    def flush(self):
        pass

    # Append to text display
    def append_text(self, text):
        cur = self.textbox.textCursor()     # Move cursor to end of text
        cur.movePosition(QTextCursor.End)
        s = str(text)
        while s:
            head, sep, s = s.partition("\n")    # Split line at LF
            cur.insertText(head)                # Insert text at cursor
            if sep:                             # New line if LF
                cur.insertBlock()
        self.textbox.setTextCursor(cur)         # Update visible cursor

    # Window is closing: stop video capture
    def closeEvent(self, event):
        global capturing
        capturing = False
        self.capture_thread.join()

if __name__ == '__main__':
    if len(sys.argv) > 1:
        try:
            camera_num = int(sys.argv[1])
        except:
            camera_num = 0
    if camera_num < 1:
        print("Invalid camera number '%s'" % sys.argv[1])
    else:
        app = QApplication(sys.argv)
        win = MyWindow()
        win.show()
        win.setWindowTitle(VERSION)
        win.start()
        sys.exit(app.exec_())
#EOF
For a more significant OpenCV application, take a look at this post.