To use a RealSense camera with OpenCV, first install the RealSense SDK (librealsense) by following the official Linux installation guide:
https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md
The SDK can be installed with the commands below.
Register the server's public key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE
Add the repository to the list of APT sources
sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u
Install the libraries
sudo apt-get install librealsense2-dkms
sudo apt-get install librealsense2-utils
Install the developer and debug packages
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg
Then install the pyrealsense2 Python binding:
pip install pyrealsense2
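The multi-camera code below selects each camera by its serial number (e.g. '831612073906'). To find the serial numbers of the devices connected to your machine, a quick check like this works (a minimal sketch using pyrealsense2's device enumeration):

import pyrealsense2 as rs

# Print the name and serial number of every connected RealSense device;
# enable_device() expects the serial number as a string.
ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.name),
          dev.get_info(rs.camera_info.serial_number))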
import pyrealsense2 as rs
import numpy as np
import cv2

# Configure depth and color streams...
# ...from Camera 1
pipeline_1 = rs.pipeline()
config_1 = rs.config()
config_1.enable_device('831612073906')
config_1.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config_1.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# ...from Camera 2
pipeline_2 = rs.pipeline()
config_2 = rs.config()
config_2.enable_device('f1270272')
config_2.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config_2.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# ...from Camera 3
pipeline_3 = rs.pipeline()
config_3 = rs.config()
config_3.enable_device('233722071891')
config_3.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config_3.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming from all three cameras
pipeline_1.start(config_1)
pipeline_2.start(config_2)
pipeline_3.start(config_3)

try:
    while True:
        # Camera 1: wait for a coherent pair of frames (depth and color)
        frames_1 = pipeline_1.wait_for_frames()
        depth_frame_1 = frames_1.get_depth_frame()
        color_frame_1 = frames_1.get_color_frame()
        if not depth_frame_1 or not color_frame_1:
            continue
        # Convert images to numpy arrays
        depth_image_1 = np.asanyarray(depth_frame_1.get_data())
        color_image_1 = np.asanyarray(color_frame_1.get_data())

        # Camera 2: wait for a coherent pair of frames (depth and color)
        frames_2 = pipeline_2.wait_for_frames()
        depth_frame_2 = frames_2.get_depth_frame()
        color_frame_2 = frames_2.get_color_frame()
        if not depth_frame_2 or not color_frame_2:
            continue
        depth_image_2 = np.asanyarray(depth_frame_2.get_data())
        color_image_2 = np.asanyarray(color_frame_2.get_data())

        # Camera 3: wait for a coherent pair of frames (depth and color)
        frames_3 = pipeline_3.wait_for_frames()
        depth_frame_3 = frames_3.get_depth_frame()
        color_frame_3 = frames_3.get_color_frame()
        if not depth_frame_3 or not color_frame_3:
            continue
        depth_image_3 = np.asanyarray(depth_frame_3.get_data())
        color_image_3 = np.asanyarray(color_frame_3.get_data())

        # To colorize a depth image, convert it to 8 bits per pixel first, e.g.:
        # depth_colormap_3 = cv2.applyColorMap(
        #     cv2.convertScaleAbs(depth_image_3, alpha=0.5), cv2.COLORMAP_JET)

        # Show the color and depth images from all three cameras
        cv2.namedWindow('RealSense1', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense1', color_image_1)
        cv2.namedWindow('RealSense2', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense2', depth_image_1)
        cv2.namedWindow('RealSense3', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense3', color_image_2)
        cv2.namedWindow('RealSense4', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense4', depth_image_2)
        cv2.namedWindow('RealSense5', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense5', color_image_3)
        cv2.namedWindow('RealSense6', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense6', depth_image_3)

        # Save images and depth maps from all cameras by pressing 's'
        # (JPEG stores 8 bits only; use .png to keep the raw 16-bit depth values)
        ch = cv2.waitKey(25)
        if ch == ord('s'):
            cv2.imwrite("/home/jaewoong/opencv_calibration/1.jpg", color_image_1)
            cv2.imwrite("/home/jaewoong/opencv_calibration/2.jpg", depth_image_1)
            cv2.imwrite("/home/jaewoong/opencv_calibration/3.jpg", color_image_2)
            cv2.imwrite("/home/jaewoong/opencv_calibration/4.jpg", depth_image_2)
            cv2.imwrite("/home/jaewoong/opencv_calibration/5.jpg", color_image_3)
            cv2.imwrite("/home/jaewoong/opencv_calibration/6.jpg", depth_image_3)
finally:
    # Stop streaming
    pipeline_1.stop()
    pipeline_2.stop()
    pipeline_3.stop()
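The commented-out lines in the loop hint at the intended visualization: a raw z16 image appears mostly dark in imshow, so it is usually scaled down to 8 bits and colorized first. A minimal sketch of that step for camera 1, which could replace the two imshow calls for that camera (the alpha scale factor is a tuning choice, not fixed by the original code):

# Scale the 16-bit depth to 8 bits, apply a JET colormap,
# then show color and colorized depth side by side in one window.
depth_colormap_1 = cv2.applyColorMap(
    cv2.convertScaleAbs(depth_image_1, alpha=0.03), cv2.COLORMAP_JET)
images = np.hstack((color_image_1, depth_colormap_1))
cv2.imshow('RealSense1', images)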
The code above displays the RGB and depth images from each camera and saves them when 's' is pressed.
Each stream is configured for 640x480 resolution at 30 frames per second; rs.format.z16 (16-bit) is the depth data format, and rs.format.bgr8 (8 bits per channel) is the color data format.
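Since z16 stores raw depth units rather than meters, converting a pixel to a metric distance requires the sensor's depth scale. A minimal sketch, reusing the names from the code above (the pixel coordinates here are arbitrary examples):

# pipeline.start() returns a profile; capture the return value of the
# start() call above to read the depth scale (meters per z16 unit,
# typically 0.001 on D400-series cameras).
profile_1 = pipeline_1.start(config_1)
depth_scale = profile_1.get_device().first_depth_sensor().get_depth_scale()

# Distance of the center pixel in meters, two equivalent ways:
dist_m = depth_image_1[240, 320] * depth_scale   # from the numpy array
dist_m = depth_frame_1.get_distance(320, 240)    # from the depth frame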