Let's get a USB camera recognized on the Jetson Nano and display its stream in rviz.
Board: Jetson Nano 4GB
Environment: Ubuntu 20.04 + ROS Noetic Docker container
LiDAR: YD LiDAR G4
Camera: [Arducam] Arducam CSI-USB UVC Camera Adapter Board for 12.3MP IMX477 Raspberry Pi Camera [B0278]
1. Check USB camera detection
ls -ltr /dev/video*
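If the camera has been detected, one or more video device nodes show up. An illustrative line of output (major number 81 belongs to the video4linux driver; the node number and date will differ on your system):
crw-rw---- 1 root video 81, 0 <date> /dev/video0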
2. Install packages
apt-get install ros-noetic-usb-cam
apt-get install ros-noetic-image-view
sudo apt-get install libv4l-dev
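Note: the v4l2-ctl tool used in the next step is not part of libv4l-dev; if it is missing, it ships in the v4l-utils package:
sudo apt-get install v4l-utils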
3. Check which pixel formats and resolutions the camera supports
v4l2-ctl -d /dev/video0 --list-formats-ext
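An illustrative excerpt of the output (the actual formats, sizes, and rates depend on your camera):
ioctl: VIDIOC_ENUM_FMT
    [0]: 'MJPG' (Motion-JPEG, compressed)
        Size: Discrete 1920x1080
            Interval: Discrete 0.033s (30.000 fps)
    [1]: 'YUYV' (YUYV 4:2:2)
        Size: Discrete 640x480
            Interval: Discrete 0.033s (30.000 fps)
Note the format, resolution, and frame rate you want; they map directly to the pixel_format, image_width/image_height, and framerate parameters in step 5.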
4. git clone
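Clone the driver into the workspace's src directory (the ~/catkin_ws path is assumed here, matching the build step below):
cd ~/catkin_ws/src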
git clone https://github.com/ros-drivers/usb_cam.git
5. Edit the config
Open catkin_ws/src/usb_cam/config/usb_cam.yml and, referring to the formats found in step 3, adjust the parameters as shown below.
start_service_name: "start_capture" # Defines name suffix for std_srvs::Empty service which restarts suspended streaming
stop_service_name: "stop_capture" # Defines name suffix for std_srvs::Empty service which suspends camera polling timer
video_device: /dev/video0 # Device driver's entrypoint
io_method: mmap # I/O method
# - read - for devices supporting virtual filesystem or block I/O
# - mmap - for devices with direct libusb memory mapping
# - userptr - for userspace devices supporting userspace pointer exchange
pixel_format: mjpeg # Pixel format for Video4linux device (also selects decoder mode)
# https://wiki.videolan.org/YUV#YUV_4:2:0_.28I420.2FJ420.2FYV12.29
# - yuyv - YUV420
# - yuv - synonym for yuyv
# - uyvy - UVY240
# - yuvmono10 - Monochrome 10-bit pseudo-YUV
# - rgb24 - Linear 8-bit RGB
# - bgr24 - OpenCV-compatible 8-bit BGR
# - grey - Grayscale 8-bit monochrome
# - yu12 - YU-reversed YUV420
# - mjpeg - FFMPEG decoder, MotionJPEG, for compatible hardware
# - h264 - FFMPEG decoder, H.264, for compatible hardware
color_format: yuv422p # On-chip color representation mode for the input frame encoded by hardware
# - yuv422p - YUV422 - default, compatible with most MJPEG hardware encoders
# - yuv420p - YUV420 - mandatory for H.264 and H.265 hardware encoders
create_suspended: false # Instructs the node whether to start streaming immediately after launch
# or to wait until the start service will be triggered
full_ffmpeg_log: false # Allows to suppress warning messages generated by libavcodec, cleans log
camera_name: head_camera # ROS internal name for the camera, used to generate camera_info message
camera_frame_id: head_camera # Frame ID used to generate coordinate transformations
camera_transport_suffix: image_raw # Suffix used by image_transport to generate topic names
camera_info_url: "" # URI for camera calibration data (likely a YML file obtained from camera_calibration)
image_width: 640 # Frame dimensions, should be supported by camera hardware
image_height: 480
framerate: 80 # Camera polling frequency, Hz (integer)
# Auxiliary camera parameters provided by libv4l2.
# Names for these parameters are generated automatically according to the intrinsic control names exported by the
# camera driver. The node queries camera's kernel controller module to determine the parameters that can be set up
# via ROS. For these parameters the corresponding ROS parameters with identical names are generated under this
# namespace.
# See also the comprehensive node output describing parameter names and feasible values for them to be set up here.
# It is also possible to have a list of the available control names using v4l2-ctl application from v4l2-util package:
# v4l2-ctl --device=/dev/video<ID> -L
intrinsic_controls:
  focus_auto: true
  exposure_auto_priority: true
  exposure_auto: 3
  white_balance_temperature_auto: true
  power_line_frequency: 1
  ignore: [
    brightness,
    contrast,
    saturation,
    gain,
    sharpness,
    backlight_compensation,
    white_balance_temperature,
    exposure_absolute,
    pan_absolute,
    tilt_absolute,
    focus_absolute,
    zoom_absolute
  ] # Use this list to enumerate the control names that should be delisted from the camera setup
  # NOTE: ROS parameters for the V4L controls that are supported but listed here will STILL be
  # generated, but their values WILL NOT BE USED to set up the camera. To make these controls
  # take effect again, their names should be REMOVED from this list!
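As a concrete example of matching the config to step 3: if the camera reported 'MJPG' at 1920x1080 @ 30 fps, the corresponding settings would be (illustrative values; use what your camera actually lists):
pixel_format: mjpeg
image_width: 1920
image_height: 1080
framerate: 30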
6. Build
# Move to the catkin_ws directory
cd ~/catkin_ws
catkin_make
source devel/setup.bash
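Optionally, to avoid re-sourcing the workspace in every new terminal, append the overlay to ~/.bashrc (assuming the workspace lives in your home directory):
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc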
7. Run
7-1. Open terminal 1 and run:
roscore
7-2. Open terminal 2 and run:
roslaunch usb_cam usb_cam.launch
7-3. Open terminal 3 and run:
rqt_image_view
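7-4. (Optional) To verify that frames are actually being published, check the topic rate in another terminal (the topic name below assumes the default usb_cam node name):
rostopic hz /usb_cam/image_raw
To view the stream in rviz, as in the title: start rviz, add an Image display, and set its Image Topic to /usb_cam/image_raw (with the Fixed Frame set to head_camera, the camera_frame_id from step 5).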
8. Final result
The stream is a little slow since this is a Jetson Nano, but you can confirm that the camera image comes through correctly.