Camera

Camera node is a source of image frames. You can control it at runtime via the inputControl and inputConfig inputs. It aims to unify the ColorCamera and MonoCamera nodes into one.

Compared to ColorCamera node, Camera node:

  • Supports cam.setSize(), which replaces both cam.setResolution() and cam.setIspScale(). The Camera node automatically picks the sensor resolution that fits best and applies the correct scaling to achieve the user-selected size

  • Supports cam.setCalibrationAlpha(), example here: Undistort camera stream

  • Supports cam.loadMeshData() and cam.setMeshStep(), which can be used for custom image warping (undistortion, perspective correction, etc.)

  • Automatically undistorts the camera stream if the camera's HFOV is greater than 85°. You can disable this with cam.setMeshSource(dai.CameraProperties.WarpMeshSource.NONE), as in the sketch after this list.
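A minimal sketch combining these setters (Python; assumes a device whose sensor needs undistortion, and uses only calls from the reference below):

import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
# Pick the best-fitting sensor resolution and scale to the requested size
cam.setSize(1280, 720)
# alpha=0.0 keeps only valid pixels after undistortion, alpha=1.0 keeps all source pixels
cam.setCalibrationAlpha(0.0)
# Alternatively, disable the automatic undistortion entirely:
# cam.setMeshSource(dai.CameraProperties.WarpMeshSource.NONE)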

Besides the points above, compared to the MonoCamera node, the Camera node:

  • Doesn’t have the out output, as it has the same outputs as ColorCamera (raw, isp, still, preview, video). This means that preview will output 3 planes of the same grayscale frame (3x overhead), and isp / video / still will output luma (the useful grayscale information) plus chroma (all values 128), resulting in 1.5x bandwidth overhead (see the sketch below)
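If only the grayscale data is needed on the host, that 1.5x overhead can be sidestepped by slicing just the luma plane out of the received frame. A minimal sketch, assuming frame is an NV12/YUV420 ImgFrame received on the host:

import numpy as np

def luma_plane(frame):
    # The Y (luma) plane is the first width*height bytes of an NV12/YUV420 buffer
    w, h = frame.getWidth(), frame.getHeight()
    return np.asarray(frame.getData())[: w * h].reshape(h, w)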

How to place it

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::Camera>();

Inputs and Outputs

                          Camera node
               ┌──────────────────────────────┐
               │   ┌─────────────┐            │
               │   │    Image    │ raw        │     raw
               │   │    Sensor   │---┬--------├────────►
               │   └────▲────────┘   |        │
               │        │   ┌--------┘        │
               │      ┌─┴───▼─┐               │     isp
inputControl   │      │       │-------┬-------├────────►
──────────────►│------│  ISP  │ ┌─────▼────┐  │   video
               │      │       │ |          |--├────────►
               │      └───────┘ │   Image  │  │   still
inputConfig    │                │   Post-  │--├────────►
──────────────►│----------------|Processing│  │ preview
               │                │          │--├────────►
               │                └──────────┘  │
               └──────────────────────────────┘

Message types

  • inputConfig - ImageManipConfig

  • inputControl - CameraControl

  • raw - ImgFrame - RAW10 bayer data. Demo code for unpacking here; a rough numpy sketch also follows this list

  • isp - ImgFrame - YUV420 planar (same as YU12/IYUV/I420)

  • still - ImgFrame - NV12, suitable for bigger size frames. The image gets created when a capture event is sent to the Camera, so it’s like taking a photo

  • preview - ImgFrame - RGB (or BGR planar/interleaved if configured), mostly suited for small size previews and to feed the image into NeuralNetwork

  • video - ImgFrame - NV12, suitable for bigger size frames
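The linked demo is the authoritative unpacking code; a rough numpy sketch of the same idea (MIPI RAW10 packs 4 pixels into 5 bytes, with the 2 low bits of each pixel collected in the 5th byte; assumes no per-line padding):

import numpy as np

def unpack_raw10(data, width, height):
    # Group the buffer into 5-byte packets, each carrying 4 pixels
    packed = np.frombuffer(data, dtype=np.uint8).reshape(-1, 5).astype(np.uint16)
    msb = packed[:, :4] << 2                             # 8 high bits per pixel
    lsb = (packed[:, 4:5] >> (2 * np.arange(4))) & 0x3   # 2 low bits per pixel
    return (msb | lsb).reshape(height, width)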

ISP (image signal processor) is used for bayer transformation, demosaicing, noise reduction, and other image enhancements. It interacts with the 3A algorithms: auto-focus, auto-exposure, and auto-white-balance, which handle image sensor adjustments such as exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime. Click here for more information.
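For example, the 3A algorithms can be overridden at runtime by sending a CameraControl message to inputControl; a hedged sketch using an XLinkIn node:

import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
ctrl_in = pipeline.create(dai.node.XLinkIn)
ctrl_in.setStreamName("control")
ctrl_in.out.link(cam.inputControl)

with dai.Device(pipeline) as device:
    ctrl = dai.CameraControl()
    ctrl.setManualExposure(20000, 400)  # exposure time [us], ISO sensitivity
    device.getInputQueue("control").send(ctrl)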

Image Post-Processing converts YUV420 planar frames from the ISP into video/preview/still frames.

still (when a capture is triggered) and isp work at the max camera resolution, while video and preview are limited to max 4K (3840 x 2160) resolution, which is cropped from isp. For IMX378 (12MP), the post-processing works like this:

┌─────┐   Cropping to   ┌─────────┐  Downscaling   ┌──────────┐
│ ISP ├────────────────►│  video  ├───────────────►│ preview  │
└─────┘  max 3840x2160  └─────────┘  and cropping  └──────────┘
[Figure: isp output from the Camera (12MP IMX378), with the video crop (blue) and preview crop (yellow) overlaid]

The image above is the isp output from the Camera (12MP resolution from IMX378). If you aren't downscaling the ISP, the video output is cropped to 4K (max 3840x2160, a limitation of the video output), as represented by the blue rectangle. The yellow rectangle represents a cropped preview output when the preview size is set to a 1:1 aspect ratio (e.g. when using a 300x300 preview size for the MobileNet-SSD NN model), because the preview output is derived from the video output.
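In code, the same chain can be configured explicitly (sizes below assume the 12MP IMX378, whose full isp resolution is 4056x3040; cam is a Camera node as created above):

cam.setVideoSize(3840, 2160)  # blue rectangle: cropped from the 4056x3040 isp frame
cam.setPreviewSize(300, 300)  # yellow rectangle: 1:1 crop derived from the video output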

Usage

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
cam.setPreviewSize(300, 300)
cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
# Instead of setting the resolution, the user can specify the desired size;
# the node will pick the best-fitting sensor resolution and apply scaling
cam.setSize(1280, 720)

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::Camera>();
cam->setPreviewSize(300, 300);
cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
// Instead of setting the resolution, the user can specify the desired size;
// the node will pick the best-fitting sensor resolution and apply scaling
cam->setSize(1280, 720);
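A fuller host-side sketch that streams the video output to OpenCV (assumes a connected device and the opencv-python package):

import cv2
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.Camera)
cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)
cam.setSize(1280, 720)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("video")
cam.video.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("video", maxSize=4, blocking=False)
    while True:
        frame = q.get()                           # next NV12 frame
        cv2.imshow("video", frame.getCvFrame())   # NV12 -> BGR conversion
        if cv2.waitKey(1) == ord("q"):
            break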

Limitations

Here are known camera limitations for the RVC2:

  • ISP can process about 600 MP/s, or about 500 MP/s when the pipeline is also running NNs and the video encoder in parallel

  • 3A algorithms can process about 200-250 FPS overall (across all camera streams). This is a current limitation of our implementation; running the 3A algorithms only on every Nth frame reduces the load (see setIsp3aFps in the reference below)
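A minimal sketch of that mitigation, using cam from the snippets above:

cam.setFps(60)
cam.setIsp3aFps(15)  # run auto-exposure/AWB/AF only on every 4th frame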


Reference

class depthai.node.Camera
class Id

Node identifier. Unique for every node on a single Pipeline

getAssetManager(*args, **kwargs)

Overloaded function.

  1. getAssetManager(self: depthai.Node) -> depthai.AssetManager

  2. getAssetManager(self: depthai.Node) -> depthai.AssetManager

getBoardSocket(self: depthai.node.Camera) → depthai.CameraBoardSocket
getCalibrationAlpha(self: depthai.node.Camera) → Optional[float]
getCamera(self: depthai.node.Camera) → str
getFps(self: depthai.node.Camera) → float
getHeight(self: depthai.node.Camera) → int
getImageOrientation(self: depthai.node.Camera) → depthai.CameraImageOrientation
getInputRefs(*args, **kwargs)

Overloaded function.

  1. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

  2. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

getInputs(self: depthai.Node) → List[depthai.Node.Input]
getMeshSource(self: depthai.node.Camera) → depthai.CameraProperties.WarpMeshSource
getMeshStep(self: depthai.node.Camera) → Tuple[int, int]
getName(self: depthai.Node) → str
getOutputRefs(*args, **kwargs)

Overloaded function.

  1. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

  2. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

getOutputs(self: depthai.Node) → List[depthai.Node.Output]
getParentPipeline(*args, **kwargs)

Overloaded function.

  1. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

  2. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

getPreviewHeight(self: depthai.node.Camera) → int
getPreviewSize(self: depthai.node.Camera) → Tuple[int, int]
getPreviewWidth(self: depthai.node.Camera) → int
getSize(self: depthai.node.Camera) → Tuple[int, int]
getStillHeight(self: depthai.node.Camera) → int
getStillSize(self: depthai.node.Camera) → Tuple[int, int]
getStillWidth(self: depthai.node.Camera) → int
getVideoHeight(self: depthai.node.Camera) → int
getVideoSize(self: depthai.node.Camera) → Tuple[int, int]
getVideoWidth(self: depthai.node.Camera) → int
getWidth(self: depthai.node.Camera) → int
loadMeshData(self: depthai.node.Camera, warpMesh: buffer) → None
loadMeshFile(self: depthai.node.Camera, warpMesh: Path) → None
setBoardSocket(self: depthai.node.Camera, boardSocket: depthai.CameraBoardSocket) → None
setCalibrationAlpha(self: depthai.node.Camera, alpha: float) → None
setCamera(self: depthai.node.Camera, name: str) → None
setFps(self: depthai.node.Camera, fps: float) → None
setImageOrientation(self: depthai.node.Camera, imageOrientation: depthai.CameraImageOrientation) → None
setIsp3aFps(self: depthai.node.Camera, arg0: int) → None
setMeshSource(self: depthai.node.Camera, source: depthai.CameraProperties.WarpMeshSource) → None
setMeshStep(self: depthai.node.Camera, width: int, height: int) → None
setPreviewSize(*args, **kwargs)

Overloaded function.

  1. setPreviewSize(self: depthai.node.Camera, width: int, height: int) -> None

  2. setPreviewSize(self: depthai.node.Camera, size: Tuple[int, int]) -> None

setRawOutputPacked(self: depthai.node.Camera, packed: bool) → None
setSize(*args, **kwargs)

Overloaded function.

  1. setSize(self: depthai.node.Camera, width: int, height: int) -> None

  2. setSize(self: depthai.node.Camera, size: Tuple[int, int]) -> None

setStillSize(*args, **kwargs)

Overloaded function.

  1. setStillSize(self: depthai.node.Camera, width: int, height: int) -> None

  2. setStillSize(self: depthai.node.Camera, size: Tuple[int, int]) -> None

setVideoSize(*args, **kwargs)

Overloaded function.

  1. setVideoSize(self: depthai.node.Camera, width: int, height: int) -> None

  2. setVideoSize(self: depthai.node.Camera, size: Tuple[int, int]) -> None

class dai::node::Camera : public dai::NodeCRTP<Node, Camera, CameraProperties>

Camera node. Experimental node, for both mono and color types of sensors.

Public Functions

Camera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)

Constructs Camera node.

Camera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId, std::unique_ptr<Properties> props)
void setBoardSocket(CameraBoardSocket boardSocket)

Specify which board socket to use

Parameters
  • boardSocket: Board socket to use

CameraBoardSocket getBoardSocket() const

Retrieves which board socket to use

Return

Board socket to use

void setCamera(std::string name)

Specify which camera to use by name

Parameters
  • name: Name of the camera to use

std::string getCamera() const

Retrieves which camera to use by name

Return

Name of the camera to use

void setImageOrientation(CameraImageOrientation imageOrientation)

Set camera image orientation.

CameraImageOrientation getImageOrientation() const

Get camera image orientation.

void setSize(std::tuple<int, int> size)

Set desired resolution. Sets sensor size to best fit.

void setSize(int width, int height)

Set desired resolution. Sets sensor size to best fit.

void setPreviewSize(int width, int height)

Set preview output size.

void setPreviewSize(std::tuple<int, int> size)

Set preview output size, as a tuple <width, height>

void setVideoSize(int width, int height)

Set video output size.

void setVideoSize(std::tuple<int, int> size)

Set video output size, as a tuple <width, height>

void setStillSize(int width, int height)

Set still output size.

void setStillSize(std::tuple<int, int> size)

Set still output size, as a tuple <width, height>

void setFps(float fps)

Set rate at which camera should produce frames

Parameters
  • fps: Rate in frames per second

void setIsp3aFps(int isp3aFps)

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A runs on each frame. Reducing the 3A rate reduces the CPU usage on the CSS, but also increases the convergence time of 3A. Note that camera controls are processed at this rate. E.g. if the camera is running at 30 FPS and a camera control is sent on every frame, but the 3A FPS is set to 15, the camera control messages will be processed only at a 15 FPS rate, which will lead to queueing.

float getFps() const

Get rate at which camera should produce frames

Return

Rate in frames per second

std::tuple<int, int> getPreviewSize() const

Get preview size as tuple.

int getPreviewWidth() const

Get preview width.

int getPreviewHeight() const

Get preview height.

std::tuple<int, int> getVideoSize() const

Get video size as tuple.

int getVideoWidth() const

Get video width.

int getVideoHeight() const

Get video height.

std::tuple<int, int> getStillSize() const

Get still size as tuple.

int getStillWidth() const

Get still width.

int getStillHeight() const

Get still height.

std::tuple<int, int> getSize() const

Get sensor resolution as size.

int getWidth() const

Get sensor resolution width.

int getHeight() const

Get sensor resolution height.

void setMeshSource(Properties::WarpMeshSource source)

Set the source of the warp mesh or disable.

Properties::WarpMeshSource getMeshSource() const

Gets the source of the warp mesh.

void loadMeshFile(const dai::Path &warpMesh)

Specify a local filesystem path to the undistort mesh calibration file.

When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as ‘float’ with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by setMeshStep.

With a 1280x800 resolution and a (16,16) mesh step, the required mesh size is:

width: 1280 / 16 + 1 = 81

height: 800 / 16 + 1 = 51

void loadMeshData(span<const std::uint8_t> warpMesh)

Specify mesh calibration data for undistortion. See loadMeshFile for the expected data format.
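Based on the format described above, a Python sketch that builds and loads an identity mesh (no warping) for a 1280x800 frame with a (16,16) step; it assumes the (y,x) points are stored row by row:

import numpy as np

mesh_w, mesh_h = 1280 // 16 + 1, 800 // 16 + 1   # 81 x 51 mesh points
ys, xs = np.meshgrid(np.arange(mesh_h) * 16.0,
                     np.arange(mesh_w) * 16.0, indexing="ij")
mesh = np.stack([ys, xs], axis=-1).astype(np.float32)  # (y, x) per point
cam.setMeshStep(16, 16)
cam.loadMeshData(mesh.tobytes())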

void setMeshStep(int width, int height)

Set the distance between mesh points. Default: (32, 32)

std::tuple<int, int> getMeshStep() const

Gets the distance between mesh points.

void setCalibrationAlpha(float alpha)

Set calibration alpha parameter that determines FOV of undistorted frames.

tl::optional<float> getCalibrationAlpha() const

Get calibration alpha parameter that determines FOV of undistorted frames.

void setRawOutputPacked(bool packed)

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient: it consumes less memory on the device and less data is sent to the host. RAW10 packs 4 pixels into 5 bytes; RAW12 packs 2 pixels into 3 bytes. When packing is disabled (false), data is saved LSB-aligned, e.g. a RAW10 pixel is stored as a uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras, where the ISP can work with both packed and unpacked data, but disabled for other cameras such as ToF.

Public Members

CameraControl initialControl

Initial control options to apply to sensor

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 8, {{DatatypeEnum::ImageManipConfig, false}}}

Input for ImageManipConfig message, which can modify crop parameters at runtime

Default queue is non-blocking with size 8

Input inputControl = {*this, "inputControl", Input::Type::SReceiver, true, 8, {{DatatypeEnum::CameraControl, false}}}

Input for CameraControl message, which can modify camera parameters at runtime

Default queue is blocking with size 8

Output video = {*this, "video", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

Suitable for use with VideoEncoder node

Output preview = {*this, "preview", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.

Suitable for use with NeuralNetwork node

Output still = {*this, "still", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

The message is sent only when a CameraControl message with the captureStill command set arrives at inputControl.

Output isp = {*this, "isp", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.

Generated by the ISP engine, and the source for the ‘video’, ‘preview’ and ‘still’ outputs

Output raw = {*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor, and the source for the ‘isp’ output.

Output frameEvent = {*this, "frameEvent", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.

It’s sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure of the next frame starts. It can be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp

Public Static Functions

int getScaledSize(int input, int num, int denom)

Computes the scaled size given numerator and denominator

Public Static Attributes

static constexpr const char *NAME = "Camera"

Private Members

std::shared_ptr<RawCameraControl> rawControl
