C++ API Reference

namespace dai

Typedefs

using Clock = std::chrono::steady_clock

Enums

enum CameraExposureOffset

Describes possible exposure offsets

Values:

enumerator START
enumerator MIDDLE
enumerator END
enum CameraBoardSocket

Which Camera socket to use.

AUTO denotes that the decision will be made by device

Values:

enumerator AUTO
enumerator CAM_A
enumerator CAM_B
enumerator CAM_C
enumerator CAM_D
enumerator CAM_E
enumerator CAM_F
enumerator CAM_G
enumerator CAM_H
enumerator CAM_I
enumerator CAM_J
enumerator RGB
enumerator CENTER
enumerator LEFT
enumerator RIGHT
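
A minimal sketch of selecting a socket when building a pipeline; the dai::node::ColorCamera node and the CAM_A choice are illustrative assumptions, not part of this enum's definition:

#include "depthai/depthai.hpp"

// Pick the board socket a ColorCamera node should read from.
dai::Pipeline pipeline;
auto camRgb = pipeline.create<dai::node::ColorCamera>();
camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);  // or CameraBoardSocket::AUTO to let the device decide
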
enum CameraImageOrientation

Camera sensor image orientation / pixel readout. This exposes direct sensor settings. 90 or 270 degrees rotation is not available.

AUTO denotes that the decision will be made by device (e.g. on OAK-1/megaAI: ROTATE_180_DEG).

Values:

enumerator AUTO
enumerator NORMAL
enumerator HORIZONTAL_MIRROR
enumerator VERTICAL_FLIP
enumerator ROTATE_180_DEG
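
Continuing the sketch above (still assuming the camRgb ColorCamera node), the orientation can be forced instead of leaving it at AUTO:

// Assuming camRgb from the sketch above; rotates the sensor readout by 180 degrees.
camRgb->setImageOrientation(dai::CameraImageOrientation::ROTATE_180_DEG);
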
enum CameraModel

Which CameraModel to initialize the calibration with.

Values:

enumerator Perspective
enumerator Fisheye
enumerator Equirectangular
enumerator RadialDivision
enum CameraSensorType

Camera sensor type.

Values:

enumerator AUTO
enumerator COLOR
enumerator MONO
enumerator TOF
enumerator THERMAL
enum Colormap

Colormap applied to the frame.

Values:

enumerator NONE
enumerator TURBO
enumerator JET
enumerator STEREO_TURBO
enumerator STEREO_JET
enum ConnectionInterface

Values:

enumerator USB
enumerator ETHERNET
enumerator WIFI
enum FrameEvent

Values:

enumerator NONE
enumerator READOUT_START
enumerator READOUT_END
enum Interpolation

Interpolation type

Values:

enumerator AUTO
enumerator BILINEAR
enumerator BICUBIC
enumerator NEAREST_NEIGHBOR
enumerator BYPASS
enumerator DEFAULT
enumerator DEFAULT_DISPARITY_DEPTH
enum MedianFilter

Median filter config

Values:

enumerator MEDIAN_OFF
enumerator KERNEL_3x3
enumerator KERNEL_5x5
enumerator KERNEL_7x7
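
A hedged sketch of applying one of these kernels through a StereoDepth node's initial configuration; the pipeline and node are assumptions for illustration:

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto stereo = pipeline.create<dai::node::StereoDepth>();
// A 7x7 median filter smooths the disparity map at the cost of some fine detail.
stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_7x7);
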
enum ProcessorType

On which processor the node will be placed

Enum specifying processor

Values:

enumerator LEON_CSS
enumerator LEON_MSS
enum UsbSpeed

USB connection speed

Values:

enumerator UNKNOWN
enumerator LOW
enumerator FULL
enumerator HIGH
enumerator SUPER
enumerator SUPER_PLUS
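
A small sketch that reads the negotiated speed from a connected device and warns when it is below USB3; the already-built pipeline is an assumption:

#include "depthai/depthai.hpp"
#include <iostream>

// Assuming an already-built dai::Pipeline named pipeline.
dai::Device device(pipeline);
dai::UsbSpeed speed = device.getUsbSpeed();
if(speed != dai::UsbSpeed::SUPER && speed != dai::UsbSpeed::SUPER_PLUS) {
    std::cout << "Device is not connected over USB3; throughput may be limited" << std::endl;
}
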
enum DatatypeEnum

Values:

enumerator Buffer
enumerator ImgFrame
enumerator EncodedFrame
enumerator NNData
enumerator ImageManipConfig
enumerator CameraControl
enumerator ImgDetections
enumerator SpatialImgDetections
enumerator SystemInformation
enumerator SpatialLocationCalculatorConfig
enumerator SpatialLocationCalculatorData
enumerator EdgeDetectorConfig
enumerator AprilTagConfig
enumerator AprilTags
enumerator Tracklets
enumerator IMUData
enumerator StereoDepthConfig
enumerator FeatureTrackerConfig
enumerator ImageAlignConfig
enumerator ToFConfig
enumerator PointCloudConfig
enumerator PointCloudData
enumerator TrackedFeatures
enumerator MessageGroup
enum SpatialLocationCalculatorAlgorithm

SpatialLocationCalculatorAlgorithm configuration modes

Contains calculation method used to obtain spatial locations.

Values:

enumerator AVERAGE
enumerator MEAN
enumerator MIN
enumerator MAX
enumerator MODE
enumerator MEDIAN
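
A hedged sketch of choosing the calculation method for a SpatialLocationCalculator ROI; the ROI coordinates are arbitrary illustration values:

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto spatialCalc = pipeline.create<dai::node::SpatialLocationCalculator>();

dai::SpatialLocationCalculatorConfigData roiConfig;
roiConfig.roi = dai::Rect(dai::Point2f(0.4f, 0.4f), dai::Point2f(0.6f, 0.6f));
roiConfig.calculationAlgorithm = dai::SpatialLocationCalculatorAlgorithm::MEDIAN;
spatialCalc->initialConfig.addROI(roiConfig);
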
enum LogLevel

Values:

enumerator TRACE
enumerator DEBUG
enumerator INFO
enumerator WARN
enumerator ERR
enumerator CRITICAL
enumerator OFF
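
A short sketch of raising device-side logging; both calls are on dai::Device, and the device variable is assumed to be already constructed:

// Assuming an existing dai::Device named device.
device.setLogLevel(dai::LogLevel::DEBUG);        // what the device logs internally
device.setLogOutputLevel(dai::LogLevel::DEBUG);  // what gets forwarded and printed on the host side
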
enum IMUSensor

Available IMU sensors. More details about each sensor can be found in the datasheet:

https://www.ceva-dsp.com/wp-content/uploads/2019/10/BNO080_085-Datasheet.pdf

Values:

enumerator ACCELEROMETER_RAW

Section 2.1.1

Acceleration of the device without any postprocessing, straight from the sensor. Units are [m/s^2]

enumerator ACCELEROMETER

Section 2.1.1

Acceleration of the device including gravity. Units are [m/s^2]

enumerator LINEAR_ACCELERATION

Section 2.1.1

Acceleration of the device with gravity removed. Units are [m/s^2]

enumerator GRAVITY

Section 2.1.1

Gravity. Units are [m/s^2]

enumerator GYROSCOPE_RAW

Section 2.1.2

The angular velocity of the device without any postprocessing, straight from the sensor. Units are [rad/s]

enumerator GYROSCOPE_CALIBRATED

Section 2.1.2

The angular velocity of the device. Units are [rad/s]

enumerator GYROSCOPE_UNCALIBRATED

Section 2.1.2

Angular velocity without bias compensation. Units are [rad/s]

enumerator MAGNETOMETER_RAW

Section 2.1.3

Magnetic field measurement without any postprocessing, straight from the sensor. Units are [uTesla]

enumerator MAGNETOMETER_CALIBRATED

Section 2.1.3

The fully calibrated magnetic field measurement. Units are [uTesla]

enumerator MAGNETOMETER_UNCALIBRATED

Section 2.1.3

The magnetic field measurement without hard-iron offset applied. Units are [uTesla]

enumerator ROTATION_VECTOR

Section 2.2

The rotation vector provides an orientation output that is expressed as a quaternion referenced to magnetic north and gravity. It is produced by fusing the outputs of the accelerometer, gyroscope and magnetometer. The rotation vector is the most accurate orientation estimate available. The magnetometer provides correction in yaw to reduce drift and the gyroscope enables the most responsive performance.

enumerator GAME_ROTATION_VECTOR

Section 2.2

The game rotation vector is an orientation output that is expressed as a quaternion with no specific reference for heading, while roll and pitch are referenced against gravity. It is produced by fusing the outputs of the accelerometer and the gyroscope (i.e. no magnetometer). The game rotation vector does not use the magnetometer to correct the gyroscope's drift in yaw. This is a deliberate omission (as specified by Google) to allow gaming applications to use a smoother representation of the orientation, without the jumps that an instantaneous correction from a magnetic field update would introduce. Long term the output will likely drift in yaw due to the characteristics of gyroscopes, but this is seen as preferable for this output versus a corrected output.

enumerator GEOMAGNETIC_ROTATION_VECTOR

Section 2.2

The geomagnetic rotation vector is an orientation output that is expressed as a quaternion referenced to magnetic north and gravity. It is produced by fusing the outputs of the accelerometer and magnetometer. The gyroscope is specifically excluded in order to produce a rotation vector output using less power than is required to produce the rotation vector of section 2.2.4. The consequences of removing the gyroscope are: less responsive output, since the highly dynamic outputs of the gyroscope are not used; and more errors in the presence of varying magnetic fields.

enumerator ARVR_STABILIZED_ROTATION_VECTOR

Section 2.2

Estimates of the magnetic field and the roll/pitch of the device can create a potential correction in the rotation vector produced. For applications (typically augmented or virtual reality applications) where a sudden jump can be disturbing, the output is adjusted to prevent these jumps in a manner that takes account of the velocity of the sensor system.

enumerator ARVR_STABILIZED_GAME_ROTATION_VECTOR

Section 2.2

While the magnetometer is removed from the calculation of the game rotation vector, the accelerometer itself can create a potential correction in the rotation vector produced (i.e. the estimate of gravity changes). For applications (typically augmented or virtual reality applications) where a sudden jump can be disturbing, the output is adjusted to prevent these jumps in a manner that takes account of the velocity of the sensor system. This process is called AR/VR stabilization.
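A hedged sketch of enabling a few of these reports on the IMU node; the report rates (400 Hz and 100 Hz) and batching values are illustrative, not requirements:

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto imu = pipeline.create<dai::node::IMU>();
// Raw accelerometer + gyroscope at 400 Hz, fused rotation vector at 100 Hz.
imu->enableIMUSensor({dai::IMUSensor::ACCELEROMETER_RAW, dai::IMUSensor::GYROSCOPE_RAW}, 400);
imu->enableIMUSensor(dai::IMUSensor::ROTATION_VECTOR, 100);
imu->setBatchReportThreshold(1);  // send each report as soon as it is available
imu->setMaxBatchReports(10);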

enum TrackerType

Values:

enumerator SHORT_TERM_KCF

Kernelized Correlation Filter tracking.

enumerator SHORT_TERM_IMAGELESS

Short term tracking without using image data.

enumerator ZERO_TERM_IMAGELESS

Tracks objects without accessing image data.

enumerator ZERO_TERM_COLOR_HISTOGRAM

Tracking that also uses image data (color histogram based).

enum TrackerIdAssignmentPolicy

Values:

enumerator UNIQUE_ID

Always take a new, unique ID.

enumerator SMALLEST_ID

Take the smallest available ID.
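A hedged sketch combining TrackerType and TrackerIdAssignmentPolicy on an ObjectTracker node; the pipeline and node creation are assumptions for illustration:

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto tracker = pipeline.create<dai::node::ObjectTracker>();
tracker->setTrackerType(dai::TrackerType::ZERO_TERM_COLOR_HISTOGRAM);
tracker->setTrackerIdAssignmentPolicy(dai::TrackerIdAssignmentPolicy::SMALLEST_ID);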

enum SerializationType

Values:

enumerator LIBNOP
enumerator JSON
enumerator JSON_MSGPACK

Functions

bool initialize()
bool initialize(std::string additionalInfo, bool installSignalHandler = true, void *javavm = nullptr)
bool initialize(const char *additionalInfo, bool installSignalHandler = true, void *javavm = nullptr)
bool initialize(void *javavm)
DEPTHAI_SERIALIZE_EXT(CameraSensorConfig, width, height, minFps, maxFps, fov, type)
DEPTHAI_SERIALIZE_EXT(ChipTemperature, css, mss, upa, dss, average)
DEPTHAI_SERIALIZE_EXT(CpuUsage, average, msTime)
DEPTHAI_SERIALIZE_EXT(DetectionParserOptions, nnFamily, confidenceThreshold, classes, coordinates, anchors, anchorMasks, iouThreshold)
DEPTHAI_SERIALIZE_OPTIONAL_EXT(EepromData, version, boardCustom, boardName, boardRev, boardConf, hardwareConf, productName, deviceName, batchName, batchTime, boardOptions, cameraData, stereoRectificationData, imuExtrinsics, housingExtrinsics, miscellaneousData, stereoUseSpecTranslation, stereoEnableDistortionCorrection, verticalCameraSocket)
DEPTHAI_SERIALIZE_EXT(MemoryInfo, remaining, used, total)
DEPTHAI_SERIALIZE_EXT(Point2f, x, y)
DEPTHAI_SERIALIZE_EXT(Point3f, x, y, z)
DEPTHAI_SERIALIZE_EXT(Rect, x, y, width, height)
DEPTHAI_SERIALIZE_EXT(RotatedRect, center, size, angle)
DEPTHAI_SERIALIZE_EXT(Size2f, width, height)
DEPTHAI_SERIALIZE_EXT(StereoRectification, rectifiedRotationLeft, rectifiedRotationRight, leftCameraSocket, rightCameraSocket)
DEPTHAI_SERIALIZE_EXT(TensorInfo, order, dataType, numDimensions, dims, strides, name, offset)
DEPTHAI_SERIALIZE_EXT(Timestamp, sec, nsec)
bool isDatatypeSubclassOf(DatatypeEnum parent, DatatypeEnum children)
DEPTHAI_SERIALIZE_EXT(AprilTag, id, hamming, decisionMargin, topLeft, topRight, bottomRight, bottomLeft)
DEPTHAI_SERIALIZE_EXT(EdgeDetectorConfigData, sobelFilterHorizontalKernel, sobelFilterVerticalKernel)
DEPTHAI_SERIALIZE_EXT(IMUReport, sequence, accuracy, timestamp, tsDevice)
DEPTHAI_SERIALIZE_EXT(IMUReportAccelerometer, x, y, z, sequence, accuracy, timestamp, tsDevice)
DEPTHAI_SERIALIZE_EXT(IMUReportGyroscope, x, y, z, sequence, accuracy, timestamp, tsDevice)
DEPTHAI_SERIALIZE_EXT(IMUReportMagneticField, x, y, z, sequence, accuracy, timestamp, tsDevice)
DEPTHAI_SERIALIZE_EXT(IMUReportRotationVectorWAcc, i, j, k, real, rotationVectorAccuracy, sequence, accuracy, timestamp, tsDevice)
DEPTHAI_SERIALIZE_EXT(IMUPacket, acceleroMeter, gyroscope, magneticField, rotationVector)
DEPTHAI_SERIALIZE_EXT(ImgDetection, label, confidence, xmin, ymin, xmax, ymax)
DEPTHAI_SERIALIZE_EXT(SpatialImgDetection, label, confidence, xmin, ymin, xmax, ymax, spatialCoordinates, boundingBoxMapping)
DEPTHAI_SERIALIZE_EXT(SpatialLocationCalculatorConfigThresholds, lowerThreshold, upperThreshold)
DEPTHAI_SERIALIZE_EXT(SpatialLocationCalculatorConfigData, roi, depthThresholds, calculationAlgorithm, stepSize)
DEPTHAI_SERIALIZE_EXT(SpatialLocations, config, depthAverage, depthMode, depthMedian, depthMin, depthMax, depthAveragePixelCount, spatialCoordinates)
DEPTHAI_SERIALIZE_EXT(TrackedFeature, position, id, age, harrisScore, trackingError)
DEPTHAI_SERIALIZE_EXT(BoardConfig::USB, vid, pid, flashBootedVid, flashBootedPid, maxSpeed, productName, manufacturer)
DEPTHAI_SERIALIZE_EXT(BoardConfig::Network, mtu, xlinkTcpNoDelay)
DEPTHAI_SERIALIZE_EXT(BoardConfig::GPIO, mode, direction, level, pull, drive, schmitt, slewFast)
DEPTHAI_SERIALIZE_EXT(BoardConfig::UART, tmp)
DEPTHAI_SERIALIZE_EXT(BoardConfig::Camera, name, sensorType, orientation)
DEPTHAI_SERIALIZE_EXT(BoardConfig::IMU, bus, interrupt, wake, csGpio, boot, reset)
DEPTHAI_SERIALIZE_EXT(BoardConfig::UVC, cameraName, width, height, frameType, enable)
DEPTHAI_SERIALIZE_EXT(BoardConfig, usb, network, sysctl, watchdogTimeoutMs, watchdogInitialDelayMs, gpio, uart, pcieInternalClock, usb3PhyInternalClock, emmc, logPath, logSizeMax, logVerbosity, logDevicePrints, nonExclusiveMode, camera, imu, uvc)
DEPTHAI_SERIALIZE_EXT(CrashDump, crashReports, depthaiCommitHash, deviceId)
DEPTHAI_SERIALIZE_EXT(LogMessage, nodeIdName, level, time, colorRangeStart, colorRangeEnd, payload)
DEPTHAI_SERIALIZE_EXT(NodeConnectionSchema, node1Id, node1OutputGroup, node1Output, node2Id, node2InputGroup, node2Input)
DEPTHAI_SERIALIZE_EXT(NodeIoInfo, group, name, type, blocking, queueSize, waitForMessage, id)
DEPTHAI_SERIALIZE_EXT(NodeObjInfo, id, name, properties, ioInfo)
DEPTHAI_SERIALIZE_EXT(PipelineSchema, connections, globalProperties, nodes)
DEPTHAI_SERIALIZE_EXT(AprilTagProperties, initialConfig, inputConfigSync)
DEPTHAI_SERIALIZE_EXT(CameraProperties, initialControl, boardSocket, cameraName, imageOrientation, colorOrder, interleaved, fp16, previewHeight, previewWidth, videoWidth, videoHeight, stillWidth, stillHeight, resolutionWidth, resolutionHeight, fps, isp3aFps, sensorCropX, sensorCropY, previewKeepAspectRatio, ispScale, sensorType, numFramesPoolRaw, numFramesPoolIsp, numFramesPoolVideo, numFramesPoolPreview, numFramesPoolStill, warpMeshSource, warpMeshUri, warpMeshWidth, warpMeshHeight, calibAlpha, warpMeshStepWidth, warpMeshStepHeight, rawPacked)
DEPTHAI_SERIALIZE_EXT(CastProperties, numFramesPool, outputType, scale, offset)
DEPTHAI_SERIALIZE_EXT(ColorCameraProperties, initialControl, boardSocket, cameraName, imageOrientation, colorOrder, interleaved, fp16, previewHeight, previewWidth, videoWidth, videoHeight, stillWidth, stillHeight, resolution, fps, isp3aFps, sensorCropX, sensorCropY, previewKeepAspectRatio, ispScale, numFramesPoolRaw, numFramesPoolIsp, numFramesPoolVideo, numFramesPoolPreview, numFramesPoolStill, rawPacked)
DEPTHAI_SERIALIZE_EXT(DetectionNetworkProperties, blobSize, blobUri, numFrames, numThreads, numNCEPerThread, parser)
DEPTHAI_SERIALIZE_EXT(DetectionParserProperties, numFramesPool, networkInputs, parser)
DEPTHAI_SERIALIZE_EXT(EdgeDetectorProperties, initialConfig, outputFrameSize, numFramesPool)
DEPTHAI_SERIALIZE_EXT(FeatureTrackerProperties, initialConfig, numShaves, numMemorySlices)
DEPTHAI_SERIALIZE_EXT(GlobalProperties, leonCssFrequencyHz, leonMssFrequencyHz, pipelineName, pipelineVersion, cameraTuningBlobSize, cameraTuningBlobUri, calibData, xlinkChunkSize, sippBufferSize, sippDmaBufferSize)
DEPTHAI_SERIALIZE_EXT(IMUSensorConfig, sensitivityEnabled, sensitivityRelative, changeSensitivity, reportRate, sensorId)
DEPTHAI_SERIALIZE_EXT(IMUProperties, imuSensors, batchReportThreshold, maxBatchReports, enableFirmwareUpdate)
DEPTHAI_SERIALIZE_EXT(ImageAlignProperties, initialConfig, numFramesPool, alignWidth, alignHeight, warpHwIds, interpolation, outKeepAspectRatio, numShaves)
DEPTHAI_SERIALIZE_EXT(ImageManipProperties, initialConfig, outputFrameSize, numFramesPool, meshWidth, meshHeight, meshUri)
DEPTHAI_SERIALIZE_EXT(MessageDemuxProperties, dummy)
DEPTHAI_SERIALIZE_EXT(MonoCameraProperties, initialControl, boardSocket, cameraName, imageOrientation, resolution, fps, isp3aFps, numFramesPool, numFramesPoolRaw, rawPacked)
DEPTHAI_SERIALIZE_EXT(NeuralNetworkProperties, blobSize, blobUri, numFrames, numThreads, numNCEPerThread)
DEPTHAI_SERIALIZE_EXT(ObjectTrackerProperties, trackerThreshold, maxObjectsToTrack, detectionLabelsToTrack, trackerType, trackerIdAssignmentPolicy, trackingPerClass)
DEPTHAI_SERIALIZE_EXT(PointCloudProperties, initialConfig, numFramesPool)
DEPTHAI_SERIALIZE_EXT(SPIInProperties, streamName, busId, maxDataSize, numFrames)
DEPTHAI_SERIALIZE_EXT(SPIOutProperties, streamName, busId)
DEPTHAI_SERIALIZE_EXT(ScriptProperties, scriptUri, scriptName, processor)
DEPTHAI_SERIALIZE_EXT(SpatialDetectionNetworkProperties, blobSize, blobUri, numFrames, numThreads, numNCEPerThread, parser, detectedBBScaleFactor, depthThresholds, calculationAlgorithm, stepSize)
DEPTHAI_SERIALIZE_EXT(SpatialLocationCalculatorProperties, roiConfig)
DEPTHAI_SERIALIZE_EXT(StereoDepthProperties, initialConfig, depthAlignCamera, enableRectification, rectifyEdgeFillColor, width, height, outWidth, outHeight, outKeepAspectRatio, mesh, enableRuntimeStereoModeSwitch, numFramesPool, numPostProcessingShaves, numPostProcessingMemorySlices, focalLengthFromCalibration, useHomographyRectification, baseline, focalLength, disparityToDepthUseSpecTranslation, rectificationUseSpecTranslation, depthAlignmentUseSpecTranslation, alphaScaling)
DEPTHAI_SERIALIZE_EXT(SyncProperties, syncThresholdNs, syncAttempts)
DEPTHAI_SERIALIZE_EXT(SystemLoggerProperties, rateHz)
DEPTHAI_SERIALIZE_EXT(ToFProperties, initialConfig, numFramesPool, numShaves)
DEPTHAI_SERIALIZE_EXT(UVCProperties, gpioInit, gpioStreamOn, gpioStreamOff)
DEPTHAI_SERIALIZE_EXT(VideoEncoderProperties, bitrate, keyframeFrequency, maxBitrate, numBFrames, numFramesPool, outputFrameSize, profile, quality, lossless, rateCtrlMode, frameRate)
DEPTHAI_SERIALIZE_EXT(WarpProperties, outputWidth, outputHeight, outputFrameSize, numFramesPool, meshWidth, meshHeight, meshUri, warpHwIds, interpolation)
DEPTHAI_SERIALIZE_EXT(XLinkInProperties, streamName, maxDataSize, numFrames)
DEPTHAI_SERIALIZE_EXT(XLinkOutProperties, maxFpsLimit, streamName, metadataOnly)

Variables

static constexpr uint32_t BOARD_CONFIG_MAGIC1 = 0x78010000U
static constexpr uint32_t BOARD_CONFIG_MAGIC2 = 0x21ea17e6U
constexpr const char *LOG_DEFAULT_PATTERN = "[%E.%e] [%n] [%^%l%$] %v"
static constexpr auto DEFAULT_SERIALIZATION_TYPE = SerializationType::LIBNOP
class ADatatype
#include <ADatatype.hpp>

Abstract message.

Subclassed by dai::Buffer

struct AprilTag
#include <RawAprilTags.hpp>

AprilTag structure.

Public Members

int id = 0

The decoded ID of the tag

int hamming = 0

How many error bits were corrected? Note: accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2.

float decisionMargin = 0.f

A measure of the quality of the binary decoding process; the average difference between the intensity of a data bit versus the decision threshold. Higher numbers roughly indicate better decodes. This is a reasonable measure of detection accuracy only for very small tags; it is not effective for larger tags (where we could have sampled anywhere within a bit cell and still gotten a good detection).

Point2f topLeft

The detected top left coordinates.

Point2f topRight

The detected top right coordinates.

Point2f bottomRight

The detected bottom right coordinates.

Point2f bottomLeft

The detected bottom left coordinates.
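A hedged sketch of consuming these fields on the host; it assumes a running dai::Device and an XLinkOut stream named "aprilTags" that was set up in the pipeline (not shown here):

#include <iostream>

// Assuming device is a running dai::Device with an XLinkOut stream named "aprilTags".
auto q = device.getOutputQueue("aprilTags", 4, false);
auto msg = q->get<dai::AprilTags>();
for(const auto& tag : msg->aprilTags) {
    std::cout << "tag " << tag.id
              << " margin " << tag.decisionMargin
              << " topLeft (" << tag.topLeft.x << ", " << tag.topLeft.y << ")" << std::endl;
}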

class AprilTagConfig : public dai::Buffer
#include <AprilTagConfig.hpp>

AprilTagConfig message.

Public Functions

AprilTagConfig()

Construct AprilTagConfig message.

AprilTagConfig &setFamily(Family family)

Set the AprilTag family.

Parameters
  • family: AprilTag family to use

AprilTagConfig &set(dai::RawAprilTagConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawAprilTagConfig get() const

Retrieve configuration data for AprilTag.

Return

config for AprilTag detection
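A small sketch of building a config message and sending it at runtime; the stream name "atConfig" is a hypothetical XLinkIn stream assumed to be linked to the AprilTag node's inputConfig:

// Assuming device is a running dai::Device with an XLinkIn stream named "atConfig".
dai::AprilTagConfig cfg;
cfg.setFamily(dai::AprilTagConfig::Family::TAG_36H11);
device.getInputQueue("atConfig")->send(cfg);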

struct AprilTagProperties : public dai::PropertiesSerializable<Properties, AprilTagProperties>
#include <AprilTagProperties.hpp>

Specify properties for AprilTag

Public Members

bool inputConfigSync = false

Whether to wait for config at ‘inputConfig’ IO.

class AprilTags : public dai::Buffer
#include <AprilTags.hpp>

AprilTags message.

Public Functions

AprilTags()

Construct AprilTags message.

AprilTags &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

AprilTags &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp of the device's monotonic clock, not synchronized to host time

AprilTags &setSequenceNum(int64_t sequenceNum)

Sets image sequence number

struct Asset
#include <AssetManager.hpp>

Asset is identified with string key and can store arbitrary binary data.

class AssetManager
#include <AssetManager.hpp>

AssetManager can store assets and serialize them.

Public Functions

void addExisting(std::vector<std::shared_ptr<Asset>> assets)

Adds all assets in an array to the AssetManager

Parameters
  • assets: Vector of assets to add

std::shared_ptr<dai::Asset> set(Asset asset)

Adds or overwrites an asset object to AssetManager.

Return

Shared pointer to asset

Parameters
  • asset: Asset to store

std::shared_ptr<dai::Asset> set(const std::string &key, Asset asset)

Adds or overwrites an asset object to AssetManager with a specified key. Key value will be assigned to an Asset as well

Return

Shared pointer to asset

Parameters
  • key: Key under which the asset should be stored

  • asset: Asset to store

std::shared_ptr<dai::Asset> set(const std::string &key, const dai::Path &path, int alignment = 64)

Loads file into asset manager under specified key.

Parameters
  • key: Key under which the asset should be stored

  • path: Path to file which to load as asset

  • alignment: [Optional] alignment of asset data in asset storage. Default is 64B

std::shared_ptr<dai::Asset> set(const std::string &key, const std::vector<std::uint8_t> &data, int alignment = 64)

Stores the given data in the asset manager under the specified key.

Return

Shared pointer to asset

Parameters
  • key: Key under which the asset should be stored

  • data: Asset data

  • alignment: [Optional] alignment of asset data in asset storage. Default is 64B

std::shared_ptr<const Asset> get(const std::string &key) const

Return

Asset assigned to the specified key or a nullptr otherwise

std::shared_ptr<Asset> get(const std::string &key)

Return

Asset assigned to the specified key or a nullptr otherwise

std::vector<std::shared_ptr<const Asset>> getAll() const

Return

All assets stored in the AssetManager

std::vector<std::shared_ptr<Asset>> getAll()

Return

All assets stored in the AssetManager

std::size_t size() const

Return

Number of assets stored in the AssetManager

void remove(const std::string &key)

Removes asset with key

Parameters
  • key: Key of asset to remove

void serialize(AssetsMutable &assets, std::vector<std::uint8_t> &assetStorage, std::string prefix = "") const

Serializes all stored assets into the given mutable assets structure and asset storage, optionally prefixing the keys.
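A hedged sketch of the typical flow through a pipeline's asset manager; the key "myBlob" and the file path are illustrative placeholders:

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
// Store a file as an asset and read it back by key (get returns nullptr if the key is absent).
auto stored = pipeline.getAssetManager().set("myBlob", dai::Path("model.blob"));
auto fetched = pipeline.getAssetManager().get("myBlob");
std::size_t count = pipeline.getAssetManager().size();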

class Assets

Subclassed by dai::AssetsMutable

class AssetsMutable : public dai::Assets
struct AssetView
struct BoardConfig

Public Members

std::vector<std::string> sysctl

Optional list of FreeBSD sysctl parameters to be set (system, network, etc.). For example: “net.inet.tcp.delayed_ack=0” (this one is also set by default)

tl::optional<uint32_t> watchdogTimeoutMs

Watchdog config.

std::unordered_map<std::int8_t, UART> uart

UART instance map.

tl::optional<bool> pcieInternalClock

PCIe config.

tl::optional<bool> usb3PhyInternalClock

USB3 phy config.

tl::optional<bool> mipi4LaneRgb

MIPI 4Lane RGB config.

tl::optional<bool> emmc

eMMC config

tl::optional<std::string> logPath

log path

tl::optional<size_t> logSizeMax

Max log size.

tl::optional<LogLevel> logVerbosity

log verbosity

tl::optional<bool> logDevicePrints

log device prints

struct Camera
#include <BoardConfig.hpp>

Camera description.

struct GPIO
#include <BoardConfig.hpp>

GPIO config.

Public Types

enum Drive

Drive strength in mA (2, 4, 8 and 12mA)

Values:

enumerator MA_2
enumerator MA_4
enumerator MA_8
enumerator MA_12
struct IMU
struct Network
#include <BoardConfig.hpp>

Network configuration.

Public Members

uint16_t mtu = 0

Network MTU, 0 is auto (usually 1500 for Ethernet) or forwarded from bootloader (not yet implemented there). Note: not advised to increase past 1500 for now

bool xlinkTcpNoDelay = true

Sets the TCP_NODELAY option for XLink TCP sockets (disable Nagle’s algorithm), reducing latency at the expense of a small hit for max throughput. Default is true

struct UART
#include <BoardConfig.hpp>

UART instance config.

struct USB
#include <BoardConfig.hpp>

USB related config.

struct UVC
#include <BoardConfig.hpp>

UVC configuration for USB descriptor.
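A hedged sketch of applying a few of these fields through Pipeline::setBoardConfig; the values shown (10 s watchdog, MTU 1500) are illustrative:

#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
dai::BoardConfig boardConfig;
boardConfig.watchdogTimeoutMs = 10000;                       // 10 s watchdog
boardConfig.network.mtu = 1500;                              // keep the usual Ethernet MTU
boardConfig.sysctl.push_back("net.inet.tcp.delayed_ack=0");  // FreeBSD sysctl, as in the example above
pipeline.setBoardConfig(boardConfig);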

class Buffer : public dai::ADatatype
#include <Buffer.hpp>

Base message - buffer of binary data.

Subclassed by dai::AprilTagConfig, dai::AprilTags, dai::CameraControl, dai::EdgeDetectorConfig, dai::EncodedFrame, dai::FeatureTrackerConfig, dai::ImageAlignConfig, dai::ImageManipConfig, dai::ImgDetections, dai::ImgFrame, dai::IMUData, dai::MessageGroup, dai::NNData, dai::PointCloudConfig, dai::PointCloudData, dai::SpatialImgDetections, dai::SpatialLocationCalculatorConfig, dai::SpatialLocationCalculatorData, dai::StereoDepthConfig, dai::SystemInformation, dai::ToFConfig, dai::TrackedFeatures, dai::Tracklets

Public Functions

Buffer()

Creates Buffer message.

std::vector<std::uint8_t> &getData() const

Get non-owning reference to internal buffer.

Return

Reference to internal buffer

void setData(const std::vector<std::uint8_t> &data)

Parameters
  • data: Copies data to internal buffer

void setData(std::vector<std::uint8_t> &&data)

Parameters
  • data: Moves data to internal buffer

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestamp() const

Retrieves timestamp related to dai::Clock::now()

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestampDevice() const

Retrieves timestamp directly captured from device’s monotonic clock, not synchronized to host time. Used mostly for debugging

int64_t getSequenceNum() const

Retrieves sequence number

Buffer &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets timestamp related to dai::Clock::now()

Buffer &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets timestamp of the device's monotonic clock, not synchronized to host time

Buffer &setSequenceNum(int64_t sequenceNum)

Sets sequence number
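A minimal sketch of filling a Buffer and pushing it to the device; the stream name "in" assumes an XLinkIn node that is not shown here:

// Assuming device is a running dai::Device with an XLinkIn stream named "in".
auto buf = std::make_shared<dai::Buffer>();
std::vector<std::uint8_t> payload = {0x01, 0x02, 0x03, 0x04};
buf->setData(payload);                    // copies data into the internal buffer
buf->setTimestamp(dai::Clock::now());     // host timestamp
buf->setSequenceNum(0);
device.getInputQueue("in")->send(buf);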

class CalibrationHandler
#include <CalibrationHandler.hpp>

CalibrationHandler is an interface to read/load/write structured calibration and device data. The following fields are protected and aren’t allowed to be overridden by default:

  • boardName

  • boardRev

  • boardConf

  • hardwareConf

  • batchName

  • batchTime

  • boardOptions

  • productName

Public Functions

CalibrationHandler(dai::Path eepromDataPath)

Construct a new Calibration Handler object using the eeprom json file created from calibration procedure.

Parameters
  • eepromDataPath: takes the full path to the json file containing the calibration and device info.

CalibrationHandler(dai::Path calibrationDataPath, dai::Path boardConfigPath)

Construct a new Calibration Handler object using the board config json file and .calib binary files created using gen1 calibration.

Parameters
  • calibrationDataPath: Full Path to the .calib binary file from the gen1 calibration. (Supports only Version 5)

  • boardConfigPath: Full Path to the board config json file containing device information.

CalibrationHandler(EepromData eepromData)

Construct a new Calibration Handler object from EepromData object.

Parameters
  • eepromData: EepromData data structure containing the calibration data.

dai::EepromData getEepromData() const

Get the Eeprom Data object

Return

EepromData object which contains the raw calibration data

std::vector<std::vector<float>> getCameraIntrinsics(CameraBoardSocket cameraId, int resizeWidth = -1, int resizeHeight = -1, Point2f topLeftPixelId = Point2f(), Point2f bottomRightPixelId = Point2f(), bool keepAspectRatio = true) const

Get the Camera Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

Represents the 3x3 intrinsics matrix of the respective camera at the requested size and crop dimensions.

Parameters
  • cameraId: Uses the cameraId to identify which camera intrinsics to return

  • resizeWidth: resized width of the image for which intrinsics is requested. resizeWidth = -1 represents width is same as default intrinsics

  • resizeHeight: resized height of the image for which intrinsics is requested. resizeHeight = -1 represents height is same as default intrinsics

  • topLeftPixelId: (x, y) point represents the top left corner coordinates of the cropped image which is used to modify the intrinsics for the respective cropped image

  • bottomRightPixelId: (x, y) point represents the bottom right corner coordinates of the cropped image which is used to modify the intrinsics for the respective cropped image

  • keepAspectRatio: Enabling this will scale on width or height, depending on which provides the maximum resolution, and crop the remaining part of the other dimension

std::vector<std::vector<float>> getCameraIntrinsics(CameraBoardSocket cameraId, Size2f destShape, Point2f topLeftPixelId = Point2f(), Point2f bottomRightPixelId = Point2f(), bool keepAspectRatio = true) const

Get the Camera Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

Represents the 3x3 intrinsics matrix of the respective camera at the requested size and crop dimensions.

Parameters
  • cameraId: Uses the cameraId to identify which camera intrinsics to return

  • destShape: resized width and height of the image for which intrinsics is requested.

  • topLeftPixelId: (x, y) point represents the top left corner coordinates of the cropped image which is used to modify the intrinsics for the respective cropped image

  • bottomRightPixelId: (x, y) point represents the bottom right corner coordinates of the cropped image which is used to modify the intrinsics for the respective cropped image

  • keepAspectRatio: Enabling this will scale on width or height, depending on which provides the maximum resolution, and crop the remaining part of the other dimension

std::vector<std::vector<float>> getCameraIntrinsics(CameraBoardSocket cameraId, std::tuple<int, int> destShape, Point2f topLeftPixelId = Point2f(), Point2f bottomRightPixelId = Point2f(), bool keepAspectRatio = true) const

Get the Camera Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

Represents the 3x3 intrinsics matrix of the respective camera at the requested size and crop dimensions.

Parameters
  • cameraId: Uses the cameraId to identify which camera intrinsics to return

  • destShape: resized width and height of the image for which intrinsics is requested.

  • topLeftPixelId: (x, y) point represents the top left corner coordinates of the cropped image which is used to modify the intrinsics for the respective cropped image

  • bottomRightPixelId: (x, y) point represents the bottom right corner coordinates of the cropped image which is used to modify the intrinsics for the respective cropped image

  • keepAspectRatio: Enabling this will scale on width or height, depending on which provides the maximum resolution, and crop the remaining part of the other dimension
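A hedged sketch of reading the intrinsics from a connected device with getCameraIntrinsics, scaled to a 1280x720 output; the socket and size are illustrative choices:

#include "depthai/depthai.hpp"

dai::Device device;  // connects to the first available device
dai::CalibrationHandler calib = device.readCalibration();
auto M = calib.getCameraIntrinsics(dai::CameraBoardSocket::CAM_A, 1280, 720);
float fx = M[0][0], fy = M[1][1];  // focal lengths
float cx = M[0][2], cy = M[1][2];  // principal point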

std::tuple<std::vector<std::vector<float>>, int, int> getDefaultIntrinsics(CameraBoardSocket cameraId) const

Get the Default Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

Represents the 3x3 intrinsics matrix of the respective camera along with width and height at which it was calibrated.

Parameters
  • cameraId: Uses the cameraId to identify which camera intrinsics to return

std::vector<float> getDistortionCoefficients(CameraBoardSocket cameraId) const

Get the Distortion Coefficients object

Return

the distortion coefficients of the requested camera in this order: [k1,k2,p1,p2,k3,k4,k5,k6,s1,s2,s3,s4,tx,ty] for CameraModel::Perspective or [k1, k2, k3, k4] for CameraModel::Fisheye see https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html for Perspective model (Rational Polynomial Model) see https://docs.opencv.org/4.5.4/db/d58/group__calib3d__fisheye.html for Fisheye model

Parameters
  • cameraId: Uses the cameraId to identify which distortion Coefficients to return.

float getFov(CameraBoardSocket cameraId, bool useSpec = true) const

Get the Fov of the camera

Return

field of view of the camera with given cameraId.

Parameters
  • cameraId: of the camera of which we are fetching fov.

  • useSpec: Disabling this bool will calculate the fov based on intrinsics (focal length, image width), instead of getting it from the camera specs

uint8_t getLensPosition(CameraBoardSocket cameraId) const

Get the lens position of the given camera

Return

lens position of the camera with given cameraId at which it was calibrated.

Parameters
  • cameraId: of the camera whose lens position is requested.

CameraModel getDistortionModel(CameraBoardSocket cameraId) const

Get the distortion model of the given camera

Return

distortion model of the camera with the given cameraId.

Parameters
  • cameraId: of the camera whose distortion model is requested.

std::vector<std::vector<float>> getCameraExtrinsics(CameraBoardSocket srcCamera, CameraBoardSocket dstCamera, bool useSpecTranslation = false) const

Get the Camera Extrinsics object between two cameras from the calibration data. If there is a linked connection between the two cameras, the relative rotation and translation (in centimeters) is returned by this function.

Matrix representation of transformation matrix

\[\begin{split} \text{Transformation Matrix} = \left [ \begin{matrix} r_{00} & r_{01} & r_{02} & T_x \\ r_{10} & r_{11} & r_{12} & T_y \\ r_{20} & r_{21} & r_{22} & T_z \\ 0 & 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

a transformationMatrix which is 4x4 in homogeneous coordinate system

Parameters
  • srcCamera: Camera Id of the camera which will be considered as origin.

  • dstCamera: Camera Id of the destination camera to which we are fetching the rotation and translation from the SrcCamera

  • useSpecTranslation: Enabling this bool uses the translation information from the board design data

std::vector<float> getCameraTranslationVector(CameraBoardSocket srcCamera, CameraBoardSocket dstCamera, bool useSpecTranslation = true) const

Get the Camera translation vector between two cameras from the calibration data.

Return

a translation vector like [x, y, z] in centimeters

Parameters
  • srcCamera: Camera Id of the camera which will be considered as origin.

  • dstCamera: Camera Id of the destination camera to which we are fetching the translation vector from the SrcCamera

  • useSpecTranslation: Disabling this bool uses the translation information from the calibration data (not the board design data)

float getBaselineDistance(CameraBoardSocket cam1 = CameraBoardSocket::CAM_C, CameraBoardSocket cam2 = CameraBoardSocket::CAM_B, bool useSpecTranslation = true) const

Get the baseline distance between two specified cameras. By default it will get the baseline between CameraBoardSocket::CAM_C and CameraBoardSocket::CAM_B.

Return

baseline distance in centimeters

Parameters
  • cam1: First camera

  • cam2: Second camera

  • useSpecTranslation: Enabling this bool uses the translation information from the board design data (not the calibration data)
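Continuing the calibration sketch above (a calib handler obtained from Device::readCalibration), the stereo baseline can be queried directly; the sockets shown are the defaults:

// Assuming calib is a dai::CalibrationHandler, e.g. from device.readCalibration().
float baselineCm = calib.getBaselineDistance(dai::CameraBoardSocket::CAM_C, dai::CameraBoardSocket::CAM_B);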

std::vector<std::vector<float>> getCameraToImuExtrinsics(CameraBoardSocket cameraId, bool useSpecTranslation = false) const

Get the Camera To IMU Extrinsics object. From the data loaded, if there is a linked connection between the IMU and the given camera, the relative rotation and translation from the camera to the IMU is returned.

Matrix representation of transformation matrix

\[\begin{split} \text{Transformation Matrix} = \left [ \begin{matrix} r_{00} & r_{01} & r_{02} & T_x \\ r_{10} & r_{11} & r_{12} & T_y \\ r_{20} & r_{21} & r_{22} & T_z \\ 0 & 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

Returns a transformationMatrix which is 4x4 in homogeneous coordinate system

Parameters
  • cameraId: Camera Id of the camera which will be considered as origin. from which Transformation matrix to the IMU will be found

  • useSpecTranslation: Enabling this bool uses the translation information from the board design data

std::vector<std::vector<float>> getImuToCameraExtrinsics(CameraBoardSocket cameraId, bool useSpecTranslation = false) const

Get the IMU To Camera Extrinsics object. From the data loaded, if there is a linked connection between the IMU and the given camera, the relative rotation and translation from the IMU to the camera is returned.

Matrix representation of transformation matrix

\[\begin{split} \text{Transformation Matrix} = \left [ \begin{matrix} r_{00} & r_{01} & r_{02} & T_x \\ r_{10} & r_{11} & r_{12} & T_y \\ r_{20} & r_{21} & r_{22} & T_z \\ 0 & 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Return

Returns a transformationMatrix which is 4x4 in homogeneous coordinate system

Parameters
  • cameraId: Camera Id of the camera which will be considered as destination. To which Transformation matrix from the IMU will be found.

  • useSpecTranslation: Enabling this bool uses the translation information from the board design data

std::vector<std::vector<float>> getStereoRightRectificationRotation() const

Get the Stereo Right Rectification Rotation object

Return

returns a 3x3 rectification rotation matrix

std::vector<std::vector<float>> getStereoLeftRectificationRotation() const

Get the Stereo Left Rectification Rotation object

Return

returns a 3x3 rectification rotation matrix

dai::CameraBoardSocket getStereoLeftCameraId() const

Get the camera id of the camera which is used as left camera of the stereo setup

Return

cameraID of the camera used as left camera

dai::CameraBoardSocket getStereoRightCameraId() const

Get the camera id of the camera which is used as right camera of the stereo setup

Return

cameraID of the camera used as right camera

bool eepromToJsonFile(dai::Path destPath) const

Write raw calibration/board data to json file.

Return

True on success, false otherwise

Parameters
  • destPath: Full path to the json file in which raw calibration data will be stored

nlohmann::json eepromToJson() const

Get JSON representation of calibration data

Return

JSON structure

void setBoardInfo(std::string boardName, std::string boardRev)

Set the Board Info object

Parameters
  • boardName: Sets your board name.

  • boardRev: set your board revision id.

void setBoardInfo(std::string productName, std::string boardName, std::string boardRev, std::string boardConf, std::string hardwareConf, std::string batchName, uint64_t batchTime, uint32_t boardOptions, std::string boardCustom = "")

Set the Board Info object. Creates version 7 EEPROM data

Parameters
  • productName: Sets product name (alias).

  • boardName: Sets board name.

  • boardRev: Sets board revision id.

  • boardConf: Sets board configuration id.

  • hardwareConf: Sets hardware configuration id.

  • batchName: Sets batch name.

  • batchTime: Sets batch time (unix timestamp).

  • boardOptions: Sets board options.

  • boardCustom: Sets a custom board (Default empty string).

void setBoardInfo(std::string deviceName, std::string productName, std::string boardName, std::string boardRev, std::string boardConf, std::string hardwareConf, std::string batchName, uint64_t batchTime, uint32_t boardOptions, std::string boardCustom = "")

Set the Board Info object. Creates version 7 EEPROM data

Parameters
  • deviceName: Sets device name.

  • productName: Sets product name (alias).

  • boardName: Sets board name.

  • boardRev: Sets board revision id.

  • boardConf: Sets board configuration id.

  • hardwareConf: Sets hardware configuration id.

  • batchName: Sets batch name. Not supported anymore

  • batchTime: Sets batch time (unix timestamp).

  • boardOptions: Sets board options.

  • boardCustom: Sets a custom board (Default empty string).

void setDeviceName(std::string deviceName)

Set the deviceName which is returned by getDeviceName of Device

Parameters
  • deviceName: Sets device name.

void setProductName(std::string productName)

Set the productName which acts as an alias for users to identify the device

Parameters
  • productName: Sets product name (alias).

void setCameraIntrinsics(CameraBoardSocket cameraId, std::vector<std::vector<float>> intrinsics, Size2f frameSize)

Set the Camera Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Parameters
  • cameraId: CameraId of the camera for which Camera intrinsics are being loaded

  • intrinsics: 3x3 intrinsics matrix

  • frameSize: Represents the width and height of the image at which intrinsics are calculated.

void setCameraIntrinsics(CameraBoardSocket cameraId, std::vector<std::vector<float>> intrinsics, int width, int height)

Set the Camera Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Parameters
  • cameraId: CameraId of the camera for which Camera intrinsics are being loaded

  • intrinsics: 3x3 intrinsics matrix

  • width: Represents the width of the image at which intrinsics are calculated.

  • height: Represents the height of the image at which intrinsics are calculated.

void setCameraIntrinsics(CameraBoardSocket cameraId, std::vector<std::vector<float>> intrinsics, std::tuple<int, int> frameSize)

Set the Camera Intrinsics object

Matrix representation of intrinsic matrix

\[\begin{split} \text{Intrinsic Matrix} = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \end{split}\]
Parameters
  • cameraId: CameraId of the camera for which Camera intrinsics are being loaded

  • intrinsics: 3x3 intrinsics matrix

  • frameSize: Represents the width and height of the image at which intrinsics are calculated.

void setDistortionCoefficients(CameraBoardSocket cameraId, std::vector<float> distortionCoefficients)

Sets the distortion Coefficients obtained from camera calibration

Parameters
  • cameraId: Camera Id of the camera for which distortion coefficients are computed

  • distortionCoefficients: Distortion Coefficients of the respective Camera.

void setFov(CameraBoardSocket cameraId, float hfov)

Set the Fov of the Camera

Parameters
  • cameraId: Camera Id of the camera

  • hfov: Horizontal fov of the camera from Camera Datasheet

void setLensPosition(CameraBoardSocket cameraId, uint8_t lensPosition)

Sets the lens position of the given camera obtained from camera calibration

Parameters
  • cameraId: Camera Id of the camera

  • lensPosition: lens position value of the camera at the time of calibration

void setCameraType(CameraBoardSocket cameraId, CameraModel cameraModel)

Set the Camera Type object

Parameters
  • cameraId: CameraId of the camera for which cameraModel Type is being updated.

  • cameraModel: Type of the model the camera represents

void setCameraExtrinsics(CameraBoardSocket srcCameraId, CameraBoardSocket destCameraId, std::vector<std::vector<float>> rotationMatrix, std::vector<float> translation, std::vector<float> specTranslation = {0, 0, 0})

Set the Camera Extrinsics object

Parameters
  • srcCameraId: Camera Id of the camera which will be considered as relative origin.

  • destCameraId: Camera Id of the camera which will be considered as destination from srcCameraId.

  • rotationMatrix: Rotation between srcCameraId and destCameraId origins.

  • translation: Translation between srcCameraId and destCameraId origins.

  • specTranslation: Translation between srcCameraId and destCameraId origins from the design.

void setImuExtrinsics(CameraBoardSocket destCameraId, std::vector<std::vector<float>> rotationMatrix, std::vector<float> translation, std::vector<float> specTranslation = {0, 0, 0})

Set the Imu to Camera Extrinsics object

Parameters
  • destCameraId: Camera Id of the camera which will be considered as destination from IMU.

  • rotationMatrix: Rotation between srcCameraId and destCameraId origins.

  • translation: Translation between IMU and destCameraId origins.

  • specTranslation: Translation between IMU and destCameraId origins from the design.

void setStereoLeft(CameraBoardSocket cameraId, std::vector<std::vector<float>> rectifiedRotation)

Set the Stereo Left Rectification object

Homography of the Left Rectification = Intrinsics_right * rectifiedRotation * inv(Intrinsics_left)

Parameters
  • cameraId: CameraId of the camera which will be used as left Camera of stereo Setup

  • rectifiedRotation: Rectification rotation of the left camera required for feature matching

void setStereoRight(CameraBoardSocket cameraId, std::vector<std::vector<float>> rectifiedRotation)

Set the Stereo Right Rectification object

Homography of the Right Rectification = Intrinsics_right * rectifiedRotation * inv(Intrinsics_right)

Parameters
  • cameraId: CameraId of the camera which will be used as right Camera of stereo Setup

  • rectifiedRotation: Rectification rotation of the right camera required for feature matching

bool validateCameraArray() const

Using the left camera as the head, this iterates over the camera extrinsics connections to check that all the camera extrinsics are connected and no loop exists.

Return

true on proper connection with no loops.

Public Static Functions

CalibrationHandler fromJson(nlohmann::json eepromDataJson)

Construct a new Calibration Handler object from JSON EepromData.

Parameters
  • eepromDataJson: EepromData in JSON format

class CallbackHandler
class CameraControl : public dai::Buffer
#include <CameraControl.hpp>

CameraControl message. Specifies various camera control commands like:

  • Still capture

  • Auto/manual focus

  • Auto/manual white balance

  • Auto/manual exposure

  • Anti banding

By default the camera enables 3A, with auto-focus in CONTINUOUS_VIDEO mode, auto-white-balance in AUTO mode, and auto-exposure with anti-banding for 50Hz mains frequency.

Public Functions

CameraControl()

Construct CameraControl message.

CameraControl &setCaptureStill(bool capture)

Set a command to capture a still image

CameraControl &setStartStreaming()

Set a command to start streaming

CameraControl &setStopStreaming()

Set a command to stop streaming

CameraControl &setExternalTrigger(int numFramesBurst, int numFramesDiscard)

Set a command to enable external trigger snapshot mode

A rising edge on the sensor FSIN pin will make it capture a sequence of numFramesBurst frames. First numFramesDiscard will be skipped as configured (can be set to 0 as well), as they may have degraded quality

CameraControl &setFrameSyncMode(FrameSyncMode mode)

Set the frame sync mode for continuous streaming operation mode, translating to how the camera pin FSIN/FSYNC is used: input/output/disabled

CameraControl &setStrobeSensor(int activeLevel = 1)

Enable STROBE output on sensor pin, optionally configuring the polarity. Note: for many sensors the polarity is high-active and not configurable

CameraControl &setStrobeExternal(int gpioNumber, int activeLevel = 1)

Enable STROBE output driven by a MyriadX GPIO, optionally configuring the polarity This normally requires a FSIN/FSYNC/trigger input for MyriadX (usually GPIO 41), to generate timings

CameraControl &setStrobeDisable()

Disable STROBE output

CameraControl &setAutoFocusMode(AutoFocusMode mode)

Set a command to specify autofocus mode. Default CONTINUOUS_VIDEO

CameraControl &setAutoFocusTrigger()

Set a command to trigger autofocus

CameraControl &setAutoFocusLensRange(int infinityPosition, int macroPosition)

Set autofocus lens range, infinityPosition < macroPosition, valid values 0..255. May help to improve autofocus in case the lens adjustment is not typical/tuned

CameraControl &setAutoFocusRegion(uint16_t startX, uint16_t startY, uint16_t width, uint16_t height)

Set a command to specify focus region in pixels. Note: the region should be mapped to the configured sensor resolution, before ISP scaling

Parameters
  • startX: X coordinate of top left corner of region

  • startY: Y coordinate of top left corner of region

  • width: Region width

  • height: Region height

CameraControl &setManualFocus(uint8_t lensPosition)

Set a command to specify manual focus position

Parameters
  • lensPosition: specify lens position 0..255

CameraControl &setManualFocusRaw(float lensPositionRaw)

Set a command to specify manual focus position (more precise control).

Return

CameraControl&

Parameters
  • lensPositionRaw: specify lens position 0.0f .. 1.0f

CameraControl &setAutoExposureEnable()

Set a command to enable auto exposure

CameraControl &setAutoExposureLock(bool lock)

Set a command to specify lock auto exposure

Parameters
  • lock: Auto exposure lock mode enabled or disabled

CameraControl &setAutoExposureRegion(uint16_t startX, uint16_t startY, uint16_t width, uint16_t height)

Set a command to specify auto exposure region in pixels. Note: the region should be mapped to the configured sensor resolution, before ISP scaling

Parameters
  • startX: X coordinate of top left corner of region

  • startY: Y coordinate of top left corner of region

  • width: Region width

  • height: Region height

CameraControl &setAutoExposureCompensation(int compensation)

Set a command to specify auto exposure compensation

Parameters
  • compensation: Compensation value between -9..9, default 0

CameraControl &setAutoExposureLimit(uint32_t maxExposureTimeUs)

Set a command to specify the maximum exposure time limit for auto-exposure. By default the AE algorithm prioritizes increasing exposure over ISO, up to around frame-time (subject to further limits imposed by anti-banding)

Parameters
  • maxExposureTimeUs: Maximum exposure time in microseconds

CameraControl &setAutoExposureLimit(std::chrono::microseconds maxExposureTime)

Set a command to specify the maximum exposure time limit for auto-exposure. By default the AE algorithm prioritizes increasing exposure over ISO, up to around frame-time (subject to further limits imposed by anti-banding)

Parameters
  • maxExposureTime: Maximum exposure time

CameraControl &setAntiBandingMode(AntiBandingMode mode)

Set a command to specify anti-banding mode. Anti-banding / anti-flicker works in auto-exposure mode, by controlling the exposure time to be applied in multiples of half the mains period, for example in multiple of 10ms for 50Hz (period 20ms) AC-powered illumination sources.

If the scene would be too bright for the smallest exposure step (10ms in the example, with ISO at a minimum of 100), anti-banding is not effective.

Parameters
  • mode: Anti-banding mode to use. Default: MAINS_50_HZ

CameraControl &setManualExposure(uint32_t exposureTimeUs, uint32_t sensitivityIso)

Set a command to manually specify exposure

Parameters
  • exposureTimeUs: Exposure time in microseconds

  • sensitivityIso: Sensitivity as ISO value, usual range 100..1600

CameraControl &setManualExposure(std::chrono::microseconds exposureTime, uint32_t sensitivityIso)

Set a command to manually specify exposure

Parameters
  • exposureTime: Exposure time

  • sensitivityIso: Sensitivity as ISO value, usual range 100..1600

CameraControl &setAutoWhiteBalanceMode(AutoWhiteBalanceMode mode)

Set a command to specify auto white balance mode

Parameters
  • mode: Auto white balance mode to use. Default AUTO

CameraControl &setAutoWhiteBalanceLock(bool lock)

Set a command to specify auto white balance lock

Parameters
  • lock: Auto white balance lock mode enabled or disabled

CameraControl &setManualWhiteBalance(int colorTemperatureK)

Set a command to manually specify white-balance color correction

Parameters
  • colorTemperatureK: Light source color temperature in kelvins, range 1000..12000

CameraControl &setBrightness(int value)

Set a command to adjust image brightness

Parameters
  • value: Brightness, range -10..10, default 0

CameraControl &setContrast(int value)

Set a command to adjust image contrast

Parameters
  • value: Contrast, range -10..10, default 0

CameraControl &setSaturation(int value)

Set a command to adjust image saturation

Parameters
  • value: Saturation, range -10..10, default 0

CameraControl &setSharpness(int value)

Set a command to adjust image sharpness

Parameters
  • value: Sharpness, range 0..4, default 1

CameraControl &setLumaDenoise(int value)

Set a command to adjust luma denoise amount

Parameters
  • value: Luma denoise amount, range 0..4, default 1

CameraControl &setChromaDenoise(int value)

Set a command to adjust chroma denoise amount

Parameters
  • value: Chroma denoise amount, range 0..4, default 1

CameraControl &setSceneMode(SceneMode mode)

Set a command to specify scene mode

Parameters
  • mode: Scene mode

CameraControl &setEffectMode(EffectMode mode)

Set a command to specify effect mode

Parameters
  • mode: Effect mode

CameraControl &setControlMode(ControlMode mode)

Set a command to specify control mode

Parameters
  • mode: Control mode

CameraControl &setCaptureIntent(CaptureIntent mode)

Set a command to specify capture intent mode

Parameters
  • mode: Capture intent mode

bool getCaptureStill() const

Check whether command to capture a still is set

Return

True if capture still command is set

std::chrono::microseconds getExposureTime() const

Retrieves exposure time

int getSensitivity() const

Retrieves sensitivity, as an ISO value

int getLensPosition() const

Retrieves lens position, range 0..255. Returns -1 if not available

float getLensPositionRaw() const

Retrieves lens position, range 0.0f..1.0f.

CameraControl &set(dai::RawCameraControl config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawCameraControl get() const

Retrieve configuration data for CameraControl.

Return

config for CameraControl
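A hedged sketch of sending runtime controls; it assumes an XLinkIn node linked to the camera's inputControl and exposed as a stream named "control", which is not shown here:

// Assuming device is a running dai::Device with an XLinkIn stream "control"
// linked to the camera node's inputControl.
dai::CameraControl ctrl;
ctrl.setManualExposure(20000, 400);   // 20 ms exposure, ISO 400
ctrl.setManualFocus(130);             // lens position, range 0..255
device.getInputQueue("control")->send(ctrl);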

struct CameraFeatures
#include <CameraFeatures.hpp>

CameraFeatures structure

Characterizes detected cameras on board

Public Members

CameraBoardSocket socket = CameraBoardSocket::AUTO

Board socket where the camera was detected

std::string sensorName

Camera sensor name, e.g: “IMX378”, “OV9282”

std::int32_t width = -1

Maximum sensor resolution

CameraImageOrientation orientation = CameraImageOrientation::AUTO

Default camera orientation, board dependent

std::vector<CameraSensorType> supportedTypes

List of supported types of processing for the given camera.

For some sensors it’s not possible to determine if they are color or mono (e.g. OV9782 and OV9282), so this could return more than one entry

bool hasAutofocusIC = false

Whether an autofocus VCM IC was detected

bool hasAutofocus = false

Whether camera has auto focus capabilities, or is a fixed focus lens

std::string name

Camera name or alias

std::vector<std::string> additionalNames

Additional camera names or aliases

std::vector<CameraSensorConfig> configs

Available sensor configs

tl::optional<CameraSensorConfig> calibrationResolution = tl::nullopt

The resolution which should be used for calibration.

struct CameraInfo
#include <CameraInfo.hpp>

CameraInfo structure.

struct CameraProperties : public dai::PropertiesSerializable<Properties, CameraProperties>
#include <CameraProperties.hpp>

Specify properties for Camera such as camera ID, …

Public Types

enum ColorOrder

For 24 bit color these can be either RGB or BGR

Values:

enumerator BGR
enumerator RGB
enum WarpMeshSource

Warp mesh source

Values:

enumerator AUTO
enumerator NONE
enumerator CALIBRATION
enumerator URI

Public Members

RawCameraControl initialControl

Initial controls applied to ColorCamera node

CameraBoardSocket boardSocket = CameraBoardSocket::AUTO

Which socket will color camera use

std::string cameraName = ""

Which camera name will color camera use

CameraImageOrientation imageOrientation = CameraImageOrientation::AUTO

Camera sensor image orientation / pixel readout

ColorOrder colorOrder = ColorOrder::BGR

For 24 bit color these can be either RGB or BGR

bool interleaved = true

Are colors interleaved (R1G1B1, R2G2B2, …) or planar (R1R2…, G1G2…, B1B2)

bool fp16 = false

Are values FP16 type (0.0 - 255.0)

uint32_t previewHeight = DEFAULT_PREVIEW_HEIGHT

Preview frame output height

uint32_t previewWidth = DEFAULT_PREVIEW_WIDTH

Preview frame output width

int32_t videoWidth = AUTO

Video frame output width

int32_t videoHeight = AUTO

Video frame output height

int32_t stillWidth = AUTO

Still frame output width

int32_t stillHeight = AUTO

Still frame output height

int32_t resolutionWidth = AUTO

Select the camera sensor width

int32_t resolutionHeight = AUTO

Select the camera sensor height

float fps = 30.0

Camera sensor FPS

int isp3aFps = 0

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows down 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps, camera control is sent at every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.

float sensorCropX = AUTO

Initial sensor crop, -1 signifies center crop

bool previewKeepAspectRatio = false

Whether to keep aspect ratio of input (video/preview size) or not

IspScale ispScale

Configure scaling for isp output.

CameraSensorType sensorType = CameraSensorType::AUTO

Type of sensor, specifies what kind of postprocessing is performed.

int numFramesPoolRaw = 3

Pool sizes

tl::optional<float> calibAlpha

Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high-distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases alpha < 0.0 helps remove the invalid areas. See getOptimalNewCameraMatrix from OpenCV for more details.

tl::optional<bool> rawPacked

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.

struct IspScale
struct CameraSensorConfig
#include <CameraFeatures.hpp>

Sensor config

Public Members

std::int32_t width = -1

Width and height in number of output pixels.

Rect fov

Sensor active view area within the full physical sensor area [pixels].

struct CastProperties : public dai::PropertiesSerializable<Properties, CastProperties>
#include <CastProperties.hpp>

Specify properties for Cast

struct ChipTemperature
#include <ChipTemperature.hpp>

Chip temperature information.

Multiple temperature measurement points and their average

Public Members

float css

CPU Subsystem

float mss

Media Subsystem

float upa

Shave Array

float dss

DRAM Subsystem

float average

Average of measurements

struct ColorCameraProperties : public dai::PropertiesSerializable<Properties, ColorCameraProperties>
#include <ColorCameraProperties.hpp>

Specify properties for ColorCamera such as camera ID, …

Public Types

enum SensorResolution

Select the camera sensor resolution

Values:

enumerator THE_1080_P

1920 × 1080

enumerator THE_4_K

3840 × 2160

enumerator THE_12_MP

4056 × 3040

enumerator THE_13_MP

4208 × 3120

enumerator THE_720_P

1280 × 720

enumerator THE_800_P

1280 × 800

enumerator THE_1200_P

1920 × 1200

enumerator THE_5_MP

2592 × 1944

enumerator THE_4000X3000

4000 × 3000

enumerator THE_5312X6000

5312 × 6000

enumerator THE_48_MP

8000 × 6000

enumerator THE_1440X1080

1440 × 1080

enumerator THE_1352X1012

1352 × 1012

enumerator THE_2024X1520

2024 × 1520

enum ColorOrder

For 24 bit color these can be either RGB or BGR

Values:

enumerator BGR
enumerator RGB

Public Members

CameraBoardSocket boardSocket = CameraBoardSocket::AUTO

Which socket will color camera use

std::string cameraName = ""

Which camera name will color camera use

CameraImageOrientation imageOrientation = CameraImageOrientation::AUTO

Camera sensor image orientation / pixel readout

ColorOrder colorOrder = ColorOrder::BGR

For 24 bit color these can be either RGB or BGR

bool interleaved = true

Are colors interleaved (R1G1B1, R2G2B2, …) or planar (R1R2…, G1G2…, B1B2)

bool fp16 = false

Are values FP16 type (0.0 - 255.0)

uint32_t previewHeight = 300

Preview frame output height

uint32_t previewWidth = 300

Preview frame output width

int32_t videoWidth = AUTO

Video frame output width

int32_t videoHeight = AUTO

Video frame output height

int32_t stillWidth = AUTO

Still frame output width

int32_t stillHeight = AUTO

Still frame output height

SensorResolution resolution = SensorResolution::THE_1080_P

Select the camera sensor resolution

float fps = 30.0

Camera sensor FPS

int isp3aFps = 0

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). The default (0) matches the camera FPS, meaning that 3A runs on each frame. Reducing the 3A rate reduces the CPU usage on CSS, but also increases the time it takes 3A to converge. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps and a camera control is sent on every frame while 3A FPS is set to 15, the control messages will be processed only at a 15 fps rate, which will lead to queueing.
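
In practice these properties are set through the ColorCamera node's setters rather than on the properties struct directly. A minimal sketch follows; setIsp3aFps is assumed here to be exposed as a node setter in recent releases.

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);  // planar output
    camRgb->setFps(30);
    camRgb->setIsp3aFps(15);        // run 3A on every other frame (assumed setter)
    return 0;
}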

float sensorCropX = AUTO

Initial sensor crop, -1 signifies center crop

bool previewKeepAspectRatio = true

Whether to keep aspect ratio of input (video size) or not

IspScale ispScale

Configure scaling for isp output.

int numFramesPoolRaw = 3

Pool sizes

std::vector<dai::FrameEvent> eventFilter = {dai::FrameEvent::READOUT_START}

List of events to receive, the rest will be ignored

tl::optional<bool> rawPacked

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.

struct IspScale
template<typename T>
class copyable_unique_ptr : public std::unique_ptr<T>
#include <copyable_unique_ptr.hpp>

A smart pointer with deep copy semantics.

This is similar to std::unique_ptr in that it does not permit shared ownership of the contained object. However, unlike std::unique_ptr, copyable_unique_ptr supports copy and assignment operations, by insisting that the contained object be “copyable”. To be copyable, the class must have either an accessible copy constructor, or it must have an accessible clone method with signature

std::unique_ptr<Foo> Clone() const;
where Foo is the type of the managed object. By “accessible” we mean either that the copy constructor or clone method is public, or friend copyable_unique_ptr<Foo>; appears in Foo’s class declaration.
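
As a minimal sketch, an illustrative Foo with no public copy constructor can still be held by copyable_unique_ptr by exposing a private Clone() through the friend declaration mentioned above (the include follows the note above and its exact path may differ per install):

#include <memory>
#include <copyable_unique_ptr.hpp>  // exact path may vary

class Foo {
   public:
    explicit Foo(int v) : value(v) {}
    Foo(const Foo&) = delete;  // no accessible copy constructor
    int value = 0;

   private:
    friend dai::copyable_unique_ptr<Foo>;  // grants access to the private Clone()
    std::unique_ptr<Foo> Clone() const {
        return std::unique_ptr<Foo>(new Foo(value));
    }
};

int main() {
    dai::copyable_unique_ptr<Foo> p(new Foo(1));
    dai::copyable_unique_ptr<Foo> q = p;  // deep copy performed via Foo::Clone()
    return q.get()->value;
}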

Generally, the API is modeled as closely as possible on the C++ standard std::unique_ptr API and copyable_unique_ptr<T> is interoperable with unique_ptr<T> wherever that makes sense. However, there are some differences:

  1. It always uses a default deleter.

  2. There is no array version.

  3. To allow for future copy-on-write optimizations, there is a distinction between writable and const access: the get() method is modified to return only a const pointer, with get_mutable() added to return a writable pointer. Furthermore, dereferencing (operator*()) a mutable pointer will give a mutable reference (in so far as T is not declared const), and dereferencing a const pointer will give a const reference.

This class is entirely inline and has no computational or space overhead except when copying is required; it contains just a single pointer and does no reference counting.

Usage

In the simplest use case, the instantiation type will match the type of object it references, e.g.:

copyable_unique_ptr<Foo> ptr = make_unique<Foo>(...);
In this case, as long as Foo is deemed compatible, the behavior will be as expected, i.e., when ptr copies, it will contain a reference to a new instance of Foo.

copyable_unique_ptr can also be used with polymorphic classes, where a copyable_unique_ptr instantiated on a base class references an instance of a derived class. When copying the object, we would want the copy to likewise contain an instance of the derived class. For example:

copyable_unique_ptr<Base> cu_ptr = make_unique<Derived>();
copyable_unique_ptr<Base> other_cu_ptr = cu_ptr;           // Triggers a copy.
is_dynamic_castable<Derived>(other_cu_ptr.get());          // Should be true.

This works for well-designed polymorphic classes.

Warning

Ill-formed polymorphic classes can lead to fatal type slicing of the referenced object, such that the new copy contains an instance of Base instead of Derived. Some mistakes that would lead to this degenerate behavior:

  • The Base class’s Clone() implementation does not invoke the Derived class’s implementation of a suitable virtual method.

Template Parameters
  • T: The type of the contained object, which must be copyable as defined above. May be an abstract or concrete type.

Constructors

copyable_unique_ptr() noexcept

Default constructor stores a nullptr. No heap allocation is performed. The empty() method will return true when called on a default-constructed copyable_unique_ptr.

copyable_unique_ptr(T *raw) noexcept

Given a raw pointer to a writable heap-allocated object, take over ownership of that object. No copying occurs.

copyable_unique_ptr(const T &value)

Constructs a unique instance of T as a copy of the provided model value.

copyable_unique_ptr(const copyable_unique_ptr &cu_ptr)

Copy constructor is deep; the new copyable_unique_ptr object contains a new copy of the object in the source, created via the source object’s copy constructor or Clone() method. If the source container is empty this one will be empty also.

template<typename U>
copyable_unique_ptr(const std::unique_ptr<U> &u_ptr)

Copy constructor from a standard unique_ptr of compatible type. The copy is deep; the new copyable_unique_ptr object contains a new copy of the object in the source, created via the source object’s copy constructor or Clone() method. If the source container is empty this one will be empty also.

copyable_unique_ptr(copyable_unique_ptr &&cu_ptr) noexcept

Move constructor is very fast and leaves the source empty. Ownership is transferred from the source to the new copyable_unique_ptr. If the source was empty this one will be empty also. No heap activity occurs.

copyable_unique_ptr(std::unique_ptr<T> &&u_ptr) noexcept

Move constructor from a standard unique_ptr. The move is very fast and leaves the source empty. Ownership is transferred from the source to the new copyable_unique_ptr. If the source was empty this one will be empty also. No heap activity occurs.

template<typename U>
copyable_unique_ptr(std::unique_ptr<U> &&u_ptr) noexcept

Move construction from a compatible standard unique_ptr. Type U* must be implicitly convertible to type T*. Ownership is transferred from the source to the new copyable_unique_ptr. If the source was empty this one will be empty also. No heap activity occurs.

Assignment

copyable_unique_ptr &operator=(T *raw) noexcept

This form of assignment replaces the currently-held object by the given source object and takes over ownership of the source object. The currently-held object (if any) is deleted.

copyable_unique_ptr &operator=(const T &ref)

This form of assignment replaces the currently-held object by a heap-allocated copy of the source object, created using its copy constructor or Clone() method. The currently-held object (if any) is deleted.

copyable_unique_ptr &operator=(const copyable_unique_ptr &cu_ptr)

Copy assignment from copyable_unique_ptr replaces the currently-held object by a copy of the object held in the source container, created using the source object’s copy constructor or Clone() method. The currently-held object (if any) is deleted. If the source container is empty this one will be empty also after the assignment. Nothing happens if the source and destination are the same container.

template<typename U>
copyable_unique_ptr &operator=(const copyable_unique_ptr<U> &cu_ptr)

Copy assignment from a compatible copyable_unique_ptr replaces the currently-held object by a copy of the object held in the source container, created using the source object’s copy constructor or Clone() method. The currently-held object (if any) is deleted. If the source container is empty this one will be empty also after the assignment. Nothing happens if the source and destination are the same container.

copyable_unique_ptr &operator=(const std::unique_ptr<T> &src)

Copy assignment from a standard unique_ptr replaces the currently-held object by a copy of the object held in the source container, created using the source object’s copy constructor or Clone() method. The currently-held object (if any) is deleted. If the source container is empty this one will be empty also after the assignment. Nothing happens if the source and destination are the same container.

template<typename U>
copyable_unique_ptr &operator=(const std::unique_ptr<U> &u_ptr)

Copy assignment from a compatible standard unique_ptr replaces the currently-held object by a copy of the object held in the source container, created using the source object’s copy constructor or Clone() method. The currently-held object (if any) is deleted. If the source container is empty this one will be empty also after the assignment. Nothing happens if the source and destination are the same container.

copyable_unique_ptr &operator=(copyable_unique_ptr &&cu_ptr) noexcept

Move assignment replaces the currently-held object by the source object, leaving the source empty. The currently-held object (if any) is deleted. The instance is not copied. Nothing happens if the source and destination are the same containers.

template<typename U>
copyable_unique_ptr &operator=(copyable_unique_ptr<U> &&cu_ptr) noexcept

Move assignment replaces the currently-held object by the compatible source object, leaving the source empty. The currently-held object (if any) is deleted. The instance is not copied. Nothing happens if the source and destination are the same containers.

copyable_unique_ptr &operator=(std::unique_ptr<T> &&u_ptr) noexcept

Move assignment replaces the currently-held object by the source object, leaving the source empty. The currently-held object (if any) is deleted. The instance is not copied. Nothing happens if the source and destination are the same containers.

template<typename U>
copyable_unique_ptr &operator=(std::unique_ptr<U> &&u_ptr) noexcept

Move assignment replaces the currently-held object by the compatible source object, leaving the source empty. The currently-held object (if any) is deleted. The instance is not copied. Nothing happens if the source and destination are the same containers.

Observers

bool empty() const noexcept

Return true if this container is empty, which is the state the container is in immediately after default construction and various other operations.

const T *get() const noexcept

Return a const pointer to the contained object if any, or nullptr. Note that this is different than get() for the standard smart pointers like std::unique_ptr which return a writable pointer. Use get_mutable() here for that purpose.

T *get_mutable() noexcept

Return a writable pointer to the contained object if any, or nullptr. Note that you need write access to this container in order to get write access to the object it contains.

Warning

If copyable_unique_ptr is instantiated on a const template parameter (e.g., copyable_unique_ptr<const Foo>), then get_mutable() returns a const pointer.
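
A short sketch of the const/mutable access split described above, with an illustrative Foo (include path as noted above):

#include <cassert>
#include <copyable_unique_ptr.hpp>  // exact path may vary

struct Foo {
    int value;
};

int main() {
    dai::copyable_unique_ptr<Foo> a(new Foo{42});
    dai::copyable_unique_ptr<Foo> b = a;  // deep copy via Foo's copy constructor

    int read = a.get()->value;   // get() hands out a const pointer
    b.get_mutable()->value = 7;  // mutation goes through get_mutable()

    assert(read == 42 && a.get()->value == 42 && b.get()->value == 7);
    return 0;
}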

const T &operator*() const

Return a const reference to the contained object. Note that this is different from std::unique_ptr::operator*(), which would return a non-const reference (if T is non-const) even if the container itself is const. A const copyable_unique_ptr will always return a const reference to its contained value.

Warning

Currently copyable_unique_ptr derives from std::unique_ptr. As such, a const copyable_unique_ptr<Foo> can be upcast to a const unique_ptr<Foo>, and the parent’s behavior will provide a mutable reference. This is strongly discouraged and will break as the implementation of this class changes to shore up this gap in the const correctness protection.

Pre

this != nullptr reports true.

T &operator*()

Return a writable reference to the contained object (if T is itself not const). Note that you need write access to this container in order to get write access to the object it contains.

We strongly recommend that, if dereferencing a copyable_unique_ptr without the intention of mutating the underlying value, you prefer to dereference a const copyable_unique_ptr (or use *my_ptr.get()) rather than a mutable copyable_unique_ptr. As “copy-on-write” behavior is introduced in the future, this recommended practice will prevent unwanted copies of the underlying value.

If copyable_unique_ptr is instantiated on a const template parameter (e.g., copyable_unique_ptr<const Foo>), then operator*() must return a const reference.

Pre

this != nullptr reports true.

Related

template<class charT, class traits, class T>
std::basic_ostream<charT, traits> &operator<<(std::basic_ostream<charT, traits> &os, const copyable_unique_ptr<T> &cu_ptr)

Output the system-dependent representation of the pointer contained in a copyable_unique_ptr object. This is equivalent to os << p.get();.

struct CpuUsage
#include <CpuUsage.hpp>

CpuUsage structure

Average usage in percent and time span of the average (since last query)

Public Members

float average

Average CPU usage, expressed with a normalized value (0-1)

int32_t msTime

Time span in which the average was calculated in milliseconds

struct CrashDump
struct CrashReport
struct ErrorSourceInfo
struct AssertContext
struct TrapContext
struct ThreadCallstack
struct CallstackContext
class DataInputQueue
#include <DataQueue.hpp>

Access to send messages through XLink stream

Public Functions

bool isClosed() const

Check whether queue is closed

Warning

This function is thread-unsafe and may return outdated or incorrect values. It is only meant for use in simple, single-threaded code. Well-written code should handle exceptions when calling any DepthAI API, in order to handle hardware events and multithreaded use.

void close()

Closes the queue and the underlying thread

void setMaxDataSize(std::size_t maxSize)

Sets maximum message size. If a message is larger than specified, an exception is thrown.

Parameters
  • maxSize: Maximum message size to add to queue

std::size_t getMaxDataSize()

Gets maximum message size.

Return

Maximum message size

void setBlocking(bool blocking)

Sets queue behavior when full (maxSize)

Parameters
  • blocking: Specifies if block or overwrite the oldest message in the queue

bool getBlocking() const

Gets current queue behavior when full (maxSize)

Return

True if blocking, false otherwise

void setMaxSize(unsigned int maxSize)

Sets queue maximum size

Parameters
  • maxSize: Specifies maximum number of messages in the queue

unsigned int getMaxSize() const

Gets queue maximum size

Return

Maximum queue size

std::string getName() const

Gets the queue’s name

Return

Queue name

void send(const std::shared_ptr<RawBuffer> &rawMsg)

Adds a raw message to the queue, which will be picked up and sent to the device. Can either block if ‘blocking’ behavior is true or overwrite oldest

Parameters
  • rawMsg: Message to add to the queue

void send(const std::shared_ptr<ADatatype> &msg)

Adds a message to the queue, which will be picked up and sent to the device. Can either block if ‘blocking’ behavior is true or overwrite oldest

Parameters
  • msg: Message to add to the queue

void send(const ADatatype &msg)

Adds a message to the queue, which will be picked up and sent to the device. Can either block if ‘blocking’ behavior is true or overwrite oldest

Parameters
  • msg: Message to add to the queue

bool send(const std::shared_ptr<RawBuffer> &rawMsg, std::chrono::milliseconds timeout)

Adds message to the queue, which will be picked up and sent to the device. Can either block until timeout if ‘blocking’ behavior is true or overwrite oldest

Parameters
  • rawMsg: Message to add to the queue

  • timeout: Maximum duration to block in milliseconds

bool send(const std::shared_ptr<ADatatype> &msg, std::chrono::milliseconds timeout)

Adds message to the queue, which will be picked up and sent to the device. Can either block until timeout if ‘blocking’ behavior is true or overwrite oldest

Parameters
  • msg: Message to add to the queue

  • timeout: Maximum duration to block in milliseconds

bool send(const ADatatype &msg, std::chrono::milliseconds timeout)

Adds message to the queue, which will be picked up and sent to the device. Can either block until timeout if ‘blocking’ behavior is true or overwrite oldest

Parameters
  • msg: Message to add to the queue

  • timeout: Maximum duration to block in milliseconds
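
A minimal sketch of sending messages through a DataInputQueue; the "control" stream name and the CameraControl content are illustrative.

#include <chrono>
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto xin = pipeline.create<dai::node::XLinkIn>();
    xin->setStreamName("control");
    xin->out.link(camRgb->inputControl);

    dai::Device device(pipeline);
    auto controlQueue = device.getInputQueue("control");

    dai::CameraControl ctrl;
    ctrl.setAutoExposureEnable();
    controlQueue->send(ctrl);  // blocks or overwrites, depending on queue settings
    bool sent = controlQueue->send(ctrl, std::chrono::milliseconds(100));  // bounded wait
    (void)sent;
    return 0;
}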

class DataOutputQueue
#include <DataQueue.hpp>

Access to receive messages coming from XLink stream

Public Types

using CallbackId = int

Alias for callback id.

Public Functions

bool isClosed() const

Check whether queue is closed

Warning

This function is thread-unsafe and may return outdated or incorrect values. It is only meant for use in simple, single-threaded code. Well-written code should handle exceptions when calling any DepthAI API, in order to handle hardware events and multithreaded use.

void close()

Closes the queue and the underlying thread

void setBlocking(bool blocking)

Sets queue behavior when full (maxSize)

Parameters
  • blocking: Specifies if block or overwrite the oldest message in the queue

bool getBlocking() const

Gets current queue behavior when full (maxSize)

Return

True if blocking, false otherwise

void setMaxSize(unsigned int maxSize)

Sets queue maximum size

Parameters
  • maxSize: Specifies maximum number of messages in the queue

unsigned int getMaxSize() const

Gets queue maximum size

Return

Maximum queue size

std::string getName() const

Gets the queue’s name

Return

Queue name

CallbackId addCallback(std::function<void(std::string, std::shared_ptr<ADatatype>)>)

Adds a callback on message received

Return

Callback id

Parameters
  • callback: Callback function with queue name and message pointer

CallbackId addCallback(std::function<void(std::shared_ptr<ADatatype>)>)

Adds a callback on message received

Return

Callback id

Parameters
  • callback: Callback function with message pointer

CallbackId addCallback(std::function<void()> callback)

Adds a callback on message received

Return

Callback id

Parameters
  • callback: Callback function without any parameters
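
A sketch of callback-based consumption; the "video" stream name is an assumption carried over from a pipeline with an XLinkOut node, as in the Device examples below.

#include <iostream>
#include <memory>
#include "depthai/depthai.hpp"

void attachCallback(dai::Device& device) {
    auto q = device.getOutputQueue("video", 4, false);

    // Invoked from the queue's reader thread for every received message
    auto id = q->addCallback([](std::string name, std::shared_ptr<dai::ADatatype> msg) {
        if(auto frame = std::dynamic_pointer_cast<dai::ImgFrame>(msg)) {
            std::cout << name << ": frame " << frame->getSequenceNum() << std::endl;
        }
    });

    // ... later, stop receiving notifications
    q->removeCallback(id);
}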

bool removeCallback(CallbackId callbackId)

Removes a callback

Return

True if callback was removed, false otherwise

Parameters
  • callbackId: Id of callback to be removed

template<class T>
bool has()

Check whether front of the queue has message of type T

Return

True if queue isn’t empty and the first element is of type T, false otherwise

bool has()

Check whether front of the queue has a message (isn’t empty)

Return

True if queue isn’t empty, false otherwise

template<class T>
std::shared_ptr<T> tryGet()

Try to retrieve message T from queue. If message isn’t of type T it returns nullptr

Return

Message of type T or nullptr if no message available

std::shared_ptr<ADatatype> tryGet()

Try to retrieve message from queue. If no message available, return immediately with nullptr

Return

Message or nullptr if no message available

template<class T>
std::shared_ptr<T> get()

Block until a message is available.

Return

Message of type T or nullptr if no message available

std::shared_ptr<ADatatype> get()

Block until a message is available.

Return

Message or nullptr if no message available

template<class T>
std::shared_ptr<T> front()

Gets first message in the queue.

Return

Message of type T or nullptr if no message available

std::shared_ptr<ADatatype> front()

Gets first message in the queue.

Return

Message or nullptr if no message available

template<class T, typename Rep, typename Period>
std::shared_ptr<T> get(std::chrono::duration<Rep, Period> timeout, bool &hasTimedout)

Block until a message is available with a timeout.

Return

Message of type T, or nullptr if the message isn’t of type T or a timeout occurred

Parameters
  • timeout: Duration for which the function should block

  • [out] hasTimedout: Outputs true if timeout occurred, false otherwise

template<typename Rep, typename Period>
std::shared_ptr<ADatatype> get(std::chrono::duration<Rep, Period> timeout, bool &hasTimedout)

Block until a message is available with a timeout.

Return

Message, or nullptr if a timeout occurred

Parameters
  • timeout: Duration for which the function should block

  • [out] hasTimedout: Outputs true if timeout occurred, false otherwise
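
A sketch of the timeout variants; the "video" stream name is again an assumption.

#include <chrono>
#include <iostream>
#include "depthai/depthai.hpp"

void poll(dai::Device& device) {
    auto q = device.getOutputQueue("video", 4, false);

    bool timedOut = false;
    auto frame = q->get<dai::ImgFrame>(std::chrono::milliseconds(100), timedOut);
    if(timedOut) {
        std::cout << "no frame within 100 ms" << std::endl;
    } else if(frame) {
        std::cout << "got " << frame->getWidth() << "x" << frame->getHeight() << std::endl;
    }
}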

template<class T>
std::vector<std::shared_ptr<T>> tryGetAll()

Try to retrieve all messages in the queue.

Return

Vector of messages which can either be of type T or nullptr

std::vector<std::shared_ptr<ADatatype>> tryGetAll()

Try to retrieve all messages in the queue.

Return

Vector of messages

template<class T>
std::vector<std::shared_ptr<T>> getAll()

Block until at least one message in the queue. Then return all messages from the queue.

Return

Vector of messages which can either be of type T or nullptr

std::vector<std::shared_ptr<ADatatype>> getAll()

Block until at least one message in the queue. Then return all messages from the queue.

Return

Vector of messages

template<class T, typename Rep, typename Period>
std::vector<std::shared_ptr<T>> getAll(std::chrono::duration<Rep, Period> timeout, bool &hasTimedout)

Block for maximum timeout duration. Then return all messages from the queue.

Return

Vector of messages which can either be of type T or nullptr

Parameters
  • timeout: Maximum duration to block

  • [out] hasTimedout: Outputs true if timeout occurred, false otherwise

template<typename Rep, typename Period>
std::vector<std::shared_ptr<ADatatype>> getAll(std::chrono::duration<Rep, Period> timeout, bool &hasTimedout)

Block for maximum timeout duration. Then return all messages from the queue.

Return

Vector of messages

Parameters
  • timeout: Maximum duration to block

  • [out] hasTimedout: Outputs true if timeout occurred, false otherwise

struct DetectionNetworkProperties : public dai::PropertiesSerializable<NeuralNetworkProperties, DetectionNetworkProperties>
#include <DetectionNetworkProperties.hpp>

Specify properties for DetectionNetwork

Subclassed by dai::PropertiesSerializable< DetectionNetworkProperties, SpatialDetectionNetworkProperties >

struct DetectionParserOptions
#include <DetectionParserOptions.hpp>

DetectionParserOptions

Specifies how to parse output of detection networks

Public Members

DetectionNetworkType nnFamily

Generic Neural Network properties.

int classes

YOLO specific network properties.

struct DetectionParserProperties : public dai::PropertiesSerializable<Properties, DetectionParserProperties>
#include <DetectionParserProperties.hpp>

Specify properties for DetectionParser

Public Members

int numFramesPool = 8

Num frames in output pool.

std::unordered_map<std::string, TensorInfo> networkInputs

Network inputs.

DetectionParserOptions parser

Options for parser.

class Device : public dai::DeviceBase
#include <Device.hpp>

Represents the DepthAI device with the methods to interact with it. Implements the host-side queues to connect with XLinkIn and XLinkOut nodes

Public Functions

Device(const Pipeline &pipeline)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
Device(const Pipeline &pipeline, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • usb2Mode: (bool) Boot device using USB2 mode firmware

Device(const Pipeline &pipeline, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • maxUsbSpeed: Maximum allowed USB speed

Device(const Pipeline &pipeline, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • pathToCmd: Path to custom device firmware

Device(const Pipeline &pipeline, const DeviceInfo &devInfo)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
Device(const Pipeline &pipeline, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: (bool) Boot device using USB2 mode firmware

Device(const Pipeline &pipeline, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

Device(const Pipeline &pipeline, const DeviceInfo &devInfo, const dai::Path &pathToCmd)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

Device()

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL
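
A minimal construction sketch covering the common constructor variants; the MxID string is hypothetical and the DeviceInfo string constructor is assumed from common usage.

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;
    // ... populate the pipeline with nodes ...

    // Connect to any available device, limited to USB2 speeds
    dai::Device device(pipeline, dai::UsbSpeed::HIGH);

    // Alternatively, target a specific device:
    // dai::DeviceInfo info("14442C10D13EABCE00");  // hypothetical MxID
    // dai::Device device(pipeline, info, dai::UsbSpeed::SUPER);
    return 0;
}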

~Device() override

dtor to close the device

std::shared_ptr<DataOutputQueue> getOutputQueue(const std::string &name)

Gets an output queue corresponding to stream name. If it doesn’t exist it throws

Return

Smart pointer to DataOutputQueue

Parameters
  • name: Queue/stream name, created by XLinkOut node

std::shared_ptr<DataOutputQueue> getOutputQueue(const std::string &name, unsigned int maxSize, bool blocking = true)

Gets a queue corresponding to stream name, if it exists, otherwise it throws. Also sets queue options

Return

Smart pointer to DataOutputQueue

Parameters
  • name: Queue/stream name, set in XLinkOut node

  • maxSize: Maximum number of messages in queue

  • blocking: Queue behavior once full. True specifies blocking and false overwriting of oldest messages. Default: true
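
A minimal end-to-end sketch: an XLinkOut stream named "video" feeds a non-blocking host queue of up to 4 frames.

#include <iostream>
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto xout = pipeline.create<dai::node::XLinkOut>();
    xout->setStreamName("video");
    camRgb->video.link(xout->input);

    dai::Device device(pipeline);
    auto videoQueue = device.getOutputQueue("video", 4, false);  // non-blocking, overwrite oldest

    for(int i = 0; i < 100; i++) {
        auto frame = videoQueue->get<dai::ImgFrame>();  // blocks until a frame arrives
        std::cout << "frame " << frame->getSequenceNum() << std::endl;
    }
    return 0;
}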

std::vector<std::string> getOutputQueueNames() const

Get all available output queue names

Return

Vector of output queue names

std::shared_ptr<DataInputQueue> getInputQueue(const std::string &name)

Gets an input queue corresponding to stream name. If it doesn’t exist it throws

Return

Smart pointer to DataInputQueue

Parameters
  • name: Queue/stream name, set in XLinkIn node

std::shared_ptr<DataInputQueue> getInputQueue(const std::string &name, unsigned int maxSize, bool blocking = true)

Gets an input queue corresponding to stream name. If it doesn’t exist it throws. Also sets queue options

Return

Smart pointer to DataInputQueue

Parameters
  • name: Queue/stream name, set in XLinkIn node

  • maxSize: Maximum number of messages in queue

  • blocking: Queue behavior once full. True: blocking, false: overwriting of oldest messages. Default: true

std::vector<std::string> getInputQueueNames() const

Get all available input queue names

Return

Vector of input queue names

std::vector<std::string> getQueueEvents(const std::vector<std::string> &queueNames, std::size_t maxNumEvents = std::numeric_limits<std::size_t>::max(), std::chrono::microseconds timeout = std::chrono::microseconds(-1))

Gets or waits until any of the specified queues has received a message

Return

Names of queues which received messages first

Parameters
  • queueNames: Names of queues for which to block

  • maxNumEvents: Maximum number of events to remove from queue - Default is unlimited

  • timeout: Timeout after which return regardless. If negative then wait is indefinite - Default is -1

std::vector<std::string> getQueueEvents(std::string queueName, std::size_t maxNumEvents = std::numeric_limits<std::size_t>::max(), std::chrono::microseconds timeout = std::chrono::microseconds(-1))

Gets or waits until specified queue has received a message

Return

Names of queues which received messages first

Parameters
  • queueName: Name of the queue to wait for

  • maxNumEvents: Maximum number of events to remove from queue. Default is unlimited

  • timeout: Timeout after which return regardless. If negative then wait is indefinite. Default is -1

std::vector<std::string> getQueueEvents(std::size_t maxNumEvents = std::numeric_limits<std::size_t>::max(), std::chrono::microseconds timeout = std::chrono::microseconds(-1))

Gets or waits until any queue has received a message

Return

Names of queues which received messages first

Parameters
  • maxNumEvents: Maximum number of events to remove from queue. Default is unlimited

  • timeout: Timeout after which return regardless. If negative then wait is indefinite. Default is -1

std::string getQueueEvent(const std::vector<std::string> &queueNames, std::chrono::microseconds timeout = std::chrono::microseconds(-1))

Gets or waits until any of the specified queues has received a message

Return

Queue name which received a message first

Parameters
  • queueNames: Names of queues to wait for

  • timeout: Timeout after which return regardless. If negative then wait is indefinite. Default is -1

std::string getQueueEvent(std::string queueName, std::chrono::microseconds timeout = std::chrono::microseconds(-1))

Gets or waits until specified queue has received a message

Return

Queue name which received a message

Parameters
  • queueName: Name of the queue to wait for

  • timeout: Timeout after which return regardless. If negative then wait is indefinite. Default is -1

std::string getQueueEvent(std::chrono::microseconds timeout = std::chrono::microseconds(-1))

Gets or waits until any queue has received a message

Return

Queue name which received a message

Parameters
  • timeout: Timeout after which return regardless. If negative then wait is indefinite. Default is -1
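
A sketch of event-driven consumption: wait for whichever stream produces data first, then drain only that queue. The "video" and "nn" stream names are assumptions.

#include <iostream>
#include "depthai/depthai.hpp"

void eventLoop(dai::Device& device) {
    while(true) {
        std::string name = device.getQueueEvent({"video", "nn"});
        auto messages = device.getOutputQueue(name)->tryGetAll();
        std::cout << name << " produced " << messages.size() << " message(s)" << std::endl;
    }
}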

DeviceBase(const Pipeline &pipeline)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(const Pipeline &pipeline, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(const Pipeline &pipeline, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(const Pipeline &pipeline, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • pathToCmd: Path to custom device firmware

DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo, const dai::Path &pathToCmd)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

DeviceBase()

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

DeviceBase(OpenVINO::Version version)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with.

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(OpenVINO::Version version, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(OpenVINO::Version version, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(OpenVINO::Version version, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with

  • pathToCmd: Path to custom device firmware

DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo, const dai::Path &pathToCmd)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

DeviceBase(Config config)

Connects to any available device with custom config.

Parameters
  • config: Device custom configuration to boot with

DeviceBase(Config config, const DeviceInfo &devInfo)

Connects to device ‘devInfo’ with custom config.

Parameters
  • config: Device custom configuration to boot with

  • devInfo: DeviceInfo which specifies which device to connect to

DeviceBase(const DeviceInfo &devInfo)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • devInfo: DeviceInfo which specifies which device to connect to

DeviceBase(const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(std::string nameOrDeviceId)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • nameOrDeviceId: Creates DeviceInfo with nameOrDeviceId to connect to

DeviceBase(std::string nameOrDeviceId, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • nameOrDeviceId: Creates DeviceInfo with nameOrDeviceId to connect to

  • maxUsbSpeed: Maximum allowed USB speed

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(Config config, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • config: Config with which the device will be booted

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(Config config, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • config: Config with which the device will be booted

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(Config config, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • config: Config with which the device will be booted

  • pathToCmd: Path to custom device firmware

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(Config config, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • config: Config with which the device will be booted

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(Config config, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • config: Config with which the device will be booted

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(Config config, const DeviceInfo &devInfo, const dai::Path &pathToCmd, bool dumpOnly = false)

Connects to device specified by devInfo.

Parameters
  • config: Config with which the device will be booted

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

  • dumpOnly: If true only the minimal connection is established to retrieve the crash dump

Public Static Attributes

constexpr std::size_t EVENT_QUEUE_MAXIMUM_SIZE = {2048}

Maximum number of elements in event queue.

class DeviceBase
#include <DeviceBase.hpp>

The core of depthai device for RAII, connects to device and maintains watchdog, timesync, …

Subclassed by dai::Device

Public Functions

DeviceBase(const Pipeline &pipeline)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(const Pipeline &pipeline, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(const Pipeline &pipeline, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(const Pipeline &pipeline, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • pipeline: Pipeline to be executed on the device

  • pathToCmd: Path to custom device firmware

DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(const Pipeline &pipeline, const DeviceInfo &devInfo, const dai::Path &pathToCmd)

Connects to device specified by devInfo.

Parameters
  • pipeline: Pipeline to be executed on the device

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

DeviceBase()

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

DeviceBase(OpenVINO::Version version)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with.

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(OpenVINO::Version version, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(OpenVINO::Version version, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(OpenVINO::Version version, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • version: OpenVINO version which the device will be booted with

  • pathToCmd: Path to custom device firmware

DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(OpenVINO::Version version, const DeviceInfo &devInfo, const dai::Path &pathToCmd)

Connects to device specified by devInfo.

Parameters
  • version: OpenVINO version which the device will be booted with

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

DeviceBase(Config config)

Connects to any available device with custom config.

Parameters
  • config: Device custom configuration to boot with

DeviceBase(Config config, const DeviceInfo &devInfo)

Connects to device ‘devInfo’ with custom config.

Parameters
  • config: Device custom configuration to boot with

  • devInfo: DeviceInfo which specifies which device to connect to

DeviceBase(const DeviceInfo &devInfo)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • devInfo: DeviceInfo which specifies which device to connect to

DeviceBase(const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(std::string nameOrDeviceId)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • nameOrDeviceId: Creates DeviceInfo with nameOrDeviceId to connect to

DeviceBase(std::string nameOrDeviceId, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout. Uses OpenVINO version OpenVINO::VERSION_UNIVERSAL

Parameters
  • nameOrDeviceId: Creates DeviceInfo with nameOrDeviceId to connect to

  • maxUsbSpeed: Maximum allowed USB speed

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(Config config, T usb2Mode)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • config: Config with which the device will be booted

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(Config config, UsbSpeed maxUsbSpeed)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • config: Config with which the device will be booted

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(Config config, const dai::Path &pathToCmd)

Connects to any available device with a DEFAULT_SEARCH_TIME timeout.

Parameters
  • config: Config with which the device will be booted

  • pathToCmd: Path to custom device firmware

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBase(Config config, const DeviceInfo &devInfo, T usb2Mode)

Connects to device specified by devInfo.

Parameters
  • config: Config with which the device will be booted

  • devInfo: DeviceInfo which specifies which device to connect to

  • usb2Mode: Boot device using USB2 mode firmware

DeviceBase(Config config, const DeviceInfo &devInfo, UsbSpeed maxUsbSpeed)

Connects to device specified by devInfo.

Parameters
  • config: Config with which the device will be booted

  • devInfo: DeviceInfo which specifies which device to connect to

  • maxUsbSpeed: Maximum allowed USB speed

DeviceBase(Config config, const DeviceInfo &devInfo, const dai::Path &pathToCmd, bool dumpOnly = false)

Connects to device specified by devInfo.

Parameters
  • config: Config with which the device will be booted

  • devInfo: DeviceInfo which specifies which device to connect to

  • pathToCmd: Path to custom device firmware

  • dumpOnly: If true only the minimal connection is established to retrieve the crash dump

~DeviceBase()

Device destructor

Note

In the destructor of the derived class, remember to call close()

tl::optional<Version> getBootloaderVersion()

Gets Bootloader version if it was booted through Bootloader

Return

DeviceBootloader::Version if booted through Bootloader or none otherwise

bool isPipelineRunning()

Checks if the device’s pipeline is already running

Return

True if running, false otherwise

bool startPipeline()

Starts the execution of the device’s pipeline

Return

True if pipeline started, false otherwise

bool startPipeline(const Pipeline &pipeline)

Starts the execution of a given pipeline

Return

True if pipeline started, false otherwise

Parameters
  • pipeline: OpenVINO version of the pipeline must match the one which the device was booted with.

void setLogLevel(LogLevel level)

Sets the device’s logging severity level. This level affects which logs are transferred from device to host.

Parameters
  • level: Logging severity

LogLevel getLogLevel()

Gets current logging severity level of the device.

Return

Logging severity level

void setXLinkChunkSize(int sizeBytes)

Sets the chunk size for splitting device-sent XLink packets. A larger value could increase performance, and 0 disables chunking. A negative value is ignored. Device defaults are configured per protocol, currently 64*1024 for both USB and Ethernet.

Parameters
  • sizeBytes: XLink chunk size in bytes

int getXLinkChunkSize()

Gets current XLink chunk size.

Return

XLink chunk size in bytes

void setXLinkRateLimit(int maxRateBytesPerSecond, int burstSize = 0, int waitUs = 0)

Sets the maximum transmission rate for the XLink connection on device side, using a simple token bucket algorithm. Useful for bandwidth throttling

Parameters
  • maxRateBytesPerSecond: Rate limit in Bytes/second

  • burstSize: Size in Bytes for how much to attempt to send once, 0 = auto

  • waitUs: Time in microseconds to wait for replenishing tokens, 0 = auto
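
A short sketch combining the XLink tuning calls above, e.g. on a congested Ethernet link; the values are illustrative.

#include "depthai/depthai.hpp"

void tuneLink(dai::Device& device) {
    device.setXLinkChunkSize(0);                 // disable chunking
    device.setXLinkRateLimit(50 * 1024 * 1024);  // cap at ~50 MiB/s, auto burst size and wait
    int chunkSize = device.getXLinkChunkSize();  // read back the current setting
    (void)chunkSize;
}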

DeviceInfo getDeviceInfo() const

Get the DeviceInfo object of the device which is currently running

Return

DeviceInfo of the current device in execution

std::string getDeviceName()

Get device name if available

Return

device name or empty string if not available

std::string getProductName()

Get product name if available

Return

product name or empty string if not available

std::string getMxId()

Get MxId of device

Return

MxId of connected device

void setLogOutputLevel(LogLevel level)

Sets the logging level which decides which device log messages get printed to standard output. Messages below the level set with setLogLevel are not transferred from the device and therefore will not be printed

Parameters
  • level: Standard output printing severity

LogLevel getLogOutputLevel()

Gets logging level which decides printing level to standard output.

Return

Standard output printing severity

bool setIrLaserDotProjectorBrightness(float mA, int mask = -1)

Sets the brightness of the IR Laser Dot Projector. Limits: up to 765mA at 30% duty cycle, up to 1200mA at 6% duty cycle. The duty cycle is controlled by left camera STROBE, aligned to start of exposure. The emitter is turned off by default

Return

True on success, false if not found or other failure

Parameters
  • mA: Current in mA that will determine brightness, 0 or negative to turn off

  • mask: Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV

bool setIrFloodLightBrightness(float mA, int mask = -1)

Sets the brightness of the IR Flood Light. Limits: up to 1500mA at 30% duty cycle. The duty cycle is controlled by the left camera STROBE, aligned to start of exposure. If the dot projector is also enabled, its lower duty cycle limits take precedence. The emitter is turned off by default

Return

True on success, false if not found or other failure

Parameters
  • mA: Current in mA that will determine brightness, 0 or negative to turn off

  • mask: Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV

bool setIrLaserDotProjectorIntensity(float intensity, int mask = -1)

Sets the intensity of the IR Laser Dot Projector. Limits: up to 765mA at 30% frame time duty cycle when exposure time is longer than 30% frame time. Otherwise, duty cycle is 100% of exposure time, with current increased up to max 1200mA to make up for shorter duty cycle. The duty cycle is controlled by left camera STROBE, aligned to start of exposure. The emitter is turned off by default

Return

True on success, false if not found or other failure

Parameters
  • intensity: Intensity on range 0 to 1, that will determine brightness. 0 or negative to turn off

  • mask: Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV

bool setIrFloodLightIntensity(float intensity, int mask = -1)

Sets the intensity of the IR Flood Light. Limits: Intensity is directly normalized to 0 - 1500mA current. The duty cycle is 30% when exposure time is longer than 30% frame time. Otherwise, duty cycle is 100% of exposure time. The duty cycle is controlled by the left camera STROBE, aligned to start of exposure. The emitter is turned off by default

Return

True on success, false if not found or other failure

Parameters
  • intensity: Intensity on range 0 to 1, that will determine brightness, 0 or negative to turn off

  • mask: Optional mask to modify only Left (0x1) or Right (0x2) sides on OAK-D-Pro-W-DEV
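
A sketch combining the IR-related calls above on a device with IR emitters (e.g. OAK-D Pro); the intensity values are illustrative.

#include <iostream>
#include <tuple>
#include "depthai/depthai.hpp"

void enableIr(dai::Device& device) {
    for(const auto& drv : device.getIrDrivers()) {
        std::cout << std::get<0>(drv) << " on I2C bus " << std::get<1>(drv)
                  << ", address " << std::get<2>(drv) << std::endl;
    }
    device.setIrLaserDotProjectorIntensity(0.8f);  // 0..1, 0 turns the projector off
    device.setIrFloodLightIntensity(0.1f);         // 0..1, 0 turns the flood light off
}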

std::vector<std::tuple<std::string, int, int>> getIrDrivers()

Retrieves detected IR laser/LED drivers.

Return

Vector of tuples containing: driver name, I2C bus, I2C address. For OAK-D-Pro it should be [{"LM3644", 2, 0x63}]

dai::CrashDump getCrashDump(bool clearCrashDump = true)

Retrieves crash dump for debugging.

bool hasCrashDump()

Retrieves whether a crash dump is stored on the device or not.

ProfilingData getProfilingData()

Get current accumulated profiling data

Return

ProfilingData from the specific device

int addLogCallback(std::function<void(LogMessage)> callback)

Add a callback for device logging. The callback will be called from a separate thread with the LogMessage being passed.

Return

Id which can be used to later remove the callback

Parameters
  • callback: Callback to call whenever a log message arrives

bool removeLogCallback(int callbackId)

Removes a callback

Return

True if callback was removed, false otherwise

Parameters
  • callbackId: Id of callback to be removed
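
A sketch of routing device logs to a custom sink; the LogMessage payload field name is assumed from common usage.

#include <iostream>
#include "depthai/depthai.hpp"

void attachLogging(dai::Device& device) {
    device.setLogLevel(dai::LogLevel::DEBUG);       // what the device sends to the host
    device.setLogOutputLevel(dai::LogLevel::WARN);  // what gets printed to standard output

    int id = device.addLogCallback([](dai::LogMessage msg) {
        std::cout << "[device] " << msg.payload << std::endl;  // payload: assumed field name
    });

    // ... later
    device.removeLogCallback(id);
}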

void setSystemInformationLoggingRate(float rateHz)

Sets the rate of system information logging (“info” severity). Default is 1 Hz. If the parameter is less than or equal to zero, system information logging is disabled

Parameters
  • rateHz: Logging rate in Hz

float getSystemInformationLoggingRate()

Gets current rate of system information logging (“info” severity) in Hz.

Return

Logging rate in Hz

std::vector<CameraBoardSocket> getConnectedCameras()

Get cameras that are connected to the device

Return

Vector of connected cameras

std::vector<ConnectionInterface> getConnectionInterfaces()

Get connection interfaces for device

Return

Vector of connection types

std::vector<CameraFeatures> getConnectedCameraFeatures()

Get cameras that are connected to the device with their features/properties

Return

Vector of connected camera features

std::vector<StereoPair> getStereoPairs()

Get stereo pairs based on the device type.

Return

Vector of stereo pairs

std::vector<StereoPair> getAvailableStereoPairs()

Get stereo pairs taking into account the calibration and connected cameras.

Note

This method will always return a subset of getStereoPairs.

Return

Vector of stereo pairs

std::unordered_map<CameraBoardSocket, std::string> getCameraSensorNames()

Get sensor names for cameras that are connected to the device

Return

Map/dictionary with camera sensor names, indexed by socket
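
A short sketch that enumerates the connected cameras (assumes an already constructed dai::Device named device):

    for(const auto& entry : device.getCameraSensorNames()) {
        std::cout << "socket " << static_cast<int>(entry.first)
                  << " -> " << entry.second << std::endl;
    }
    for(const auto& socket : device.getConnectedCameras()) {
        std::cout << "connected socket: " << static_cast<int>(socket) << std::endl;
    }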

std::string getConnectedIMU()

Get connected IMU type

Return

IMU type

dai::Version getIMUFirmwareVersion()

Get connected IMU firmware version

Return

IMU firmware version

dai::Version getEmbeddedIMUFirmwareVersion()

Get embedded IMU firmware version to which IMU can be upgraded

Return

Embedded IMU firmware version to which the IMU can be upgraded

bool startIMUFirmwareUpdate(bool forceUpdate = false)

Starts IMU firmware update asynchronously only if the IMU node is not running. If the current firmware version is the same as the embedded firmware version then it’s a no-op. Can be overridden by the forceUpdate parameter. State of the firmware update can be monitored using the getIMUFirmwareUpdateStatus API.

Return

Returns whether firmware update can be started. Returns false if IMU node is started.

Parameters
  • forceUpdate: Force firmware update or not. Will perform FW update regardless of current version and embedded firmware version.

std::tuple<bool, unsigned int> getIMUFirmwareUpdateStatus()

Get IMU firmware update status

Return

Whether the IMU firmware update is done, and the last firmware update progress as a percentage. A return value of true with 100 means the update was successful; true with any value other than 100 means the update failed
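
A sketch of the update flow described above (assumes device is a connected dai::Device with no IMU node running; headers such as <thread> and <tuple> are omitted):

    if(device.startIMUFirmwareUpdate()) {
        bool done = false;
        unsigned int percent = 0;
        while(!done) {
            std::tie(done, percent) = device.getIMUFirmwareUpdateStatus();
            std::this_thread::sleep_for(std::chrono::milliseconds(500));
        }
        // done == true with percent == 100 indicates a successful update
    }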

MemoryInfo getDdrMemoryUsage()

Retrieves current DDR memory information from device

Return

Used, remaining and total ddr memory

MemoryInfo getCmxMemoryUsage()

Retrieves current CMX memory information from device

Return

Used, remaining and total cmx memory

MemoryInfo getLeonCssHeapUsage()

Retrieves current CSS Leon CPU heap information from device

Return

Used, remaining and total heap memory

MemoryInfo getLeonMssHeapUsage()

Retrieves current MSS Leon CPU heap information from device

Return

Used, remaining and total heap memory

ChipTemperature getChipTemperature()

Retrieves current chip temperature as measured by device

Return

Temperature of various onboard sensors

CpuUsage getLeonCssCpuUsage()

Retrieves average CSS Leon CPU usage

Return

Average CPU usage and sampling duration

CpuUsage getLeonMssCpuUsage()

Retrieves average MSS Leon CPU usage

Return

Average CPU usage and sampling duration
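
A sketch that prints a few of the telemetry values above (assumes device is a connected dai::Device; member names such as used, total and average follow the descriptions here and should be checked against the actual MemoryInfo, ChipTemperature and CpuUsage structures):

    dai::MemoryInfo ddr = device.getDdrMemoryUsage();
    std::cout << "DDR: " << ddr.used << " / " << ddr.total << " bytes used" << std::endl;
    std::cout << "Chip temperature (avg): " << device.getChipTemperature().average << " C" << std::endl;
    std::cout << "LeonCSS CPU: " << device.getLeonCssCpuUsage().average * 100.0f << " %" << std::endl;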

bool isEepromAvailable()

Check if EEPROM is available

Return

True if EEPROM is present on board, false otherwise

bool flashCalibration(CalibrationHandler calibrationDataHandler)

Stores the Calibration and Device information to the Device EEPROM

Return

true on successful flash, false on failure

Parameters
  • calibrationDataHandler: CalibrationHandler object containing the calibration and board data to store

void flashCalibration2(CalibrationHandler calibrationDataHandler)

Stores the Calibration and Device information to the Device EEPROM

Exceptions
  • std::runtime_error: if failed to flash the calibration

Parameters
  • calibrationDataHandler: CalibrationHandler object containing the calibration and board data to store

CalibrationHandler readCalibration()

Fetches the EEPROM data from the device and loads it into a CalibrationHandler object. If no calibration is flashed, it returns the default

Return

The CalibrationHandler object containing the calibration currently flashed on device EEPROM

CalibrationHandler readCalibration2()

Fetches the EEPROM data from the device and loads it into CalibrationHandler object

Return

The CalibrationHandler object containing the calibration currently flashed on device EEPROM

Exceptions
  • std::runtime_error: if no calibration is flashed

CalibrationHandler readCalibrationOrDefault()

Fetches the EEPROM data from the device and loads it into a CalibrationHandler object. If no calibration is flashed, it returns the default

Return

The CalibrationHandler object containing the calibration currently flashed on device EEPROM
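
A minimal sketch reading the flashed calibration (assumes device is a connected dai::Device; CAM_A is just an example socket):

    dai::CalibrationHandler calib = device.readCalibration();
    auto intrinsics = calib.getCameraIntrinsics(dai::CameraBoardSocket::CAM_A);
    for(const auto& row : intrinsics) {
        for(float v : row) std::cout << v << " ";
        std::cout << std::endl;
    }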

void factoryResetCalibration()

Factory reset EEPROM data if factory backup is available.

Exceptions
  • std::runtime_error: If factory reset was unsuccessful

void flashFactoryCalibration(CalibrationHandler calibrationHandler)

Stores the Calibration and Device information to the Device EEPROM in the Factory area. To perform this action, the correct environment variable must be set

Exceptions
  • std::runtime_error: if failed to flash the calibration

void flashEepromClear()

Destructive action, deletes User area EEPROM contents. Requires PROTECTED permissions

Exceptions
  • std::runtime_error: if failed to clear the User area EEPROM contents

void flashFactoryEepromClear()

Destructive action, deletes Factory area EEPROM contents. Requires FACTORY PROTECTED permissions

Exceptions
  • std::runtime_error: if failed to clear the Factory area EEPROM contents

CalibrationHandler readFactoryCalibration()

Fetches the EEPROM data from Factory area and loads it into CalibrationHandler object

Return

The CalibrationHandler object containing the calibration currently flashed on device EEPROM in Factory Area

Exceptions
  • std::runtime_error: if no calibration is flashed

CalibrationHandler readFactoryCalibrationOrDefault()

Fetches the EEPROM data from the Factory area and loads it into a CalibrationHandler object. If no calibration is flashed, it returns the default

Return

The CalibrationHandler object containing the calibration currently flashed on device EEPROM in Factory Area

std::vector<std::uint8_t> readCalibrationRaw()

Fetches the raw EEPROM data from User area

Return

Binary dump of User area EEPROM data

Exceptions
  • std::runtime_error: if any error occurred

std::vector<std::uint8_t> readFactoryCalibrationRaw()

Fetches the raw EEPROM data from Factory area

Return

Binary dump of Factory area EEPROM data

Exceptions
  • std::runtime_error: if any error occurred

UsbSpeed getUsbSpeed()

Retrieves USB connection speed

Return

USB connection speed of connected device if applicable. Unknown otherwise.

void setTimesync(std::chrono::milliseconds period, int numSamples, bool random)

Configures the Timesync service on the device. It keeps host and device clocks in sync. The first time timesync is started, it waits until the initial sync is completed. Afterwards, the function changes the following parameters

Parameters
  • period: Interval between timesync runs

  • numSamples: Number of timesync samples per run which are used to compute a better value. Set to zero to disable timesync

  • random: If true partial timesync requests will be performed at random intervals, otherwise at fixed intervals
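
A minimal sketch (assumes device is a connected dai::Device; the values mirror the DEFAULT_TIMESYNC_* attributes listed further below, and the bool overload documented next toggles the service entirely):

    device.setTimesync(std::chrono::milliseconds(5000), 10, true);
    // Or disable the service:
    device.setTimesync(false);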

void setTimesync(bool enable)

Enables or disables Timesync service on device. It keeps host and device clocks in sync.

Parameters
  • enable: Enables or disables consistent timesyncing

void close()

Explicitly closes connection to device.

Note

This function does not need to be explicitly called as destructor closes the device automatically

bool isClosed() const

Is the device already closed (or disconnected)

Warning

This function is thread-unsafe and may return outdated, incorrect values. It is only meant for use in simple single-threaded code. Well-written code should handle exceptions when calling any DepthAI APIs to handle hardware events and multithreaded use.

std::shared_ptr<XLinkConnection> getConnection()

Returns underlying XLinkConnection

std::shared_ptr<const XLinkConnection> getConnection() const

Returns underlying XLinkConnection

Public Static Functions

std::chrono::milliseconds getDefaultSearchTime()

Get the Default Search Time for finding devices.

Return

Default search time in milliseconds

std::tuple<bool, DeviceInfo> getAnyAvailableDevice(std::chrono::milliseconds timeout)

Waits for any available device with a timeout

Return

Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device

Parameters
  • timeout: Duration of time to wait for any device

std::tuple<bool, DeviceInfo> getAnyAvailableDevice()

Gets any available device

Return

Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device

std::tuple<bool, DeviceInfo> getAnyAvailableDevice(std::chrono::milliseconds timeout, std::function<void()> cb)

Waits for any available device with a timeout

Return

Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device

Parameters
  • timeout: Duration of time to wait for any device

  • cb: Callback function called between polling intervals

std::tuple<bool, DeviceInfo> getFirstAvailableDevice(bool skipInvalidDevice = true)

Gets first available device. Device can be either in XLINK_UNBOOTED or XLINK_BOOTLOADER state

Return

Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device

std::tuple<bool, DeviceInfo> getDeviceByMxId(std::string mxId)

Finds a device by MX ID. Example: 14442C10D13EABCE00

Return

Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device

Parameters
  • mxId: MyriadX ID which uniquely specifies a device

std::vector<DeviceInfo> getAllAvailableDevices()

Returns all available devices

Return

Vector of available devices

std::vector<DeviceInfo> getAllConnectedDevices()

Returns information about all connected devices. The devices may be either available to connect to or already connected to.

Return

Vector of connected device information
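
A short sketch of device discovery using the static helpers above:

    bool found = false;
    dai::DeviceInfo info;
    std::tie(found, info) = dai::Device::getAnyAvailableDevice(std::chrono::seconds(5));
    if(found) {
        dai::Device device(info);
        // ... use the device ...
    }
    for(const auto& d : dai::Device::getAllAvailableDevices()) {
        std::cout << d.getMxId() << std::endl;
    }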

std::vector<std::uint8_t> getEmbeddedDeviceBinary(bool usb2Mode, OpenVINO::Version version = OpenVINO::VERSION_UNIVERSAL)

Gets device firmware binary for a specific OpenVINO version

Return

Firmware binary

Parameters
  • usb2Mode: USB2 mode firmware

  • version: Version of OpenVINO which firmware will support

std::vector<std::uint8_t> getEmbeddedDeviceBinary(Config config)

Gets device firmware binary for a specific configuration

Return

Firmware binary

Parameters
  • config: FW with applied configuration

ProfilingData getGlobalProfilingData()

Get current global accumulated profiling data

Return

ProfilingData from all devices

Public Static Attributes

constexpr std::chrono::seconds DEFAULT_SEARCH_TIME = {3}

Default search time for constructors which discover devices.

constexpr float DEFAULT_SYSTEM_INFORMATION_LOGGING_RATE_HZ = {1.0f}

Default rate at which system information is logged.

constexpr UsbSpeed DEFAULT_USB_SPEED = {UsbSpeed::SUPER}

Default UsbSpeed for device connection.

constexpr std::chrono::milliseconds DEFAULT_TIMESYNC_PERIOD = {5000}

Default Timesync period.

constexpr int DEFAULT_TIMESYNC_NUM_SAMPLES = {10}

Default Timesync number of samples per sync.

constexpr bool DEFAULT_TIMESYNC_RANDOM = {true}

Default Timesync packet interval randomness.

struct Config
#include <DeviceBase.hpp>

Device specific configuration

class DeviceBootloader
#include <DeviceBootloader.hpp>

Represents the DepthAI bootloader with the methods to interact with it.

Public Functions

DeviceBootloader(const DeviceInfo &devInfo)

Connects to or boots device in bootloader mode depending on devInfo state; flashing not allowed

Parameters
  • devInfo: DeviceInfo of which to boot or connect to

template<typename T, std::enable_if_t<std::is_same<T, bool>::value, bool> = true>
DeviceBootloader(const DeviceInfo &devInfo, T allowFlashingBootloader)

Connects to or boots device in bootloader mode depending on devInfo state.

Parameters
  • devInfo: DeviceInfo of which to boot or connect to

  • allowFlashingBootloader: (bool) Set to true to allow flashing the device's bootloader

DeviceBootloader(const DeviceInfo &devInfo, Type type, bool allowFlashingBootloader = false)

Connects to device in bootloader of specified type. Throws if it wasn’t possible. This constructor will automatically boot into specified bootloader type if not already running

Parameters
  • devInfo: DeviceInfo of which to boot or connect to

  • type: Type of bootloader to boot/connect to.

  • allowFlashingBootloader: Set to true to allow flashing the device's bootloader. Defaults to false

DeviceBootloader(const DeviceInfo &devInfo, const dai::Path &pathToBootloader, bool allowFlashingBootloader = false)

Connects to or boots device in bootloader mode depending on devInfo state with a custom bootloader firmware.

Parameters
  • devInfo: DeviceInfo of which to boot or connect to

  • pathToBootloader: Custom bootloader firmware to boot

  • allowFlashingBootloader: Set to true to allow flashing the device's bootloader. Defaults to false

DeviceBootloader(std::string nameOrDeviceId, bool allowFlashingBootloader = false)

Connects to device with specified name/device id

Parameters
  • nameOrDeviceId: Creates DeviceInfo with nameOrDeviceId to connect to

  • allowFlashingBootloader: Set to true to allow flashing the device's bootloader. Defaults to false

~DeviceBootloader()

Destroy the Device Bootloader object.

std::tuple<bool, std::string> flash(std::function<void(float)> progressCallback, const Pipeline &pipeline, bool compress = false, std::string applicationName = "", Memory memory = Memory::AUTO, bool checkChecksum = false)

Flashes a given pipeline to the device.

Parameters
  • progressCallback: Callback that sends back a value between 0..1 which signifies current flashing progress

  • pipeline: Pipeline to flash to the board

  • compress: Compresses application to reduce needed memory size

  • applicationName: Name of the application that is flashed

std::tuple<bool, std::string> flash(const Pipeline &pipeline, bool compress = false, std::string applicationName = "", Memory memory = Memory::AUTO, bool checkChecksum = false)

Flashes a given pipeline to the device.

Parameters
  • pipeline: Pipeline to flash to the board

  • compress: Compresses application to reduce needed memory size

  • applicationName: Optional name of the application that is flashed
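
A minimal sketch of flashing a pipeline (assumes a device discoverable in UNBOOTED or BOOTLOADER state and a populated dai::Pipeline named pipeline):

    bool found = false;
    dai::DeviceInfo devInfo;
    std::tie(found, devInfo) = dai::DeviceBootloader::getFirstAvailableDevice();
    if(found) {
        dai::DeviceBootloader bl(devInfo);
        bool ok = false;
        std::string err;
        std::tie(ok, err) = bl.flash(pipeline);
        std::cout << (ok ? std::string("Flashed") : "Failed: " + err) << std::endl;
    }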

ApplicationInfo readApplicationInfo(Memory memory)

Reads information about flashed application in specified memory from device

Parameters
  • memory: Specifies which memory to query

std::tuple<bool, std::string> flashDepthaiApplicationPackage(std::function<void(float)> progressCallback, std::vector<uint8_t> package, Memory memory = Memory::AUTO)

Flashes a specific depthai application package that was generated using createDepthaiApplicationPackage or saveDepthaiApplicationPackage

Parameters
  • progressCallback: Callback that sends back a value between 0..1 which signifies current flashing progress

  • package: Depthai application package to flash to the board

std::tuple<bool, std::string> flashDepthaiApplicationPackage(std::vector<uint8_t> package, Memory memory = Memory::AUTO)

Flashes a specific depthai application package that was generated using createDepthaiApplicationPackage or saveDepthaiApplicationPackage

Parameters
  • package: Depthai application package to flash to the board

std::tuple<bool, std::string> flashClear(Memory memory = Memory::AUTO)

Clears flashed application on the device by removing the SBR boot structure. Doesn’t remove the fast boot header, preserving the capability to still boot the application

std::tuple<bool, std::string> flashBootloader(std::function<void(float)> progressCallback, const dai::Path &path = {})

Flashes bootloader to the current board

Parameters
  • progressCallback: Callback that sends back a value between 0..1 which signifies current flashing progress

  • path: Optional parameter to custom bootloader to flash
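
A sketch of a bootloader update with progress reporting (assumes bl is a dai::DeviceBootloader constructed with allowFlashingBootloader set to true):

    bool ok = false;
    std::string err;
    std::tie(ok, err) = bl.flashBootloader([](float progress) {
        std::cout << "Flashing bootloader: " << static_cast<int>(progress * 100) << "%" << std::endl;
    });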

std::tuple<bool, std::string> flashBootloader(Memory memory, Type type, std::function<void(float)> progressCallback, const dai::Path &path = {})

Flash selected bootloader to the current board

Parameters
  • memory: Memory to flash

  • type: Bootloader type to flash

  • progressCallback: Callback that sends back a value between 0..1 which signifies current flashing progress

  • path: Optional parameter to custom bootloader to flash

std::tuple<bool, std::string> flashUserBootloader(std::function<void(float)> progressCallback, const dai::Path &path = {})

Flashes user bootloader to the current board. Available for NETWORK bootloader type

Parameters
  • progressCallback: Callback that sends back a value between 0..1 which signifies current flashing progress

  • path: Optional parameter to custom bootloader to flash

std::tuple<bool, std::string> flashGpioModeBootHeader(Memory memory, int gpioMode)

Flash boot header which boots same as equivalent GPIO mode would

Parameters
  • gpioMode: GPIO mode equivalent

std::tuple<bool, std::string> flashUsbRecoveryBootHeader(Memory memory)

Flash USB recovery boot header. Switches to USB ROM Bootloader

Parameters
  • memory: Which memory to flash the header to

std::tuple<bool, std::string> flashBootHeader(Memory memory, int32_t frequency = -1, int64_t location = -1, int32_t dummyCycles = -1, int64_t offset = -1)

Flash optimized boot header

Return

status as std::tuple<bool, std::string>

Parameters
  • memory: Which memory to flash the header to

  • frequency: SPI specific parameter, frequency in MHz

  • location: Target location the header should boot to. Defaults to location of bootloader

  • dummyCycles: SPI specific parameter

  • offset: Offset in memory to flash the header to. Defaults to offset of boot header

std::tuple<bool, std::string> flashFastBootHeader(Memory memory, int32_t frequency = -1, int64_t location = -1, int32_t dummyCycles = -1, int64_t offset = -1)

Flash fast boot header. Application must already be present in flash, or location must be specified manually. Note - Can soft brick your device if firmware location changes.

Return

status as std::tuple<bool, std::string>

Parameters
  • memory: Which memory to flash the header to

  • frequency: SPI specific parameter, frequency in MHz

  • location: Target location the header should boot to. Defaults to location of bootloader

  • dummyCycles: SPI specific parameter

  • offset: Offset in memory to flash the header to. Defaults to offset of boot header

std::tuple<bool, std::string> flashCustom(Memory memory, size_t offset, const std::vector<uint8_t> &data, std::function<void(float)> progressCb = nullptr)

Flash arbitrary data at custom offset in specified memory

Parameters
  • memory: Memory to flash

  • offset: Offset at which to flash the given data in bytes

  • progressCb: Callback that sends back a value between 0..1 which signifies current flashing progress

  • data: Data to flash

std::tuple<bool, std::string> readCustom(Memory memory, size_t offset, size_t size, std::vector<uint8_t> &data, std::function<void(float)> progressCb = nullptr)

Reads arbitrary data at custom offset in specified memory

Parameters
  • memory: Memory to read

  • offset: Offset at which to read the specified bytes

  • size: Number of bytes to read

  • data: Data to read to. Must be at least ‘size’ number of bytes big

  • progressCb: Callback that sends back a value between 0..1 which signifies current reading progress

nlohmann::json readConfigData(Memory memory = Memory::AUTO, Type type = Type::AUTO)

Reads configuration data from bootloader

Return

Unstructured configuration data

Parameters
  • memory: Optional - from which memory to read configuration data

  • type: Optional - from which type of bootloader to read configuration data

std::tuple<bool, std::string> flashConfigData(nlohmann::json configData, Memory memory = Memory::AUTO, Type type = Type::AUTO)

Flashes configuration data to bootloader

Parameters
  • configData: Unstructured configuration data

  • memory: Optional - to which memory flash configuration

  • type: Optional - for which type of bootloader to flash configuration

std::tuple<bool, std::string> flashConfigFile(const dai::Path &configPath, Memory memory = Memory::AUTO, Type type = Type::AUTO)

Flashes configuration data to bootloader

Parameters
  • configPath: Path to the configuration file to flash

  • memory: Optional - to which memory flash configuration

  • type: Optional - for which type of bootloader to flash configuration

std::tuple<bool, std::string> flashConfigClear(Memory memory = Memory::AUTO, Type type = Type::AUTO)

Clears configuration data

Parameters
  • memory: Optional - on which memory to clear configuration data

  • type: Optional - for which type of bootloader to clear configuration data

Config readConfig(Memory memory = Memory::AUTO, Type type = Type::AUTO)

Reads configuration from bootloader

Return

Configuration structure

Parameters
  • memory: Optional - from which memory to read configuration

  • type: Optional - from which type of bootloader to read configuration

std::tuple<bool, std::string> flashConfig(const Config &config, Memory memory = Memory::AUTO, Type type = Type::AUTO)

Flashes configuration to bootloader

Parameters
  • config: Configuration structure

  • memory: Optional - to which memory flash configuration

  • type: Optional - for which type of bootloader to flash configuration

MemoryInfo getMemoryInfo(Memory memory)

Retrieves information about specified memory

Parameters
  • memory: Specifies which memory to query

bool isUserBootloaderSupported()

Checks whether User Bootloader is supported with current bootloader

Return

True if User Bootloader is supported, false otherwise

bool isUserBootloader()

Retrieves whether current bootloader is User Bootloader (B out of A/B configuration)

void bootMemory(const std::vector<uint8_t> &fw)

Boots a custom FW in memory

Parameters
  • fw: Firmware binary to boot

Exceptions
  • A runtime exception if there are any communication issues

void bootUsbRomBootloader()

Boots into integrated ROM bootloader in USB mode

Exceptions
  • A runtime exception if there are any communication issues

Version getVersion() const

Return

Version of current running bootloader

tl::optional<Version> getFlashedVersion() const

Return

Version of the bootloader that is flashed on the device. Nullopt when the version could not be retrieved because the device was in X_LINK_UNBOOTED state before booting the bootloader.

bool isEmbeddedVersion() const

Return

True when bootloader was booted using latest bootloader integrated in the library. False when bootloader is already running on the device and just connected to.

Type getType() const

Return

Type of currently connected bootloader

bool isAllowedFlashingBootloader() const

Return

True if allowed to flash bootloader

void close()

Explicitly closes connection to device.

Note

This function does not need to be explicitly called as destructor closes the device automatically

bool isClosed() const

Is the device already closed (or disconnected)

Warning

This function is thread-unsafe and may return outdated, incorrect values. It is only meant for use in simple single-threaded code. Well-written code should handle exceptions when calling any DepthAI APIs to handle hardware events and multithreaded use.

Public Static Functions

std::tuple<bool, DeviceInfo> getFirstAvailableDevice()

Searches for connected devices in either UNBOOTED or BOOTLOADER states and returns first available.

Return

Tuple of boolean and DeviceInfo. If found, the boolean is true and DeviceInfo describes the device; otherwise the boolean is false

std::vector<DeviceInfo> getAllAvailableDevices()

Searches for connected devices in either UNBOOTED or BOOTLOADER states.

Return

Vector of all found devices

std::vector<uint8_t> createDepthaiApplicationPackage(const Pipeline &pipeline, const dai::Path &pathToCmd = {}, bool compress = false, std::string applicationName = "", bool checkChecksum = false)

Creates application package which can be flashed to depthai device.

Return

Depthai application package

Parameters
  • pipeline: Pipeline from which to create the application package

  • pathToCmd: Optional path to custom device firmware

  • compress: Optional boolean which specifies if contents should be compressed

  • applicationName: Optional name of the application that is flashed

std::vector<uint8_t> createDepthaiApplicationPackage(const Pipeline &pipeline, bool compress, std::string applicationName = "", bool checkChecksum = false)

Creates application package which can be flashed to depthai device.

Return

Depthai application package

Parameters
  • pipeline: Pipeline from which to create the application package

  • compress: Specifies if contents should be compressed

  • applicationName: Name of the application that is flashed

void saveDepthaiApplicationPackage(const dai::Path &path, const Pipeline &pipeline, const dai::Path &pathToCmd = {}, bool compress = false, std::string applicationName = "", bool checkChecksum = false)

Saves application package to a file which can be flashed to depthai device.

Parameters
  • path: Path where to save the application package

  • pipeline: Pipeline from which to create the application package

  • pathToCmd: Optional path to custom device firmware

  • compress: Optional boolean which specifies if contents should be compressed

  • applicationName: Optional name of the application that is flashed

void saveDepthaiApplicationPackage(const dai::Path &path, const Pipeline &pipeline, bool compress, std::string applicationName = "", bool checkChecksum = false)

Saves application package to a file which can be flashed to depthai device.

Parameters
  • path: Path where to save the application package

  • pipeline: Pipeline from which to create the application package

  • compress: Specifies if contents should be compressed

  • applicationName: Optional name of the application that is flashed

Version getEmbeddedBootloaderVersion()

Return

Embedded bootloader version

std::vector<std::uint8_t> getEmbeddedBootloaderBinary(Type type = DEFAULT_TYPE)

Return

Embedded bootloader binary

Public Static Attributes

constexpr const Type DEFAULT_TYPE = {Type::USB}

Default Bootloader type.

struct ApplicationInfo
struct Config : public dai::bootloader::Config

Public Functions

void setStaticIPv4(std::string ip, std::string mask, std::string gateway)

Setting a static IPv4 won’t start DHCP client.

void setDynamicIPv4(std::string ip, std::string mask, std::string gateway)

Setting a dynamic IPv4 will set that IP as well as start DHCP client.
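
A minimal sketch of flashing a static network configuration (assumes bl is a dai::DeviceBootloader connected to a NETWORK/PoE bootloader; the addresses are placeholders):

    auto config = bl.readConfig();
    config.setStaticIPv4("192.168.1.200", "255.255.255.0", "192.168.1.1");
    config.setDnsIPv4("1.1.1.1");
    bool ok = false;
    std::string err;
    std::tie(ok, err) = bl.flashConfig(config);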

bool isStaticIPV4()

Get if static IPv4 configuration is set.

std::string getIPv4()

Get IPv4.

std::string getIPv4Mask()

Get IPv4 mask.

std::string getIPv4Gateway()

Get IPv4 gateway.

void setDnsIPv4(std::string dns, std::string dnsAlt = "")

Set IPv4 DNS options.

std::string getDnsIPv4()

Get primary IPv4 DNS server.

std::string getDnsAltIPv4()

Get alternate IPv4 DNS server.

void setUsbTimeout(std::chrono::milliseconds ms)

Set USB timeout.

std::chrono::milliseconds getUsbTimeout()

Get USB timeout.

void setNetworkTimeout(std::chrono::milliseconds ms)

Set NETWORK timeout.

std::chrono::milliseconds getNetworkTimeout()

Get NETWORK timeout.

void setMacAddress(std::string mac)

Set MAC address if not flashed on controller.

std::string getMacAddress()

Get MAC address if not flashed on controller.

void setUsbMaxSpeed(UsbSpeed speed)

Set maxUsbSpeed.

UsbSpeed getUsbMaxSpeed()

Get maxUsbSpeed.

nlohmann::json toJson() const

To JSON.

Public Static Functions

Config fromJson(nlohmann::json)

from JSON

struct MemoryInfo
struct DeviceInfo
#include <XLinkConnection.hpp>

Describes a connected device

Public Functions

DeviceInfo(std::string mxidOrName)

Creates a DeviceInfo by checking whether supplied parameter is a MXID or IP/USB name

Parameters
  • mxidOrName: Either MXID, IP Address or USB port name

class EdgeDetectorConfig : public dai::Buffer
#include <EdgeDetectorConfig.hpp>

EdgeDetectorConfig message. Carries sobel edge filter config.

Public Functions

EdgeDetectorConfig()

Construct EdgeDetectorConfig message.

void setSobelFilterKernels(const std::vector<std::vector<int>> &horizontalKernel, const std::vector<std::vector<int>> &verticalKernel)

Set sobel filter horizontal and vertical 3x3 kernels

Parameters
  • horizontalKernel: Used for horizontal gradient computation in 3x3 Sobel filter

  • verticalKernel: Used for vertical gradient computation in 3x3 Sobel filter
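
A minimal sketch using the default Sobel pair (how the message reaches the EdgeDetector node, e.g. through an input queue, depends on the pipeline):

    dai::EdgeDetectorConfig cfg;
    std::vector<std::vector<int>> horizontalKernel{{1, 0, -1}, {2, 0, -2}, {1, 0, -1}};
    std::vector<std::vector<int>> verticalKernel{{1, 2, 1}, {0, 0, 0}, {-1, -2, -1}};
    cfg.setSobelFilterKernels(horizontalKernel, verticalKernel);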

EdgeDetectorConfigData getConfigData() const

Retrieve configuration data for EdgeDetector

Return

EdgeDetectorConfigData: sobel filter horizontal and vertical 3x3 kernels

EdgeDetectorConfig &set(dai::RawEdgeDetectorConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawEdgeDetectorConfig get() const

Retrieve configuration data for EdgeDetector.

Return

config for EdgeDetector

struct EdgeDetectorConfigData
#include <RawEdgeDetectorConfig.hpp>

EdgeDetectorConfigData configuration data structure.

Public Members

std::vector<std::vector<int>> sobelFilterHorizontalKernel

Used for horizontal gradient computation in 3x3 Sobel filter. Format - 3x3 matrix, 2nd column must be 0. Default - +1 0 -1; +2 0 -2; +1 0 -1

std::vector<std::vector<int>> sobelFilterVerticalKernel

Used for vertical gradient computation in 3x3 Sobel filter. Format - 3x3 matrix, 2nd row must be 0. Default - +1 +2 +1; 0 0 0; -1 -2 -1

struct EdgeDetectorProperties : public dai::PropertiesSerializable<Properties, EdgeDetectorProperties>
#include <EdgeDetectorProperties.hpp>

Specify properties for EdgeDetector

Public Members

RawEdgeDetectorConfig initialConfig

Initial edge detector config.

int outputFrameSize = 1 * 1024 * 1024

Maximum output frame size in bytes (eg: 300x300 BGR image -> 300*300*3 bytes)

int numFramesPool = 4

Num frames in output pool.

struct EepromData
#include <EepromData.hpp>

EepromData structure

Contains the Calibration and Board data stored on device

Public Members

uint64_t batchTime = {0}

Deprecated, not used or stored.

struct EepromError : public runtime_error
class EncodedFrame : public dai::Buffer

Public Functions

EncodedFrame()

Construct EncodedFrame message. Timestamp is set to now

unsigned int getInstanceNum() const

Retrieves instance number

std::chrono::microseconds getExposureTime() const

Retrieves exposure time

int getSensitivity() const

Retrieves sensitivity, as an ISO value

int getColorTemperature() const

Retrieves white-balance color temperature of the light source, in kelvins

int getLensPosition() const

Retrieves lens position, range 0..255. Returns -1 if not available

float getLensPositionRaw() const

Retrieves lens position, range 0.0f..1.0f. Returns -1 if not available

unsigned int getQuality() const

Retrieves the encoding quality

unsigned int getBitrate() const

Retrieves the encoding bitrate

bool getLossless() const

Returns true if encoding is lossless (JPEG only)

FrameType getFrameType() const

Retrieves frame type (H26x only)

Profile getProfile() const

Retrieves the encoding profile (JPEG, AVC or HEVC)

EncodedFrame &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> tp)

Sets image timestamp related to dai::Clock::now()

EncodedFrame &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> tp)

Sets image timestamp related to dai::Clock::now()

EncodedFrame &setSequenceNum(int64_t seq)

Specifies sequence number

Parameters
  • seq: Sequence number

EncodedFrame &setInstanceNum(unsigned int instance)

Instance number relates to the origin of the frame (which camera)

Parameters
  • instance: Instance number

EncodedFrame &setQuality(unsigned int quality)

Specifies the encoding quality

Parameters
  • quality: Encoding quality

EncodedFrame &setBitrate(unsigned int bitrate)

Specifies the encoding bitrate

Parameters
  • bitrate: Encoding bitrate

EncodedFrame &setLossless(bool lossless)

Specifies if encoding is lossless (JPEG only)

Parameters
  • lossless: True if lossless

EncodedFrame &setFrameType(FrameType type)

Specifies the frame type (H26x only)

Parameters
  • type: Type of h26x frame (I, P, B)

EncodedFrame &setProfile(Profile profile)

Specifies the encoding profile

Parameters
  • profile: Encoding profile
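
A sketch that dumps the encoded bitstream to a file (assumes encQueue is the output queue of a VideoEncoder node; <fstream> is omitted; getData() comes from the dai::Buffer base class):

    std::ofstream out("video.encoded", std::ios::binary);
    for(int i = 0; i < 300; i++) {  // e.g. grab 300 packets
        auto pkt = encQueue->get<dai::EncodedFrame>();
        auto& data = pkt->getData();
        out.write(reinterpret_cast<const char*>(data.data()), data.size());
    }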

struct Extrinsics
#include <Extrinsics.hpp>

Extrinsics structure.

Public Members

Point3f translation

(x, y, z) pose of destCameraSocket w.r.t currentCameraSocket obtained through calibration

Point3f specTranslation

(x, y, z) pose of destCameraSocket w.r.t currentCameraSocket measured through CAD design

class FeatureTrackerConfig : public dai::Buffer
#include <FeatureTrackerConfig.hpp>

FeatureTrackerConfig message. Carries config for feature tracking algorithm

Public Functions

FeatureTrackerConfig()

Construct FeatureTrackerConfig message.

FeatureTrackerConfig &setCornerDetector(dai::FeatureTrackerConfig::CornerDetector::Type cornerDetector)

Set corner detector algorithm type.

Parameters
  • cornerDetector: Corner detector type, HARRIS or SHI_THOMASI

FeatureTrackerConfig &setCornerDetector(dai::FeatureTrackerConfig::CornerDetector config)

Set corner detector full configuration.

Parameters
  • config: Corner detector configuration

FeatureTrackerConfig &setOpticalFlow()

Set optical flow as motion estimation algorithm type.

FeatureTrackerConfig &setOpticalFlow(dai::FeatureTrackerConfig::MotionEstimator::OpticalFlow config)

Set optical flow full configuration.

Parameters
  • config: Optical flow configuration

FeatureTrackerConfig &setHwMotionEstimation()

Set hardware accelerated motion estimation using block matching. Faster than optical flow (software implementation) but might not be as accurate.

FeatureTrackerConfig &setNumTargetFeatures(std::int32_t numTargetFeatures)

Set number of target features to detect.

Parameters
  • numTargetFeatures: Number of features
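
A minimal sketch combining the setters above (HARRIS is one of the corner detector types; the message is typically sent to the FeatureTracker node's inputConfig queue):

    dai::FeatureTrackerConfig cfg;
    cfg.setCornerDetector(dai::FeatureTrackerConfig::CornerDetector::Type::HARRIS);
    cfg.setHwMotionEstimation();
    cfg.setNumTargetFeatures(256);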

FeatureTrackerConfig &setMotionEstimator(bool enable)

Enable or disable motion estimator.

Parameters
  • enable:

FeatureTrackerConfig &setMotionEstimator(dai::FeatureTrackerConfig::MotionEstimator config)

Set motion estimator full configuration.

Parameters
  • config: Motion estimator configuration

FeatureTrackerConfig &setFeatureMaintainer(bool enable)

Enable or disable feature maintainer.

Parameters
  • enable:

FeatureTrackerConfig &setFeatureMaintainer(dai::FeatureTrackerConfig::FeatureMaintainer config)

Set feature maintainer full configuration.

Parameters
  • config: feature maintainer configuration

FeatureTrackerConfig &set(dai::RawFeatureTrackerConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawFeatureTrackerConfig get() const

Retrieve configuration data for FeatureTracker.

Return

config for feature tracking algorithm

struct FeatureTrackerProperties : public dai::PropertiesSerializable<Properties, FeatureTrackerProperties>
#include <FeatureTrackerProperties.hpp>

Specify properties for FeatureTracker

Public Members

RawFeatureTrackerConfig initialConfig

Initial feature tracker config

std::int32_t numShaves = 1

Number of shaves reserved for feature tracking. Optical flow can use 1 or 2 shaves, while for corner detection only 1 is enough. Hardware motion estimation doesn’t require shaves. Maximum 2, minimum 1.

std::int32_t numMemorySlices = 1

Number of memory slices reserved for feature tracking. Optical flow can use 1 or 2 memory slices, while for corner detection only 1 is enough. Maximum number of features depends on the number of allocated memory slices. Hardware motion estimation doesn’t require memory slices. Maximum 2, minimum 1.

struct GlobalProperties : public dai::PropertiesSerializable<Properties, GlobalProperties>
#include <GlobalProperties.hpp>

Specify properties which apply for whole pipeline

Public Members

double leonCssFrequencyHz = 700 * 1000 * 1000

Set frequency of Leon OS - Increasing can improve performance, at the cost of higher power draw

double leonMssFrequencyHz = 700 * 1000 * 1000

Set frequency of Leon RT - Increasing can improve performance, at the cost of higher power draw

tl::optional<dai::EepromData> calibData

Calibration data sent through pipeline

tl::optional<std::uint32_t> cameraTuningBlobSize

Camera tuning blob size in bytes

std::string cameraTuningBlobUri

Uri which points to camera tuning blob

int32_t xlinkChunkSize = -1

Chunk size for splitting device-sent XLink packets, in bytes. A larger value could increase performance, with 0 disabling chunking. A negative value won’t modify the device defaults - configured per protocol, currently 64*1024 for both USB and Ethernet.

uint32_t sippBufferSize = SIPP_BUFFER_DEFAULT_SIZE

SIPP (Signal Image Processing Pipeline) internal memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. By default memory is allocated in high speed CMX memory. Setting to 0 will allocate 256 kilobytes in DDR. Units are bytes.

uint32_t sippDmaBufferSize = SIPP_DMA_BUFFER_DEFAULT_SIZE

SIPP (Signal Image Processing Pipeline) internal DMA memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. Memory is allocated in high speed CMX memory Units are bytes.

class ImageAlignConfig : public dai::Buffer
#include <ImageAlignConfig.hpp>

ImageAlignConfig message

Public Functions

ImageAlignConfig &set(dai::RawImageAlignConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawImageAlignConfig get() const

Retrieve configuration data for ImageAlign.

Return

config for ImageAlign

struct ImageAlignProperties : public dai::PropertiesSerializable<Properties, ImageAlignProperties>
#include <ImageAlignProperties.hpp>

Specify properties for ImageAlign

Public Members

int numFramesPool = 4

Num frames in output pool.

int alignWidth = 0

Optional output width

int alignHeight = 0

Optional output height

std::vector<int> warpHwIds

Warp HW IDs to use, if empty, use auto/default.

Interpolation interpolation = Interpolation::AUTO

Interpolation type to use.

bool outKeepAspectRatio = true

Whether to keep aspect ratio of the input or not

std::int32_t numShaves = 2

Number of shaves reserved.

class ImageManipConfig : public dai::Buffer
#include <ImageManipConfig.hpp>

ImageManipConfig message. Specifies image manipulation options like:

  • Crop

  • Resize

  • Warp

Public Functions

ImageManipConfig()

Construct ImageManipConfig message.

ImageManipConfig &setCropRect(float xmin, float ymin, float xmax, float ymax)

Specifies crop with rectangle with normalized values (0..1)

Parameters
  • xmin: Top left X coordinate of rectangle

  • ymin: Top left Y coordinate of rectangle

  • xmax: Bottom right X coordinate of rectangle

  • ymax: Bottom right Y coordinate of rectangle

ImageManipConfig &setCropRect(std::tuple<float, float, float, float> coordinates)

Specifies crop with rectangle with normalized values (0..1)

Parameters
  • coordinates: Coordinate of rectangle

ImageManipConfig &setCropRotatedRect(RotatedRect rr, bool normalizedCoords = true)

Specifies crop with rotated rectangle. Optionally as non normalized coordinates

Parameters
  • rr: Rotated rectangle which specifies crop

  • normalizedCoords: If true coordinates are in normalized range (0..1) otherwise absolute

ImageManipConfig &setCenterCrop(float ratio, float whRatio = 1.0f)

Specifies a centered crop.

Parameters
  • ratio: Ratio between input image and crop region (0..1)

  • whRatio: Crop region aspect ratio - 1 equals to square, 1.7 equals to 16:9, …

ImageManipConfig &setWarpTransformFourPoints(std::vector<Point2f> pt, bool normalizedCoords)

Specifies warp by supplying 4 points in either absolute or normalized coordinates

Parameters
  • pt: 4 points specifying warp

  • normalizedCoords: If true pt is interpreted as normalized, absolute otherwise

ImageManipConfig &setWarpTransformMatrix3x3(std::vector<float> mat)

Specifies warp with a 3x3 matrix

Parameters
  • mat: 3x3 matrix

ImageManipConfig &setWarpBorderReplicatePixels()

Specifies that warp replicates border pixels

ImageManipConfig &setWarpBorderFillColor(int red, int green, int blue)

Specifies fill color for border pixels. Example:

  • setWarpBorderFillColor(255,255,255) -> white

  • setWarpBorderFillColor(0,0,255) -> blue

Parameters
  • red: Red component

  • green: Green component

  • blue: Blue component

ImageManipConfig &setRotationDegrees(float deg)

Specifies clockwise rotation in degrees

Parameters
  • deg: Rotation in degrees

ImageManipConfig &setRotationRadians(float rad)

Specifies clockwise rotation in radians

Parameters
  • rad: Rotation in radians

ImageManipConfig &setResize(int w, int h)

Specifies output image size. After crop stage the image will be stretched to fit.

Parameters
  • w: Width in pixels

  • h: Height in pixels

ImageManipConfig &setResize(std::tuple<int, int> size)

Specifies output image size. After crop stage the image will be stretched to fit.

Parameters
  • size: Size in pixels

ImageManipConfig &setResizeThumbnail(int w, int h, int bgRed = 0, int bgGreen = 0, int bgBlue = 0)

Specifies output image size. After crop stage the image will be resized by preserving aspect ratio. Optionally background can be specified.

Parameters
  • w: Width in pixels

  • h: Height in pixels

  • bgRed: Red component

  • bgGreen: Green component

  • bgBlue: Blue component

ImageManipConfig &setResizeThumbnail(std::tuple<int, int> size, int bgRed = 0, int bgGreen = 0, int bgBlue = 0)

Specifies output image size. After crop stage the image will be resized by preserving aspect ratio. Optionally background can be specified.

Parameters
  • size: Size in pixels

  • bgRed: Red component

  • bgGreen: Green component

  • bgBlue: Blue component

ImageManipConfig &setFrameType(ImgFrame::Type name)

Specify output frame type.

Parameters
  • name: Frame type
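
A minimal sketch of a combined crop/resize/format config (typically sent as a runtime message to an ImageManip node's inputConfig; BGR888p is one of the ImgFrame types):

    dai::ImageManipConfig cfg;
    cfg.setCenterCrop(0.8f, 1.0f);
    cfg.setResize(300, 300);
    cfg.setFrameType(dai::ImgFrame::Type::BGR888p);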

ImageManipConfig &setColormap(Colormap colormap, int min, int max)

Specify gray to color conversion map

Parameters
  • colormap: map from Colormap enum or Colormap::NONE to disable

ImageManipConfig &setHorizontalFlip(bool flip)

Specify horizontal flip

Parameters
  • flip: True to enable flip, false otherwise

void setVerticalFlip(bool flip)

Specify vertical flip

Parameters
  • flip: True to enable vertical flip, false otherwise

ImageManipConfig &setReusePreviousImage(bool reuse)

Instruct ImageManip to not remove current image from its queue and use the same for next message.

Parameters
  • reuse: True to enable reuse, false otherwise

ImageManipConfig &setSkipCurrentImage(bool skip)

Instructs ImageManip to skip current image and wait for next in queue.

Parameters
  • skip: True to skip current image, false otherwise

ImageManipConfig &setKeepAspectRatio(bool keep)

Specifies whether to keep aspect ratio or not

ImageManipConfig &setInterpolation(dai::Interpolation interpolation)

Specify which interpolation method to use

Parameters
  • interpolation: type of interpolation

float getCropXMin() const

Return

Top left X coordinate of crop region

float getCropYMin() const

Return

Top left Y coordinate of crop region

float getCropXMax() const

Return

Bottom right X coordinate of crop region

float getCropYMax() const

Return

Bottom right Y coordinate of crop region

int getResizeWidth() const

Return

Output image width

int getResizeHeight() const

Return

Output image height

CropConfig getCropConfig() const

Return

Crop configuration

ResizeConfig getResizeConfig() const

Return

Resize configuration

FormatConfig getFormatConfig() const

Return

Format configuration

bool isResizeThumbnail() const

Return

True if resize thumbnail mode is set, false otherwise

Colormap getColormap() const

Return

specified colormap

ImageManipConfig &set(dai::RawImageManipConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawImageManipConfig get() const

Retrieve configuration data for ImageManip.

Return

config for ImageManip

dai::Interpolation getInterpolation() const

Retrieve which interpolation method to use.

struct ImageManipProperties : public dai::PropertiesSerializable<Properties, ImageManipProperties>
#include <ImageManipProperties.hpp>

Specify properties for ImageManip

Public Members

RawImageManipConfig initialConfig

Initial configuration for ImageManip node.

int outputFrameSize = 1 * 1024 * 1024

Maximum output frame size in bytes (eg: 300x300 BGR image -> 300*300*3 bytes)

int numFramesPool = 4

Num frames in output pool.

int meshWidth = 0

Custom warp mesh width. Set to zero to disable.

int meshHeight = 0

Custom warp mesh height. Set to zero to disable.

std::string meshUri = ""

Custom warp mesh uri. Set to empty string to disable.

struct ImgDetection
#include <RawImgDetections.hpp>

ImgDetection structure.

Subclassed by dai::SpatialImgDetection

class ImgDetections : public dai::Buffer
#include <ImgDetections.hpp>

ImgDetections message. Carries normalized detection results

Public Functions

ImgDetections()

Construct ImgDetections message.

ImgDetections &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

ImgDetections &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

ImgDetections &setSequenceNum(int64_t sequenceNum)

Specifies image sequence number

Public Members

std::vector<ImgDetection> &detections

Detections.
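
A sketch that iterates the detections (assumes detQueue is the output queue of a detection network node; label, confidence and the normalized bounding-box fields come from ImgDetection):

    auto dets = detQueue->get<dai::ImgDetections>();
    for(const auto& d : dets->detections) {
        std::cout << "label " << d.label << " conf " << d.confidence
                  << " bbox (" << d.xmin << "," << d.ymin << ")-(" << d.xmax << "," << d.ymax << ")" << std::endl;
    }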

class ImgFrame : public dai::Buffer
#include <ImgFrame.hpp>

ImgFrame message. Carries image data and metadata.

Public Functions

ImgFrame()

Construct ImgFrame message. Timestamp is set to now

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestamp(CameraExposureOffset offset) const

Retrieves image timestamp (at the specified offset of exposure) related to dai::Clock::now()

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestampDevice(CameraExposureOffset offset) const

Retrieves image timestamp (at the specified offset of exposure) directly captured from device’s monotonic clock, not synchronized to host time. Used when monotonicity is required.

unsigned int getInstanceNum() const

Retrieves instance number

unsigned int getCategory() const

Retrieves image category

unsigned int getWidth() const

Retrieves image width in pixels

unsigned int getHeight() const

Retrieves image height in pixels

Type getType() const

Retrieves image type

std::chrono::microseconds getExposureTime() const

Retrieves exposure time

int getSensitivity() const

Retrieves sensitivity, as an ISO value

int getColorTemperature() const

Retrieves white-balance color temperature of the light source, in kelvins

int getLensPosition() const

Retrieves lens position, range 0..255. Returns -1 if not available

float getLensPositionRaw() const

Retrieves lens position, range 0.0f..1.0f. Returns -1 if not available

ImgFrame &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

ImgFrame &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

ImgFrame &setInstanceNum(unsigned int instance)

Instance number relates to the origin of the frame (which camera)

Parameters
  • instance: Instance number

ImgFrame &setCategory(unsigned int category)

Parameters
  • category: Image category

ImgFrame &setSequenceNum(int64_t seq)

Specifies sequence number

Parameters
  • seq: Sequence number

ImgFrame &setWidth(unsigned int width)

Specifies frame width

Parameters
  • width: frame width

ImgFrame &setHeight(unsigned int height)

Specifies frame height

Parameters
  • height: frame height

ImgFrame &setSize(unsigned int width, unsigned int height)

Specifies frame size

Parameters
  • height: frame height

  • width: frame width

ImgFrame &setSize(std::tuple<unsigned int, unsigned int> size)

Specifies frame size

Parameters
  • size: frame size

ImgFrame &setType(Type type)

Specifies frame type, RGB, BGR, …

Parameters
  • type: Type of image

ImgFrame &setFrame(cv::Mat frame)

Copies cv::Mat data to ImgFrame buffer

Note

This API only available if OpenCV support is enabled

Parameters
  • frame: Input cv::Mat frame from which to copy the data

cv::Mat getFrame(bool copy = false)

Retrieves data as cv::Mat with specified width, height and type

Note

This API only available if OpenCV support is enabled

Return

cv::Mat with corresponding to ImgFrame parameters

Parameters
  • copy: If false only a reference to data is made, otherwise a copy

cv::Mat getCvFrame()

Retrieves cv::Mat suitable for use in common opencv functions.

ImgFrame is converted to color BGR interleaved or grayscale depending on type.
Note

This API only available if OpenCV support is enabled

A copy is always made

Return

cv::Mat for use in opencv functions
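
A sketch of the typical display loop (assumes q is an output queue carrying ImgFrame messages and OpenCV support is enabled; "video" is just a window name):

    while(true) {
        auto img = q->get<dai::ImgFrame>();
        cv::imshow("video", img->getCvFrame());
        if(cv::waitKey(1) == 'q') break;
    }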

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestamp() const

Retrieves timestamp related to dai::Clock::now()

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestampDevice() const

Retrieves timestamp directly captured from device’s monotonic clock, not synchronized to host time. Used mostly for debugging

class IMUData : public dai::Buffer
#include <IMUData.hpp>

IMUData message. Carries IMU packets

Public Functions

IMUData()

Construct IMUData message.

Public Members

std::vector<IMUPacket> &packets

Packets.

struct IMUPacket
#include <RawIMUData.hpp>

IMU output

Contains combined output for all possible modes. Only the enabled outputs are populated.
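
A sketch that reads the populated reports (assumes imuQueue is the output queue of an IMU node with accelerometer and gyroscope enabled; acceleroMeter and gyroscope are the packet member names in the depthai message layout):

    auto imuData = imuQueue->get<dai::IMUData>();
    for(const auto& packet : imuData->packets) {
        const auto& a = packet.acceleroMeter;
        const auto& g = packet.gyroscope;
        std::cout << "acc " << a.x << " " << a.y << " " << a.z
                  << " | gyro " << g.x << " " << g.y << " " << g.z << std::endl;
    }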

struct IMUProperties : public dai::PropertiesSerializable<Properties, IMUProperties>
struct IMUReport

Subclassed by dai::IMUReportAccelerometer, dai::IMUReportGyroscope, dai::IMUReportMagneticField, dai::IMUReportRotationVectorWAcc

Public Functions

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestamp() const

Retrieves timestamp related to dai::Clock::now()

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestampDevice() const

Retrieves timestamp directly captured from device’s monotonic clock, not synchronized to host time. Used mostly for debugging

int32_t getSequenceNum() const

Retrieves IMU report sequence number

Public Members

int32_t sequence = 0

The sequence number increments once for each report sent. Gaps in the sequence numbers indicate missing or dropped reports. Max value 2^32 after which resets to 0.

Accuracy accuracy = Accuracy::UNRELIABLE

Accuracy of sensor

Timestamp timestamp = {}

Generation timestamp, synced to host time

Timestamp tsDevice = {}

Generation timestamp, direct device monotonic clock

struct IMUReportAccelerometer : public dai::IMUReport
#include <RawIMUData.hpp>

Accelerometer.

Units are [m/s^2]

struct IMUReportGyroscope : public dai::IMUReport
#include <RawIMUData.hpp>

Gyroscope.

Units are [rad/s]

struct IMUReportMagneticField : public dai::IMUReport
#include <RawIMUData.hpp>

Magnetic field.

Units are [uTesla]

struct IMUReportRotationVectorWAcc : public dai::IMUReport
#include <RawIMUData.hpp>

Rotation Vector with Accuracy.

Contains quaternion components: i,j,k,real

Public Members

float i = 0

Quaternion component i.

float j = 0

Quaternion component j.

float k = 0

Quaternion component k.

float real = 0

Quaternion component, real.

float rotationVectorAccuracy = 0

Accuracy estimate [radians], 0 means no estimate.

struct IMUSensorConfig

Public Members

bool sensitivityRelative = false

Change reports relative (vs absolute)

uint16_t changeSensitivity = 0

Report-on-change threshold.

template<typename T>
class LockingQueue
struct LogMessage
struct MemoryInfo
#include <MemoryInfo.hpp>

MemoryInfo structure

Free, remaining and total memory stats

struct MessageDemuxProperties : public dai::PropertiesSerializable<Properties, MessageDemuxProperties>
#include <MessageDemuxProperties.hpp>

MessageDemux does not have any properties to set

class MessageGroup : public dai::Buffer
#include <MessageGroup.hpp>

MessageGroup message. Carries multiple messages in one.

Public Functions

MessageGroup()

Construct MessageGroup message.

std::shared_ptr<ADatatype> operator[](const std::string &name)

Group.

bool isSynced(int64_t thresholdNs) const

True if all messages in the group are in the interval

Parameters
  • thresholdNs: Maximal interval between messages

int64_t getIntervalNs() const

Retrieves interval between the first and the last message in the group.

std::vector<std::string> getMessageNames() const

Gets the names of messages in the group
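
A sketch of consuming a group (assumes syncQueue is the output queue of a Sync node and "rgb" is one of its input names):

    auto group = syncQueue->get<dai::MessageGroup>();
    if(group->isSynced(10 * 1000 * 1000)) {  // 10 ms threshold, in nanoseconds
        auto frame = std::dynamic_pointer_cast<dai::ImgFrame>((*group)["rgb"]);
        // ... use 'frame' ...
    }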

MessageGroup &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

MessageGroup &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

MessageGroup &setSequenceNum(int64_t sequenceNum)

Specifies sequence number

struct MonoCameraProperties : public dai::PropertiesSerializable<Properties, MonoCameraProperties>
#include <MonoCameraProperties.hpp>

Specify properties for MonoCamera such as camera ID, …

Public Types

enum SensorResolution

Select the camera sensor resolution: 1280×720, 1280×800, 640×400, 640×480, 1920×1200

Values:

enumerator THE_720_P
enumerator THE_800_P
enumerator THE_400_P
enumerator THE_480_P
enumerator THE_1200_P

Public Members

CameraBoardSocket boardSocket = CameraBoardSocket::AUTO

Which socket will mono camera use

std::string cameraName = ""

Which camera name will mono camera use

CameraImageOrientation imageOrientation = CameraImageOrientation::AUTO

Camera sensor image orientation / pixel readout

SensorResolution resolution = SensorResolution::THE_720_P

Select the camera sensor resolution

float fps = 30.0

Camera sensor FPS

int isp3aFps = 0

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if camera is running at 30 fps, and camera control is sent at every frame, but 3A fps is set to 15, the camera control messages will be processed at 15 fps rate, which will lead to queueing.

int numFramesPool = 3

Frame pool size for the main output, ISP processed

int numFramesPoolRaw = 3

Frame pool size for the raw output

std::vector<dai::FrameEvent> eventFilter = {dai::FrameEvent::READOUT_START}

List of events to receive, the rest will be ignored

tl::optional<bool> rawPacked

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.
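
These properties are normally populated through the MonoCamera node rather than set directly; a minimal sketch, assuming the usual node-level setters:

    dai::Pipeline pipeline;
    auto mono = pipeline.create<dai::node::MonoCamera>();
    mono->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    mono->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
    mono->setFps(30);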

struct NeuralNetworkProperties : public dai::PropertiesSerializable<Properties, NeuralNetworkProperties>
#include <NeuralNetworkProperties.hpp>

Specify properties for NeuralNetwork such as blob path, …

Subclassed by dai::PropertiesSerializable< NeuralNetworkProperties, DetectionNetworkProperties >

Public Members

tl::optional<std::uint32_t> blobSize

Blob binary size in bytes

std::string blobUri

Uri which points to blob

std::uint32_t numFrames = 8

Number of available output tensors in pool

std::uint32_t numThreads = 0

Number of threads to create for running inference. 0 = auto

std::uint32_t numNCEPerThread = 0

Number of NCE (Neural Compute Engine) per inference thread. 0 = auto
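
As with other properties, these values are typically driven through the NeuralNetwork node; a sketch under the assumption that the node exposes the usual setBlobPath / setNumInferenceThreads / setNumPoolFrames style setters.

    #include <depthai/depthai.hpp>

    void addNeuralNetwork(dai::Pipeline& pipeline, const dai::Path& blobPath) {
        auto nn = pipeline.create<dai::node::NeuralNetwork>();
        nn->setBlobPath(blobPath);          // fills blobUri / blobSize
        nn->setNumInferenceThreads(2);      // maps to numThreads (0 = auto)
        nn->setNumPoolFrames(8);            // maps to numFrames
    }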

class NNData : public dai::Buffer
#include <NNData.hpp>

NNData message. Carries tensors and their metadata

Public Functions

NNData()

Construct NNData message.

NNData &setLayer(const std::string &name, std::vector<std::uint8_t> data)

Set a layer with datatype U8.

Parameters
  • name: Name of the layer

  • data: Data to store

NNData &setLayer(const std::string &name, const std::vector<int> &data)

Set a layer with datatype U8. Integers are cast to bytes.

Parameters
  • name: Name of the layer

  • data: Data to store

NNData &setLayer(const std::string &name, std::vector<float> data)

Set a layer with datatype FP16. Float values are converted to FP16.

Parameters
  • name: Name of the layer

  • data: Data to store

NNData &setLayer(const std::string &name, std::vector<double> data)

Set a layer with datatype FP16. Double values are converted to FP16.

Parameters
  • name: Name of the layer

  • data: Data to store

std::vector<std::string> getAllLayerNames() const

Return

Names of all layers added

std::vector<TensorInfo> getAllLayers() const

Return

All layers and their information

bool getLayer(const std::string &name, TensorInfo &tensor) const

Retrieve a layer's tensor information

Return

True if layer exists, false otherwise

Parameters
  • name: Name of the layer

  • [out] tensor: Outputs tensor information of that layer

bool hasLayer(const std::string &name) const

Checks if given layer exists

Return

True if layer exists, false otherwise

Parameters
  • name: Name of the layer

bool getLayerDatatype(const std::string &name, TensorInfo::DataType &datatype) const

Retrieve datatype of a layer's tensor

Return

True if layer exists, false otherwise

Parameters
  • name: Name of the layer

  • [out] datatype: Datatype of layers tensor

std::vector<std::uint8_t> getLayerUInt8(const std::string &name) const

Convenience function to retrieve U8 data from layer

Return

U8 binary data

Parameters
  • name: Name of the layer

std::vector<float> getLayerFp16(const std::string &name) const

Convenience function to retrieve float values from a layer's FP16 tensor

Return

Float data

Parameters
  • name: Name of the layer

std::vector<std::int32_t> getLayerInt32(const std::string &name) const

Convenience function to retrieve INT32 values from a layer's tensor

Return

INT32 data

Parameters
  • name: Name of the layer

std::vector<std::uint8_t> getFirstLayerUInt8() const

Convenience function to retrieve U8 data from first layer

Return

U8 binary data

std::vector<float> getFirstLayerFp16() const

Convenience function to retrieve float values from the first layer's FP16 tensor

Return

Float data

std::vector<std::int32_t> getFirstLayerInt32() const

Convenience function to retrieve INT32 values from the first layer's tensor

Return

INT32 data

NNData &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets message timestamp related to dai::Clock::now()

NNData &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets message timestamp related to the device's monotonic clock

NNData &setSequenceNum(int64_t sequenceNum)

Specifies message sequence number
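
A short round-trip sketch: writing a layer into an NNData message and reading it back with the accessors listed above. The layer name "input" is illustrative, and the host-side round trip is shown purely for illustration.

    #include <depthai/depthai.hpp>
    #include <vector>

    void nnDataRoundTrip() {
        dai::NNData nn;
        // Stored as an FP16 tensor named "input"
        nn.setLayer("input", std::vector<float>{0.f, 0.5f, 1.f});

        if(nn.hasLayer("input")) {
            std::vector<float> values = nn.getLayerFp16("input");
            auto names = nn.getAllLayerNames();   // {"input"}
            (void)values; (void)names;
        }
    }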

class Node
#include <Node.hpp>

Abstract Node.

Subclassed by dai::NodeCRTP< Node, AprilTag, AprilTagProperties >, dai::NodeCRTP< Node, Camera, CameraProperties >, dai::NodeCRTP< Node, Cast, CastProperties >, dai::NodeCRTP< Node, ColorCamera, ColorCameraProperties >, dai::NodeCRTP< Node, DetectionParser, DetectionParserProperties >, dai::NodeCRTP< Node, EdgeDetector, EdgeDetectorProperties >, dai::NodeCRTP< Node, FeatureTracker, FeatureTrackerProperties >, dai::NodeCRTP< Node, ImageAlign, ImageAlignProperties >, dai::NodeCRTP< Node, ImageManip, ImageManipProperties >, dai::NodeCRTP< Node, IMU, IMUProperties >, dai::NodeCRTP< Node, MessageDemux, MessageDemuxProperties >, dai::NodeCRTP< Node, MonoCamera, MonoCameraProperties >, dai::NodeCRTP< Node, NeuralNetwork, NeuralNetworkProperties >, dai::NodeCRTP< Node, ObjectTracker, ObjectTrackerProperties >, dai::NodeCRTP< Node, PointCloud, PointCloudProperties >, dai::NodeCRTP< Node, Script, ScriptProperties >, dai::NodeCRTP< Node, SpatialLocationCalculator, SpatialLocationCalculatorProperties >, dai::NodeCRTP< Node, SPIIn, SPIInProperties >, dai::NodeCRTP< Node, SPIOut, SPIOutProperties >, dai::NodeCRTP< Node, StereoDepth, StereoDepthProperties >, dai::NodeCRTP< Node, Sync, SyncProperties >, dai::NodeCRTP< Node, SystemLogger, SystemLoggerProperties >, dai::NodeCRTP< Node, ToF, ToFProperties >, dai::NodeCRTP< Node, UVC, UVCProperties >, dai::NodeCRTP< Node, VideoEncoder, VideoEncoderProperties >, dai::NodeCRTP< Node, Warp, WarpProperties >, dai::NodeCRTP< Node, XLinkIn, XLinkInProperties >, dai::NodeCRTP< Node, XLinkOut, XLinkOutProperties >

Public Types

using Id = std::int64_t

Node identifier. Unique for every node on a single Pipeline.

Public Functions

std::unique_ptr<Node> clone() const = 0

Deep copy the node.

const char *getName() const = 0

Retrieves the node's name.

std::vector<Output> getOutputs()

Retrieves all of the node's outputs.

std::vector<Input> getInputs()

Retrieves all of the node's inputs.

std::vector<Output*> getOutputRefs()

Retrieves reference to node outputs.

std::vector<const Output*> getOutputRefs() const

Retrieves reference to node outputs.

std::vector<Input*> getInputRefs()

Retrieves reference to node inputs.

std::vector<const Input*> getInputRefs() const

Retrieves reference to node inputs.

Node(const std::shared_ptr<PipelineImpl> &p, Id nodeId, std::unique_ptr<Properties> props)

Constructs Node.

const AssetManager &getAssetManager() const

Get node AssetManager as a const reference.

AssetManager &getAssetManager()

Get node AssetManager as a reference.

Public Members

const Id id

Id of node.

struct Connection
#include <Node.hpp>

Connection between an Input and Output.

struct DatatypeHierarchy
class Input

Public Functions

Input(Node &par, std::string n, Type t, std::vector<DatatypeHierarchy> types)

Constructs Input with default blocking and queueSize options.

Input(Node &par, std::string n, Type t, bool blocking, int queueSize, std::vector<DatatypeHierarchy> types)

Constructs Input with specified blocking and queueSize options.

Input(Node &par, std::string n, Type t, bool blocking, int queueSize, bool waitForMessage, std::vector<DatatypeHierarchy> types)

Constructs Input with specified blocking and queueSize as well as additional options.

Input(Node &par, std::string group, std::string n, Type t, bool blocking, int queueSize, bool waitForMessage, std::vector<DatatypeHierarchy> types)

Constructs Input with specified blocking and queueSize as well as additional options.

std::string toString() const

Input to string representation.

void setBlocking(bool blocking)

Overrides default input queue behavior.

Parameters
  • blocking: True blocking, false overwriting

bool getBlocking() const

Get input queue behavior

Return

True blocking, false overwriting

void setQueueSize(int size)

Overrides default input queue size. If queue size fills up, behavior depends on blocking attribute

Parameters
  • size: Maximum input queue size

int getQueueSize() const

Get input queue size.

Return

Maximum input queue size

void setWaitForMessage(bool waitForMessage)

Overrides default wait for message behavior. Applicable for nodes with multiple inputs. Specifies behavior whether to wait for this input when a Node processes certain data or not.

Parameters
  • waitForMessage: Whether to wait for message to arrive to this input or not

bool getWaitForMessage() const

Get behavior whether to wait for this input when a Node processes certain data or not

Return

Whether to wait for message to arrive to this input or not

void setReusePreviousMessage(bool reusePreviousMessage)

Equivalent to setWaitForMessage but with inverted logic.

bool getReusePreviousMessage() const

Equivalent to getWaitForMessage but with inverted logic.
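
Queue behaviour is commonly tuned on a node's input before the pipeline is started; a small sketch using a NeuralNetwork input (the specific node is illustrative, the Input methods are the ones documented above).

    #include <depthai/depthai.hpp>

    void tuneInput(dai::Pipeline& pipeline) {
        auto nn = pipeline.create<dai::node::NeuralNetwork>();
        nn->input.setBlocking(false);   // false = overwrite oldest message when full
        nn->input.setQueueSize(1);      // keep at most one queued message
    }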

class InputMap : public std::unordered_map<std::string, Input>
#include <Node.hpp>

Input map which keeps track of inputs assigned to a node. Extends std::unordered_map<std::string, dai::Node::Input>.

Public Functions

Input &operator[](const std::string &key)

Create or modify an input.

class Output

Public Functions

std::string toString() const

Output to string representation.

bool isSamePipeline(const Input &in)

Check if this output and given input are on the same pipeline.

See

canConnect for checking if connection is possible

Return

True if output and input are on the same pipeline

bool canConnect(const Input &in)

Check if connection is possible

Return

True if connection is possible, false otherwise

Parameters
  • in: Input to connect to

std::vector<Connection> getConnections()

Retrieve all connections from this output

Return

Vector of connections

void link(const Input &in)

Link current output to input.

Throws an error if this output cannot be linked to given input, or if they are already linked

Parameters
  • in: Input to link to

void unlink(const Input &in)

Unlink a previously linked connection

Throws an error if not linked.

Parameters
  • in: Input to unlink from
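
Linking is typically done directly between node members; a sketch with a MonoCamera feeding an XLinkOut (the node choice and stream name are illustrative, link/canConnect are documented above).

    #include <depthai/depthai.hpp>

    void linkNodes(dai::Pipeline& pipeline) {
        auto mono = pipeline.create<dai::node::MonoCamera>();
        auto xout = pipeline.create<dai::node::XLinkOut>();
        xout->setStreamName("mono");

        if(mono->out.canConnect(xout->input)) {
            mono->out.link(xout->input);   // throws if already linked
        }
    }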

class OutputMap : public std::unordered_map<std::string, Output>
#include <Node.hpp>

Output map which keeps track of extra outputs assigned to a node. Extends std::unordered_map<std::string, dai::Node::Output>.

Public Functions

Output &operator[](const std::string &key)

Create or modify an output.

struct NodeConnectionSchema
#include <NodeConnectionSchema.hpp>

Specifies a connection between nodes IOs

template<typename Base, typename Derived, typename Props>
class NodeCRTP : public Base

Public Members

Properties &properties

Underlying properties.

struct NodeIoInfo
#include <NodeIoInfo.hpp>

NodeIo information such as name, type, …

struct NodeObjInfo
#include <NodeObjInfo.hpp>

NodeObj information structure.

struct IoInfoKey
struct ObjectTrackerProperties : public dai::PropertiesSerializable<Properties, ObjectTrackerProperties>
#include <ObjectTrackerProperties.hpp>

Specify properties for ObjectTracker

Public Members

float trackerThreshold = 0.0

Confidence threshold for tracklets. Above this threshold detections will be tracked. Default 0, all detections are tracked.

std::int32_t maxObjectsToTrack = 60

Maximum number of objects to track. Maximum 60 for SHORT_TERM_KCF, maximum 1000 for other tracking methods. Default 60.

std::vector<std::uint32_t> detectionLabelsToTrack

Which detections labels to track. Default all labels are tracked.

TrackerType trackerType = TrackerType::ZERO_TERM_IMAGELESS

Tracking method.

TrackerIdAssignmentPolicy trackerIdAssignmentPolicy = TrackerIdAssignmentPolicy::UNIQUE_ID

New ID assignment policy.

bool trackingPerClass = true

Whether tracker should take into consideration class label for tracking.
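
These properties map onto the ObjectTracker node's setters; a sketch, where the setter names are assumptions following the usual depthai node conventions.

    #include <depthai/depthai.hpp>

    void configureTracker(dai::Pipeline& pipeline) {
        auto tracker = pipeline.create<dai::node::ObjectTracker>();
        tracker->setTrackerType(dai::TrackerType::ZERO_TERM_IMAGELESS);
        tracker->setTrackerIdAssignmentPolicy(dai::TrackerIdAssignmentPolicy::UNIQUE_ID);
        tracker->setTrackerThreshold(0.5f);          // only track confident detections
        tracker->setMaxObjectsToTrack(20);           // <= 60 for SHORT_TERM_KCF
        tracker->setDetectionLabelsToTrack({0, 1});  // track only these label ids
    }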

class OpenVINO
#include <OpenVINO.hpp>

Support for basic OpenVINO related actions like version identification of neural network blobs,…

Public Types

enum Version

Supported OpenVINO version information.

Values:

enumerator VERSION_2020_3
enumerator VERSION_2020_4
enumerator VERSION_2021_1
enumerator VERSION_2021_2
enumerator VERSION_2021_3
enumerator VERSION_2021_4
enumerator VERSION_2022_1
enumerator VERSION_UNIVERSAL

Public Static Functions

std::vector<Version> getVersions()

Return

Supported versions

std::string getVersionName(Version version)

Returns string representation of a given version

Return

Name of a given version

Parameters
  • version: Version to get the name of

Version parseVersionName(const std::string &versionString)

Creates Version from string representation. Throws if not possible.

Return

Version object if successful

Parameters
  • versionString: String representation of the version

std::vector<Version> getBlobSupportedVersions(std::uint32_t majorVersion, std::uint32_t minorVersion)

Returns a list of potentially supported versions for a specified blob major and minor versions.

Return

Vector of potentially supported versions

Parameters
  • majorVersion: Major version from OpenVINO blob

  • minorVersion: Minor version from OpenVINO blob

Version getBlobLatestSupportedVersion(std::uint32_t majorVersion, std::uint32_t minorVersion)

Returns latest potentially supported version by a given blob version.

Return

Latest potentially supported version

Parameters
  • majorVersion: Major version from OpenVINO blob

  • minorVersion: Minor version from OpenVINO blob

Version getBlobVersion(std::uint32_t majorVersion, std::uint32_t minorVersion)

Returns OpenVINO version of a given blob minor/major revision.

Return

Latest potentially supported version

Parameters
  • majorVersion: Major version from OpenVINO blob

  • minorVersion: Minor version from OpenVINO blob

bool areVersionsBlobCompatible(Version v1, Version v2)

Checks whether two blob versions are compatible

Public Static Attributes

static constexpr const Version DEFAULT_VERSION = VERSION_2022_1

Main OpenVINO version.

struct Blob
#include <OpenVINO.hpp>

OpenVINO Blob.

Public Functions

Blob(std::vector<uint8_t> data)

Construct a new Blob from data in memory.

Parameters
  • data: In memory blob

Blob(const dai::Path &path)

Construct a new Blob by loading from a filesystem path.

Parameters
  • path: Filesystem path to the blob

Public Members

Version version

OpenVINO version.

std::unordered_map<std::string, TensorInfo> networkInputs

Map of input names to additional information.

std::unordered_map<std::string, TensorInfo> networkOutputs

Map of output names to additional information.

uint32_t stageCount = 0

Number of network stages.

uint32_t numShaves = 0

Number of shaves the blob was compiled for.

uint32_t numSlices = 0

Number of CMX slices the blob was compiled for.

std::vector<uint8_t> data

Blob data.
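
A sketch of inspecting a blob and pinning the pipeline's OpenVINO version accordingly; the file path is illustrative, everything else uses the members and static functions documented above.

    #include <depthai/depthai.hpp>
    #include <iostream>

    void inspectBlob(dai::Pipeline& pipeline) {
        dai::OpenVINO::Blob blob(dai::Path("model.blob"));
        std::cout << "Compiled for " << dai::OpenVINO::getVersionName(blob.version)
                  << ", shaves: " << blob.numShaves
                  << ", inputs: " << blob.networkInputs.size() << std::endl;
        pipeline.setOpenVINOVersion(blob.version);
    }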

class Path
#include <Path.hpp>

Represents paths on a filesystem; accepts utf-8, Windows utf-16 wchar_t, or std::filesystem::path.

It is suitable for direct use with OS APIs. Features are limited to character-set conversion of paths. It is not intended as a full replacement for std::filesystem::path

Public Types

using value_type = char

character used by native-encoding of filesystem

Public Functions

Path(string_type &&source) noexcept

Construct Path object from source.

Parameters
  • source: native-encoding character sequence; no conversion

Path(const string_type &source)

Construct Path object from source.

Parameters
  • source: native-encoding character sequence; no conversion

Path(const value_type *source)

Construct Path object from source.

Parameters
  • source: pointer to null-terminated native-encoding character sequence; no conversion

std::string string() const

Get path in native-encoding string; no conversion.

Return

std::string

std::string u8string() const

Get path in utf-8.

Return

std::string in utf-8

operator string_type() const noexcept

Implicitly convert to native-encoding string, suitable for use with OS APIs.

Return

std::string of utf-8 on most OSs, std::wstring of utf-16 on Windows

const string_type &native() const noexcept

Returns native-encoding string by const reference, suitable for use with OS APIs.

Return

const std::string& of utf-8 on most OSs, const std::wstring& of utf-16 on Windows

bool empty() const noexcept

Observes if path is empty (contains no string/folders/filename)

Return

bool true if the path is empty, false otherwise
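
dai::Path constructs directly from native-encoding strings and string literals; a short sketch (the tuning-blob file name is illustrative).

    #include <depthai/depthai.hpp>

    void setTuning(dai::Pipeline& pipeline) {
        dai::Path tuning("/opt/tuning/camera_tuning.bin");
        if(!tuning.empty()) {
            pipeline.setCameraTuningBlobPath(tuning);  // Pipeline accepts dai::Path directly
        }
    }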

template<typename T>
class Pimpl
class Pipeline
#include <Pipeline.hpp>

Represents the pipeline, set of nodes and connections between them.

Public Functions

Pipeline()

Constructs a new pipeline

Pipeline clone() const

Clone the pipeline (Creates a copy)

GlobalProperties getGlobalProperties() const

Return

Global properties of current pipeline

PipelineSchema getPipelineSchema(SerializationType type = DEFAULT_SERIALIZATION_TYPE) const

Return

Pipeline schema

nlohmann::json serializeToJson() const

Returns whole pipeline represented as JSON.

template<class N>
std::shared_ptr<N> create()

Adds a node to pipeline.

Node is specified by template argument N

void remove(std::shared_ptr<Node> node)

Removes a node from pipeline.

std::vector<std::shared_ptr<const Node>> getAllNodes() const

Get a vector of all nodes.

std::vector<std::shared_ptr<Node>> getAllNodes()

Get a vector of all nodes.

std::shared_ptr<const Node> getNode(Node::Id id) const

Get node with id if it exists, nullptr otherwise.

std::shared_ptr<Node> getNode(Node::Id id)

Get node with id if it exists, nullptr otherwise.

std::vector<Node::Connection> getConnections() const

Get all connections.

const NodeConnectionMap &getConnectionMap() const

Get a reference to internal connection representation.

const NodeMap &getNodeMap() const

Get a reference to internal node map.

void link(const Node::Output &out, const Node::Input &in)

Link output to an input. Both nodes must be on the same pipeline

Throws an error if they aren’t or cannot be connected

Parameters
  • out: Node's output to connect from

  • in: Node's input to connect to

void unlink(const Node::Output &out, const Node::Input &in)

Unlink output from an input.

Throws an error if the link doesn't exist

Parameters
  • out: Node's output to unlink from

  • in: Node's input to unlink from

const AssetManager &getAssetManager() const

Get the pipeline's AssetManager as a const reference.

AssetManager &getAssetManager()

Get the pipeline's AssetManager as a reference.

void setOpenVINOVersion(OpenVINO::Version version)

Set a specific OpenVINO version to use with this pipeline.

void setCalibrationData(CalibrationHandler calibrationDataHandler)

Sets the calibration in pipeline which overrides the calibration data in eeprom

Parameters
  • calibrationDataHandler: CalibrationHandler object which is loaded with calibration information.

CalibrationHandler getCalibrationData() const

Gets the calibration data which is set through the pipeline

Return

CalibrationHandler with the calibration data set in the pipeline

OpenVINO::Version getOpenVINOVersion() const

Get possible OpenVINO version to run this pipeline.

tl::optional<OpenVINO::Version> getRequiredOpenVINOVersion() const

Get required OpenVINO version to run this pipeline. Can be none.

void setCameraTuningBlobPath(const dai::Path &path)

Set a camera IQ (Image Quality) tuning blob, used for all cameras.

void setXLinkChunkSize(int sizeBytes)

Set chunk size for splitting device-sent XLink packets, in bytes. A larger value could increase performance, with 0 disabling chunking. A negative value won’t modify the device defaults - configured per protocol, currently 64*1024 for both USB and Ethernet.

void setSippBufferSize(int sizeBytes)

SIPP (Signal Image Processing Pipeline) internal memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. By default memory is allocated in high speed CMX memory. Setting it to 0 will allocate 256 kilobytes in DDR instead. Units are bytes.

void setSippDmaBufferSize(int sizeBytes)

SIPP (Signal Image Processing Pipeline) internal DMA memory pool. SIPP is a framework used to schedule HW filters, e.g. ISP, Warp, Median filter etc. Changing the size of this pool is meant for advanced use cases, pushing the limits of the HW. Memory is allocated in high speed CMX memory. Units are bytes.

bool isOpenVINOVersionCompatible(OpenVINO::Version version) const

Checks whether a given OpenVINO version is compatible with the pipeline.

void setBoardConfig(BoardConfig board)

Sets board configuration.

BoardConfig getBoardConfig() const

Gets board configuration.

Device::Config getDeviceConfig() const

Get device configuration needed for this pipeline.
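
Putting the pieces together, a minimal end-to-end pipeline sketch. The Device and output-queue calls follow the standard depthai host API and are assumptions here; the Pipeline calls are the ones documented above, and the stream name and queue parameters are illustrative.

    #include <depthai/depthai.hpp>

    int main() {
        dai::Pipeline pipeline;

        auto mono = pipeline.create<dai::node::MonoCamera>();
        auto xout = pipeline.create<dai::node::XLinkOut>();
        xout->setStreamName("mono");
        pipeline.link(mono->out, xout->input);  // same as mono->out.link(xout->input)

        pipeline.setXLinkChunkSize(0);          // disable XLink chunking

        dai::Device device(pipeline);
        auto queue = device.getOutputQueue("mono", 4, false);
        auto frame = queue->get<dai::ImgFrame>();
        return 0;
    }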

class PipelineImpl
struct PipelineSchema
#include <PipelineSchema.hpp>

Specifies whole pipeline, nodes, properties and connections between nodes IOs

struct Point2f
#include <Point2f.hpp>

Point2f structure

x and y coordinates that define a 2D point.

struct Point3f
#include <Point3f.hpp>

Point3f structure

x,y,z coordinates that define a 3D point.

class PointCloudConfig : public dai::Buffer
#include <PointCloudConfig.hpp>

PointCloudConfig message. Carries ROI (region of interest) and threshold for depth calculation

Public Functions

PointCloudConfig()

Construct PointCloudConfig message.

PointCloudConfig &set(dai::RawPointCloudConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawPointCloudConfig get() const

Retrieve configuration data for PointCloud.

Return

config for PointCloud

bool getSparse() const

Retrieve sparse point cloud calculation status.

Return

true if sparse point cloud calculation is enabled, false otherwise

std::array<std::array<float, 4>, 4> getTransformationMatrix() const

Retrieve transformation matrix for point cloud calculation.

Return

4x4 transformation matrix

PointCloudConfig &setSparse(bool enable)

Enable or disable sparse point cloud calculation.

Parameters
  • enable:

PointCloudConfig &setTransformationMatrix(const std::array<std::array<float, 4>, 4> &transformationMatrix)

Set 4x4 transformation matrix for point cloud calculation. Default is an identity matrix.

Parameters
  • transformationMatrix:

PointCloudConfig &setTransformationMatrix(const std::array<std::array<float, 3>, 3> &transformationMatrix)

Set 3x3 transformation matrix for point cloud calculation. Default is an identity matrix.

Parameters
  • transformationMatrix:
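
A sketch of preparing a PointCloudConfig message with the setters documented above; how the message is delivered to a PointCloud node (e.g. via its config input) is not shown and is left as an assumption.

    #include <depthai/depthai.hpp>
    #include <array>

    dai::PointCloudConfig makePointCloudConfig() {
        dai::PointCloudConfig cfg;
        cfg.setSparse(true);

        // Flip the Y axis via a 4x4 transformation matrix
        std::array<std::array<float, 4>, 4> m{{
            {{1.f, 0.f, 0.f, 0.f}},
            {{0.f, -1.f, 0.f, 0.f}},
            {{0.f, 0.f, 1.f, 0.f}},
            {{0.f, 0.f, 0.f, 1.f}}
        }};
        cfg.setTransformationMatrix(m);
        return cfg;
    }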

class PointCloudData : public dai::Buffer
#include <PointCloudData.hpp>

PointCloudData message. Carries point cloud data.

Public Functions

PointCloudData()

Construct PointCloudData message.

unsigned int getInstanceNum() const

Retrieves instance number

unsigned int getWidth() const

Retrieves the width in pixels - in case of a sparse point cloud, this represents the width of the frame which was used to generate the point cloud

unsigned int getHeight() const

Retrieves the height in pixels - in case of a sparse point cloud, this represents the height of the frame which was used to generate the point cloud

float getMinX() const

Retrieves minimal x coordinate in depth units (millimeter by default)

float getMinY() const

Retrieves minimal y coordinate in depth units (millimeter by default)

float getMinZ() const

Retrieves minimal z coordinate in depth units (millimeter by default)

float getMaxX() const

Retrieves maximal x coordinate in depth units (millimeter by default)

float getMaxY() const

Retrieves maximal y coordinate in depth units (millimeter by default)

float getMaxZ() const

Retrieves maximal z coordinate in depth units (millimeter by default)

bool isSparse() const

Retrieves whether point cloud is sparse

PointCloudData &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets message timestamp related to dai::Clock::now()

PointCloudData &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets message timestamp related to the device's monotonic clock

PointCloudData &setInstanceNum(unsigned int instance)

Instance number relates to the origin of the frame (which camera)

Parameters
  • instance: Instance number

PointCloudData &setSequenceNum(int64_t seq)

Specifies sequence number

Parameters
  • seq: Sequence number

PointCloudData &setWidth(unsigned int width)

Specifies frame width

Parameters
  • width: frame width

PointCloudData &setHeight(unsigned int height)

Specifies frame height

Parameters
  • height: frame height

PointCloudData &setSize(unsigned int width, unsigned int height)

Specifies frame size

Parameters
  • height: frame height

  • width: frame width

PointCloudData &setSize(std::tuple<unsigned int, unsigned int> size)

Specifies frame size

Parameters
  • size: frame size

PointCloudData &setMinX(float val)

Specifies minimal x coordinate in depth units (millimeter by default)

Parameters
  • val: minimal x coordinate in depth units (millimeter by default)

PointCloudData &setMinY(float val)

Specifies minimal y coordinate in depth units (millimeter by default)

Parameters
  • val: minimal y coordinate in depth units (millimeter by default)

PointCloudData &setMinZ(float val)

Specifies minimal z coordinate in depth units (millimeter by default)

Parameters
  • val: minimal z coordinate in depth units (millimeter by default)

PointCloudData &setMaxX(float val)

Specifies maximal x coordinate in depth units (millimeter by default)

Parameters
  • val: maximal x coordinate in depth units (millimeter by default)

PointCloudData &setMaxY(float val)

Specifies maximal y coordinate in depth units (millimeter by default)

Parameters
  • val: maximal y coordinate in depth units (millimeter by default)

PointCloudData &setMaxZ(float val)

Specifies maximal z coordinate in depth units (millimeter by default)

Parameters
  • val: maximal z coordinate in depth units (millimeter by default)

int64_t getSequenceNum() const

Retrieves sequence number

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestamp() const

Retrieves timestamp related to dai::Clock::now()

std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> getTimestampDevice() const

Retrieves timestamp directly captured from device’s monotonic clock, not synchronized to host time. Used mostly for debugging
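
A sketch of reading metadata from a received PointCloudData message; the DataOutputQueue is an assumption following the usual depthai host API, the getters are the ones documented above.

    #include <depthai/depthai.hpp>
    #include <iostream>

    void printPointCloudInfo(dai::DataOutputQueue& queue) {
        auto pcl = queue.get<dai::PointCloudData>();
        std::cout << pcl->getWidth() << "x" << pcl->getHeight()
                  << (pcl->isSparse() ? " (sparse)" : " (dense)")
                  << ", z range [" << pcl->getMinZ() << ", " << pcl->getMaxZ() << "] mm"
                  << std::endl;
    }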

template<typename ...T>
struct dependent_false
struct PointCloudProperties : public dai::PropertiesSerializable<Properties, PointCloudProperties>
#include <PointCloudProperties.hpp>

Specify properties for PointCloud

struct ProfilingData
struct Properties
#include <Properties.hpp>

Base Properties structure.

Subclassed by dai::PropertiesSerializable< Properties, AprilTagProperties >, dai::PropertiesSerializable< Properties, CameraProperties >, dai::PropertiesSerializable< Properties, CastProperties >, dai::PropertiesSerializable< Properties, ColorCameraProperties >, dai::PropertiesSerializable< Properties, DetectionParserProperties >, dai::PropertiesSerializable< Properties, EdgeDetectorProperties >, dai::PropertiesSerializable< Properties, FeatureTrackerProperties >, dai::PropertiesSerializable< Properties, GlobalProperties >, dai::PropertiesSerializable< Properties, ImageAlignProperties >, dai::PropertiesSerializable< Properties, ImageManipProperties >, dai::PropertiesSerializable< Properties, IMUProperties >, dai::PropertiesSerializable< Properties, MessageDemuxProperties >, dai::PropertiesSerializable< Properties, MonoCameraProperties >, dai::PropertiesSerializable< Properties, NeuralNetworkProperties >, dai::PropertiesSerializable< Properties, ObjectTrackerProperties >, dai::PropertiesSerializable< Properties, PointCloudProperties >, dai::PropertiesSerializable< Properties, ScriptProperties >, dai::PropertiesSerializable< Properties, SpatialLocationCalculatorProperties >, dai::PropertiesSerializable< Properties, SPIInProperties >, dai::PropertiesSerializable< Properties, SPIOutProperties >, dai::PropertiesSerializable< Properties, StereoDepthProperties >, dai::PropertiesSerializable< Properties, SyncProperties >, dai::PropertiesSerializable< Properties, SystemLoggerProperties >, dai::PropertiesSerializable< Properties, ToFProperties >, dai::PropertiesSerializable< Properties, UVCProperties >, dai::PropertiesSerializable< Properties, VideoEncoderProperties >, dai::PropertiesSerializable< Properties, WarpProperties >, dai::PropertiesSerializable< Properties, XLinkInProperties >, dai::PropertiesSerializable< Properties, XLinkOutProperties >

template<typename Base, typename Derived>
struct PropertiesSerializable : public Base
#include <Properties.hpp>

Serializable properties.

struct RawAprilTagConfig : public dai::RawBuffer
#include <RawAprilTagConfig.hpp>

RawAprilTags configuration structure.

Public Types

enum Family

Supported AprilTag families.

Values:

enumerator TAG_36H11
enumerator TAG_36H10
enumerator TAG_25H9
enumerator TAG_16H5
enumerator TAG_CIR21H7
enumerator TAG_STAND41H12

Public Members

Family family = Family::TAG_36H11

AprilTag family.

std::int32_t quadDecimate = 4

Detection of quads can be done on a lower-resolution image, improving speed at a cost of pose accuracy and a slight decrease in detection rate. Decoding the binary payload is still done at full resolution.

float quadSigma = 0.0f

What Gaussian blur should be applied to the segmented image. Parameter is the standard deviation in pixels. Very noisy images benefit from non-zero values (e.g. 0.8).

bool refineEdges = true

When non-zero, the edges of each quad are adjusted to "snap to" strong gradients nearby. This is useful when decimation is employed, as it can increase the quality of the initial quad estimate substantially. Generally recommended to be on. Very computationally inexpensive. Option is ignored if quadDecimate = 1.

float decodeSharpening = 0.25f

How much sharpening should be done to decoded images? This can help decode small tags but may or may not help in odd lighting conditions or low light conditions. The default value is 0.25.

std::int32_t maxHammingDistance = 1

Max number of error bits that should be corrected. Accepting large numbers of corrected errors leads to greatly increased false positive rates. As of this implementation, the detector cannot detect tags with a hamming distance greater than 2.

QuadThresholds quadThresholds

AprilTag quad threshold parameters.

struct QuadThresholds
#include <RawAprilTagConfig.hpp>

AprilTag quad threshold parameters.

Public Members

std::int32_t minClusterPixels = 5

Reject quads containing too few pixels.

std::int32_t maxNmaxima = 10

How many corner candidates to consider when segmenting a group of pixels into a quad.

float criticalDegree = 10.f

Reject quads where pairs of edges have angles that are close to straight or close to 180 degrees. Zero means that no quads are rejected. (In degrees).

float maxLineFitMse = 10.f

When fitting lines to the contours, what is the maximum mean squared error allowed? This is useful in rejecting contours that are far from being quad shaped; rejecting these quads “early” saves expensive decoding processing.

std::int32_t minWhiteBlackDiff = 5

When we build our model of black & white pixels, we add an extra check that the white model must be (overall) brighter than the black model. How much brighter? (in pixel values: [0,255]).

bool deglitch = false

Should the thresholded image be deglitched? Only useful for very noisy images
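
The raw structure can be filled in directly using the fields documented above; how it is then applied to the AprilTag node (e.g. through an AprilTagConfig message) is not covered by this listing, so treat that step as an assumption.

    #include <depthai/depthai.hpp>

    dai::RawAprilTagConfig makeAprilTagConfig() {
        dai::RawAprilTagConfig cfg;
        cfg.family = dai::RawAprilTagConfig::Family::TAG_16H5;
        cfg.quadDecimate = 2;                     // trade pose accuracy for speed
        cfg.quadSigma = 0.8f;                     // blur helps on noisy images
        cfg.quadThresholds.minClusterPixels = 8;  // reject very small quads
        return cfg;
    }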

struct RawAprilTags : public dai::RawBuffer
#include <RawAprilTags.hpp>

RawAprilTags structure.

struct RawBuffer
#include <RawBuffer.hpp>

RawBuffer structure.

Subclassed by dai::RawAprilTagConfig, dai::RawAprilTags, dai::RawCameraControl, dai::RawEdgeDetectorConfig, dai::RawEncodedFrame, dai::RawFeatureTrackerConfig, dai::RawImageAlignConfig, dai::RawImageManipConfig, dai::RawImgDetections, dai::RawImgFrame, dai::RawIMUData, dai::RawMessageGroup, dai::RawNNData, dai::RawPointCloudConfig, dai::RawPointCloudData, dai::RawSpatialImgDetections, dai::RawSpatialLocationCalculatorConfig, dai::RawSpatialLocations, dai::RawStereoDepthConfig, dai::RawSystemInformation, dai::RawToFConfig, dai::RawTrackedFeatures, dai::RawTracklets

struct RawCameraControl : public dai::RawBuffer
#include <RawCameraControl.hpp>

RawCameraControl structure.

Public Members

uint8_t lensPosition = 0

Lens/VCM position, range: 0..255. Used with autoFocusMode = OFF. With current IMX378 modules:

  • max 255: macro focus, at 8cm distance

  • infinite focus at about 120..130 (may vary from module to module)

  • lower values lead to out-of-focus (lens too close to the sensor array)

struct ManualExposureParams
struct RegionParams
struct StrobeConfig

Public Members

uint8_t enable

Enable strobe output.

uint8_t activeLevel

1 for normal polarity (high-active), 0 otherwise

int8_t gpioNumber

GPIO number to drive, or -1 if sensor driven.

struct StrobeTimings

Public Members

int32_t exposureBeginOffsetUs

Start offset in microseconds, relative to exposure window.

int32_t exposureEndOffsetUs

End offset in microseconds, relative to exposure window.

uint32_t durationUs

Fixed duration in microseconds. If set (non-zero), overrides exposureEndOffsetUs

struct RawEdgeDetectorConfig : public dai::RawBuffer
#include <RawEdgeDetectorConfig.hpp>

RawEdgeDetectorConfig configuration structure.

struct RawEncodedFrame : public dai::RawBuffer
struct CameraSettings
struct RawFeatureTrackerConfig : public dai::RawBuffer
#include <RawFeatureTrackerConfig.hpp>

RawFeatureTrackerConfig configuration structure.

Public Members

CornerDetector cornerDetector

Corner detector configuration. Used for feature detection.

MotionEstimator motionEstimator

Motion estimator configuration. Used for feature reidentification between current and previous features.

FeatureMaintainer featureMaintainer

FeatureMaintainer configuration. Used for feature maintaining.

struct CornerDetector
#include <RawFeatureTrackerConfig.hpp>

Corner detector configuration structure.

Public Members

Type type = Type::HARRIS

Corner detector algorithm type.

std::int32_t cellGridDimension = 4

Ensures distributed feature detection across the image. Image is divided into horizontal and vertical cells, each cell has a target feature count = numTargetFeatures / cellGridDimension. Each cell has its own feature threshold. A value of 4 means that the image is divided into 4x4 cells of equal width/height. Maximum 4, minimum 1.

std::int32_t numTargetFeatures = 320

Target number of features to detect. Maximum number of features is determined at runtime based on algorithm type.

std::int32_t numMaxFeatures = AUTO

Hard limit for the maximum number of features that can be detected. 0 means auto, will be set to the maximum value based on memory constraints.

bool enableSobel = true

Enable 3x3 Sobel operator to smoothen the image whose gradient is to be computed. If disabled, a simple 1D row/column differentiator is used for gradient.

bool enableSorting = true

Enable sorting detected features based on their score or not.

Thresholds thresholds

Threshold settings. These are advanced settings, suitable for debugging/special cases.

struct Thresholds
#include <RawFeatureTrackerConfig.hpp>

Threshold settings structure for corner detector.

Public Members

float initialValue = AUTO

Minimum strength of a feature which will be detected. 0 means automatic threshold update. Recommended so the tracker can adapt to different scenes/textures. Each cell has its own threshold. Empirical value.

float min = AUTO

Minimum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto, 6000000 for HARRIS, 1200 for SHI_THOMASI. Empirical value.

float max = AUTO

Maximum limit for threshold. Applicable when automatic threshold update is enabled. 0 means auto. Empirical value.

float decreaseFactor = 0.9f

When detected number of features exceeds the maximum in a cell threshold is lowered by multiplying its value with this factor.

float increaseFactor = 1.1f

When detected number of features doesn’t exceed the maximum in a cell, threshold is increased by multiplying its value with this factor.

struct FeatureMaintainer
#include <RawFeatureTrackerConfig.hpp>

FeatureMaintainer configuration structure.

Public Members

bool enable = true

Enable feature maintaining or not.

float minimumDistanceBetweenFeatures = 50

Used to filter out detected feature points that are too close. Requires sorting enabled in detector. Unit of measurement is squared euclidean distance in pixels.

float lostFeatureErrorThreshold = 50000

Optical flow measures the tracking error for every feature. If the point can’t be tracked or it’s out of the image it will set this error to a maximum value. This threshold defines the level where the tracking accuracy is considered too bad to keep the point.

float trackedFeatureThreshold = 200000

Once a feature was detected and we started tracking it, we need to update its Harris score on each image. This is needed because a feature point can disappear, or it can become too weak to be tracked. This threshold defines the point where such a feature must be dropped. As the goal of the algorithm is to provide longer tracks, we try to add strong points and track them until they are absolutely untrackable. This is why, this value is usually smaller than the detection threshold.

struct MotionEstimator
#include <RawFeatureTrackerConfig.hpp>

Used for feature reidentification between current and previous features.

Public Members

bool enable = true

Enable motion estimation or not.

Type type = Type::LUCAS_KANADE_OPTICAL_FLOW

Motion estimator algorithm type.

OpticalFlow opticalFlow

Optical flow configuration. Takes effect only if MotionEstimator algorithm type set to LUCAS_KANADE_OPTICAL_FLOW.

struct OpticalFlow
#include <RawFeatureTrackerConfig.hpp>

Optical flow configuration structure.

Public Members

std::int32_t pyramidLevels = AUTO

Number of pyramid levels, only for optical flow. AUTO means it’s decided based on input resolution: 3 if image width <= 640, else 4. Valid values are either 3/4 for VGA, 4 for 720p and above.

std::int32_t searchWindowWidth = 5

Image patch width used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime

std::int32_t searchWindowHeight = 5

Image patch height used to track features. Must be an odd number, maximum 9. N means the algorithm will be able to track motion at most (N-1)/2 pixels in a direction per pyramid level. Increasing this number increases runtime

float epsilon = 0.01f

Feature tracking termination criteria. Optical flow will refine the feature position on each pyramid level until the displacement between two refinements is smaller than this value. Decreasing this number increases runtime.

std::int32_t maxIterations = 9

Feature tracking termination criteria. Optical flow will refine the feature position maximum this many times on each pyramid level. If the Epsilon criteria described in the previous chapter is not met after this number of iterations, the algorithm will continue with the current calculated value. Increasing this number increases runtime.
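
The nested configuration structures can be filled in directly using the members documented above; applying the result through a FeatureTrackerConfig message or the node's config input is assumed and not shown here.

    #include <depthai/depthai.hpp>

    dai::RawFeatureTrackerConfig makeFeatureTrackerConfig() {
        dai::RawFeatureTrackerConfig cfg;
        cfg.cornerDetector.numTargetFeatures = 256;
        cfg.cornerDetector.cellGridDimension = 4;               // 4x4 cells
        cfg.motionEstimator.opticalFlow.searchWindowWidth = 7;  // odd, max 9
        cfg.motionEstimator.opticalFlow.searchWindowHeight = 7;
        cfg.featureMaintainer.minimumDistanceBetweenFeatures = 50;
        return cfg;
    }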

struct RawGroupMessage
struct RawImageAlignConfig : public dai::RawBuffer
#include <RawImageAlignConfig.hpp>

RawImageAlignConfig configuration structure.

Public Members

uint16_t staticDepthPlane = 0

Optional static depth plane to align to, in depth units, by default millimeters

struct RawImageManipConfig : public dai::RawBuffer
#include <RawImageManipConfig.hpp>

RawImageManipConfig structure.

Public Members

Interpolation interpolation = Interpolation::AUTO

Interpolation type to use.

struct CropConfig
struct CropRect
struct FormatConfig
struct ResizeConfig

Public Members

bool keepAspectRatio = true

Whether to keep aspect ratio of input or not

struct RawImgDetections : public dai::RawBuffer
#include <RawImgDetections.hpp>

RawImgDetections structure.

struct RawImgFrame : public dai::RawBuffer
#include <RawImgFrame.hpp>

RawImgFrame structure.

struct CameraSettings
struct Specs
struct RawIMUData : public dai::RawBuffer
struct RawMessageGroup : public dai::RawBuffer
struct RawNNData : public dai::RawBuffer
#include <RawNNData.hpp>

RawNNData structure.

struct RawPointCloudConfig : public dai::RawBuffer
#include <RawPointCloudConfig.hpp>

RawPointCloudConfig configuration structure.

struct RawPointCloudData : public dai::RawBuffer
struct RawSpatialImgDetections : public dai::RawBuffer
#include <RawSpatialImgDetections.hpp>

RawSpatialImgDetections structure.

struct RawSpatialLocationCalculatorConfig : public dai::RawBuffer
#include <RawSpatialLocationCalculatorConfig.hpp>

RawSpatialLocation configuration structure.

struct RawSpatialLocations : public dai::RawBuffer
#include <RawSpatialLocations.hpp>

RawSpatialLocations structure.

struct RawStereoDepthConfig : public dai::RawBuffer
#include <RawStereoDepthConfig.hpp>

RawStereoDepthConfig configuration structure.

Public Members

AlgorithmControl algorithmControl

Controls the flow of stereo algorithm - left-right check, subpixel etc.

PostProcessing postProcessing

Controls the postprocessing of disparity and/or depth map.

CensusTransform censusTransform

Census transform settings.

CostMatching costMatching

Cost matching settings.

CostAggregation costAggregation

Cost aggregation settings.

struct AlgorithmControl

Public Types

enum DepthAlign

Align the disparity/depth to the perspective of a rectified output, or center it

Values:

enumerator RECTIFIED_RIGHT
enumerator RECTIFIED_LEFT
enumerator CENTER
enum DepthUnit

Measurement unit for depth data

Values:

enumerator METER
enumerator CENTIMETER
enumerator MILLIMETER
enumerator INCH
enumerator FOOT
enumerator CUSTOM

Public Members

DepthAlign depthAlign = DepthAlign::RECTIFIED_RIGHT

Set the disparity/depth alignment to the perspective of a rectified output, or center it

DepthUnit depthUnit = DepthUnit::MILLIMETER

Measurement unit for depth data. Depth data is integer value, multiple of depth unit.

float customDepthUnitMultiplier = 1000.f

Custom depth unit multiplier, if custom depth unit is enabled, relative to 1 meter. A multiplier of 1000 effectively means depth unit in millimeter.

bool enableLeftRightCheck = true

Computes disparities in both L-R and R-L directions and combines them, for better occlusion handling

bool enableExtended = false

Disparity range increased from 95 to 190, combined from full resolution and downscaled images. Suitable for short range objects

bool enableSubpixel = false

Computes disparity with sub-pixel interpolation (5 fractional bits), suitable for long range

std::int32_t leftRightCheckThreshold = 10

Left-right check threshold for left-right, right-left disparity map combine, 0..128 Used only when left-right check mode is enabled. Defines the maximum difference between the confidence of pixels from left-right and right-left confidence maps

std::int32_t subpixelFractionalBits = 3

Number of fractional bits for subpixel mode

Valid values: 3,4,5

Defines the number of fractional disparities: 2^x

Median filter postprocessing is supported only for 3 fractional bits

std::int32_t disparityShift = 0

Shift input frame by a number of pixels to increase minimum depth. For example shifting by 48 will change effective disparity search range from (0,95] to [48,143]. An alternative approach to reducing the minZ. We normally only recommend doing this when it is known that there will be no objects farther away than MaxZ, such as having a depth camera mounted above a table pointing down at the table surface.

tl::optional<float> centerAlignmentShiftFactor

Used only for debug purposes. centerAlignmentShiftFactor is set automatically in firmware, from camera extrinsics when depth alignment to camera is enabled. Center alignment is achieved by shifting the obtained disparity map by a scale factor. It’s used to align to a different camera that is on the same horizontal baseline as the two stereo cameras. E.g. if we have a device with 10 cm stereo baseline, and we have another camera inbetween, 9cm from the LEFT camera and 1 cm from the RIGHT camera we can align the obtained disparity map using a scale factor of 0.9. Note that aligning disparity map to a different camera involves 2 steps:

  1. Shifting obtained disparity map.

  2. Warping the image to counter rotate and scaling to match the FOV. Center alignment factor 1 is equivalent to RECTIFIED_RIGHT Center alignment factor 0 is equivalent to RECTIFIED_LEFT

std::int32_t numInvalidateEdgePixels = 0

Invalidate X amount of pixels at the edge of disparity frame. For right and center alignment X pixels will be invalidated from the right edge, for left alignment from the left edge.

struct CensusTransform
#include <RawStereoDepthConfig.hpp>

The basic cost function used by the Stereo Accelerator for matching the left and right images is the Census Transform. It works on a block of pixels and computes a bit vector which represents the structure of the image in that block. There are two types of Census Transform based on how the middle pixel is used: Classic Approach and Modified Census. The comparisons that are made between pixels can optionally be thresholded. In some cases a mask can be applied to filter out only specific bits from the entire bit stream. All these approaches are: Classic Approach: uses the middle pixel to compare against all its neighbors over a defined window. Each comparison results in a new bit, that is 0 if the central pixel is smaller, or 1 if it is bigger than its neighbor. Modified Census Transform: same as classic Census Transform, but instead of comparing the central pixel with its neighbors, the window mean is compared with each pixel over the window. Thresholding Census Transform: same as classic Census Transform, but it is not enough for a neighbor pixel to be bigger than the central pixel, it must be significantly bigger (based on a threshold). Census Transform with Mask: same as classic Census Transform, but in this case not all of the pixels from the support window are part of the binary descriptor. A mask "M" defines which pixels are part of the binary descriptor (1) and which pixels should be skipped (0).

Public Types

enum KernelSize

Census transform kernel size possible values.

Values:

enumerator AUTO
enumerator KERNEL_5x5
enumerator KERNEL_7x7
enumerator KERNEL_7x9

Public Members

KernelSize kernelSize = KernelSize::AUTO

Census transform kernel size.

uint64_t kernelMask = 0

Census transform mask, default - auto, mask is set based on resolution and kernel size. Disabled for 400p input resolution. Enabled for 720p. 0XA82415 for 5x5 census transform kernel. 0XAA02A8154055 for 7x7 census transform kernel. 0X2AA00AA805540155 for 7x9 census transform kernel. Empirical values.

bool enableMeanMode = true

If enabled, each pixel in the window is compared with the mean window value instead of the central pixel.

uint32_t threshold = 0

Census transform comparison threshold value.

struct CostAggregation
#include <RawStereoDepthConfig.hpp>

Cost Aggregation is based on Semi Global Block Matching (SGBM). This algorithm uses a semi global technique to aggregate the cost map. Ultimately the idea is to build inertia into the stereo algorithm. If a pixel has very little texture information, then odds are the correct disparity for this pixel is close to that of the previous pixel considered. This means that we get improved results in areas with low texture.

Public Members

uint8_t divisionFactor = 1

Cost calculation linear equation parameters.

uint16_t horizontalPenaltyCostP1 = defaultPenaltyP1

Horizontal P1 penalty cost parameter.

uint16_t horizontalPenaltyCostP2 = defaultPenaltyP2

Horizontal P2 penalty cost parameter.

uint16_t verticalPenaltyCostP1 = defaultPenaltyP1

Vertical P1 penalty cost parameter.

uint16_t verticalPenaltyCostP2 = defaultPenaltyP2

Vertical P2 penalty cost parameter.

struct CostMatching
#include <RawStereoDepthConfig.hpp>

The matching cost is a way of measuring the similarity of image locations in the stereo correspondence algorithm. Based on the configuration parameters and the descriptor type, a linear equation is applied to compute the cost for each candidate disparity at each pixel.

Public Types

enum DisparityWidth

Disparity search range: 64 or 96 pixels are supported by the HW.

Values:

enumerator DISPARITY_64
enumerator DISPARITY_96

Public Members

DisparityWidth disparityWidth = DisparityWidth::DISPARITY_96

Disparity search range, default 96 pixels.

bool enableCompanding = false

Disparity companding using sparse matching. Matching pixel by pixel for N disparities. Matching every 2nd pixel for M disparities. Matching every 4th pixel for T disparities. In case of 96 disparities: N=48, M=32, T=16. This way the search range is extended to 176 disparities, by sparse matching. Note: when enabling this flag only the depth map will be affected, the disparity map is not.

uint8_t invalidDisparityValue = 0

Used only for debug purposes; SW postprocessing handles only the invalid value of 0 properly.

uint8_t confidenceThreshold = 245

Disparities with confidence value under this threshold are accepted. Higher confidence threshold means disparities with less confidence are accepted too.

LinearEquationParameters linearEquationParameters

Cost calculation linear equation parameters.

struct LinearEquationParameters
#include <RawStereoDepthConfig.hpp>

The linear equation applied for computing the cost is: COMB_COST = α*AD + β*(CTC<<3), followed by clamping: CLAMP(COMB_COST >> 5, threshold). Where AD is the Absolute Difference between 2 pixel values and CTC is the Census Transform Cost between 2 pixels, based on Hamming distance (xor). The α and β parameters are subject to fine tuning by the user.

struct PostProcessing
#include <RawStereoDepthConfig.hpp>

Post-processing filters, all the filters are applied in disparity domain.

Public Members

MedianFilter median = MedianFilter::KERNEL_5x5

Set kernel size for disparity/depth median filtering, or disable

std::int16_t bilateralSigmaValue = 0

Sigma value for bilateral filter. 0 means disabled. A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together.

SpatialFilter spatialFilter

Edge-preserving filtering: This type of filter will smooth the depth noise while attempting to preserve edges.

TemporalFilter temporalFilter

Temporal filtering with optional persistence.

ThresholdFilter thresholdFilter

Threshold filtering. Filters out distances outside of a given interval.

BrightnessFilter brightnessFilter

Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.

SpeckleFilter speckleFilter

Speckle filtering. Removes speckle noise.

DecimationFilter decimationFilter

Decimation filter. Reduces disparity/depth map x/y complexity, reducing runtime complexity for other filters.

struct BrightnessFilter
#include <RawStereoDepthConfig.hpp>

Brightness filtering. If input frame pixel is too dark or too bright, disparity will be invalidated. The idea is that for too dark/too bright pixels we have low confidence, since that area was under/over exposed and details were lost.

Public Members

std::int32_t minBrightness = 0

Minimum pixel brightness. If the input pixel is less than or equal to this value, the depth value is invalidated.

std::int32_t maxBrightness = 256

Maximum pixel brightness. If the input pixel is greater than or equal to this value, the depth value is invalidated.

struct DecimationFilter
#include <RawStereoDepthConfig.hpp>

Decimation filter. Reduces the depth scene complexity. The filter runs on kernel sizes [2x2] to [8x8] pixels.

Public Types

enum DecimationMode

Decimation algorithm type.

Values:

enumerator PIXEL_SKIPPING
enumerator NON_ZERO_MEDIAN
enumerator NON_ZERO_MEAN

Public Members

std::uint32_t decimationFactor = 1

Decimation factor. Valid values are 1,2,3,4. Disparity/depth map x/y resolution will be decimated with this value.

DecimationMode decimationMode = DecimationMode::PIXEL_SKIPPING

Decimation algorithm type.

struct SpatialFilter
#include <RawStereoDepthConfig.hpp>

1D edge-preserving spatial filter using high-order domain transform.

Public Members

bool enable = false

Whether to enable or disable the filter.

std::uint8_t holeFillingRadius = 2

An in-place heuristic symmetric hole-filling mode applied horizontally during the filter passes. Intended to rectify minor artefacts with minimal performance impact. Search radius for hole filling.

float alpha = 0.5f

The Alpha factor in an exponential moving average with Alpha=1 - no filter. Alpha = 0 - infinite filter. Determines the amount of smoothing.

std::int32_t delta = 0

Step-size boundary. Establishes the threshold used to preserve “edges”. If the disparity value between neighboring pixels exceed the disparity threshold set by this delta parameter, then filtering will be temporarily disabled. Default value 0 means auto: 3 disparity integer levels. In case of subpixel mode it’s 3*number of subpixel levels.

std::int32_t numIterations = 1

Number of iterations over the image in both horizontal and vertical direction.

struct SpeckleFilter
#include <RawStereoDepthConfig.hpp>

Speckle filtering. Removes speckle noise.

Public Members

bool enable = false

Whether to enable or disable the filter.

std::uint32_t speckleRange = 50

Speckle search range.

struct TemporalFilter
#include <RawStereoDepthConfig.hpp>

Temporal filtering with optional persistence.

Public Types

enum PersistencyMode

Persistency algorithm type.

Values:

enumerator PERSISTENCY_OFF
enumerator VALID_8_OUT_OF_8
enumerator VALID_2_IN_LAST_3
enumerator VALID_2_IN_LAST_4
enumerator VALID_2_OUT_OF_8
enumerator VALID_1_IN_LAST_2
enumerator VALID_1_IN_LAST_5
enumerator VALID_1_IN_LAST_8
enumerator PERSISTENCY_INDEFINITELY

Public Members

bool enable = false

Whether to enable or disable the filter.

PersistencyMode persistencyMode = PersistencyMode::VALID_2_IN_LAST_4

Persistency mode. If the current disparity/depth value is invalid, it will be replaced by an older value, based on persistency mode.

float alpha = 0.4f

The Alpha factor in an exponential moving average with Alpha=1 - no filter. Alpha = 0 - infinite filter. Determines the extent of the temporal history that should be averaged.

std::int32_t delta = 0

Step-size boundary. Establishes the threshold used to preserve surfaces (edges). If the disparity value between neighboring pixels exceed the disparity threshold set by this delta parameter, then filtering will be temporarily disabled. Default value 0 means auto: 3 disparity integer levels. In case of subpixel mode it’s 3*number of subpixel levels.

struct ThresholdFilter
#include <RawStereoDepthConfig.hpp>

Threshold filtering. Filters out distances outside of a given interval.

Public Members

std::int32_t minRange = 0

Minimum range in depth units. Depth values under this value are invalidated.

std::int32_t maxRange = 65535

Maximum range in depth units. Depth values over this value are invalidated.
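
A sketch of adjusting these post-processing filters from the host, assuming the StereoDepth node's initialConfig exposes the raw structure through the same get()/set() pattern used by the config messages elsewhere in this reference; that pattern and the node itself are assumptions here, the field names are the ones documented above.

    #include <depthai/depthai.hpp>
    #include <memory>

    void tunePostProcessing(std::shared_ptr<dai::node::StereoDepth> stereo) {
        dai::RawStereoDepthConfig cfg = stereo->initialConfig.get();
        cfg.postProcessing.speckleFilter.enable = true;
        cfg.postProcessing.speckleFilter.speckleRange = 60;
        cfg.postProcessing.temporalFilter.enable = true;
        cfg.postProcessing.spatialFilter.enable = true;
        cfg.postProcessing.spatialFilter.holeFillingRadius = 2;
        cfg.postProcessing.thresholdFilter.minRange = 400;    // depth units, mm by default
        cfg.postProcessing.thresholdFilter.maxRange = 15000;  // depth units, mm by default
        cfg.postProcessing.decimationFilter.decimationFactor = 2;
        stereo->initialConfig.set(cfg);
    }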

struct RawSystemInformation : public dai::RawBuffer
#include <RawSystemInformation.hpp>

System information of device

Memory usage, cpu usage and chip temperature

Public Members

MemoryInfo ddrMemoryUsage

DDR memory usage.

MemoryInfo cmxMemoryUsage

CMX memory usage.

MemoryInfo leonCssMemoryUsage

LeonCss heap usage.

MemoryInfo leonMssMemoryUsage

LeonMss heap usage.

CpuUsage leonCssCpuUsage

LeonCss cpu usage.

CpuUsage leonMssCpuUsage

LeonMss cpu usage.

ChipTemperature chipTemperature

Chip temperatures.

struct RawToFConfig : public dai::RawBuffer
#include <RawToFConfig.hpp>

RawToFConfig configuration structure.

Public Members

MedianFilter median = MedianFilter::MEDIAN_OFF

Set kernel size for depth median filtering, or disable

struct RawTrackedFeatures : public dai::RawBuffer
#include <RawTrackedFeatures.hpp>

RawTrackedFeatures structure.

struct RawTracklets : public dai::RawBuffer
#include <RawTracklets.hpp>

RawTracklets structure.

struct Rect
#include <Rect.hpp>

Rect structure

x,y coordinates together with width and height that define a rectangle. Can be either normalized [0,1] or absolute representation.

Public Functions

Point2f topLeft() const

The top-left corner.

Point2f bottomRight() const

The bottom-right corner

Size2f size() const

Size (width, height) of the rectangle

float area() const

Area (width*height) of the rectangle

bool empty() const

True if rectangle is empty.

bool contains(const Point2f &pt) const

Checks whether the rectangle contains the point.

bool isNormalized() const

Whether rectangle is normalized (coordinates in [0,1] range) or not.

Rect denormalize(int destWidth, int destHeight) const

Denormalize rectangle.

Parameters
  • destWidth: Destination frame width.

  • destHeight: Destination frame height.

Rect normalize(int srcWidth, int srcHeight) const

Normalize rectangle.

Parameters
  • srcWidth: Source frame width.

  • srcHeight: Source frame height.
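
A small sketch of converting between normalized and absolute coordinates with the functions documented above; the Rect constructor taking x, y, width, height is an assumption.

    #include <depthai/depthai.hpp>

    void rectExample() {
        dai::Rect roi(0.25f, 0.25f, 0.5f, 0.5f);       // normalized ROI
        // Convert to pixel coordinates for a 640x400 frame
        dai::Rect absRoi = roi.denormalize(640, 400);  // x=160, y=100, w=320, h=200
        bool inside = absRoi.contains(dai::Point2f(300.f, 200.f));  // true
        (void)inside;
    }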

struct RotatedRect
#include <RotatedRect.hpp>

RotatedRect structure.

Public Members

float angle = 0.f

degrees, increasing clockwise

struct ScriptProperties : public dai::PropertiesSerializable<Properties, ScriptProperties>
#include <ScriptProperties.hpp>

Specify ScriptProperties options such as script uri, script name, …

Public Members

std::string scriptUri

Uri which points to actual script

std::string scriptName = "<script>"

Name of script

ProcessorType processor = ProcessorType::LEON_CSS

Which processor should execute the script

struct Size2f
#include <Size2f.hpp>

Size2f structure

width, height values define the size of the shape/frame

struct SpatialDetectionNetworkProperties : public dai::PropertiesSerializable<DetectionNetworkProperties, SpatialDetectionNetworkProperties>
#include <SpatialDetectionNetworkProperties.hpp>

Specify properties for SpatialDetectionNetwork

struct SpatialImgDetection : public dai::ImgDetection
#include <RawSpatialImgDetections.hpp>

SpatialImgDetection structure

Contains image detection results together with spatial location data.

class SpatialImgDetections : public dai::Buffer
#include <SpatialImgDetections.hpp>

SpatialImgDetections message. Carries detection results together with spatial location data

Public Functions

SpatialImgDetections()

Construct SpatialImgDetections message.

SpatialImgDetections &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

SpatialImgDetections &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to the device clock.

SpatialImgDetections &setSequenceNum(int64_t sequenceNum)

Sets image sequence number.

Public Members

std::vector<SpatialImgDetection> &detections

Detection results.
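
A minimal host-side sketch of consuming this message; the queue name and its XLinkOut stream are assumed to have been set up elsewhere:

    auto dets = detQueue->get<dai::SpatialImgDetections>();   // detQueue: assumed dai::DataOutputQueue
    for(const auto& det : dets->detections) {
        // SpatialImgDetection extends ImgDetection with spatial location data (millimeters by default)
        printf("label=%u conf=%.2f z=%.0f\n", det.label, det.confidence, det.spatialCoordinates.z);
    }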

class SpatialLocationCalculatorConfig : public dai::Buffer
#include <SpatialLocationCalculatorConfig.hpp>

SpatialLocationCalculatorConfig message. Carries ROI (region of interest) and threshold for depth calculation

Public Functions

SpatialLocationCalculatorConfig()

Construct SpatialLocationCalculatorConfig message.

void setROIs(std::vector<SpatialLocationCalculatorConfigData> ROIs)

Set a vector of ROIs as configuration data.

Parameters
  • ROIs: Vector of configuration parameters for ROIs (region of interests)

void addROI(SpatialLocationCalculatorConfigData &ROI)

Add a new ROI to configuration data.

Parameters
  • ROI: Configuration parameters for ROI (region of interest)

std::vector<SpatialLocationCalculatorConfigData> getConfigData() const

Retrieve configuration data for SpatialLocationCalculator

Return

Vector of configuration parameters for ROIs (region of interests)

SpatialLocationCalculatorConfig &set(dai::RawSpatialLocationCalculatorConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawSpatialLocationCalculatorConfig get() const

Retrieve configuration data for SpatialLocationCalculator.

Return

config for SpatialLocationCalculator

struct SpatialLocationCalculatorConfigData
#include <RawSpatialLocationCalculatorConfig.hpp>

SpatialLocation configuration data structure.

Public Members

Rect roi

Region of interest for spatial location calculation.

SpatialLocationCalculatorConfigThresholds depthThresholds

Upper and lower thresholds for depth values to take into consideration.

SpatialLocationCalculatorAlgorithm calculationAlgorithm = SpatialLocationCalculatorAlgorithm::MEDIAN

Calculation method used to obtain spatial locations. Average/mean: the average of the ROI is used for calculation. Min: the minimum value inside the ROI is used for calculation. Max: the maximum value inside the ROI is used for calculation. Mode: the most frequent value inside the ROI is used for calculation. Median: the median value inside the ROI is used for calculation. Default: median.

std::int32_t stepSize = AUTO

Step size for calculation. Step size 1 means that every pixel is taken into the calculation, step size 2 means every second pixel, etc. Default value AUTO: for AVERAGE, MIN, MAX the step size is 1; for MODE/MEDIAN it is 2.

struct SpatialLocationCalculatorConfigThresholds
#include <RawSpatialLocationCalculatorConfig.hpp>

SpatialLocation configuration thresholds structure

Contains configuration data for lower and upper threshold in depth units (millimeter by default) for ROI. Values outside of threshold range will be ignored when calculating spatial coordinates from depth map.

Public Members

uint32_t lowerThreshold = 0

Values less than or equal to this threshold are not taken into the calculation.

uint32_t upperThreshold = 65535

Values greater than or equal to this threshold are not taken into the calculation.
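
A minimal sketch of building a configuration with a single ROI (the numeric values are arbitrary examples):

    dai::SpatialLocationCalculatorConfigData roiData;
    roiData.roi = dai::Rect(dai::Point2f(0.4f, 0.4f), dai::Point2f(0.6f, 0.6f));  // normalized ROI
    roiData.depthThresholds.lowerThreshold = 100;     // ignore depth below 100 depth units
    roiData.depthThresholds.upperThreshold = 10000;   // ignore depth above 10000 depth units
    roiData.calculationAlgorithm = dai::SpatialLocationCalculatorAlgorithm::MEDIAN;

    dai::SpatialLocationCalculatorConfig cfg;
    cfg.addROI(roiData);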

class SpatialLocationCalculatorData : public dai::Buffer
#include <SpatialLocationCalculatorData.hpp>

SpatialLocationCalculatorData message. Carries spatial information (X,Y,Z) and their configuration parameters

Public Functions

SpatialLocationCalculatorData()

Construct SpatialLocationCalculatorData message.

std::vector<SpatialLocations> &getSpatialLocations() const

Retrieve spatial location results.

Return

Vector of spatial location data, carrying spatial information (X,Y,Z)

SpatialLocationCalculatorData &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

SpatialLocationCalculatorData &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to the device clock.

SpatialLocationCalculatorData &setSequenceNum(int64_t sequenceNum)

Sets image sequence number.

struct SpatialLocationCalculatorProperties : public dai::PropertiesSerializable<Properties, SpatialLocationCalculatorProperties>
#include <SpatialLocationCalculatorProperties.hpp>

Specify properties for SpatialLocationCalculator

struct SpatialLocations
#include <RawSpatialLocations.hpp>

SpatialLocations structure

Contains configuration data and the average depth for the calculated ROI on the depth map, together with spatial coordinates: x,y,z relative to the center of the depth map. Units are in depth units (millimeter by default).

Public Members

SpatialLocationCalculatorConfigData config

Configuration for selected ROI

float depthAverage = 0.f

Average of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.

float depthMode = 0.f

Most frequent depth value inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to MODE.

float depthMedian = 0.f

Median of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to MEDIAN.

std::uint16_t depthMin = 0

Minimum of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.

std::uint16_t depthMax = 0

Maximum of depth values inside the ROI between the specified thresholds in config. Calculated only if calculation method is set to AVERAGE, MIN or MAX.

std::uint32_t depthAveragePixelCount = 0

Number of depth values used in calculations.

Point3f spatialCoordinates

Spatial coordinates - x,y,z; x,y are the relative positions of the center of ROI to the center of depth map

struct SPIInProperties : public dai::PropertiesSerializable<Properties, SPIInProperties>
#include <SPIInProperties.hpp>

Properties for SPIIn node

Public Members

std::string streamName

Name of stream

int busId = 0

SPI bus to use

std::uint32_t maxDataSize = dai::device::XLINK_USB_BUFFER_MAX_SIZE

Maximum input data size

std::uint32_t numFrames = 4

Number of frames in pool

struct SPIOutProperties : public dai::PropertiesSerializable<Properties, SPIOutProperties>
#include <SPIOutProperties.hpp>

Specify properties for SPIOut node

Public Members

std::string streamName

Name of stream

int busId = 0

SPI bus to use

class StereoDepthConfig : public dai::Buffer
#include <StereoDepthConfig.hpp>

StereoDepthConfig message.

Public Functions

StereoDepthConfig()

Construct StereoDepthConfig message.

StereoDepthConfig &setDepthAlign(AlgorithmControl::DepthAlign align)

Parameters
  • align: Set the disparity/depth alignment: centered (between the ‘left’ and ‘right’ inputs), or from the perspective of a rectified output stream

StereoDepthConfig &setConfidenceThreshold(int confThr)

Confidence threshold for disparity calculation

Parameters
  • confThr: Confidence threshold value 0..255

int getConfidenceThreshold() const

Get confidence threshold for disparity calculation

StereoDepthConfig &setMedianFilter(MedianFilter median)

Parameters
  • median: Set kernel size for disparity/depth median filtering, or disable

MedianFilter getMedianFilter() const

Get median filter setting

StereoDepthConfig &setBilateralFilterSigma(uint16_t sigma)

A larger value of the parameter means that farther colors within the pixel neighborhood will be mixed together, resulting in larger areas of semi-equal color.

Parameters
  • sigma: Set sigma value for 5x5 bilateral filter. 0..65535

uint16_t getBilateralFilterSigma() const

Get sigma value for 5x5 bilateral filter

StereoDepthConfig &setLeftRightCheckThreshold(int threshold)

Parameters
  • threshold: Set threshold for left-right, right-left disparity map combine, 0..255

int getLeftRightCheckThreshold() const

Get threshold for left-right check combine

StereoDepthConfig &setLeftRightCheck(bool enable)

Computes disparities in both L-R and R-L directions, and combines them.

Used for better occlusion handling: invalid disparity values are discarded.

StereoDepthConfig &setExtendedDisparity(bool enable)

Disparity range increased from 95 to 190, combined from full resolution and downscaled images. Suitable for short range objects

StereoDepthConfig &setSubpixel(bool enable)

Computes disparity with sub-pixel interpolation (3 fractional bits by default).

Suitable for long range. Currently incompatible with extended disparity

StereoDepthConfig &setSubpixelFractionalBits(int subpixelFractionalBits)

Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.

StereoDepthConfig &setDepthUnit(AlgorithmControl::DepthUnit depthUnit)

Set depth unit of depth map.

Meter, centimeter, millimeter, inch, foot or custom unit is available.

StereoDepthConfig &setDisparityShift(int disparityShift)

Shift input frame by a number of pixels to increase minimum depth. For example shifting by 48 will change effective disparity search range from (0,95] to [48,143]. An alternative approach to reducing the minZ. We normally only recommend doing this when it is known that there will be no objects farther away than MaxZ, such as having a depth camera mounted above a table pointing down at the table surface.

StereoDepthConfig &setNumInvalidateEdgePixels(int32_t numInvalidateEdgePixels)

Invalidate X amount of pixels at the edge of disparity frame. For right and center alignment X pixels will be invalidated from the right edge, for left alignment from the left edge.

AlgorithmControl::DepthUnit getDepthUnit()

Get depth unit of depth map.

float getMaxDisparity() const

Useful for normalization of the disparity map.

Return

Maximum disparity value that the node can return

StereoDepthConfig &set(dai::RawStereoDepthConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawStereoDepthConfig get() const

Retrieve configuration data for StereoDepth.

Return

config for stereo depth algorithm
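
As an illustration, these setters are commonly applied to a StereoDepth node's initialConfig before starting the device. A sketch assuming a node named stereo:

    stereo->initialConfig.setConfidenceThreshold(200);                     // 0..255
    stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_7x7);
    stereo->initialConfig.setLeftRightCheck(true);
    stereo->initialConfig.setSubpixel(true);
    float maxDisp = stereo->initialConfig.getMaxDisparity();               // e.g. for disparity normalization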

struct StereoDepthProperties : public dai::PropertiesSerializable<Properties, StereoDepthProperties>
#include <StereoDepthProperties.hpp>

Specify properties for StereoDepth

Public Members

RawStereoDepthConfig initialConfig

Initial stereo config.

CameraBoardSocket depthAlignCamera = CameraBoardSocket::AUTO

Which camera to align disparity/depth to. When configured (not AUTO), takes precedence over ‘depthAlign’

bool enableRectification = true

Enable stereo rectification/dewarp or not. Useful to disable when replaying pre-recorded rectified frames.

std::int32_t rectifyEdgeFillColor = 0

Fill color for missing data at frame edges - grayscale 0..255, or -1 to replicate pixels

tl::optional<std::int32_t> width

Input frame width. Optional (taken from MonoCamera nodes if they exist)

tl::optional<std::int32_t> height

Input frame height. Optional (taken from MonoCamera nodes if they exist)

tl::optional<std::int32_t> outWidth

Output disparity/depth width. Currently only used when aligning to RGB

tl::optional<std::int32_t> outHeight

Output disparity/depth height. Currently only used when aligning to RGB

bool outKeepAspectRatio = true

Whether to keep aspect ratio of the input (rectified) or not

RectificationMesh mesh

Specify a direct warp mesh to be used for rectification, instead of intrinsics + extrinsic matrices

bool enableRuntimeStereoModeSwitch = false

Whether to enable switching stereo modes at runtime or not. E.g. standard to subpixel, standard+LR-check to subpixel + LR-check. Note: it will allocate resources for the worst-case scenario, so it should be enabled only if dynamic mode switching is required. Default value: false.

int numFramesPool = 3

Num frames in output pool.

std::int32_t numPostProcessingShaves = AUTO

Number of shaves reserved for stereo depth post processing. Post processing can use multiple shaves to increase performance. -1 means auto, resources will be allocated based on enabled filters. 0 means that it will reuse the shave assigned for main stereo algorithm. For optimal performance it’s recommended to allocate more than 0, so post processing will run in parallel with main stereo algorithm. Minimum 1, maximum 10.

std::int32_t numPostProcessingMemorySlices = AUTO

Number of memory slices reserved for stereo depth post processing. -1 means auto, memory will be allocated based on initial stereo settings and number of shaves. 0 means that it will reuse the memory slices assigned for main stereo algorithm. For optimal performance it’s recommended to allocate more than 0, so post processing will run in parallel with main stereo algorithm. Minimum 1, maximum 6.

bool focalLengthFromCalibration = true

Whether to use horizontal focal length from calibration intrinsics (fx) or calculate based on calibration FOV. Default value is true. If set to false it’s calculated from FOV and image resolution: focalLength = calib.width / (2.f * tan(calib.fov / 2 / 180.f * pi));

tl::optional<bool> useHomographyRectification

Use 3x3 homography matrix for stereo rectification instead of sparse mesh generated on device. Default behaviour is AUTO: for lenses with FOV over 85 degrees the sparse mesh is used, otherwise the 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles this option is ignored. true: the 3x3 homography matrix generated from calibration data is used for stereo rectification; it can’t correct lens distortion. false: a sparse mesh is generated on-device from calibration data with the mesh step specified with setMeshStep (default: (16, 16)); it can correct lens distortion. The implementation for generating the mesh is the same as OpenCV’s initUndistortRectifyMap function. Only the first 8 distortion coefficients are used from calibration data.

tl::optional<float> baseline

Override baseline from calibration. Used only in disparity to depth conversion. Units are centimeters.

tl::optional<float> focalLength

Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.

tl::optional<bool> disparityToDepthUseSpecTranslation = tl::nullopt

Use baseline information for disparity to depth conversion from specs (design data) or from calibration. Suitable for debugging. Utilizes calibrated value as default

tl::optional<bool> rectificationUseSpecTranslation = tl::nullopt

Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Suitable for debugging. Default: false

tl::optional<bool> depthAlignmentUseSpecTranslation = tl::nullopt

Use baseline information for depth alignment from specs (design data) or from calibration. Suitable for debugging. Utilizes calibrated value as default

tl::optional<float> alphaScaling

Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high-distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases alpha < 0.0 helps remove invalid areas. See getOptimalNewCameraMatrix from OpenCV for more details.

struct RectificationMesh

Public Members

std::string meshLeftUri

Uri which points to the mesh array for ‘left’ input rectification

std::string meshRightUri

Uri which points to the mesh array for ‘right’ input rectification

tl::optional<std::uint32_t> meshSize

Mesh array size in bytes, for each of ‘left’ and ‘right’ (need to match)

uint16_t stepWidth = 16

Distance between mesh points, in the horizontal direction

uint16_t stepHeight = 16

Distance between mesh points, in the vertical direction

struct StereoPair
#include <StereoPair.hpp>

Describes which camera sockets can be used for stereo and their baseline.

Public Members

float baseline = -1

Baseline in centimeters.

struct StereoRectification
#include <StereoRectification.hpp>

StereoRectification structure.

class StreamMessageParser
class StreamPacketDesc : public streamPacketDesc_t
struct SyncProperties : public dai::PropertiesSerializable<Properties, SyncProperties>
#include <SyncProperties.hpp>

Specify properties for Sync.

Public Members

uint64_t syncThresholdNs = 10e6

The maximal interval the messages can be apart in nanoseconds.

int32_t syncAttempts = -1

The number of syncing attempts before failing (number of replaced messages).

class SystemInformation : public dai::Buffer
#include <SystemInformation.hpp>

SystemInformation message. Carries memory usage, cpu usage and chip temperatures.

Public Functions

SystemInformation()

Construct SystemInformation message.

struct SystemLoggerProperties : public dai::PropertiesSerializable<Properties, SystemLoggerProperties>
#include <SystemLoggerProperties.hpp>

SystemLoggerProperties structure

Public Members

float rateHz = 1.0f

Rate at which the messages are sent, in hertz.

struct TensorInfo
#include <TensorInfo.hpp>

TensorInfo structure.

struct Timestamp
#include <Timestamp.hpp>

Timestamp structure.

class ToFConfig : public dai::Buffer
#include <ToFConfig.hpp>

ToFConfig message. Carries configuration for the ToF node.

Public Functions

ToFConfig()

Construct ToFConfig message.

ToFConfig &setMedianFilter(MedianFilter median)

Parameters
  • median: Set kernel size for median filtering, or disable

ToFConfig &set(dai::RawToFConfig config)

Set explicit configuration.

Parameters
  • config: Explicit configuration

dai::RawToFConfig get() const

Retrieve configuration data for ToF.

Return

config for the ToF node

struct ToFProperties : public dai::PropertiesSerializable<Properties, ToFProperties>
#include <ToFProperties.hpp>

Specify properties for ToF

Public Members

RawToFConfig initialConfig

Initial ToF config

int numFramesPool = 4

Num frames in output pool

std::int32_t numShaves = 1

Number of shaves reserved for ToF decoding.

struct TraceEvent
struct TrackedFeature
#include <RawTrackedFeatures.hpp>

TrackedFeature structure

Public Members

Point2f position

x, y position of the detected feature

uint32_t id = 0

Feature ID. Persistent between frames if motion estimation is enabled.

uint32_t age = 0

Feature age in frames

float harrisScore = 0.f

Feature harris score

float trackingError = 0.f

Feature tracking error

class TrackedFeatures : public dai::Buffer
#include <TrackedFeatures.hpp>

TrackedFeatures message. Carries position (X, Y) of tracked features and their ID.

Public Functions

TrackedFeatures()

Construct TrackedFeatures message.

TrackedFeatures &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

TrackedFeatures &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to the device clock.

TrackedFeatures &setSequenceNum(int64_t sequenceNum)

Sets image sequence number.

struct Tracklet
#include <RawTracklets.hpp>

Tracklet structure

Contains tracklets from object tracker output.

Public Members

Rect roi

Tracked region of interest.

std::int32_t id = 0

Tracklet’s ID.

std::int32_t label = 0

Tracklet’s label ID.

std::int32_t age = 0

Number of frames it is being tracked for.

TrackingStatus status = TrackingStatus::LOST

Status of tracklet.

ImgDetection srcImgDetection

Image detection that is tracked.

Point3f spatialCoordinates

Spatial coordinates of tracklet.

class Tracklets : public dai::Buffer
#include <Tracklets.hpp>

Tracklets message. Carries object tracking information.

Public Functions

Tracklets()

Construct Tracklets message.

Tracklets &setTimestamp(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to dai::Clock::now()

Tracklets &setTimestampDevice(std::chrono::time_point<std::chrono::steady_clock, std::chrono::steady_clock::duration> timestamp)

Sets image timestamp related to the device clock.

Tracklets &setSequenceNum(int64_t sequenceNum)

Sets image sequence number.

Public Members

std::vector<Tracklet> &tracklets

Retrieve data for Tracklets.

Return

Vector of object tracker data, carrying tracking information.

struct UVCProperties : public dai::PropertiesSerializable<Properties, UVCProperties>
#include <UVCProperties.hpp>

Properties for UVC node

Public Members

std::unordered_map<int, int> gpioInit

<gpio_number, value> list for GPIOs to set at init

std::unordered_map<int, int> gpioStreamOn

<gpio_number, value> list for GPIOs to set when streaming is enabled

std::unordered_map<int, int> gpioStreamOff

<gpio_number, value> list for GPIOs to set when streaming is disabled

struct Version
#include <Version.hpp>

Version structure.

Public Functions

Version(const std::string &v)

Construct Version from string.

Version(unsigned major, unsigned minor, unsigned patch)

Construct Version from major, minor and patch numbers.

Version(unsigned major, unsigned minor, unsigned patch, std::string buildInfo)

Construct Version from major, minor and patch numbers, with buildInfo.

std::string toString() const

Convert Version to string.

std::string toStringSemver() const

Convert Version to semver (no build information) string.

std::string getBuildInfo() const

Get build info.

Version getSemver() const

Retrieves semver version (no build information)
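
A brief sketch of the intended usage (the version numbers are arbitrary):

    dai::Version a("2.24.0");            // parsed from string
    dai::Version b(2, 25, 0, "abc123");  // with build info
    std::string s = b.toStringSemver();  // "2.25.0", build info stripped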

struct VideoEncoderProperties : public dai::PropertiesSerializable<Properties, VideoEncoderProperties>
#include <VideoEncoderProperties.hpp>

Specify properties for VideoEncoder such as profile, bitrate, …

Public Types

enum RateControlMode

Rate control mode specifies if constant or variable bitrate should be used (H264 / H265)

Values:

enumerator CBR
enumerator VBR
enum Profile

Encoding profile, H264 (AVC), H265 (HEVC) or MJPEG

Values:

enumerator H264_BASELINE
enumerator H264_HIGH
enumerator H264_MAIN
enumerator H265_MAIN
enumerator MJPEG

Public Members

std::int32_t bitrate = 0

Specifies preferred bitrate (in bit/s) of compressed output bitstream in CBR mode

“0” for automatic computation, based on input resolution and FPS: 720p30: 4Mbps, 1080p30: 8.5Mbps, 1440p30: 14Mbps, 2160p30: 20Mbps

std::int32_t keyframeFrequency = 30

A keyframe will be inserted every x frames.

std::int32_t maxBitrate = 0

Specifies maximum bitrate (in bit/s) of compressed output bitstream in CBR mode

“0” to follow bitrate setting

std::int32_t numBFrames = 0

Specifies number of B frames to be inserted

std::uint32_t numFramesPool = 0

This option specifies how many frames are available in this node’s pool. Helps when the receiver is slow at consuming.

Value “0” indicates automatic number of frames assignment

std::int32_t outputFrameSize = 0

Specifies max output frame size in pool. Value “0” indicates auto

Profile profile = Profile::H264_BASELINE

Encoding profile, H264, H265 or MJPEG

std::int32_t quality = 80

Value between 0-100% (approximates quality)

bool lossless = false

Lossless mode ([M]JPEG only)

RateControlMode rateCtrlMode = RateControlMode::CBR

Rate control mode specifies if constant or variable bitrate should be used (H264 / H265)

float frameRate = 30.0f

Frame rate
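
These properties are usually populated through the VideoEncoder node API rather than set directly. A minimal sketch, assuming a pipeline and a ColorCamera node named camRgb:

    auto videoEnc = pipeline.create<dai::node::VideoEncoder>();
    // 30 fps, H.265 main profile; bitrate stays 0 for automatic computation
    videoEnc->setDefaultProfilePreset(30, dai::VideoEncoderProperties::Profile::H265_MAIN);
    camRgb->video.link(videoEnc->input);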

struct WarpProperties : public dai::PropertiesSerializable<Properties, WarpProperties>
#include <WarpProperties.hpp>

Specify properties for Warp

Public Members

int outputWidth = 0

Output width.

int outputHeight = 0

Output height.

int outputFrameSize = 1 * 1024 * 1024

Maximum output frame size in bytes (eg: 300x300 BGR image -> 300*300*3 bytes)

int numFramesPool = 4

Num frames in output pool.

int meshWidth = 0

Custom warp mesh width. Set to zero to disable.

int meshHeight = 0

Custom warp mesh height. Set to zero to disable.

std::string meshUri = ""

Custom warp mesh uri. Set to empty string to disable.

std::vector<int> warpHwIds

Warp HW IDs to use; if empty, auto/default is used.

Interpolation interpolation = Interpolation::AUTO

Interpolation type to use.
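
These are normally configured through the Warp node's setters. A rough sketch under the assumption that the node exposes setOutputSize, setWarpMesh and setInterpolation (verify the exact signatures against your depthai-core version); the mesh below is a 2x2 identity-like warp for an assumed 640x480 input:

    auto warp = pipeline.create<dai::node::Warp>();
    warp->setOutputSize(640, 480);
    warp->setMaxOutputFrameSize(640 * 480 * 3);
    warp->setInterpolation(dai::Interpolation::BILINEAR);
    std::vector<dai::Point2f> mesh = {{0, 0}, {640, 0}, {0, 480}, {640, 480}};  // row-major 2x2 grid
    warp->setWarpMesh(mesh, 2, 2);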

class XLinkConnection
#include <XLinkConnection.hpp>

Represents connection between host and device over XLink protocol

Public Functions

void close()

Explicitly closes the XLink connection.

Note

This function does not need to be explicitly called as destructor closes the connection automatically

bool isClosed() const

Is the connection already closed (or disconnected)

Warning

This function is thread-unsafe and may return outdated, incorrect values. It is only meant for use in simple single-threaded code. Well-written code should handle exceptions when calling any DepthAI APIs to handle hardware events and multithreaded use.

ProfilingData getProfilingData()

Get current accumulated profiling data

Return

ProfilingData from the specific connection

Public Static Functions

std::vector<DeviceInfo> getAllConnectedDevices(XLinkDeviceState_t state = X_LINK_ANY_STATE, bool skipInvalidDevices = true)

Returns information of all connected devices with given state

Return

Vector of connected device information

Parameters
  • state: State which the devices should be in

  • skipInvalidDevices: whether or not to skip over devices that cannot be successfully communicated with

std::tuple<bool, DeviceInfo> getFirstDevice(XLinkDeviceState_t state = X_LINK_ANY_STATE, bool skipInvalidDevices = true)

Returns information of first device with given state

Return

Device information

Parameters
  • state: State which the device should be in

std::tuple<bool, DeviceInfo> getDeviceByMxId(std::string mxId, XLinkDeviceState_t state = X_LINK_ANY_STATE, bool skipInvalidDevice = true)

Finds a device by MX ID. Example: 14442C10D13EABCE00

Return

Tuple of bool and DeviceInfo. Bool specifies if device was found. DeviceInfo specifies the found device

Parameters
  • mxId: MyriadX ID which uniquely specifies a device

  • state: Which state should the device be in

  • skipInvalidDevice: Whether or not to skip devices that cannot be fully detected
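
A small host-side sketch of enumerating devices (requires <iostream> and <tuple>; the MX ID below is the example value from above):

    auto devices = dai::XLinkConnection::getAllConnectedDevices();
    for(const auto& info : devices) {
        std::cout << info.getMxId() << " " << info.name << std::endl;
    }
    bool found = false;
    dai::DeviceInfo devInfo;
    std::tie(found, devInfo) = dai::XLinkConnection::getDeviceByMxId("14442C10D13EABCE00");
    if(found) {
        // e.g. pass devInfo to a dai::Device constructor
    }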

DeviceInfo bootBootloader(const DeviceInfo &devInfo)

Tries booting the given device into bootloader state

Return

New device information if successful

Parameters
  • devInfo: Information of the device which should be booted into bootloader state

ProfilingData getGlobalProfilingData()

Get current accumulated profiling data

Return

Globally accumulated ProfilingData across all connections

struct XLinkError : public runtime_error

Subclassed by dai::XLinkReadError, dai::XLinkWriteError

struct XLinkInProperties : public dai::PropertiesSerializable<Properties, XLinkInProperties>
#include <XLinkInProperties.hpp>

Specify properties for XLinkIn such as stream name, …

Public Members

std::string streamName

Name of stream

std::uint32_t maxDataSize = dai::device::XLINK_USB_BUFFER_MAX_SIZE

Maximum input data size

std::uint32_t numFrames = 8

Number of frames in pool

struct XLinkOutProperties : public dai::PropertiesSerializable<Properties, XLinkOutProperties>
#include <XLinkOutProperties.hpp>

Specify properties for XLinkOut such as stream name, …

Public Members

float maxFpsLimit = -1

Set a limit on how many packets will be sent further to the host

std::string streamName

Name of stream

bool metadataOnly = false

Whether to transfer data or only object attributes

struct XLinkReadError : public dai::XLinkError
class XLinkStream
struct XLinkWriteError : public dai::XLinkError
namespace bootloader

Enums

enum Memory

Values:

enumerator AUTO
enumerator FLASH
enumerator EMMC
enum Section

Values:

enumerator AUTO
enumerator HEADER
enumerator BOOTLOADER
enumerator BOOTLOADER_CONFIG
enumerator APPLICATION
enumerator USER_BOOTLOADER
enum Type

Values:

enumerator AUTO
enumerator USB
enumerator NETWORK

Functions

DEPTHAI_BOOTLOADER_NLOHMANN_DEFINE_TYPE_OPTIONAL_NON_INTRUSIVE(NetworkConfig, timeoutMs, ipv4, ipv4Mask, ipv4Gateway, ipv4Dns, ipv4DnsAlt, staticIpv4, ipv6, ipv6Prefix, ipv6Gateway, ipv6Dns, ipv6DnsAlt, staticIpv6, mac)
DEPTHAI_BOOTLOADER_NLOHMANN_DEFINE_TYPE_OPTIONAL_NON_INTRUSIVE(UsbConfig, timeoutMs, maxUsbSpeed, vid, pid)
DEPTHAI_BOOTLOADER_NLOHMANN_DEFINE_TYPE_OPTIONAL_NON_INTRUSIVE(Config, appMem, usb, network, userBlSize, userBlChecksum)
const Structure getStructure(Type type)

Variables

struct Config

Subclassed by dai::DeviceBootloader::Config

struct NetworkBootloaderStructure : public dai::bootloader::Structure
struct NetworkConfig

Public Members

int timeoutMs = 30000

If timeout < 0: waits forever. If timeout == 0: no timeout. If timeout > 0: waits timeout milliseconds.

struct Structure

Subclassed by dai::bootloader::NetworkBootloaderStructure, dai::bootloader::UsbBootloaderStructure

struct UsbBootloaderStructure : public dai::bootloader::Structure
struct UsbConfig

Public Members

int timeoutMs = 3000

If timeout < 0: waits forever. If timeout == 0: no timeout. If timeout > 0: waits timeout milliseconds.

int maxUsbSpeed = 3

UNKNOWN = 0, LOW, FULL, HIGH, SUPER, SUPER_PLUS

uint16_t vid = 0x03E7

VID/PID pair used by bootloader.

namespace request

Enums

enum Command

Values:

enumerator USB_ROM_BOOT
enumerator BOOT_APPLICATION
enumerator UPDATE_FLASH
enumerator GET_BOOTLOADER_VERSION
enumerator BOOT_MEMORY
enumerator UPDATE_FLASH_EX
enumerator UPDATE_FLASH_EX_2
enumerator NO_OP
enumerator GET_BOOTLOADER_TYPE
enumerator SET_BOOTLOADER_CONFIG
enumerator GET_BOOTLOADER_CONFIG
enumerator BOOTLOADER_MEMORY
enumerator GET_BOOTLOADER_COMMIT
enumerator UPDATE_FLASH_BOOT_HEADER
enumerator READ_FLASH
enumerator GET_APPLICATION_DETAILS
enumerator GET_MEMORY_DETAILS
enumerator IS_USER_BOOTLOADER
struct BaseRequest

Subclassed by dai::bootloader::request::BootApplication, dai::bootloader::request::BootloaderMemory, dai::bootloader::request::BootMemory, dai::bootloader::request::GetApplicationDetails, dai::bootloader::request::GetBootloaderCommit, dai::bootloader::request::GetBootloaderConfig, dai::bootloader::request::GetBootloaderType, dai::bootloader::request::GetBootloaderVersion, dai::bootloader::request::GetMemoryDetails, dai::bootloader::request::IsUserBootloader, dai::bootloader::request::ReadFlash, dai::bootloader::request::SetBootloaderConfig, dai::bootloader::request::UpdateFlash, dai::bootloader::request::UpdateFlashBootHeader, dai::bootloader::request::UpdateFlashEx, dai::bootloader::request::UpdateFlashEx2, dai::bootloader::request::UsbRomBoot

struct BootApplication : public dai::bootloader::request::BaseRequest
struct BootloaderMemory : public dai::bootloader::request::BaseRequest
struct BootMemory : public dai::bootloader::request::BaseRequest
struct GetApplicationDetails : public dai::bootloader::request::BaseRequest
struct GetBootloaderCommit : public dai::bootloader::request::BaseRequest
struct GetBootloaderConfig : public dai::bootloader::request::BaseRequest
struct GetBootloaderType : public dai::bootloader::request::BaseRequest
struct GetBootloaderVersion : public dai::bootloader::request::BaseRequest
struct GetMemoryDetails : public dai::bootloader::request::BaseRequest
struct IsUserBootloader : public dai::bootloader::request::BaseRequest
struct ReadFlash : public dai::bootloader::request::BaseRequest
struct SetBootloaderConfig : public dai::bootloader::request::BaseRequest
struct UpdateFlash : public dai::bootloader::request::BaseRequest
struct UpdateFlashBootHeader : public dai::bootloader::request::BaseRequest
struct UpdateFlashEx : public dai::bootloader::request::BaseRequest
struct UpdateFlashEx2 : public dai::bootloader::request::BaseRequest
struct UsbRomBoot : public dai::bootloader::request::BaseRequest
namespace response

Enums

enum Command

Values:

enumerator FLASH_COMPLETE
enumerator FLASH_STATUS_UPDATE
enumerator BOOTLOADER_VERSION
enumerator BOOTLOADER_TYPE
enumerator GET_BOOTLOADER_CONFIG
enumerator BOOTLOADER_MEMORY
enumerator BOOT_APPLICATION
enumerator BOOTLOADER_COMMIT
enumerator READ_FLASH
enumerator APPLICATION_DETAILS
enumerator MEMORY_DETAILS
enumerator IS_USER_BOOTLOADER
enumerator NO_OP
struct ApplicationDetails : public dai::bootloader::response::BaseResponse
struct BaseResponse

Subclassed by dai::bootloader::response::ApplicationDetails, dai::bootloader::response::BootApplication, dai::bootloader::response::BootloaderCommit, dai::bootloader::response::BootloaderMemory, dai::bootloader::response::BootloaderType, dai::bootloader::response::BootloaderVersion, dai::bootloader::response::FlashComplete, dai::bootloader::response::FlashStatusUpdate, dai::bootloader::response::GetBootloaderConfig, dai::bootloader::response::IsUserBootloader, dai::bootloader::response::MemoryDetails, dai::bootloader::response::NoOp, dai::bootloader::response::ReadFlash

struct BootApplication : public dai::bootloader::response::BaseResponse
struct BootloaderCommit : public dai::bootloader::response::BaseResponse
struct BootloaderMemory : public dai::bootloader::response::BaseResponse
struct BootloaderType : public dai::bootloader::response::BaseResponse
struct BootloaderVersion : public dai::bootloader::response::BaseResponse
struct FlashComplete : public dai::bootloader::response::BaseResponse
struct FlashStatusUpdate : public dai::bootloader::response::BaseResponse
struct GetBootloaderConfig : public dai::bootloader::response::BaseResponse
struct IsUserBootloader : public dai::bootloader::response::BaseResponse
struct MemoryDetails : public dai::bootloader::response::BaseResponse
struct NoOp : public dai::bootloader::response::BaseResponse
struct ReadFlash : public dai::bootloader::response::BaseResponse
namespace device

Variables

namespace node
class AprilTag : public dai::NodeCRTP<Node, AprilTag, AprilTagProperties>
#include <AprilTag.hpp>

AprilTag node.

Public Functions

void setWaitForConfigInput(bool wait)

Specify whether or not to wait until a configuration message arrives on the inputConfig input.

Parameters
  • wait: True to wait for configuration message, false otherwise.

Public Members

AprilTagConfig initialConfig

Initial config to use for AprilTag detection.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::AprilTagConfig, false}}}

Input AprilTagConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input inputImage = {*this, "inputImage", Input::Type::SReceiver, false, 4, {{DatatypeEnum::ImgFrame, false}}}

Input ImgFrame message on which AprilTag detection is performed. Default queue is non-blocking with size 4.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::AprilTags, false}}}

Outputs AprilTags message that carries the detected AprilTag results.

Output outConfig = {*this, "outConfig", Output::Type::MSender, {{DatatypeEnum::AprilTagConfig, false}}}

Outputs AprilTagConfig message that contains current configuration.

Output passthroughInputImage = {*this, "passthroughInputImage", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
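
A minimal pipeline sketch using this node; the tag-family setter on AprilTagConfig is assumed from the AprilTag examples:

    dai::Pipeline pipeline;
    auto mono = pipeline.create<dai::node::MonoCamera>();
    auto aprilTag = pipeline.create<dai::node::AprilTag>();
    auto xout = pipeline.create<dai::node::XLinkOut>();
    xout->setStreamName("aprilTagData");

    aprilTag->initialConfig.setFamily(dai::AprilTagConfig::Family::TAG_36H11);  // assumed setter
    mono->out.link(aprilTag->inputImage);
    aprilTag->out.link(xout->input);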

class Camera : public dai::NodeCRTP<Node, Camera, CameraProperties>
#include <Camera.hpp>

Camera node. Experimental node, for both mono and color types of sensors.

Public Functions

Camera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)

Constructs Camera node.

void setBoardSocket(CameraBoardSocket boardSocket)

Specify which board socket to use

Parameters
  • boardSocket: Board socket to use

CameraBoardSocket getBoardSocket() const

Retrieves which board socket to use

Return

Board socket to use

void setCamera(std::string name)

Specify which camera to use by name

Parameters
  • name: Name of the camera to use

std::string getCamera() const

Retrieves which camera to use by name

Return

Name of the camera to use

void setImageOrientation(CameraImageOrientation imageOrientation)

Set camera image orientation.

CameraImageOrientation getImageOrientation() const

Get camera image orientation.

void setSize(std::tuple<int, int> size)

Set desired resolution. Sets sensor size to best fit.

void setSize(int width, int height)

Set desired resolution. Sets sensor size to best fit.

void setPreviewSize(int width, int height)

Set preview output size.

void setPreviewSize(std::tuple<int, int> size)

Set preview output size, as a tuple <width, height>

void setVideoSize(int width, int height)

Set video output size.

void setVideoSize(std::tuple<int, int> size)

Set video output size, as a tuple <width, height>

void setStillSize(int width, int height)

Set still output size.

void setStillSize(std::tuple<int, int> size)

Set still output size, as a tuple <width, height>

void setFps(float fps)

Set rate at which camera should produce frames

Parameters
  • fps: Rate in frames per second

void setIsp3aFps(int isp3aFps)

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps, a camera control is sent on every frame, but the 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.

float getFps() const

Get rate at which camera should produce frames

Return

Rate in frames per second

std::tuple<int, int> getPreviewSize() const

Get preview size as tuple.

int getPreviewWidth() const

Get preview width.

int getPreviewHeight() const

Get preview height.

std::tuple<int, int> getVideoSize() const

Get video size as tuple.

int getVideoWidth() const

Get video width.

int getVideoHeight() const

Get video height.

std::tuple<int, int> getStillSize() const

Get still size as tuple.

int getStillWidth() const

Get still width.

int getStillHeight() const

Get still height.

std::tuple<int, int> getSize() const

Get sensor resolution as size.

int getWidth() const

Get sensor resolution width.

int getHeight() const

Get sensor resolution height.

void setMeshSource(Properties::WarpMeshSource source)

Set the source of the warp mesh or disable.

Properties::WarpMeshSource getMeshSource() const

Gets the source of the warp mesh.

void loadMeshFile(const dai::Path &warpMesh)

Specify local filesystem paths to the undistort mesh calibration files.

When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as ‘float’ with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by setMeshStep.

With a 1280x800 resolution and the default (16,16) step, the required mesh size is:

width: 1280 / 16 + 1 = 81

height: 800 / 16 + 1 = 51

void loadMeshData(span<const std::uint8_t> warpMesh)

Specify mesh calibration data for undistortion See loadMeshFiles for the expected data format

void setMeshStep(int width, int height)

Set the distance between mesh points. Default: (32, 32)

std::tuple<int, int> getMeshStep() const

Gets the distance between mesh points.

void setCalibrationAlpha(float alpha)

Set calibration alpha parameter that determines FOV of undistorted frames.

tl::optional<float> getCalibrationAlpha() const

Get calibration alpha parameter that determines FOV of undistorted frames.

void setRawOutputPacked(bool packed)

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.

Public Members

CameraControl initialControl

Initial control options to apply to sensor

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 8, {{DatatypeEnum::ImageManipConfig, false}}}

Input for ImageManipConfig message, which can modify crop parameters in runtime

Default queue is non-blocking with size 8

Input inputControl = {*this, "inputControl", Input::Type::SReceiver, true, 8, {{DatatypeEnum::CameraControl, false}}}

Input for CameraControl message, which can modify camera parameters in runtime

Default queue is blocking with size 8

Output video = {*this, "video", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

Suitable for use with VideoEncoder node

Output preview = {*this, "preview", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.

Suitable for use with NeuralNetwork node

Output still = {*this, "still", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

The message is sent only when a CameraControl message arrives to inputControl with captureStill command set.

Output isp = {*this, "isp", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.

Generated by the ISP engine, and the source for the ‘video’, ‘preview’ and ‘still’ outputs

Output raw = {*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor, and the source for the ‘isp’ output.

Output frameEvent = {*this, "frameEvent", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.

It’s sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp

Public Static Functions

int getScaledSize(int input, int num, int denom)

Computes the scaled size given numerator and denominator
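
A brief sketch of creating and configuring this node in a pipeline:

    dai::Pipeline pipeline;
    auto cam = pipeline.create<dai::node::Camera>();
    cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
    cam->setSize(1920, 1080);   // desired output size; sensor mode is chosen to best fit
    cam->setFps(30);

    auto xout = pipeline.create<dai::node::XLinkOut>();
    xout->setStreamName("video");
    cam->video.link(xout->input);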

class Cast : public dai::NodeCRTP<Node, Cast, CastProperties>
#include <Cast.hpp>

Cast node.

Public Functions

Cast &setNumFramesPool(int numFramesPool)

Set number of frames in pool

Parameters
  • numFramesPool: Number of frames in pool

Cast &setOutputFrameType(dai::RawImgFrame::Type outputType)

Set output frame type

Parameters
  • outputType: Output frame type

Cast &setScale(float scale)

Set scale

Parameters
  • scale: Scale

Cast &setOffset(float offset)

Set offset

Parameters
  • offset: Offset

Public Members

Input input = {*this, "input", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}, {DatatypeEnum::NNData, false}}}

Input NNData or ImgFrame message.

Output output = {*this, "output", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Output ImgFrame message.

Output passthroughInput = {*this, "passthroughInput", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}, {DatatypeEnum::NNData, false}}}

Passthrough input message.

class ColorCamera : public dai::NodeCRTP<Node, ColorCamera, ColorCameraProperties>
#include <ColorCamera.hpp>

ColorCamera node. For use with color sensors.

Public Functions

ColorCamera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)

Constructs ColorCamera node.

int getScaledSize(int input, int num, int denom) const

Computes the scaled size given numerator and denominator

void setBoardSocket(CameraBoardSocket boardSocket)

Specify which board socket to use

Parameters
  • boardSocket: Board socket to use

CameraBoardSocket getBoardSocket() const

Retrieves which board socket to use

Return

Board socket to use

void setCamera(std::string name)

Specify which camera to use by name

Parameters
  • name: Name of the camera to use

std::string getCamera() const

Retrieves which camera to use by name

Return

Name of the camera to use

void setCamId(int64_t id)

Set which color camera to use.

int64_t getCamId() const

Get which color camera to use.

void setImageOrientation(CameraImageOrientation imageOrientation)

Set camera image orientation.

CameraImageOrientation getImageOrientation() const

Get camera image orientation.

void setColorOrder(ColorCameraProperties::ColorOrder colorOrder)

Set color order of preview output images. RGB or BGR.

ColorCameraProperties::ColorOrder getColorOrder() const

Get color order of preview output frames. RGB or BGR.

void setInterleaved(bool interleaved)

Set planar or interleaved data of preview output frames.

bool getInterleaved() const

Get planar or interleaved data of preview output frames.

void setFp16(bool fp16)

Set fp16 (0..255) data type of preview output frames.

bool getFp16() const

Get fp16 (0..255) data of preview output frames.

void setPreviewSize(int width, int height)

Set preview output size.

void setPreviewSize(std::tuple<int, int> size)

Set preview output size, as a tuple <width, height>

void setPreviewNumFramesPool(int num)

Set number of frames in preview pool.

void setVideoSize(int width, int height)

Set video output size.

void setVideoSize(std::tuple<int, int> size)

Set video output size, as a tuple <width, height>

void setVideoNumFramesPool(int num)

Set number of frames in video pool.

void setStillSize(int width, int height)

Set still output size.

void setStillSize(std::tuple<int, int> size)

Set still output size, as a tuple <width, height>

void setStillNumFramesPool(int num)

Set number of frames in still pool.

void setResolution(Properties::SensorResolution resolution)

Set sensor resolution.

Properties::SensorResolution getResolution() const

Get sensor resolution.

void setRawNumFramesPool(int num)

Set number of frames in raw pool.

void setIspNumFramesPool(int num)

Set number of frames in isp pool.

void setNumFramesPool(int raw, int isp, int preview, int video, int still)

Set number of frames in all pools.

void setIspScale(int numerator, int denominator)

Set ‘isp’ output scaling (numerator/denominator), preserving the aspect ratio. The fraction numerator/denominator is simplified first to an irreducible form, then a set of hardware scaler constraints applies: max numerator = 16, max denominator = 63

void setIspScale(std::tuple<int, int> scale)

Set ‘isp’ output scaling, as a tuple <numerator, denominator>

void setIspScale(int horizNum, int horizDenom, int vertNum, int vertDenom)

Set ‘isp’ output scaling, per each direction. If the horizontal scaling factor (horizNum/horizDen) is different than the vertical scaling factor (vertNum/vertDen), a distorted (stretched or squished) image is generated

void setIspScale(std::tuple<int, int> horizScale, std::tuple<int, int> vertScale)

Set ‘isp’ output scaling, per each direction, as <numerator, denominator> tuples.
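
For example, a 4K sensor stream can be downscaled on the ISP before further processing. A sketch assuming a ColorCamera node named camRgb (the achievable sizes are limited by the scaler constraints above):

    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_4_K);
    camRgb->setIspScale(1, 2);   // 3840x2160 scaled by 1/2 -> 1920x1080 on the 'isp' output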

void setFps(float fps)

Set rate at which camera should produce frames

Parameters
  • fps: Rate in frames per second

void setIsp3aFps(int isp3aFps)

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also increases the convergence time of 3A. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps, a camera control is sent on every frame, but the 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.

float getFps() const

Get rate at which camera should produce frames

Return

Rate in frames per second

std::tuple<int, int> getPreviewSize() const

Get preview size as tuple.

int getPreviewWidth() const

Get preview width.

int getPreviewHeight() const

Get preview height.

std::tuple<int, int> getVideoSize() const

Get video size as tuple.

int getVideoWidth() const

Get video width.

int getVideoHeight() const

Get video height.

std::tuple<int, int> getStillSize() const

Get still size as tuple.

int getStillWidth() const

Get still width.

int getStillHeight() const

Get still height.

std::tuple<int, int> getResolutionSize() const

Get sensor resolution as size.

int getResolutionWidth() const

Get sensor resolution width.

int getResolutionHeight() const

Get sensor resolution height.

std::tuple<int, int> getIspSize() const

Get ‘isp’ output resolution as size, after scaling.

int getIspWidth() const

Get ‘isp’ output width.

int getIspHeight() const

Get ‘isp’ output height.

void sensorCenterCrop()

Specify sensor center crop. Resolution size / video size

void setSensorCrop(float x, float y)

Specifies the cropping that happens when converting ISP to video output. By default, video will be center-cropped from the ISP output. Note that this doesn’t actually do on-sensor cropping (streaming only that region over MIPI); instead it does postprocessing on the ISP (on RVC).

Parameters
  • x: Top left X coordinate

  • y: Top left Y coordinate

std::tuple<float, float> getSensorCrop() const

Return

Sensor top left crop coordinates

float getSensorCropX() const

Get sensor top left x crop coordinate.

float getSensorCropY() const

Get sensor top left y crop coordinate.

void setWaitForConfigInput(bool wait)

Specify to wait until inputConfig receives a configuration message, before sending out a frame.

Parameters
  • wait: True to wait for inputConfig message, false otherwise

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if wait for inputConfig message, false otherwise

void setPreviewKeepAspectRatio(bool keep)

Specifies whether the preview output should preserve the aspect ratio after downscaling from video size.

Parameters
  • keep: If true, a larger crop region will be considered to still be able to create the final image in the specified aspect ratio. Otherwise video size is resized to fit preview size

bool getPreviewKeepAspectRatio()

See

setPreviewKeepAspectRatio

Return

Preview keep aspect ratio option

int getPreviewNumFramesPool()

Get number of frames in preview pool.

int getVideoNumFramesPool()

Get number of frames in video pool.

int getStillNumFramesPool()

Get number of frames in still pool.

int getRawNumFramesPool()

Get number of frames in raw pool.

int getIspNumFramesPool()

Get number of frames in isp pool.

void setRawOutputPacked(bool packed)

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.

Public Members

CameraControl initialControl

Initial control options to apply to sensor

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 8, {{DatatypeEnum::ImageManipConfig, false}}}

Input for ImageManipConfig message, which can modify crop parameters in runtime

Default queue is non-blocking with size 8

Input inputControl = {*this, "inputControl", Input::Type::SReceiver, true, 8, {{DatatypeEnum::CameraControl, false}}}

Input for CameraControl message, which can modify camera parameters in runtime

Default queue is blocking with size 8

Output video = {*this, "video", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

Suitable for use with VideoEncoder node

Output preview = {*this, "preview", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.

Suitable for use with NeuralNetwork node

Output still = {*this, "still", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

The message is sent only when a CameraControl message arrives to inputControl with captureStill command set.

Output isp = {*this, "isp", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.

Generated by the ISP engine, and the source for the ‘video’, ‘preview’ and ‘still’ outputs

Output raw = {*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor, and the source for the ‘isp’ output.

Output frameEvent = {*this, "frameEvent", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.

It’s sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp

class DetectionNetwork : public dai::NodeCRTP<NeuralNetwork, DetectionNetwork, DetectionNetworkProperties>
#include <DetectionNetwork.hpp>

DetectionNetwork, base for different network specializations.

Public Functions

void setConfidenceThreshold(float thresh)

Specifies confidence threshold at which to filter the rest of the detections.

Parameters
  • thresh: Detection confidence must be greater than specified threshold to be added to the list

float getConfidenceThreshold() const

Retrieves threshold at which to filter the rest of the detections.

Return

Detection confidence

Public Members

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::ImgDetections, false}}}

Outputs ImgDetections message that carries parsed detection results. Overrides NeuralNetwork ‘out’ with ImgDetections output message type.

Output outNetwork = {*this, "outNetwork", Output::Type::MSender, {{DatatypeEnum::NNData, false}}}

Outputs unparsed inference results.
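
The concrete specializations (e.g. MobileNetDetectionNetwork) are used like any other node; a sketch with a placeholder blob path and assumed camRgb/xoutNN nodes:

    auto nn = pipeline.create<dai::node::MobileNetDetectionNetwork>();
    nn->setBlobPath("model.blob");     // placeholder path to a compiled .blob
    nn->setConfidenceThreshold(0.5f);
    camRgb->preview.link(nn->input);   // camRgb: assumed ColorCamera node
    nn->out.link(xoutNN->input);       // xoutNN: assumed XLinkOut node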

class DetectionParser : public dai::NodeCRTP<Node, DetectionParser, DetectionParserProperties>
#include <DetectionParser.hpp>

DetectionParser node. Parses detection results from different neural networks and is used internally by MobileNetDetectionNetwork and YoloDetectionNetwork.

Public Functions

void setNumFramesPool(int numFramesPool)

Specify number of frames in pool.

Parameters
  • numFramesPool: How many frames should the pool have

int getNumFramesPool()

Returns number of frames in pool

void setBlob(const OpenVINO::Blob &blob)

Retrieves some input tensor information from the blob

Parameters
  • blob: OpenVINO blob to retrieve the information from

void setNNFamily(DetectionNetworkType type)

Sets NN Family to parse

DetectionNetworkType getNNFamily()

Gets NN Family to parse

void setConfidenceThreshold(float thresh)

Specifies confidence threshold at which to filter the rest of the detections.

Parameters
  • thresh: Detection confidence must be greater than specified threshold to be added to the list

float getConfidenceThreshold() const

Retrieves threshold at which to filter the rest of the detections.

Return

Detection confidence

void setNumClasses(int numClasses)

Set num classes.

void setCoordinateSize(int coordinates)

Set coordinate size.

void setAnchors(std::vector<float> anchors)

Set anchors.

void setAnchorMasks(std::map<std::string, std::vector<int>> anchorMasks)

Set anchor masks.

void setIouThreshold(float thresh)

Set Iou threshold.

int getNumClasses() const

Get num classes.

int getCoordinateSize() const

Get coordinate size.

std::vector<float> getAnchors() const

Get anchors.

std::map<std::string, std::vector<int>> getAnchorMasks() const

Get anchor masks.

float getIouThreshold() const

Get Iou threshold.

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 5, true, {{DatatypeEnum::NNData, true}}}

Input NN results with detection data to parse. Default queue is blocking with size 5.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::ImgDetections, false}}}

Outputs ImgDetections message that carries parsed detection results.

class EdgeDetector : public dai::NodeCRTP<Node, EdgeDetector, EdgeDetectorProperties>
#include <EdgeDetector.hpp>

EdgeDetector node. Performs edge detection using 3x3 Sobel filter.

Public Functions

void setWaitForConfigInput(bool wait)

Specify whether or not to wait until a configuration message arrives on the inputConfig Input.

Parameters
  • wait: True to wait for configuration message, false otherwise.

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if wait for inputConfig message, false otherwise

void setNumFramesPool(int numFramesPool)

Specify number of frames in pool.

Parameters
  • numFramesPool: How many frames should the pool have

void setMaxOutputFrameSize(int maxFrameSize)

Specify maximum size of output image.

Parameters
  • maxFrameSize: Maximum frame size in bytes

Public Members

EdgeDetectorConfig initialConfig

Initial config to use for edge detection.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::EdgeDetectorConfig, false}}}

Input EdgeDetectorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input inputImage = {*this, "inputImage", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input image on which edge detection is performed. Default queue is non-blocking with size 4.

Output outputImage = {*this, "outputImage", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs image frame with detected edges

Output passthroughInputImage = {*this, "passthroughInputImage", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the calculation was performed.
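
A minimal sketch wiring the EdgeDetector between a MonoCamera and an XLinkOut; the optional XLinkIn feeding inputConfig shows where runtime EdgeDetectorConfig messages would enter (stream names are arbitrary):

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto mono = pipeline.create<dai::node::MonoCamera>();
    auto edge = pipeline.create<dai::node::EdgeDetector>();
    auto xout = pipeline.create<dai::node::XLinkOut>();
    auto xinCfg = pipeline.create<dai::node::XLinkIn>();

    mono->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    edge->setMaxOutputFrameSize(mono->getResolutionWidth() * mono->getResolutionHeight());
    xout->setStreamName("edges");
    xinCfg->setStreamName("edgeCfg");

    mono->out.link(edge->inputImage);
    edge->outputImage.link(xout->input);
    xinCfg->out.link(edge->inputConfig);   // optional: push EdgeDetectorConfig at runtime

    dai::Device device(pipeline);
    auto frame = device.getOutputQueue("edges", 4, false)->get<dai::ImgFrame>();
    return 0;
}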

class FeatureTracker : public dai::NodeCRTP<Node, FeatureTracker, FeatureTrackerProperties>
#include <FeatureTracker.hpp>

FeatureTracker node. Performs feature tracking and reidentification using motion estimation between 2 consecutive frames.

Public Functions

void setWaitForConfigInput(bool wait)

Specify whether or not to wait until a configuration message arrives on the inputConfig Input.

Parameters
  • wait: True to wait for configuration message, false otherwise.

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if wait for inputConfig message, false otherwise

void setHardwareResources(int numShaves, int numMemorySlices)

Specify allocated hardware resources for feature tracking. 2 shaves/memory slices are required for optical flow, 1 for corner detection only.

Parameters
  • numShaves: Number of shaves. Maximum 2.

  • numMemorySlices: Number of memory slices. Maximum 2.

Public Members

FeatureTrackerConfig initialConfig

Initial config to use for feature tracking.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::FeatureTrackerConfig, false}}}

Input FeatureTrackerConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input inputImage = {*this, "inputImage", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input message with frame data on which feature tracking is performed. Default queue is non-blocking with size 4.

Output outputFeatures = {*this, "outputFeatures", Output::Type::MSender, {{DatatypeEnum::TrackedFeatures, false}}}

Outputs TrackedFeatures message that carries tracked features results.

Output passthroughInputImage = {*this, "passthroughInputImage", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
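
A minimal sketch of a FeatureTracker fed by a MonoCamera, reserving the 2+2 shaves/memory slices needed for optical flow:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto mono = pipeline.create<dai::node::MonoCamera>();
    auto tracker = pipeline.create<dai::node::FeatureTracker>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    // 2 shaves and 2 memory slices are required for optical flow
    tracker->setHardwareResources(2, 2);
    xout->setStreamName("features");

    mono->out.link(tracker->inputImage);
    tracker->outputFeatures.link(xout->input);

    dai::Device device(pipeline);
    auto feats = device.getOutputQueue("features", 4, false)->get<dai::TrackedFeatures>();
    for(const auto& f : feats->trackedFeatures) {
        // f.id, f.position.x, f.position.y
    }
    return 0;
}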

class ImageAlign : public dai::NodeCRTP<Node, ImageAlign, ImageAlignProperties>
#include <ImageAlign.hpp>

ImageAlign node. Aligns an input frame (e.g. depth) to another reference frame supplied on inputAlignTo.

Public Functions

ImageAlign &setOutputSize(int alignWidth, int alignHeight)

Specify the output size of the aligned image

ImageAlign &setOutKeepAspectRatio(bool keep)

Specify whether to keep aspect ratio when resizing

ImageAlign &setInterpolation(Interpolation interp)

Specify interpolation method to use when resizing

ImageAlign &setNumShaves(int numShaves)

Specify number of shaves to use for this node

ImageAlign &setNumFramesPool(int numFramesPool)

Specify number of frames in the pool

Public Members

ImageAlignConfig initialConfig

Initial config to use for image alignment.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::ImageAlignConfig, false}}}

Input message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input input = {*this, "input", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input message. Default queue is non-blocking with size 4.

Input inputAlignTo = {*this, "inputAlignTo", Input::Type::SReceiver, false, 1, true, {{DatatypeEnum::ImgFrame, false}}}

Input align to message. Default queue is non-blocking with size 1.

Output outputAligned = {*this, "outputAligned", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that is aligned to inputAlignTo.

Output passthroughInput = {*this, "passthroughInput", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
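
A minimal sketch aligning stereo depth to a color frame with ImageAlign (board sockets and output size are assumptions for a typical OAK layout):

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto monoL = pipeline.create<dai::node::MonoCamera>();
    auto monoR = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto align = pipeline.create<dai::node::ImageAlign>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    monoL->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoR->setBoardSocket(dai::CameraBoardSocket::CAM_C);
    align->setOutputSize(640, 400);
    xout->setStreamName("depthAligned");

    monoL->out.link(stereo->left);
    monoR->out.link(stereo->right);
    stereo->depth.link(align->input);          // frame to be aligned
    camRgb->isp.link(align->inputAlignTo);     // reference frame to align to
    align->outputAligned.link(xout->input);

    dai::Device device(pipeline);
    return 0;
}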

class ImageManip : public dai::NodeCRTP<Node, ImageManip, ImageManipProperties>
#include <ImageManip.hpp>

ImageManip node. Capability to crop, resize, warp, … incoming image frames.

Public Functions

void setWaitForConfigInput(bool wait)

Specify whether or not to wait until a configuration message arrives on the inputConfig Input.

Parameters
  • wait: True to wait for configuration message, false otherwise.

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if wait for inputConfig message, false otherwise

void setNumFramesPool(int numFramesPool)

Specify number of frames in pool.

Parameters
  • numFramesPool: How many frames should the pool have

void setMaxOutputFrameSize(int maxFrameSize)

Specify maximum size of output image.

Parameters
  • maxFrameSize: Maximum frame size in bytes

void setWarpMesh(const std::vector<Point2f> &meshData, int width, int height)

Set a custom warp mesh

Parameters
  • meshData: 2D plane of mesh points, starting from top left to bottom right

  • width: Width of mesh

  • height: Height of mesh

Public Members

ImageManipConfig initialConfig

Initial config to use when manipulating frames

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, true, 8, {{DatatypeEnum::ImageManipConfig, true}}}

Input ImageManipConfig message with ability to modify parameters in runtime. Default queue is blocking with size 8.

Input inputImage = {*this, "inputImage", Input::Type::SReceiver, true, 8, true, {{DatatypeEnum::ImgFrame, true}}}

Input image to be modified. Default queue is blocking with size 8.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::ImgFrame, true}}}

Outputs ImgFrame message that carries modified image.
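
A minimal sketch of ImageManip cropping and resizing camera preview frames via initialConfig (crop rectangle and sizes are arbitrary):

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto manip = pipeline.create<dai::node::ImageManip>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    cam->setPreviewSize(640, 400);
    cam->setInterleaved(false);

    // crop the central region and resize the result to 300x300
    manip->initialConfig.setCropRect(0.25f, 0.25f, 0.75f, 0.75f);
    manip->initialConfig.setResize(300, 300);
    manip->setMaxOutputFrameSize(300 * 300 * 3);
    xout->setStreamName("manip");

    cam->preview.link(manip->inputImage);
    manip->out.link(xout->input);

    dai::Device device(pipeline);
    return 0;
}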

class IMU : public dai::NodeCRTP<Node, IMU, IMUProperties>
#include <IMU.hpp>

IMU node for BNO08X.

Public Functions

IMU(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)

Constructs IMU node.

void enableIMUSensor(IMUSensorConfig sensorConfig)

Enable a new IMU sensor with explicit configuration

void enableIMUSensor(const std::vector<IMUSensorConfig> &sensorConfigs)

Enable a list of IMU sensors with explicit configuration

void enableIMUSensor(IMUSensor sensor, uint32_t reportRate)

Enable a new IMU sensor with default configuration

void enableIMUSensor(const std::vector<IMUSensor> &sensors, uint32_t reportRate)

Enable a list of IMU sensors with default configuration

void setBatchReportThreshold(std::int32_t batchReportThreshold)

Above this packet threshold data will be sent to host, if queue is not blocked

std::int32_t getBatchReportThreshold() const

Above this packet threshold data will be sent to host, if queue is not blocked

void setMaxBatchReports(std::int32_t maxBatchReports)

Maximum number of IMU packets in a batch report

std::int32_t getMaxBatchReports() const

Maximum number of IMU packets in a batch report

Public Members

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::IMUData, false}}}

Outputs IMUData message that carries IMU packets.
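
A minimal sketch enabling one IMU sensor and reading batched packets on the host (the report rate and batching values are arbitrary):

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto imu = pipeline.create<dai::node::IMU>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    // raw accelerometer reports at 500 Hz, default sensor configuration
    imu->enableIMUSensor(dai::IMUSensor::ACCELEROMETER_RAW, 500);
    imu->setBatchReportThreshold(1);   // send to host as soon as 1 packet is ready
    imu->setMaxBatchReports(10);       // at most 10 packets per IMUData message
    xout->setStreamName("imu");

    imu->out.link(xout->input);

    dai::Device device(pipeline);
    auto data = device.getOutputQueue("imu", 50, false)->get<dai::IMUData>();
    for(const auto& pkt : data->packets) {
        // pkt.acceleroMeter.x / .y / .z  [m/s^2]
    }
    return 0;
}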

class MessageDemux : public dai::NodeCRTP<Node, MessageDemux, MessageDemuxProperties>

Public Members

Input input = {*this, "input", Input::Type::SReceiver, {{DatatypeEnum::MessageGroup, false}}}

Input message of type MessageGroup

OutputMap outputs

A map of outputs, where keys are the same as in the input MessageGroup

class MobileNetDetectionNetwork : public dai::NodeCRTP<DetectionNetwork, MobileNetDetectionNetwork, DetectionNetworkProperties>
#include <DetectionNetwork.hpp>

MobileNetDetectionNetwork node. Parses MobileNet results.

class MobileNetSpatialDetectionNetwork : public dai::NodeCRTP<SpatialDetectionNetwork, MobileNetSpatialDetectionNetwork, SpatialDetectionNetworkProperties>
#include <SpatialDetectionNetwork.hpp>

MobileNetSpatialDetectionNetwork node. Mobilenet-SSD based network with spatial location data.

class MonoCamera : public dai::NodeCRTP<Node, MonoCamera, MonoCameraProperties>
#include <MonoCamera.hpp>

MonoCamera node. For use with grayscale sensors.

Public Functions

void setBoardSocket(CameraBoardSocket boardSocket)

Specify which board socket to use

Parameters
  • boardSocket: Board socket to use

CameraBoardSocket getBoardSocket() const

Retrieves which board socket to use

Return

Board socket to use

void setCamera(std::string name)

Specify which camera to use by name

Parameters
  • name: Name of the camera to use

std::string getCamera() const

Retrieves which camera to use by name

Return

Name of the camera to use

void setImageOrientation(CameraImageOrientation imageOrientation)

Set camera image orientation.

CameraImageOrientation getImageOrientation() const

Get camera image orientation.

void setResolution(Properties::SensorResolution resolution)

Set sensor resolution.

Properties::SensorResolution getResolution() const

Get sensor resolution.

void setFps(float fps)

Set rate at which camera should produce frames

Parameters
  • fps: Rate in frames per second

void setIsp3aFps(int isp3aFps)

Isp 3A rate (auto focus, auto exposure, auto white balance, camera controls etc.). Default (0) matches the camera FPS, meaning that 3A is running on each frame. Reducing the rate of 3A reduces the CPU usage on CSS, but also slows down 3A convergence. Note that camera controls will be processed at this rate. E.g. if the camera is running at 30 fps, and a camera control is sent on every frame, but 3A fps is set to 15, the camera control messages will be processed at a 15 fps rate, which will lead to queueing.

float getFps() const

Get rate at which camera should produce frames

Return

Rate in frames per second

std::tuple<int, int> getResolutionSize() const

Get sensor resolution as size.

int getResolutionWidth() const

Get sensor resolution width.

int getResolutionHeight() const

Get sensor resolution height.

void setNumFramesPool(int num)

Set number of frames in main (ISP output) pool.

void setRawNumFramesPool(int num)

Set number of frames in raw pool.

int getNumFramesPool() const

Get number of frames in main (ISP output) pool.

int getRawNumFramesPool() const

Get number of frames in raw pool.

void setRawOutputPacked(bool packed)

Configures whether the camera raw frames are saved as MIPI-packed to memory. The packed format is more efficient, consuming less memory on device, and less data to send to host: RAW10: 4 pixels saved on 5 bytes, RAW12: 2 pixels saved on 3 bytes. When packing is disabled (false), data is saved lsb-aligned, e.g. a RAW10 pixel will be stored as uint16, on bits 9..0: 0b0000’00pp’pppp’pppp. Default is auto: enabled for standard color/monochrome cameras where ISP can work with both packed/unpacked, but disabled for other cameras like ToF.

Public Members

CameraControl initialControl

Initial control options to apply to sensor

Input inputControl = {*this, "inputControl", Input::Type::SReceiver, true, 8, {{DatatypeEnum::CameraControl, false}}}

Input for CameraControl message, which can modify camera parameters in runtime. Default queue is blocking with size 8.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW8 encoded (grayscale) frame data.

Suitable for use with the StereoDepth node. Processed by ISP

Output raw = {*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor

Output frameEvent = {*this, "frameEvent", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs metadata-only ImgFrame message as an early indicator of an incoming frame.

It’s sent on the MIPI SoF (start-of-frame) event, just after the exposure of the current frame has finished and before the exposure for next frame starts. Could be used to synchronize various processes with camera capture. Fields populated: camera id, sequence number, timestamp
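
A minimal sketch of a MonoCamera streaming its RAW8 'out' frames to the host (socket, resolution and FPS are arbitrary):

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto mono = pipeline.create<dai::node::MonoCamera>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    mono->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    mono->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
    mono->setFps(30.0f);
    xout->setStreamName("mono");

    mono->out.link(xout->input);   // RAW8 grayscale frames

    dai::Device device(pipeline);
    auto frame = device.getOutputQueue("mono", 4, false)->get<dai::ImgFrame>();
    return 0;
}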

class NeuralNetwork : public dai::NodeCRTP<Node, NeuralNetwork, NeuralNetworkProperties>
#include <NeuralNetwork.hpp>

NeuralNetwork node. Runs a neural inference on input data.

Public Functions

void setBlobPath(const dai::Path &path)

Load network blob into assets and use once pipeline is started.

Exceptions
  • Error: if file doesn’t exist or isn’t a valid network blob.

Parameters
  • path: Path to network blob

void setBlob(OpenVINO::Blob blob)

Load network blob into assets and use once pipeline is started.

Parameters
  • blob: Network blob

void setBlob(const dai::Path &path)

Same functionality as the setBlobPath(). Load network blob into assets and use once pipeline is started.

Exceptions
  • Error: if file doesn’t exist or isn’t a valid network blob.

Parameters
  • path: Path to network blob

void setNumPoolFrames(int numFrames)

Specifies how many frames will be available in the pool

Parameters
  • numFrames: How many frames will pool have

void setNumInferenceThreads(int numThreads)

How many threads should the node use to run the network.

Parameters
  • numThreads: Number of threads to dedicate to this node

void setNumNCEPerInferenceThread(int numNCEPerThread)

How many Neural Compute Engines should a single thread use for inference

Parameters
  • numNCEPerThread: Number of NCE per thread

int getNumInferenceThreads()

How many inference threads will be used to run the network

Return

Number of threads, 0, 1 or 2. Zero means AUTO

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 5, true, {{DatatypeEnum::Buffer, true}}}

Input message with data to be inferred upon. Default queue is blocking with size 5.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::NNData, false}}}

Outputs NNData message that carries inference results

Output passthrough = {*this, "passthrough", Output::Type::MSender, {{DatatypeEnum::Buffer, true}}}

Passthrough message on which the inference was performed.

Suitable for when input queue is set to non-blocking behavior.

InputMap inputs

Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages.

OutputMap passthroughs

Passthroughs which correspond to specified input
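
A minimal sketch of a generic NeuralNetwork fed from a camera preview; the blob path and input size are placeholders that must match the actual model:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto nn = pipeline.create<dai::node::NeuralNetwork>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    cam->setPreviewSize(224, 224);     // must match the blob's input shape
    cam->setInterleaved(false);
    nn->setBlobPath("model.blob");     // placeholder path
    nn->setNumInferenceThreads(2);
    xout->setStreamName("nn");

    cam->preview.link(nn->input);
    nn->out.link(xout->input);

    dai::Device device(pipeline);
    auto result = device.getOutputQueue("nn", 4, false)->get<dai::NNData>();
    auto layers = result->getAllLayerNames();   // raw tensors, parsed on the host
    return 0;
}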

class ObjectTracker : public dai::NodeCRTP<Node, ObjectTracker, ObjectTrackerProperties>
#include <ObjectTracker.hpp>

ObjectTracker node. Performs object tracking using a Kalman filter and the Hungarian algorithm.

Public Functions

void setTrackerThreshold(float threshold)

Specify tracker threshold.

Parameters
  • threshold: Above this threshold the detected objects will be tracked. Default 0, all image detections are tracked.

void setMaxObjectsToTrack(std::int32_t maxObjectsToTrack)

Specify maximum number of objects to track.

Parameters
  • maxObjectsToTrack: Maximum number of objects to track. Maximum 60 in case of SHORT_TERM_KCF, otherwise 1000.

void setDetectionLabelsToTrack(std::vector<std::uint32_t> labels)

Specify detection labels to track.

Parameters
  • labels: Detection labels to track. Default every label is tracked from image detection network output.

void setTrackerType(TrackerType type)

Specify tracker type algorithm.

Parameters
  • type: Tracker type.

void setTrackerIdAssignmentPolicy(TrackerIdAssignmentPolicy type)

Specify tracker ID assignment policy.

Parameters
  • type: Tracker ID assignment policy.

void setTrackingPerClass(bool trackingPerClass)

Whether tracker should take into consideration class label for tracking.

Public Members

Input inputTrackerFrame = {*this, "inputTrackerFrame", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input ImgFrame message on which tracking will be performed. RGBp, BGRp, NV12, YUV420p types are supported. Default queue is non-blocking with size 4.

Input inputDetectionFrame = {*this, "inputDetectionFrame", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input ImgFrame message on which object detection was performed. Default queue is non-blocking with size 4.

Input inputDetections = {*this, "inputDetections", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgDetections, true}}}

Input message with image detection from neural network. Default queue is non-blocking with size 4.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::Tracklets, false}}}

Outputs Tracklets message that carries object tracking results.

Output passthroughTrackerFrame = {*this, "passthroughTrackerFrame", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough ImgFrame message on which tracking was performed. Suitable for when input queue is set to non-blocking behavior.

Output passthroughDetectionFrame = {*this, "passthroughDetectionFrame", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough ImgFrame message on which object detection was performed. Suitable for when input queue is set to non-blocking behavior.

Output passthroughDetections = {*this, "passthroughDetections", Output::Type::MSender, {{DatatypeEnum::ImgDetections, true}}}

Passthrough image detections message from neural network output. Suitable for when input queue is set to non-blocking behavior.
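
A minimal sketch of an ObjectTracker driven by a MobileNetDetectionNetwork; label 15 and the tracker/ID-policy choices are arbitrary examples:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto nn = pipeline.create<dai::node::MobileNetDetectionNetwork>();
    auto tracker = pipeline.create<dai::node::ObjectTracker>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    cam->setPreviewSize(300, 300);
    cam->setInterleaved(false);
    nn->setBlobPath("mobilenet-ssd.blob");   // placeholder path
    nn->setConfidenceThreshold(0.5f);

    tracker->setDetectionLabelsToTrack({15});   // e.g. track only a single class
    tracker->setTrackerType(dai::TrackerType::SHORT_TERM_KCF);
    tracker->setTrackerIdAssignmentPolicy(dai::TrackerIdAssignmentPolicy::SMALLEST_ID);
    xout->setStreamName("tracklets");

    cam->preview.link(nn->input);
    cam->preview.link(tracker->inputTrackerFrame);
    nn->passthrough.link(tracker->inputDetectionFrame);
    nn->out.link(tracker->inputDetections);
    tracker->out.link(xout->input);

    dai::Device device(pipeline);
    auto tracklets = device.getOutputQueue("tracklets", 4, false)->get<dai::Tracklets>();
    return 0;
}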

class PointCloud : public dai::NodeCRTP<Node, PointCloud, PointCloudProperties>
#include <PointCloud.hpp>

PointCloud node. Computes point cloud from depth frames.

Public Functions

void setNumFramesPool(int numFramesPool)

Specify number of frames in pool.

Parameters
  • numFramesPool: How many frames should the pool have

Public Members

PointCloudConfig initialConfig

Initial config to use when computing the point cloud.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::PointCloudConfig, false}}}

Input PointCloudConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input inputDepth = {*this, "inputDepth", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input message with depth data used to create the point cloud. Default queue is non-blocking with size 4.

Output outputPointCloud = {*this, "outputPointCloud", Output::Type::MSender, {{DatatypeEnum::PointCloudData, false}}}

Outputs PointCloudData message

Output passthroughDepth = {*this, "passthroughDepth", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough depth from which the point cloud was calculated. Suitable for when input queue is set to non-blocking behavior.
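
A minimal sketch turning StereoDepth output into a point cloud and streaming it to the host:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto monoL = pipeline.create<dai::node::MonoCamera>();
    auto monoR = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto pcl = pipeline.create<dai::node::PointCloud>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    monoL->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoR->setBoardSocket(dai::CameraBoardSocket::CAM_C);
    xout->setStreamName("pcl");

    monoL->out.link(stereo->left);
    monoR->out.link(stereo->right);
    stereo->depth.link(pcl->inputDepth);
    pcl->outputPointCloud.link(xout->input);

    dai::Device device(pipeline);
    auto cloud = device.getOutputQueue("pcl", 4, false)->get<dai::PointCloudData>();
    auto points = cloud->getPoints();   // std::vector<Point3f>
    return 0;
}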

class Script : public dai::NodeCRTP<Node, Script, ScriptProperties>

Public Functions

void setScriptPath(const dai::Path &path, const std::string &name = "")

Specify local filesystem path to load the script

Parameters
  • path: Filesystem path to load the script

  • name: Optionally set a name of this script, otherwise the name defaults to the path

void setScript(const std::string &script, const std::string &name = "")

Sets script data to be interpreted

Parameters
  • script: Script string to be interpreted

  • name: Optionally set a name of this script

void setScript(const std::vector<std::uint8_t> &data, const std::string &name = "")

Sets script data to be interpreted

Parameters
  • data: Binary data that represents the script to be interpreted

  • name: Optionally set a name of this script

dai::Path getScriptPath() const

Get filesystem path from where script was loaded.

Return

dai::Path from where script was loaded, otherwise returns empty path

std::string getScriptName() const

Get the script name in utf-8.

When name set with setScript() or setScriptPath(), returns that name. When script loaded with setScriptPath() with name not provided, returns the utf-8 string of that path. Otherwise, returns “<script>”

Return

std::string of script name in utf-8

void setProcessor(ProcessorType type)

Set on which processor the script should run

Parameters
  • type: Processor type - Leon CSS or Leon MSS

ProcessorType getProcessor() const

Get on which processor the script should run

Return

Processor type - Leon CSS or Leon MSS

Public Members

InputMap inputs

Inputs to Script node. Can be accessed using the subscript operator (e.g. inputs[‘in1’]). By default, inputs are set to blocking with queue size 8.

OutputMap outputs

Outputs from Script node. Can be accessed using the subscript operator (e.g. outputs[‘out1’]).
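
A minimal sketch of a Script node that simply forwards messages from the host back to the host; the on-device Python node.io API follows the DepthAI Script examples and is assumed here:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto script = pipeline.create<dai::node::Script>();
    auto xin = pipeline.create<dai::node::XLinkIn>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    xin->setStreamName("to_script");
    xout->setStreamName("from_script");
    script->setProcessor(dai::ProcessorType::LEON_CSS);

    // on-device Python: forward every message from 'in' to 'out'
    script->setScript(R"(
while True:
    msg = node.io['in'].get()
    node.io['out'].send(msg)
)", "forwarder");

    xin->out.link(script->inputs["in"]);
    script->outputs["out"].link(xout->input);

    dai::Device device(pipeline);
    return 0;
}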

class SpatialDetectionNetwork : public dai::NodeCRTP<DetectionNetwork, SpatialDetectionNetwork, SpatialDetectionNetworkProperties>
#include <SpatialDetectionNetwork.hpp>

SpatialDetectionNetwork node. Runs a neural inference on input image and calculates spatial location data.

Public Functions

void setBoundingBoxScaleFactor(float scaleFactor)

Specifies scale factor for detected bounding boxes.

Parameters
  • scaleFactor: Scale factor must be in the interval (0,1].

void setDepthLowerThreshold(uint32_t lowerThreshold)

Specifies lower threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data

Parameters
  • lowerThreshold: LowerThreshold must be in the interval [0,upperThreshold] and less than upperThreshold.

void setDepthUpperThreshold(uint32_t upperThreshold)

Specifies upper threshold in depth units (millimeter by default) for depth values which will be used to calculate spatial data

Parameters
  • upperThreshold: UpperThreshold must be in the interval (lowerThreshold,65535].

void setSpatialCalculationAlgorithm(dai::SpatialLocationCalculatorAlgorithm calculationAlgorithm)

Specifies spatial location calculator algorithm: Average/Min/Max

Parameters
  • calculationAlgorithm: Calculation algorithm.

void setSpatialCalculationStepSize(int stepSize)

Specifies spatial location calculator step size for depth calculation. Step size 1 means that every pixel is taken into calculation, size 2 means every second etc.

Parameters
  • stepSize: Step size.

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 5, true, {{DatatypeEnum::ImgFrame, false}}}

Input message with data to be inferred upon. Default queue is blocking with size 5.

Input inputDepth = {*this, "inputDepth", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input message with depth data used to retrieve spatial information about detected object. Default queue is non-blocking with size 4.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::SpatialImgDetections, false}}}

Outputs ImgDetections message that carries parsed detection results.

Output boundingBoxMapping = {*this, "boundingBoxMapping", Output::Type::MSender, {{DatatypeEnum::SpatialLocationCalculatorConfig, false}}}

Outputs mapping of detected bounding boxes relative to depth map

Suitable for when displaying remapped bounding boxes on depth frame

Output passthrough = {*this, "passthrough", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the inference was performed.

Suitable for when input queue is set to non-blocking behavior.

Output passthroughDepth = {*this, "passthroughDepth", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message for depth frame on which the spatial location calculation was performed.

Suitable for when input queue is set to non-blocking behavior.

Output spatialLocationCalculatorOutput{*this, "spatialLocationCalculatorOutput", Output::Type::MSender, {{DatatypeEnum::SpatialLocationCalculatorData, false}}}

Output of SpatialLocationCalculator node, which is used internally by SpatialDetectionNetwork. Suitable when extra information is required from SpatialLocationCalculator node, e.g. minimum, maximum distance.
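
A minimal sketch of a spatial detection pipeline via MobileNetSpatialDetectionNetwork (blob path, sockets and thresholds are placeholders):

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto monoL = pipeline.create<dai::node::MonoCamera>();
    auto monoR = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto nn = pipeline.create<dai::node::MobileNetSpatialDetectionNetwork>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    cam->setPreviewSize(300, 300);
    cam->setInterleaved(false);
    monoL->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoR->setBoardSocket(dai::CameraBoardSocket::CAM_C);
    stereo->setDepthAlign(dai::CameraBoardSocket::CAM_A);   // align depth to the color camera

    nn->setBlobPath("mobilenet-ssd.blob");   // placeholder path
    nn->setConfidenceThreshold(0.5f);
    nn->setBoundingBoxScaleFactor(0.5f);     // shrink each box to 50% before averaging depth
    nn->setDepthLowerThreshold(100);         // ignore depth closer than 10 cm
    nn->setDepthUpperThreshold(5000);        // ignore depth farther than 5 m
    xout->setStreamName("spatialDet");

    monoL->out.link(stereo->left);
    monoR->out.link(stereo->right);
    cam->preview.link(nn->input);
    stereo->depth.link(nn->inputDepth);
    nn->out.link(xout->input);

    dai::Device device(pipeline);
    auto dets = device.getOutputQueue("spatialDet", 4, false)->get<dai::SpatialImgDetections>();
    // each detection carries spatialCoordinates.x/y/z in depth units (millimeter by default)
    return 0;
}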

class SpatialLocationCalculator : public dai::NodeCRTP<Node, SpatialLocationCalculator, SpatialLocationCalculatorProperties>
#include <SpatialLocationCalculator.hpp>

SpatialLocationCalculator node. Calculates spatial location data on a set of ROIs on depth map.

Public Functions

void setWaitForConfigInput(bool wait)

Specify whether or not to wait until a configuration message arrives on the inputConfig Input.

Parameters
  • wait: True to wait for configuration message, false otherwise.

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if wait for inputConfig message, false otherwise

Public Members

SpatialLocationCalculatorConfig initialConfig

Initial config to use when calculating spatial location data.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::SpatialLocationCalculatorConfig, false}}}

Input SpatialLocationCalculatorConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input inputDepth = {*this, "inputDepth", Input::Type::SReceiver, false, 4, true, {{DatatypeEnum::ImgFrame, false}}}

Input message with depth data used to retrieve spatial information about detected object. Default queue is non-blocking with size 4.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::SpatialLocationCalculatorData, false}}}

Outputs SpatialLocationCalculatorData message that carries spatial location results.

Output passthroughDepth = {*this, "passthroughDepth", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough message on which the calculation was performed. Suitable for when input queue is set to non-blocking behavior.
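
A minimal sketch configuring one ROI through initialConfig, assuming the SpatialLocationCalculatorConfigData and dai::Rect message types documented elsewhere in this reference:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto monoL = pipeline.create<dai::node::MonoCamera>();
    auto monoR = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto calc = pipeline.create<dai::node::SpatialLocationCalculator>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    monoL->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoR->setBoardSocket(dai::CameraBoardSocket::CAM_C);
    xout->setStreamName("spatialData");

    // one ROI in the centre of the frame, with a valid depth range of 0.1 - 10 m
    dai::SpatialLocationCalculatorConfigData roi;
    roi.roi = dai::Rect(dai::Point2f(0.4f, 0.4f), dai::Point2f(0.6f, 0.6f));
    roi.depthThresholds.lowerThreshold = 100;
    roi.depthThresholds.upperThreshold = 10000;
    calc->initialConfig.addROI(roi);

    monoL->out.link(stereo->left);
    monoR->out.link(stereo->right);
    stereo->depth.link(calc->inputDepth);
    calc->out.link(xout->input);

    dai::Device device(pipeline);
    auto data = device.getOutputQueue("spatialData", 4, false)->get<dai::SpatialLocationCalculatorData>();
    return 0;
}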

class SPIIn : public dai::NodeCRTP<Node, SPIIn, SPIInProperties>
#include <SPIIn.hpp>

SPIIn node. Receives messages over SPI.

Public Functions

void setStreamName(const std::string &name)

Specifies stream name over which the node will receive data

Parameters
  • name: Stream name

void setBusId(int id)

Specifies SPI Bus number to use

Parameters
  • id: SPI Bus id

void setMaxDataSize(std::uint32_t maxDataSize)

Set maximum message size it can receive

Parameters
  • maxDataSize: Maximum size in bytes

void setNumFrames(std::uint32_t numFrames)

Set number of frames in pool for sending messages forward

Parameters
  • numFrames: Maximum number of frames in pool

std::string getStreamName() const

Get stream name.

int getBusId() const

Get bus id.

std::uint32_t getMaxDataSize() const

Get maximum message size in bytes.

std::uint32_t getNumFrames() const

Get number of frames in pool.

Public Members

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::Buffer, true}}}

Outputs message of the same type as sent from the host.

class SPIOut : public dai::NodeCRTP<Node, SPIOut, SPIOutProperties>
#include <SPIOut.hpp>

SPIOut node. Sends messages over SPI.

Public Functions

void setStreamName(std::string name)

Specifies stream name over which the node will send data

Parameters
  • name: Stream name

void setBusId(int busId)

Specifies SPI Bus number to use

Parameters
  • busId: SPI Bus id

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 8, true, {{DatatypeEnum::Buffer, true}}}

Input for any type of messages to be transferred over SPI stream

Default queue is blocking with size 8

class StereoDepth : public dai::NodeCRTP<Node, StereoDepth, StereoDepthProperties>
#include <StereoDepth.hpp>

StereoDepth node. Compute stereo disparity and depth from left-right image pair.

Public Types

enum PresetMode

Preset modes for stereo depth.

Values:

enumerator HIGH_ACCURACY

Prefers accuracy over density. More invalid depth values, but less outliers.

enumerator HIGH_DENSITY

Prefers density over accuracy. Less invalid depth values, but more outliers.

Public Functions

void setEmptyCalibration()

Specify that a passthrough/dummy calibration should be used, when input frames are already rectified (e.g. sourced from recordings on the host)

void loadMeshFiles(const dai::Path &pathLeft, const dai::Path &pathRight)

Specify local filesystem paths to the mesh calibration files for ‘left’ and ‘right’ inputs.

When a mesh calibration is set, it overrides the camera intrinsics/extrinsics matrices. Overrides useHomographyRectification behavior. Mesh format: a sequence of (y,x) points as ‘float’ with coordinates from the input image to be mapped in the output. The mesh can be subsampled, configured by setMeshStep.

With a 1280x800 resolution and the default (16,16) step, the required mesh size is:

width: 1280 / 16 + 1 = 81

height: 800 / 16 + 1 = 51

void loadMeshData(const std::vector<std::uint8_t> &dataLeft, const std::vector<std::uint8_t> &dataRight)

Specify mesh calibration data for ‘left’ and ‘right’ inputs, as vectors of bytes. Overrides useHomographyRectification behavior. See loadMeshFiles for the expected data format

void setMeshStep(int width, int height)

Set the distance between mesh points. Default: (16, 16)

void setInputResolution(int width, int height)

Specify input resolution size

Optional if MonoCamera exists, otherwise necessary

void setInputResolution(std::tuple<int, int> resolution)

Specify input resolution size

Optional if MonoCamera exists, otherwise necessary

void setOutputSize(int width, int height)

Specify disparity/depth output resolution size, implemented by scaling.

Currently only applicable when aligning to RGB camera

void setOutputKeepAspectRatio(bool keep)

Specifies whether the frames resized by setOutputSize should preserve aspect ratio, with potential cropping when enabled. Default true

void setMedianFilter(dai::MedianFilter median)

Parameters
  • median: Set kernel size for disparity/depth median filtering, or disable

void setDepthAlign(Properties::DepthAlign align)

Parameters
  • align: Set the disparity/depth alignment: centered (between the ‘left’ and ‘right’ inputs), or from the perspective of a rectified output stream

void setDepthAlign(CameraBoardSocket camera)

Parameters
  • camera: Set the camera from whose perspective the disparity/depth will be aligned

void setConfidenceThreshold(int confThr)

Confidence threshold for disparity calculation

Parameters
  • confThr: Confidence threshold value 0..255

void setRectification(bool enable)

Rectify input images or not.

void setLeftRightCheck(bool enable)

Computes disparities in both L-R and R-L directions, and combines them.

This improves occlusion handling by discarding invalid disparity values.

void setSubpixel(bool enable)

Computes disparity with sub-pixel interpolation (3 fractional bits by default).

Suitable for long range. Currently incompatible with extended disparity

void setSubpixelFractionalBits(int subpixelFractionalBits)

Number of fractional bits for subpixel mode. Default value: 3. Valid values: 3,4,5. Defines the number of fractional disparities: 2^x. Median filter postprocessing is supported only for 3 fractional bits.

void setExtendedDisparity(bool enable)

Disparity range increased from 0-95 to 0-190, combined from full resolution and downscaled images.

Suitable for short range objects. Currently incompatible with sub-pixel disparity

void setRectifyEdgeFillColor(int color)

Fill color for missing data at frame edges

Parameters
  • color: Grayscale 0..255, or -1 to replicate pixels

void setRectifyMirrorFrame(bool enable)

DEPRECATED function. It was removed, since rectified images are not flipped anymore. Mirror rectified frames, only when LR-check mode is disabled. Default true. The mirroring is required to have a normal non-mirrored disparity/depth output.

A side effect of this option is disparity alignment to the perspective of left or right input: false: mapped to left and mirrored, true: mapped to right. With LR-check enabled, this option is ignored, none of the outputs are mirrored, and disparity is mapped to right.

Parameters
  • enable: True for normal disparity/depth, otherwise mirrored

void setOutputRectified(bool enable)

Enable outputting rectified frames. Optimizes computation on device side when disabled. DEPRECATED. The outputs are auto-enabled if used

void setOutputDepth(bool enable)

Enable outputting ‘depth’ stream (converted from disparity). In certain configurations, this will disable ‘disparity’ stream. DEPRECATED. The output is auto-enabled if used

void setRuntimeModeSwitch(bool enable)

Enable runtime stereo mode switch, e.g. from standard to LR-check. Note: when enabled, resources are allocated for the worst case, to enable switching to any mode at runtime.

void setNumFramesPool(int numFramesPool)

Specify number of frames in pool.

Parameters
  • numFramesPool: How many frames should the pool have

float getMaxDisparity() const

Useful for normalization of the disparity map.

Return

Maximum disparity value that the node can return

void setPostProcessingHardwareResources(int numShaves, int numMemorySlices)

Specify allocated hardware resources for stereo depth. Suitable only to increase post processing runtime.

Parameters
  • numShaves: Number of shaves.

  • numMemorySlices: Number of memory slices.

void setDefaultProfilePreset(PresetMode mode)

Sets a default preset based on specified option.

Parameters
  • mode: Stereo depth preset mode

void setFocalLengthFromCalibration(bool focalLengthFromCalibration)

Whether to use focal length from calibration intrinsics or calculate based on calibration FOV. Default value is true.

void useHomographyRectification(bool useHomographyRectification)

Use 3x3 homography matrix for stereo rectification instead of sparse mesh generated on device. Default behaviour is AUTO: for lenses with FOV over 85 degrees a sparse mesh is used, otherwise a 3x3 homography. If custom mesh data is provided through loadMeshData or loadMeshFiles, this option is ignored.

Parameters
  • useHomographyRectification: true: 3x3 homography matrix generated from calibration data is used for stereo rectification, can’t correct lens distortion. false: sparse mesh is generated on-device from calibration data with mesh step specified with setMeshStep (Default: (16, 16)), can correct lens distortion. Implementation for generating the mesh is same as opencv’s initUndistortRectifyMap function. Only the first 8 distortion coefficients are used from calibration data.

void enableDistortionCorrection(bool enableDistortionCorrection)

Equivalent to useHomographyRectification(!enableDistortionCorrection)

void setBaseline(float baseline)

Override baseline from calibration. Used only in disparity to depth conversion. Units are centimeters.

void setFocalLength(float focalLength)

Override focal length from calibration. Used only in disparity to depth conversion. Units are pixels.

void setDisparityToDepthUseSpecTranslation(bool specTranslation)

Use baseline information for disparity to depth conversion from specs (design data) or from calibration. Default: true

void setRectificationUseSpecTranslation(bool specTranslation)

Obtain rectification matrices using spec translation (design data) or from calibration in calculations. Should be used only for debugging. Default: false

void setDepthAlignmentUseSpecTranslation(bool specTranslation)

Use baseline information for depth alignment from specs (design data) or from calibration. Default: true

void setAlphaScaling(float alpha)

Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). On some high distortion lenses, and/or due to rectification (image rotated), invalid areas may appear even with alpha=0; in these cases setting alpha < 0.0 helps remove the invalid areas. See getOptimalNewCameraMatrix from opencv for more details.

Public Members

StereoDepthConfig initialConfig

Initial config to use for StereoDepth.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::StereoDepthConfig, false}}}

Input StereoDepthConfig message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input left = {*this, "left", Input::Type::SReceiver, false, 8, true, {{DatatypeEnum::ImgFrame, true}}}

Input for left ImgFrame of left-right pair

Default queue is non-blocking with size 8

Input right = {*this, "right", Input::Type::SReceiver, false, 8, true, {{DatatypeEnum::ImgFrame, true}}}

Input for right ImgFrame of left-right pair

Default queue is non-blocking with size 8

Output depth = {*this, "depth", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW16 encoded (0..65535) depth data in depth units (millimeter by default).

Non-determined / invalid depth values are set to 0

Output disparity = {*this, "disparity", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW8 / RAW16 encoded disparity data: RAW8 encoded (0..95) for standard mode; RAW8 encoded (0..190) for extended disparity mode; RAW16 encoded for subpixel disparity mode:

  • 0..760 for 3 fractional bits (by default)

  • 0..1520 for 4 fractional bits

  • 0..3040 for 5 fractional bits

Output syncedLeft = {*this, "syncedLeft", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough ImgFrame message from ‘left’ Input.

Output syncedRight = {*this, "syncedRight", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Passthrough ImgFrame message from ‘right’ Input.

Output rectifiedLeft = {*this, "rectifiedLeft", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.

Output rectifiedRight = {*this, "rectifiedRight", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW8 encoded (grayscale) rectified frame data.

Output outConfig = {*this, "outConfig", Output::Type::MSender, {{DatatypeEnum::StereoDepthConfig, false}}}

Outputs StereoDepthConfig message that contains current stereo configuration.

Output debugDispLrCheckIt1 = {*this, "debugDispLrCheckIt1", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries left-right check first iteration (before combining with second iteration) disparity map. Useful for debugging/fine tuning.

Output debugDispLrCheckIt2 = {*this, "debugDispLrCheckIt2", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries left-right check second iteration (before combining with first iteration) disparity map. Useful for debugging/fine tuning.

Output debugExtDispLrCheckIt1 = {*this, "debugExtDispLrCheckIt1", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries extended left-right check first iteration (downscaled frame, before combining with second iteration) disparity map. Useful for debugging/fine tuning.

Output debugExtDispLrCheckIt2 = {*this, "debugExtDispLrCheckIt2", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries extended left-right check second iteration (downscaled frame, before combining with first iteration) disparity map. Useful for debugging/fine tuning.

Output debugDispCostDump = {*this, "debugDispCostDump", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries cost dump of disparity map. Useful for debugging/fine tuning.

Output confidenceMap = {*this, "confidenceMap", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW8 confidence map. Lower values mean higher confidence of the calculated disparity value. RGB alignment, left-right check or any postprocessing (e.g. median filter) is not performed on the confidence map.
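
A minimal sketch of a StereoDepth pipeline using the documented preset, left-right check, subpixel and median filter options, plus getMaxDisparity() for display normalization:

#include "depthai/depthai.hpp"
#include <cstdio>

int main() {
    dai::Pipeline pipeline;

    auto monoL = pipeline.create<dai::node::MonoCamera>();
    auto monoR = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto xoutDisp = pipeline.create<dai::node::XLinkOut>();

    monoL->setBoardSocket(dai::CameraBoardSocket::CAM_B);
    monoR->setBoardSocket(dai::CameraBoardSocket::CAM_C);

    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
    stereo->setMedianFilter(dai::MedianFilter::KERNEL_7x7);
    stereo->setLeftRightCheck(true);   // better occlusion handling
    stereo->setSubpixel(true);         // better long-range accuracy (RAW16 disparity)
    xoutDisp->setStreamName("disparity");

    monoL->out.link(stereo->left);
    monoR->out.link(stereo->right);
    stereo->disparity.link(xoutDisp->input);

    dai::Device device(pipeline);
    auto frame = device.getOutputQueue("disparity", 4, false)->get<dai::ImgFrame>();
    // normalize for display: divide pixel values by the maximum possible disparity
    std::printf("8-bit visualization scale: %f\n", 255.0f / stereo->getMaxDisparity());
    return 0;
}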

class Sync : public dai::NodeCRTP<Node, Sync, SyncProperties>

Public Functions

void setSyncThreshold(std::chrono::nanoseconds syncThreshold)

Set the maximal interval between messages in the group

Parameters
  • syncThreshold: Maximal interval between messages in the group

void setSyncAttempts(int syncAttempts)

Set the number of attempts to get the specified max interval between messages in the group

Parameters
  • syncAttempts: Number of attempts to get the specified max interval between messages in the group:

    • if syncAttempts = 0 then the node sends a message as soon at the group is filled

    • if syncAttempts > 0 then the node will make syncAttempts attempts to synchronize before sending out a message

    • if syncAttempts = -1 (default) then the node will only send a message if successfully synchronized

std::chrono::nanoseconds getSyncThreshold() const

Gets the maximal interval between messages in the group

int getSyncAttempts() const

Gets the number of sync attempts

Public Members

InputMap inputs

A map of inputs

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::MessageGroup, false}}}

Output message of type MessageGroup
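
A minimal sketch pairing Sync with MessageDemux so that grouped frames leave the device on separate, already-synchronized streams (the 10 ms threshold is arbitrary):

#include "depthai/depthai.hpp"
#include <chrono>

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto mono = pipeline.create<dai::node::MonoCamera>();
    auto sync = pipeline.create<dai::node::Sync>();
    auto demux = pipeline.create<dai::node::MessageDemux>();
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    auto xoutMono = pipeline.create<dai::node::XLinkOut>();

    // group frames whose timestamps differ by at most 10 ms
    sync->setSyncThreshold(std::chrono::milliseconds(10));

    cam->video.link(sync->inputs["rgb"]);
    mono->out.link(sync->inputs["mono"]);
    sync->out.link(demux->input);

    xoutRgb->setStreamName("rgb");
    xoutMono->setStreamName("mono");
    demux->outputs["rgb"].link(xoutRgb->input);    // demux keys match the Sync input names
    demux->outputs["mono"].link(xoutMono->input);

    dai::Device device(pipeline);
    return 0;
}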

class SystemLogger : public dai::NodeCRTP<Node, SystemLogger, SystemLoggerProperties>
#include <SystemLogger.hpp>

SystemLogger node. Send system information periodically.

Public Functions

void setRate(float hz)

Specify logging rate, at which messages will be sent out

Parameters
  • hz: Sending rate in hertz (messages per second)

float getRate()

Gets logging rate, at which messages will be sent out

Public Members

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::SystemInformation, false}}}

Outputs SystemInformation message that carries various system information like memory and CPU usage, temperatures, …

class ToF : public dai::NodeCRTP<Node, ToF, ToFProperties>
#include <ToF.hpp>

ToF node.

Public Functions

ToF(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)

Constructs ToF node.

ToF &setNumShaves(int numShaves)

Specify number of shaves reserved for ToF decoding.

ToF &setNumFramesPool(int numFramesPool)

Specify number of frames in output pool

Parameters
  • numFramesPool: Number of frames in output pool

Public Members

ToFConfig initialConfig

Initial config to use for depth calculation.

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 4, {{DatatypeEnum::ToFConfig, false}}}

Input ToF message with ability to modify parameters in runtime. Default queue is non-blocking with size 4.

Input input = {*this, "input", Input::Type::SReceiver, true, 8, {{DatatypeEnum::ImgFrame, true}}}

Input raw ToF data. Default queue is blocking with size 8.

Output depth = {*this, "depth", Output::Type::MSender, {{DatatypeEnum::ImgFrame, true}}}

Outputs ImgFrame message that carries decoded depth image.

Output amplitude = {*this, "amplitude", Output::Type::MSender, {{DatatypeEnum::ImgFrame, true}}}

Outputs ImgFrame message that carries amplitude image.

Output intensity = {*this, "intensity", Output::Type::MSender, {{DatatypeEnum::ImgFrame, true}}}

Outputs ImgFrame message that carries intensity image.

Output phase = {*this, "phase", Output::Type::MSender, {{DatatypeEnum::ImgFrame, true}}}

Outputs ImgFrame message that carries phase image, useful for debugging. float32 type.

class UVC : public dai::NodeCRTP<Node, UVC, UVCProperties>
#include <UVC.hpp>

UVC (USB Video Class) node.

Public Functions

void setGpiosOnInit(std::unordered_map<int, int> list)

Set GPIO list <gpio_number, value> for GPIOs to set (on/off) at init.

void setGpiosOnStreamOn(std::unordered_map<int, int> list)

Set GPIO list <gpio_number, value> for GPIOs to set when streaming is enabled.

void setGpiosOnStreamOff(std::unordered_map<int, int> list)

Set GPIO list <gpio_number, value> for GPIOs to set when streaming is disabled.

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 8, true, {{DatatypeEnum::Buffer, true}}}

Input for image frames to be streamed over UVC

Default queue is blocking with size 8

class VideoEncoder : public dai::NodeCRTP<Node, VideoEncoder, VideoEncoderProperties>
#include <VideoEncoder.hpp>

VideoEncoder node. Encodes frames into MJPEG, H264 or H265.

Public Functions

void setDefaultProfilePreset(float fps, Properties::Profile profile)

Sets a default preset based on specified frame rate and profile

Parameters
  • fps: Frame rate in frames per second

  • profile: Encoding profile

void setDefaultProfilePreset(int width, int height, float fps, Properties::Profile profile)

Sets a default preset based on specified input size, frame rate and profile

Parameters
  • width: Input frame width

  • height: Input frame height

  • fps: Frame rate in frames per second

  • profile: Encoding profile

void setDefaultProfilePreset(std::tuple<int, int> size, float fps, Properties::Profile profile)

Sets a default preset based on specified input size, frame rate and profile

Parameters
  • size: Input frame size

  • fps: Frame rate in frames per second

  • profile: Encoding profile

void setNumFramesPool(int frames)

Set number of frames in pool

Parameters
  • frames: Number of pool frames

int getNumFramesPool() const

Get number of frames in pool

Return

Number of pool frames

void setRateControlMode(Properties::RateControlMode mode)

Set rate control mode. Applicable only to H264 and H265 profiles.

void setProfile(Properties::Profile profile)

Set encoding profile.

void setProfile(std::tuple<int, int> size, Properties::Profile profile)

Set encoding profile.

void setProfile(int width, int height, Properties::Profile profile)

Set encoding profile.

void setBitrate(int bitrate)

Set output bitrate in bps, for CBR rate control mode. 0 for auto (based on frame size and FPS). Applicable only to H264 and H265 profiles.

void setBitrateKbps(int bitrateKbps)

Set output bitrate in kbps, for CBR rate control mode. 0 for auto (based on frame size and FPS). Applicable only to H264 and H265 profiles.

void setKeyframeFrequency(int freq)

Set keyframe frequency. Every Nth frame a keyframe is inserted.

Applicable only to H264 and H265 profiles

Examples:

  • 30 FPS video, keyframe frequency: 30. Every 1s a keyframe will be inserted

  • 60 FPS video, keyframe frequency: 180. Every 3s a keyframe will be inserted

void setNumBFrames(int numBFrames)

Set number of B frames to be inserted. Applicable only to H264 and H265 profiles.

void setQuality(int quality)

Set quality for [M]JPEG profile

Parameters
  • quality: Value between 0-100%. Approximates quality

void setLossless(bool lossless)

Set lossless mode. Applies only to [M]JPEG profile

Parameters
  • lossless: True to enable lossless jpeg encoding, false otherwise

void setFrameRate(float frameRate)

Sets expected frame rate

Parameters
  • frameRate: Frame rate in frames per second

void setMaxOutputFrameSize(int maxFrameSize)

Specifies maximum output encoded frame size

Properties::RateControlMode getRateControlMode() const

Get rate control mode.

Properties::Profile getProfile() const

Get profile.

int getBitrate() const

Get bitrate in bps.

int getBitrateKbps() const

Get bitrate in kbps.

int getKeyframeFrequency() const

Get keyframe frequency.

int getNumBFrames() const

Get number of B frames.

int getQuality() const

Get quality.

std::tuple<int, int> getSize() const

Get input size.

int getWidth() const

Get input width.

int getHeight() const

Get input height.

float getFrameRate() const

Get frame rate.

bool getLossless() const

Get lossless mode. Applies only when using [M]JPEG profile.

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 4, true, {{DatatypeEnum::ImgFrame, true}}}

Input for NV12 ImgFrame to be encoded. Default queue is blocking with size set by ‘setNumFramesPool’ (4).

Output bitstream = {*this, "bitstream", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries BITSTREAM encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with out.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::EncodedFrame, false}}}

Outputs EncodedFrame message that carries encoded (MJPEG, H264 or H265) frame data. Mutually exclusive with bitstream.
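
A minimal sketch of a VideoEncoder producing an H.265 bitstream that is appended to a file on the host (bitrate, keyframe frequency and duration are arbitrary):

#include "depthai/depthai.hpp"
#include <fstream>

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto enc = pipeline.create<dai::node::VideoEncoder>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    enc->setDefaultProfilePreset(30, dai::VideoEncoderProperties::Profile::H265_MAIN);
    enc->setBitrateKbps(4000);       // ~4 Mbit/s
    enc->setKeyframeFrequency(30);   // one keyframe per second at 30 FPS
    xout->setStreamName("bitstream");

    cam->video.link(enc->input);     // NV12 input
    enc->bitstream.link(xout->input);

    dai::Device device(pipeline);
    auto q = device.getOutputQueue("bitstream", 30, true);
    std::ofstream file("video.h265", std::ios::binary);
    for(int i = 0; i < 300; i++) {   // roughly 10 seconds at 30 FPS
        auto pkt = q->get<dai::ImgFrame>();
        file.write(reinterpret_cast<const char*>(pkt->getData().data()), pkt->getData().size());
    }
    return 0;
}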

class Warp : public dai::NodeCRTP<Node, Warp, WarpProperties>
#include <Warp.hpp>

Warp node. Capability to crop, resize, warp, … incoming image frames.

Public Functions

void setOutputSize(std::tuple<int, int> size)

Sets output frame size in pixels

Parameters
  • size: width and height in pixels

void setWarpMesh(const std::vector<Point2f> &meshData, int width, int height)

Set a custom warp mesh

Parameters
  • meshData: 2D plane of mesh points, starting from top left to bottom right

  • width: Width of mesh

  • height: Height of mesh

void setNumFramesPool(int numFramesPool)

Specify number of frames in pool.

Parameters
  • numFramesPool: How many frames should the pool have

void setMaxOutputFrameSize(int maxFrameSize)

Specify maximum size of output image.

Parameters
  • maxFrameSize: Maximum frame size in bytes

void setHwIds(std::vector<int> ids)

Specify which hardware warp engines to use

Parameters
  • ids: Which warp engines to use (0, 1, 2)

std::vector<int> getHwIds() const

Retrieve which hardware warp engines to use.

void setInterpolation(dai::Interpolation interpolation)

Specify which interpolation method to use

Parameters
  • interpolation: type of interpolation

dai::Interpolation getInterpolation() const

Retrieve which interpolation method to use.

Public Members

Input inputImage = {*this, "inputImage", Input::Type::SReceiver, true, 8, true, {{DatatypeEnum::ImgFrame, true}}}

Input image to be modified. Default queue is blocking with size 8.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::ImgFrame, true}}}

Outputs ImgFrame message that carries warped image.
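
A minimal sketch of the Warp node with a tiny 2x2 mesh that roughly maps the four corners 1:1; real meshes are usually much denser, and the exact mesh semantics should be checked against setWarpMesh above:

#include "depthai/depthai.hpp"
#include <tuple>
#include <vector>

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto warp = pipeline.create<dai::node::Warp>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    cam->setPreviewSize(640, 480);
    cam->setInterleaved(false);

    // 2x2 mesh from top left to bottom right (approximately an identity warp)
    std::vector<dai::Point2f> mesh = {
        {0.0f, 0.0f},   {640.0f, 0.0f},
        {0.0f, 480.0f}, {640.0f, 480.0f},
    };
    warp->setWarpMesh(mesh, 2, 2);
    warp->setOutputSize(std::make_tuple(640, 480));
    warp->setMaxOutputFrameSize(640 * 480 * 3);
    warp->setInterpolation(dai::Interpolation::BILINEAR);
    xout->setStreamName("warped");

    cam->preview.link(warp->inputImage);
    warp->out.link(xout->input);

    dai::Device device(pipeline);
    return 0;
}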

class XLinkIn : public dai::NodeCRTP<Node, XLinkIn, XLinkInProperties>
#include <XLinkIn.hpp>

XLinkIn node. Receives messages over XLink.

Public Functions

void setStreamName(const std::string &name)

Specifies XLink stream name to use.

The name should not start with double underscores ‘__’, as those are reserved for internal use.

Parameters
  • name: Stream name

void setMaxDataSize(std::uint32_t maxDataSize)

Set maximum message size it can receive

Parameters
  • maxDataSize: Maximum size in bytes

void setNumFrames(std::uint32_t numFrames)

Set number of frames in pool for sending messages forward

Parameters
  • numFrames: Maximum number of frames in pool

std::string getStreamName() const

Get stream name.

std::uint32_t getMaxDataSize() const

Get maximum message size in bytes.

std::uint32_t getNumFrames() const

Get number of frames in pool.

Public Members

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::Buffer, true}}}

Outputs message of the same type as sent from the host.

class XLinkOut : public dai::NodeCRTP<Node, XLinkOut, XLinkOutProperties>
#include <XLinkOut.hpp>

XLinkOut node. Sends messages over XLink.

Public Functions

void setStreamName(const std::string &name)

Specifies XLink stream name to use.

The name should not start with double underscores ‘__’, as those are reserved for internal use.

Parameters
  • name: Stream name

void setFpsLimit(float fps)

Specifies a message sending limit. It’s approximated from specified rate.

Parameters
  • fps: Approximate rate limit in messages per second

void setMetadataOnly(bool metadataOnly)

Specify whether to transfer only message attributes and not buffer data

std::string getStreamName() const

Get stream name.

float getFpsLimit() const

Get rate limit in messages per second.

bool getMetadataOnly() const

Get whether to transfer only message attributes and not buffer data.

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 8, true, {{DatatypeEnum::Buffer, true}}}

Input for any type of messages to be transferred over XLink stream

Default queue is blocking with size 8
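
A minimal sketch of the host-device round trip: an XLinkIn looped straight into an XLinkOut, with a Buffer sent from the host and read back (normally another node would sit in between):

#include "depthai/depthai.hpp"
#include <memory>
#include <vector>

int main() {
    dai::Pipeline pipeline;

    auto xin = pipeline.create<dai::node::XLinkIn>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    xin->setStreamName("to_device");
    xin->setMaxDataSize(1024 * 1024);
    xout->setStreamName("from_device");

    // loop messages straight back to the host
    xin->out.link(xout->input);

    dai::Device device(pipeline);
    auto inQ = device.getInputQueue("to_device");
    auto outQ = device.getOutputQueue("from_device", 8, true);

    auto buf = std::make_shared<dai::Buffer>();
    buf->setData(std::vector<std::uint8_t>{1, 2, 3, 4});
    inQ->send(buf);

    auto echoed = outQ->get<dai::Buffer>();   // the same 4 bytes come back
    return 0;
}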

class YoloDetectionNetwork : public dai::NodeCRTP<DetectionNetwork, YoloDetectionNetwork, DetectionNetworkProperties>
#include <DetectionNetwork.hpp>

YoloDetectionNetwork node. Parses Yolo results.

Public Functions

void setNumClasses(int numClasses)

Set num classes.

void setCoordinateSize(int coordinates)

Set coordinate size.

void setAnchors(std::vector<float> anchors)

Set anchors.

void setAnchorMasks(std::map<std::string, std::vector<int>> anchorMasks)

Set anchor masks.

void setIouThreshold(float thresh)

Set Iou threshold.

int getNumClasses() const

Get num classes.

int getCoordinateSize() const

Get coordinate size.

std::vector<float> getAnchors() const

Get anchors.

std::map<std::string, std::vector<int>> getAnchorMasks() const

Get anchor masks.

float getIouThreshold() const

Get Iou threshold.
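
A minimal sketch of the Yolo-specific decoding parameters; the anchor, mask, class-count and blob values are placeholders that must match the trained model:

#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto cam = pipeline.create<dai::node::ColorCamera>();
    auto yolo = pipeline.create<dai::node::YoloDetectionNetwork>();
    auto xout = pipeline.create<dai::node::XLinkOut>();

    cam->setPreviewSize(416, 416);
    cam->setInterleaved(false);

    yolo->setBlobPath("yolo.blob");   // placeholder path
    yolo->setConfidenceThreshold(0.5f);
    yolo->setNumClasses(80);          // e.g. COCO classes
    yolo->setCoordinateSize(4);       // x, y, w, h
    yolo->setIouThreshold(0.5f);
    // anchors and masks must match the ones the model was trained with (placeholder values)
    yolo->setAnchors({10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319});
    yolo->setAnchorMasks({{"side26", {1, 2, 3}}, {"side13", {3, 4, 5}}});
    xout->setStreamName("yolo");

    cam->preview.link(yolo->input);
    yolo->out.link(xout->input);

    dai::Device device(pipeline);
    return 0;
}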

class YoloSpatialDetectionNetwork : public dai::NodeCRTP<SpatialDetectionNetwork, YoloSpatialDetectionNetwork, SpatialDetectionNetworkProperties>
#include <SpatialDetectionNetwork.hpp>

YoloSpatialDetectionNetwork node. Yolo-based network with spatial location data.

Public Functions

void setNumClasses(const int numClasses)

Set num classes.

void setCoordinateSize(const int coordinates)

Set coordinate size.

void setAnchors(std::vector<float> anchors)

Set anchors.

void setAnchorMasks(std::map<std::string, std::vector<int>> anchorMasks)

Set anchor masks.

void setIouThreshold(float thresh)

Set Iou threshold.

int getNumClasses() const

Get num classes.

int getCoordinateSize() const

Get coordinate size.

std::vector<float> getAnchors() const

Get anchors.

std::map<std::string, std::vector<int>> getAnchorMasks() const

Get anchor masks.

float getIouThreshold() const

Get Iou threshold.

namespace utility

Functions

std::uint32_t checksum(const void *buffer, std::size_t size, uint32_t prevChecksum)

Simple hash function - djb2

Parameters
  • buffer: Pointer to buffer of data to hash

  • size: Size of buffer in bytes

  • prevChecksum: Previous checksum - useful for doing hash on blocks of data

std::uint32_t checksum(const void *buffer, std::size_t size)

Simple hash function - djb2

Parameters
  • buffer: Pointer to buffer of data to hash

  • size: Size of buffer in bytes
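
A minimal sketch of chaining the checksum over blocks of data. It assumes these helpers are reachable as dai::utility::checksum (they may live in a separate utility header rather than the main depthai header), and that block-wise chaining via prevChecksum matches a single whole-buffer call:

#include "depthai/depthai.hpp"
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    std::vector<std::uint8_t> blockA = {0x01, 0x02, 0x03};
    std::vector<std::uint8_t> blockB = {0x04, 0x05, 0x06};
    std::vector<std::uint8_t> whole  = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06};

    // hash the whole payload in one call...
    auto single = dai::utility::checksum(whole.data(), whole.size());

    // ...or block by block, feeding the previous checksum back in
    auto partial = dai::utility::checksum(blockA.data(), blockA.size());
    auto chained = dai::utility::checksum(blockB.data(), blockB.size(), partial);

    // if block-wise chaining matches the single-shot hash, these are equal
    std::printf("equal: %s\n", single == chained ? "yes" : "no");
    return 0;
}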

template<SerializationType TYPE, typename T, std::enable_if_t<TYPE == SerializationType::JSON_MSGPACK, bool> = true>
bool serialize(const T &obj, std::vector<std::uint8_t> &data)
template<SerializationType TYPE, typename T, std::enable_if_t<TYPE == SerializationType::JSON_MSGPACK, bool> = true>
bool deserialize(const std::uint8_t *data, std::size_t size, T &obj)
template<typename T>
bool serialize(const T &obj, std::vector<std::uint8_t> &data, SerializationType type)
template<typename T>
std::vector<std::uint8_t> serialize(const T &obj, SerializationType type)
template<typename T>
bool deserialize(const std::uint8_t *data, std::size_t size, T &obj, SerializationType type)
template<typename T>
bool deserialize(const std::vector<std::uint8_t> &data, T &obj, SerializationType type)
template<SerializationType TYPE, typename T>
std::vector<std::uint8_t> serialize(const T &obj)
template<SerializationType TYPE, typename T>
bool deserialize(const std::vector<std::uint8_t> &data, T &obj)
template<typename T>
bool serialize(const T &obj, std::vector<std::uint8_t> &data)
template<typename T>
std::vector<std::uint8_t> serialize(const T &obj)
template<typename T>
bool deserialize(const std::uint8_t *data, std::size_t size, T &obj)
template<typename T>
bool deserialize(const std::vector<std::uint8_t> &data, T &obj)
class VectorWriter

Got questions?

Head over to Discussion Forum for technical support or any other questions you might have.