NeuralNetwork

This node runs neural inference on input data. Any OpenVINO-compatible neural network can be run with this node, as long as the VPU supports all of its layers. This lets you pick from 200+ pre-trained models from the Open Model Zoo and the DepthAI Model Zoo and run them directly on the OAK device.

The neural network has to be in the .blob format to be compatible with the VPU. Instructions on how to compile your neural network (NN) to .blob can be found here.
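If you prefer not to run the compiler yourself, the blobconverter Python package can fetch pre-compiled blobs from the model zoos. A minimal sketch (the model name and SHAVE count are illustrative):

import blobconverter

# Download (and locally cache) a pre-compiled .blob for an Open Model Zoo model
blobPath = blobconverter.from_zoo(name="mobilenet-ssd", shaves=6)
# blobPath can later be passed to nn.setBlobPath()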

How to place it

pipeline = dai.Pipeline()
nn = pipeline.create(dai.node.NeuralNetwork)

dai::Pipeline pipeline;
auto nn = pipeline.create<dai::node::NeuralNetwork>();

Inputs and Outputs

            ┌───────────────────┐
            │                   │       out
            │                   ├───────────►
            │                   │
            │   NeuralNetwork   │
input       │                   │ passthrough
───────────►│-------------------├───────────►
            │                   │
            └───────────────────┘

Message types

  • input - Buffer
  • out - NNData
  • passthrough - Buffer

Passthrough mechanism

The passthrough mechanism is very useful when a node's input is set to non-blocking, so that queued messages can be overwritten. In that case we don't know which message the node actually operated on (e.g. for the NN: was inference done on frame 25, or was 25 skipped and inference performed on frame 26?). The passthrough output forwards the exact message the operation was performed on. If the XLink host input queues are blocking and we read both the passthrough and the output, a blocking get() on each queue is guaranteed to return matching messages. They might not arrive at the same time, but both will arrive, each in the correct spot in its queue to be taken out together.
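For example, a sketch of that matched-pair pattern (stream names and queue sizes are illustrative; it assumes nn.out is already streamed to the host as a "nn" stream, as in the Usage section below):

# Also stream the exact frame the inference was performed on
xoutPass = pipeline.create(dai.node.XLinkOut)
xoutPass.setStreamName("pass")
nn.passthrough.link(xoutPass.input)

with dai.Device(pipeline) as device:
  qNn = device.getOutputQueue("nn", maxSize=4, blocking=True)
  qPass = device.getOutputQueue("pass", maxSize=4, blocking=True)
  while True:
    inNn = qNn.get()    # Blocking
    frame = qPass.get() # Blocking
    # inNn and frame now correspond to the same source message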

Usage

import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300) # Must match the NN input size
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath(bbBlobPath) # Path to your compiled .blob
cam.preview.link(nn.input)

# Send NN out to the host via XLink
nnXout = pipeline.create(dai.node.XLinkOut)
nnXout.setStreamName("nn")
nn.out.link(nnXout.input)

with dai.Device(pipeline) as device:
  qNn = device.getOutputQueue("nn")

  nnData = qNn.get() # Blocking

  # NN can output from multiple layers. Print all layer names:
  print(nnData.getAllLayerNames())

  # Get layer named "Layer1_FP16" as FP16
  layer1Data = nnData.getLayerFp16("Layer1_FP16")

  # You can now decode the output of your NN
#include <iostream>
#include "depthai/depthai.hpp"

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::ColorCamera>();
cam->setPreviewSize(300, 300); // Must match the NN input size
auto nn = pipeline.create<dai::node::NeuralNetwork>();
nn->setBlobPath(bbBlobPath);
cam->preview.link(nn->input);

// Send NN out to the host via XLink
auto nnXout = pipeline.create<dai::node::XLinkOut>();
nnXout->setStreamName("nn");
nn->out.link(nnXout->input);

// Constructing the device with the pipeline uploads and starts it
dai::Device device(pipeline);

auto qNn = device.getOutputQueue("nn");

auto nnData = qNn->get<dai::NNData>(); // Blocking

// NN can output from multiple layers. Print all layer names:
for(const auto& name : nnData->getAllLayerNames()) {
    std::cout << name << std::endl;
}

// Get layer named "Layer1_FP16" as FP16
auto layer1Data = nnData->getLayerFp16("Layer1_FP16");

// You can now decode the output of your NN
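How you decode the output depends entirely on the model. As a minimal Python sketch for a hypothetical classifier whose "Layer1_FP16" layer is a flat vector of class scores:

import numpy as np

# Hypothetical: treat the FP16 layer as a 1D vector of class scores
scores = np.array(nnData.getLayerFp16("Layer1_FP16"))
classId = int(np.argmax(scores))
print(f"class={classId} score={scores[classId]:.2f}")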

Reference

class depthai.node.NeuralNetwork
class Id

Node identifier. Unique for every node on a single Pipeline.

getAssetManager(self: depthai.Node) → depthai.AssetManager

getInputRefs(self: depthai.Node) → list[depthai.Node.Input]

getInputs(self: depthai.Node) → list[depthai.Node.Input]
getName(self: depthai.Node) → str
getNumInferenceThreads(self: depthai.node.NeuralNetwork) → int
getOutputRefs(self: depthai.Node) → list[depthai.Node.Output]

getOutputs(self: depthai.Node) → list[depthai.Node.Output]
getParentPipeline(self: depthai.Node) → depthai.Pipeline

setBlob(*args, **kwargs)

Overloaded function.

  1. setBlob(self: depthai.node.NeuralNetwork, blob: depthai.OpenVINO.Blob) -> None

  2. setBlob(self: depthai.node.NeuralNetwork, path: Path) -> None

setBlobPath(self: depthai.node.NeuralNetwork, path: Path) → None
setNumInferenceThreads(self: depthai.node.NeuralNetwork, numThreads: int) → None
setNumNCEPerInferenceThread(self: depthai.node.NeuralNetwork, numNCEPerThread: int) → None
setNumPoolFrames(self: depthai.node.NeuralNetwork, numFrames: int) → None
class dai::node::NeuralNetwork : public dai::NodeCRTP<Node, NeuralNetwork, NeuralNetworkProperties>

NeuralNetwork node. Runs a neural inference on input data.

Public Functions

NeuralNetwork(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)
NeuralNetwork(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId, std::unique_ptr<Properties> props)
void setBlobPath(const dai::Path &path)

Load network blob into assets and use once pipeline is started.

Exceptions
  • Error: if file doesn’t exist or isn’t a valid network blob.

Parameters
  • path: Path to network blob

void setBlob(OpenVINO::Blob blob)

Load network blob into assets and use once pipeline is started.

Parameters
  • blob: Network blob

void setBlob(const dai::Path &path)

Same functionality as the setBlobPath(). Load network blob into assets and use once pipeline is started.

Exceptions
  • Error: if file doesn’t exist or isn’t a valid network blob.

Parameters
  • path: Path to network blob

void setNumPoolFrames(int numFrames)

Specifies how many frames will be available in the pool.

Parameters
  • numFrames: How many frames the pool will have

void setNumInferenceThreads(int numThreads)

How many threads should the node use to run the network.

Parameters
  • numThreads: Number of threads to dedicate to this node

void setNumNCEPerInferenceThread(int numNCEPerThread)

How many Neural Compute Engines should a single thread use for inference

Parameters
  • numNCEPerThread: Number of NCE per thread

int getNumInferenceThreads()

How many inference threads will be used to run the network

Return

Number of threads, 0, 1 or 2. Zero means AUTO
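Taken together, a short Python tuning sketch (the values are illustrative starting points, not recommendations):

nn.setNumInferenceThreads(2)       # 0 = AUTO; at most 2
nn.setNumNCEPerInferenceThread(1)  # With 2 threads, one NCE each
nn.setNumPoolFrames(4)             # Frames available in the pool
print(nn.getNumInferenceThreads()) # -> 2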

Public Members

Input input = {*this, "in", Input::Type::SReceiver, true, 5, true, {{DatatypeEnum::Buffer, true}}}

Input message with data to be inferred upon. Default queue is blocking with size 5.

Output out = {*this, "out", Output::Type::MSender, {{DatatypeEnum::NNData, false}}}

Outputs NNData message that carries inference results

Output passthrough = {*this, "passthrough", Output::Type::MSender, {{DatatypeEnum::Buffer, true}}}

Passthrough message on which the inference was performed.

Suitable for when input queue is set to non-blocking behavior.
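In Python, that non-blocking setup looks like the following (standard Node.Input setters; a queue size of 1 means a new frame overwrites the pending one):

nn.input.setBlocking(False)
nn.input.setQueueSize(1)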

InputMap inputs

Inputs mapped to network inputs. Useful for inferring from separate data sources. Default input is non-blocking with queue size 1 and waits for messages.

OutputMap passthroughs

Passthroughs which correspond to specified input
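For example, a sketch of a hypothetical two-input network fed through the InputMap; the names "left" and "right" are assumptions and must match the model's input tensor names:

# Link each data source to its named network input
monoLeft.out.link(nn.inputs["left"])
monoRight.out.link(nn.inputs["right"])
# Each named input gets a corresponding passthrough output
nn.passthroughs["left"].link(xoutLeft.input)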

Public Static Attributes

static constexpr const char *NAME = "NeuralNetwork"
