Debugging DepthAI pipeline¶
Currently, tools for debugging the DepthAI pipeline are limited. We plan to create software that tracks all messages and queues, which would let users debug a "frozen" pipeline (usually caused by a filled-up blocking queue) much more easily.
DepthAI debugging level¶
You can enable debugging by changing the debug level. It is set to warn by default.
Level | Logging
---|---
critical | Only a critical error that stops/crashes the program.
error | Errors will not stop the program, but the failed action won't complete.
warn | Warnings are printed in cases where user action could improve or fix certain behavior. This is the default level.
info | Will print information about CPU/RAM consumption, temperature, CMX slices and SHAVE core allocation.
debug | Useful especially when starting and stopping the pipeline; will print state transitions (e.g. when the XLink connection gets closed), clearing of queues, firmware version, etc.
trace | Trace will print out a Message whenever one is received from the device.
Debugging can be enabled either in code:
import depthai as dai

with dai.Device() as device: # Initialize device
    # Set debugging level
    device.setLogLevel(dai.LogLevel.DEBUG)
    device.setLogOutputLevel(dai.LogLevel.DEBUG)
Here setLogLevel sets the verbosity that filters which messages get sent from the device to the host, while setLogOutputLevel sets the verbosity that filters which messages get printed on the host (stdout). This separation lets you capture the log messages internally without printing them to stdout, e.g. to display them somewhere else or analyze them.
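A minimal sketch of that pattern, assuming the addLogCallback API and the LogMessage payload/level fields (verify against your depthai version); the device.log file name is just an illustration:

import depthai as dai

with dai.Device() as device:
    # Send debug-level messages from the device to the host...
    device.setLogLevel(dai.LogLevel.DEBUG)
    # ...but only print warnings and above to stdout
    device.setLogOutputLevel(dai.LogLevel.WARN)

    # Handle the captured messages ourselves, e.g. append them to a file
    def on_log(msg: dai.LogMessage):
        with open('device.log', 'a') as f:  # hypothetical log file
            f.write(f"[{msg.level}] {msg.payload}\n")

    device.addLogCallback(on_log)
    # ... start and run the pipeline as usual ...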
You can also enable debugging using the environment variable DEPTHAI_LEVEL:

# Linux/macOS
DEPTHAI_LEVEL=debug python3 script.py

# Windows PowerShell
$env:DEPTHAI_LEVEL='debug'
python3 script.py
# Turn debugging off afterwards
Remove-Item Env:\DEPTHAI_LEVEL

# Windows CMD
set DEPTHAI_LEVEL=debug
python3 script.py
# Turn debugging off afterwards
set DEPTHAI_LEVEL=
Script node logging¶
Currently, the best way to debug behaviour inside the Script node is to use the node.warn('') logging capability. This sends the warning back to the host, where it gets printed to the user. You can also print values, such as frame sequence numbers, which is valuable when debugging on-device frame-syncing logic (see the sequence-number sketch after the example below).
import depthai as dai

pipeline = dai.Pipeline()
script = pipeline.create(dai.node.Script)
script.setScript("""
buf = NNData(13)
buf.setLayer("fp16", [1.0, 1.2, 3.9, 5.5])
buf.setLayer("uint8", [6, 9, 4, 2, 0])
# Logging
node.warn(f"Names of layers: {buf.getAllLayerNames()}")
node.warn(f"Number of layers: {len(buf.getAllLayerNames())}")
node.warn(f"FP16 values: {buf.getLayerFp16('fp16')}")
node.warn(f"UINT8 values: {buf.getLayerUInt8('uint8')}")
""")
The code above will print the following values to the user:
[Script(0)] [warning] Names of layers: ['fp16', 'uint8']
[Script(0)] [warning] Number of layers: 2
[Script(0)] [warning] FP16 values: [1.0, 1.2001953125, 3.900390625, 5.5]
[Script(0)] [warning] UINT8 values: [6, 9, 4, 2, 0]
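A minimal sketch of the frame-sequence use case mentioned above (the 'frames' input name and the camera wiring are illustrative assumptions, not part of the original example):

import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)

script = pipeline.create(dai.node.Script)
script.setScript("""
while True:
    frame = node.io['frames'].get()  # blocking read from the 'frames' input
    # Log each frame's sequence number to spot drops or sync mismatches
    node.warn(f"Frame seqNum: {frame.getSequenceNum()}")
""")

# Feed camera preview frames into the Script node's 'frames' input
cam.preview.link(script.inputs['frames'])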
Resource Debugging¶
By enabling the info log level (or lower), DepthAI will print the usage of hardware resources, specifically SHAVE core and CMX memory allocation:
NeuralNetwork allocated resources: shaves: [0-11] cmx slices: [0-11] # 12 SHAVES/CMXs allocated to NN
ColorCamera allocated resources: no shaves; cmx slices: [13-15] # 3 CMXs allocated to Color and Mono cameras (ISP)
MonoCamera allocated resources: no shaves; cmx slices: [13-15]
StereoDepth allocated resources: shaves: [12-12] cmx slices: [12-12] # StereoDepth node consumes 1 CMX and 1 SHAVE core
ImageManip allocated resources: shaves: [15-15] no cmx slices. # ImageManip node(s) consume 1 SHAVE core
SpatialLocationCalculator allocated resources: shaves: [14-14] no cmx slices. # SLC consumes 1 SHAVE core
In total, this pipeline consumes 15 SHAVE cores and 16 CMX slices. The pipeline is running an object detection model compiled for 6 SHAVE cores.
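For example, running your script with the environment variable described earlier set to info will show this allocation printout at pipeline startup:

DEPTHAI_LEVEL=info python3 script.py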