Configuration options

The configuration options are provided when the engine is initialized and they are case-sensitive.

Sample config on ARM devices:

{
    "debug_level": "info",
    "gpgpu_enabled": true,

    "detect_minscore": 0.1,
    "detect_quantization_enabled": true,

    "recogn_score_type": "min",
    "recogn_minscore": 0.3,
    "recogn_rectify_enabled": false,
    "recogn_quantization_enabled": true
}
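
In host applications the options are typically passed to the engine as a serialized JSON string. A minimal sketch in Python building the sample ARM configuration above (how the resulting string is handed to the engine's init function depends on your binding, so that step is omitted):

```python
import json

# Build the sample ARM configuration as a Python dict.
config = {
    "debug_level": "info",
    "gpgpu_enabled": True,

    "detect_minscore": 0.1,
    "detect_quantization_enabled": True,

    "recogn_score_type": "min",
    "recogn_minscore": 0.3,
    "recogn_rectify_enabled": False,
    "recogn_quantization_enabled": True,
}

# Serialize to the JSON string expected at engine initialization.
# Keys are case-sensitive, so they must match the documented names exactly.
config_json = json.dumps(config)
print(config_json)
```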

debug_level

Defines the debug level to output on the console. Use “verbose” for diagnostics, “info” during development and “warn” in production.
Default: “info”

Type: string
Pattern: “verbose” | “info” | “warn” | “error” | “fatal”

debug_write_input_image_enabled

Whether to write the transformed input image to disk. This could be useful for debugging.
Default: false

Type: bool
Pattern: true | false

debug_internal_data_path

Path to the folder where the transformed input image will be written. Used only if debug_write_input_image_enabled is true.
Default: “”

Type: string
Pattern: folder path

license_token_file

Path to the file containing the license token. First generate a Runtime Key using the requestRuntimeLicenseKey() function, then activate the key to get a token. Use either license_token_file or license_token_data, but not both.

Type: string
Pattern: file path

license_token_data

Base64 string representing the license token. First generate a Runtime Key using the requestRuntimeLicenseKey() function, then activate the key to get a token. Use either license_token_file or license_token_data, but not both.

Type: string
Pattern: base64

num_threads

Defines the maximum number of threads to use. You should not change this value unless you know what you’re doing. Set to -1 to let the SDK choose the right value, which will likely equal the number of virtual cores. For example, on an octa-core device the maximum number of threads will be 8.
Default: -1

Type: int
Pattern: ]-inf, +inf[
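
As a point of reference, the “number of virtual cores” the SDK is likely to pick for the auto value can be queried in Python (this only illustrates the -1 behavior; the SDK’s internal choice may differ):

```python
import os

# -1 in the config means "auto"; the SDK will likely pick the number of
# virtual (logical) cores, which the standard library reports as:
num_threads = -1
auto_threads = os.cpu_count() if num_threads == -1 else num_threads
print(auto_threads)  # e.g. 8 on an octa-core device
```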

gpgpu_enabled

Whether to enable GPGPU computing. This will enable or disable GPGPU computing in the computer vision and deep learning libraries. On ARM devices this flag is ignored when a fixed-point (integer) math implementation exists for a well-defined function. For example, GPGPU computing is disabled for bilinear scaling as we have a fixed-point SIMD-accelerated implementation. The same applies to many deep learning parts as we’re using QINT8 quantized inference.
Default: true

Type: bool
Pattern: true | false

npu_enabled

Whether to enable NPU (Neural Processing Unit) acceleration (Amlogic, NXP,…). Please read https://github.com/DoubangoTelecom/ultimateALPR-SDK/blob/master/NPU.md
Default: true
Available since: 3.9.0

Type: bool
Pattern: true | false

max_latency

The parallel processing method could introduce a delay/latency in the delivery callback on low-end CPUs. This parameter controls the maximum latency you can tolerate. The unit is number of frames. The default value is -1, which means automatic.
Default: -1

Type: int
Pattern: [0, +inf[

ienv_enabled

Whether to enable Image Enhancement for Night-Vision (IENV).
Default: false

Type: bool
Pattern: true | false

assets_folder

Path to the folder containing the configuration files and deep learning models. Default value is the current folder.
The SDK will look for the models in $(assets_folder)/models folder.
Default: .
Available since: 2.1.0

Type: string
Pattern: folder path

charset

Defines the charset (alphabet) to use for the recognizer. Default: latin
Available since: 2.7.0

Type: string
Pattern: “latin” | “korean”

openvino_enabled

Whether to use OpenVINO instead of TensorFlow as the deep learning backend engine. OpenVINO is used for detection and classification but not for OCR. OpenVINO is always faster than TensorFlow on Intel products (CPUs, VPUs, GPUs, FPGAs…) and we highly recommend using it. We require a CPU with support for both AVX2 and FMA features before trying to load the OpenVINO plugin (shared library). OpenVINO will be disabled, with a fallback to TensorFlow, if these CPU features are not detected.
Default: true
Available since: 3.0.0

Type: bool
Pattern: true | false

openvino_device

OpenVINO device to use for computations. We recommend using “CPU”, which is always correct. If you have an Intel GPU, VPU or FPGA, you can change this value. If you try to use any value other than “CPU” without having the right device, OpenVINO will be completely disabled, with a fallback to TensorFlow.
Default: “CPU”
Available since: 3.0.0

Type: string
Pattern: “GNA” | “HETERO” | “CPU” | “MULTI” | “GPU” | “MYRIAD” | “HDDL” | “FPGA”

detect_roi

Defines the Region Of Interest (ROI) for the detector. Any pixels outside the region of interest will be ignored by the detector. Defining a WxH region of interest instead of resizing the image to WxH is very important: you keep the same quality when you define a ROI, while you lose quality when using the latter.
Default: [0.f, 0.f, 0.f, 0.f]

Type: float[4]
Pattern: [left, right, top, bottom]

detect_minscore

Defines a threshold for the detection score. Any detection with a score below that threshold will be ignored. 0.f being poor confidence and 1.f excellent confidence.
Default: 0.3f

Type: float
Pattern: ]0.f, 1.f]

detect_gpu_backend

Defines the GPU backend to use. This entry is only meaningful when gpgpu_enabled is equal to true. You should not set this value and must let the SDK choose the right value based on the system information. On desktop implementations, this entry will be ignored if support for CUDA is found. This value is also ignored when detect_quantization_enabled is equal to true, as quantized operations are never executed on a GPU.

Type: string
Pattern: “opengl” | “opencl” | “nnapi” | “metal” | “none”

detect_quantization_enabled

Whether to enable quantization on ARM devices. Please note that quantized functions never run on GPU, as such devices are not suitable for integer operations: GPUs are designed and optimized for floating-point math. Any function with a dual implementation (GPU and quantized) will run on GPU if this entry is set to false and on CPU if set to true. Quantized inference brings speed but slightly decreases accuracy. We think it’s worth it and you should set this flag to true. In any case, if you’re running a trial version, an assertion will be raised when you try to set this entry to false.
Default: true

Type: bool
Pattern: true | false

car_noplate_detect_enabled

Whether to return cars with no plate. By default, any car without a plate will be silently ignored.
Default: false
Available since: 3.2.0

Type: bool
Pattern: true | false

car_noplate_detect_min_score

Defines a threshold for the detection score for cars with no plate. Any detection with a score below that threshold will be ignored. 0.f being poor confidence and 1.f excellent confidence.
Default: 0.8f
Available since: 3.2.0

Type: float
Pattern: [0.f, 1.f]

pyramidal_search_enabled

Whether to enable pyramidal search. Pyramidal search is an advanced feature to accurately detect very small or far away license plates.
Default: true

Type: bool
Pattern: true | false

pyramidal_search_sensitivity

Defines how sensitive the pyramidal search anchor resolution function should be. The higher this value is, the higher the number of pyramid levels will be. More levels means better accuracy but higher CPU usage and inference time. Pyramidal search will be disabled if this value is equal to 0.
Default: 0.28f

Type: float
Pattern: [0.f, 1.f]

pyramidal_search_minscore

Defines a threshold for the detection score associated with the plates retrieved after pyramidal search. Any detection with a score below that threshold will be ignored. 0.f being poor confidence and 1.f excellent confidence.
Default: 0.8f

Type: float
Pattern: ]0.f, 1.f]

pyramidal_search_min_image_size_inpixels

Minimum image size (max[width, height]) in pixels to trigger pyramidal search. Pyramidal search will be disabled if the image size is less than this value. Using pyramidal search on small images is useless.
Default: 800

Type: int
Pattern: [0, +inf[

pyramidal_search_quantization_enabled

Whether to enable quantization on ARM devices. Please note that quantized functions never run on GPU, as such devices are not suitable for integer operations: GPUs are designed and optimized for floating-point math. Any function with a dual implementation (GPU and quantized) will run on GPU if this entry is set to false and on CPU if set to true. Quantized inference brings speed but slightly decreases accuracy. We think it’s worth it and you should set this flag to true. In any case, if you’re running a trial version, an assertion will be raised when you try to set this entry to false.
Default: true

Type: bool
Pattern: true | false

klass_lpci_enabled

Whether to enable the License Plate Country Identification (LPCI) function. To avoid adding latency to the pipeline, only enable this function if you really need it.
Default: false
Available since: 3.0.0

Type: bool
Pattern: true | false

klass_vcr_enabled

Whether to enable the Vehicle Color Recognition (VCR) function. To avoid adding latency to the pipeline, only enable this function if you really need it.
Default: false
Available since: 3.0.0

Type: bool
Pattern: true | false

klass_vmmr_enabled

Whether to enable the Vehicle Make Model Recognition (VMMR) function. To avoid adding latency to the pipeline, only enable this function if you really need it.
Default: false
Available since: 3.0.0

Type: bool
Pattern: true | false

klass_vbsr_enabled

Whether to enable the Vehicle Body Style Recognition (VBSR) function. To avoid adding latency to the pipeline, only enable this function if you really need it.
Default: false
Available since: 3.2.0

Type: bool
Pattern: true | false

klass_vcr_gamma

1/G coefficient value to use for the gamma correction operation in order to enhance the car color before applying VCR classification. More information on gamma correction can be found at https://en.wikipedia.org/wiki/Gamma_correction. Values higher than 1.0f mean lighter, values lower than 1.0f mean darker, and a value equal to 1.0f bypasses the gamma correction operation.
Default: 1.5f
Available since: 3.0.0

Type: float
Pattern: [0.f, +inf[
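
The effect of the coefficient can be sketched with the standard gamma-correction formula on a normalized pixel value (a simplified illustration; the SDK’s exact implementation isn’t documented here):

```python
def gamma_correct(value, gamma):
    """Apply gamma correction to a normalized pixel value in [0, 1].

    With out = in ** (1 / gamma), a gamma above 1.0 lightens the image,
    a gamma below 1.0 darkens it, and 1.0 leaves it unchanged.
    """
    return value ** (1.0 / gamma)

pixel = 0.5
lighter = gamma_correct(pixel, 1.5)    # default klass_vcr_gamma: lightens
darker = gamma_correct(pixel, 0.5)     # below 1.0: darkens
unchanged = gamma_correct(pixel, 1.0)  # bypass
```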

recogn_score_type

Defines the overall score type. The recognizer outputs a recognition score ([0.f, 1.f]) for every character in the license plate. The score type defines how to compute the overall score.
- min: Takes the minimum score.
- mean: Takes the average score.
- median: Takes the median score.
- max: Takes the maximum score.
- minmax: Takes (max + min) * 0.5f.
The min score is the most robust type as it ensures that every character has at least a certain confidence value. The median score is the default type as it provides a higher recall. In production we recommend using the min type.
Default: median
Recommended: min

Type: string
Pattern: “min” | “mean” | “median” | “max” | “minmax”
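
The five aggregation types can be sketched as follows (the per-character scores are illustrative):

```python
import statistics

def overall_score(char_scores, score_type):
    """Aggregate per-character recognition scores into an overall score."""
    if score_type == "min":
        return min(char_scores)
    if score_type == "mean":
        return statistics.mean(char_scores)
    if score_type == "median":
        return statistics.median(char_scores)
    if score_type == "max":
        return max(char_scores)
    if score_type == "minmax":
        return (max(char_scores) + min(char_scores)) * 0.5
    raise ValueError(f"unknown score type: {score_type}")

# One score per character of a hypothetical plate.
scores = [0.9, 0.8, 0.95, 0.4, 0.85]
print(overall_score(scores, "min"))     # 0.4: robust, every char scored at least this
print(overall_score(scores, "median"))  # 0.85: higher recall
```

Note how a single weak character drags the min score down while barely moving the median, which is why min is the stricter choice for production.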

recogn_minscore

Defines a threshold for the overall recognition score. Any recognition with a score below that threshold will be ignored. The overall score is computed based on recogn_score_type. 0.f being poor confidence and 1.f excellent confidence.
Default: 0.3f

Type: float
Pattern: ]0.f, 1.f]

recogn_rectify_enabled

Whether to add a rectification layer between the detector’s output and the recognizer’s input. A rectification layer is used to suppress the distortion. A plate is distorted when it’s skewed and/or slanted. The rectification layer will deslant and deskew the plate to make it straight, which makes the recognition more accurate. Please note that you only need to enable this feature when the license plates are highly distorted: the implementation can handle moderate distortion without a rectification layer. The rectification layer adds many CPU-intensive operations to the pipeline, which decreases the frame rate.
More info on the rectification layer can be found here.
Default: false

Type: bool
Pattern: true | false

recogn_rectify_polarity

This entry is only used when recogn_rectify_enabled is equal to true. In order to accurately estimate the distortion we need to know the polarity. You should set the value to both to let the SDK find the real polarity at runtime. The module used to estimate the polarity is named the polarifier. The polarifier isn’t immune to errors and may miss the correct polarity, which is why this entry can be used to define a fixed value. Defining a value other than both means the polarifier will be disabled and we’ll assume all plates have the defined polarity value.
Default: both

Type: string
Pattern: “both” | “dark_on_bright” | “bright_on_dark”

recogn_rectify_polarity_preferred

This entry is only used when recogn_rectify_enabled is equal to true. Unlike recogn_rectify_polarity, this entry is used as a “hint” for the polarifier. The polarifier will give more weight to the polarity value defined by this entry, using it as a tie-breaker.
Default: dark_on_bright

Type: string
Pattern: “both” | “dark_on_bright” | “bright_on_dark”

recogn_gpu_backend

Defines the GPU backend to use. This entry is only meaningful when gpgpu_enabled is equal to true. You should not set this value and must let the stack choose the right value based on the system information. On desktop implementations, this entry will be ignored if support for CUDA is found. This value is also ignored when recogn_quantization_enabled is equal to true, as quantized operations are never executed on a GPU.

Type: string
Pattern: “opengl” | “opencl” | “nnapi” | “metal” | “none”

recogn_quantization_enabled

Whether to enable quantization on ARM devices. Please note that quantized functions never run on GPU, as such devices are not suitable for integer operations: GPUs are designed and optimized for floating-point math. Any function with a dual implementation (GPU and quantized) will run on GPU if this entry is set to false and on CPU if set to true. Quantized inference brings speed but slightly decreases accuracy. We think it’s worth it and you should set this flag to true. In any case, if you’re running a trial version, an assertion will be raised when you try to set this entry to false.
Default: true

Type: bool
Pattern: true | false