Configuration options
The configuration options are provided when the engine is initialized. Keys are case-sensitive.
Sample config on ARM devices:
{
"debug_level": "info",
"gpgpu_enabled": true,
"detect_minscore": 0.5,
"detect_quantization_enabled": true,
"recogn_score_type": "min",
"recogn_minscore": 0.2,
"recogn_rectify_enabled": false,
"recogn_quantization_enabled": true
}
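The engine expects this configuration as a JSON string. A minimal Python sketch of assembling it before initialization (the `build_config` helper is illustrative, not part of the SDK; only the key names and values come from the sample above):

```python
import json

# Defaults mirroring the ARM sample above.
DEFAULT_CONFIG = {
    "debug_level": "info",
    "gpgpu_enabled": True,
    "detect_minscore": 0.5,
    "detect_quantization_enabled": True,
    "recogn_score_type": "min",
    "recogn_minscore": 0.2,
    "recogn_rectify_enabled": False,
    "recogn_quantization_enabled": True,
}

def build_config(**overrides) -> str:
    """Merge overrides into the defaults and return the JSON string.

    Keys are case-sensitive, so override names must match exactly.
    """
    return json.dumps({**DEFAULT_CONFIG, **overrides})
```

The resulting string would then be handed to the engine's initialization call, whose exact signature depends on the binding you use.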
debug_level
Defines the debug level to output on the console. Use “verbose” for diagnostics, “info” during development and “warn” in production.

type: string
pattern: “verbose” | “info” | “warn” | “error” | “fatal”
debug_write_input_image_enabled
Whether to write the transformed input image to disk. This could be useful for debugging.

type: bool
pattern: true | false
debug_internal_data_path
Path to the folder where the transformed input image will be written. Used only if debug_write_input_image_enabled is true.

type: string
pattern: folder path
license_token_file
Path to the file containing the license token. First you need to generate a Runtime Key using …

type: string
pattern: file path
license_token_data
Base64 string representing the license token. First you need to generate a Runtime Key using …

type: string
pattern: base64
num_threads
Defines the maximum number of threads to use. You should not change this value unless you know what you’re doing. Set to -1 to let the SDK choose the right value, which will likely be equal to the number of virtual cores. For example, on an octa-core device the maximum number of threads will be 8.

type: int
pattern: [-inf, +inf]
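The -1 behaviour described above can be sketched as follows (`resolve_num_threads` is an illustrative helper, not an SDK function; the assumption is that the SDK picks the virtual-core count):

```python
import os

def resolve_num_threads(num_threads: int) -> int:
    """Mimic the documented behaviour of num_threads:
    -1 lets the SDK pick a value, likely the number of virtual cores;
    any other value is used as-is."""
    if num_threads == -1:
        return os.cpu_count() or 1  # fall back to 1 if undetectable
    return num_threads
```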
gpgpu_enabled
Whether to enable GPGPU computing. This will enable or disable GPGPU computing in the computer vision and deep learning libraries. On ARM devices this flag will be ignored whenever a fixed-point (integer) math implementation exists for a well-defined function. For example, GPGPU computing will be disabled for bilinear scaling, as we have a fixed-point SIMD-accelerated implementation. The same applies to many deep learning parts, as we’re using QINT8 quantized inference.

type: bool
pattern: true | false
ielcd_enabled
Whether to enable the Image Enhancement for Low Contrast Document (IELCD) feature.

type: bool
pattern: true | false
assets_folder
Path to the folder containing the configuration files and deep learning models. The default value is the current folder.

type: string
pattern: folder path
detect_roi
Defines the Region Of Interest (ROI) for the detector. Any pixel outside the region of interest will be ignored by the detector. Defining a WxH region of interest instead of resizing the image to WxH is important: with a ROI you keep the original image quality, while with resizing you lose quality.

type: float[4]
pattern: [left, right, top, bottom]
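As a sketch of the [left, right, top, bottom] layout, the following clamps a ROI to the image bounds before use (`crop_roi` is a hypothetical helper; the exact pixel semantics are defined by the SDK):

```python
def crop_roi(image_width, image_height, roi):
    """roi = [left, right, top, bottom] in pixels, clamped to the image.
    Returns (x0, x1, y0, y1) bounding the region the detector would see."""
    left, right, top, bottom = roi
    x0 = max(0, int(left))
    x1 = min(image_width, int(right))
    y0 = max(0, int(top))
    y1 = min(image_height, int(bottom))
    return x0, x1, y0, y1
```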
detect_minscore
Defines a threshold for the detection score. Any detection with a score below this threshold will be ignored. 0.f means poor confidence and 1.f excellent confidence.

type: float
pattern: ]0.f, 1.f]
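The thresholding described above amounts to a simple filter; a sketch (the detection dictionaries and the `filter_detections` helper are illustrative, not the SDK's data structures):

```python
def filter_detections(detections, detect_minscore=0.5):
    """Drop any detection whose score falls below the threshold,
    as the engine does internally with detect_minscore."""
    return [d for d in detections if d["score"] >= detect_minscore]
```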
detect_gpu_backend
Defines the GPU backend to use. This entry is only meaningful when gpgpu_enabled is equal to true. You should not set this value; let the SDK choose the right one based on the system information. On desktop implementations, this entry will be ignored if support for CUDA is found. This value is also ignored when detect_quantization_enabled is equal to true, as quantized operations are never executed on a GPU.

type: string
pattern: “opengl” | “opencl” | “nnapi” | “metal” | “none”
detect_quantization_enabled
Whether to enable quantization on ARM devices. Please note that quantized functions never run on the GPU, as such devices are not suitable for integer operations: GPUs are designed and optimized for floating-point math. Any function with a dual implementation (GPU and quantized) will run on the GPU if this entry is set to false and on the CPU if set to true. Quantized inference brings speed but slightly decreases accuracy. We think it’s worth it and you should set this flag to true. Note that if you’re running a trial version, an assertion will be raised when you try to set this entry to false.

type: bool
pattern: true | false
recogn_score_type
Defines the overall score type. The recognizer outputs a recognition score ([0.f, 1.f]) for every character in the credit card. The score type defines how to compute the overall score.

type: string
pattern: “min” | “mean” | “median” | “max” | “minmax”
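The score types can be sketched as follows. The formula for “minmax” is not documented here, so it is guessed as the mean of the min and the max; `overall_score` is an illustrative helper, not an SDK function:

```python
import statistics

def overall_score(char_scores, score_type="min"):
    """Combine per-character recognition scores ([0.0, 1.0]) into one
    overall score, following recogn_score_type."""
    if score_type == "min":
        return min(char_scores)
    if score_type == "max":
        return max(char_scores)
    if score_type == "mean":
        return statistics.mean(char_scores)
    if score_type == "median":
        return statistics.median(char_scores)
    if score_type == "minmax":
        # Assumption: semantics of "minmax" are not specified in this page.
        return (min(char_scores) + max(char_scores)) / 2
    raise ValueError(f"unknown score type: {score_type!r}")
```

Note that “min” (the sample default) is the strictest choice: a single low-confidence character drags the overall score down, which pairs naturally with recogn_minscore.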
recogn_minscore
Defines a threshold for the overall recognition score. Any recognition with a score below this threshold will be ignored. The overall score is computed based on recogn_score_type. 0.f means poor confidence and 1.f excellent confidence.

type: float
pattern: ]0.f, 1.f]
recogn_rectify_enabled
Whether to add a rectification layer between the detector’s output and the recognizer’s input. A rectification layer is used to suppress distortion. A card is distorted when it’s skewed and/or slanted. The rectification layer will deslant and deskew the card to make it straight, which makes the recognition more accurate. Please note that you only need to enable this feature when the cards are highly distorted; the implementation can handle moderate distortion without a rectification layer. The rectification layer adds many CPU-intensive operations to the pipeline, which decreases the frame rate.

type: bool
pattern: true | false
recogn_rectify_polarity
This entry is only used when recogn_rectify_enabled is equal to true. In order to accurately estimate the distortion we need to know the polarity. You should set the value to “both” to let the SDK find the real polarity at runtime. The module used to estimate the polarity is named the polarifier. The polarifier isn’t immune to errors and could miss the correct polarity, which is why this entry can be used to define a fixed value. Defining a value other than “both” means the polarifier will be disabled and all cards will be assumed to have the defined polarity.

type: string
pattern: “both” | “dark_on_bright” | “bright_on_dark”
recogn_rectify_polarity_preferred
This entry is only used when recogn_rectify_enabled is equal to true. Unlike recogn_rectify_polarity, this entry is used as a “hint” for the polarifier. The polarifier will give more weight to the polarity value defined by this entry as a tie breaker.

type: string
pattern: “both” | “dark_on_bright” | “bright_on_dark”
recogn_gpu_backend
Defines the GPU backend to use. This entry is only meaningful when gpgpu_enabled is equal to true. You should not set this value; let the stack choose the right one based on the system information. On desktop implementations, this entry will be ignored if support for CUDA is found. This value is also ignored when recogn_quantization_enabled is equal to true, as quantized operations are never executed on a GPU.

type: string
pattern: “opengl” | “opencl” | “nnapi” | “metal” | “none”
recogn_quantization_enabled
Whether to enable quantization on ARM devices. Please note that quantized functions never run on the GPU, as such devices are not suitable for integer operations: GPUs are designed and optimized for floating-point math. Any function with a dual implementation (GPU and quantized) will run on the GPU if this entry is set to false and on the CPU if set to true. Quantized inference brings speed but slightly decreases accuracy. We think it’s worth it and you should set this flag to true. Note that if you’re running a trial version, an assertion will be raised when you try to set this entry to false.

type: bool
pattern: true | false