Supported countries

Unlike other companies, we don’t segment our implementation by region; we group countries by charset (e.g. Latin, Arabic, Chinese…). The reference models provided on Github are trained on the Latin charset ([A-Z0-9]) using license plates from more than 150 countries. The dataset predominantly contains European license plates, as this is where our company is based and where most of our customers use the SDK. The implementation will work with any country using the Latin charset, such as the USA, Canada, Russia, Armenia, Monaco, India, the UK, Turkey, Argentina, Mexico, Indonesia, the Philippines, New Zealand, Australia, Brazil, South Africa, Mauritania, Senegal… If you run into accuracy issues with your country, please let us know and we’ll add more samples to the dataset. If you can provide your own dataset, even better.

Starting with version 2.7.0 we support Korean license plates.

We could pack all the charsets into a single model, but accuracy would drop by 17%, which is why we keep them separate.

We have a “write-once-and-train-everywhere” implementation, which means the code currently used with the Latin charset will work with Japanese, Chinese, Arabic or any other charset without a single modification. You don’t even need to update the SDK (or your code): just drop in the newly trained data and start testing.
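To illustrate the idea, switching charsets is a pure data/configuration change. The sketch below is only a minimal illustration: the `charset` and `assets_folder` keys, the folder layout, and the commented-out `engine.init()` call are placeholders we assume for this example, not the SDK’s documented API.

```python
import json

def build_config(charset: str, assets_folder: str) -> str:
    """Build an engine configuration string for a given charset.

    The application code never changes: only the trained data
    (pointed to by assets_folder) and the charset label do.
    NOTE: key names here are illustrative placeholders, not the
    SDK's actual configuration schema.
    """
    return json.dumps({
        "charset": charset,              # e.g. "latin", "korean"
        "assets_folder": assets_folder,  # folder holding the trained models
    })

# Same application code, two different trained datasets:
latin_config = build_config("latin", "./assets/models/latin")
korean_config = build_config("korean", "./assets/models/korean")
# engine.init(latin_config)   # hypothetical init call
# engine.init(korean_config)  # same code, different trained data
```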

The license plate detector is charset-agnostic and supports all countries.

The recognizer is also charset-agnostic and supports all countries, but you have to provide the right trained data (same model architecture as Latin). If you have a dataset with a non-Latin charset and want it included in the SDK, please contact us and we’ll do it for free.