Digit recognition with TI's microcontrollers (MCUs)
Edge AI digit recognition in a compact, low-power device footprint, using cost- and power-optimized models for MCU solutions
Application overview
Character recognition at the edge involves extracting text from visual data locally, on hardware with limited memory and power. The primary challenge is optimizing complex neural networks for real-time inference on low-cost microcontrollers.
Key applications:
- Industrial IoT: autonomous reading of iron plates, labels, and inventory tags.
- Smart devices: gesture-based input for wearables and appliances.
- Grid infrastructure: real-time digitization of analog meter readings.
Starting evaluation
Data collection
The MNIST (Modified National Institute of Standards and Technology) dataset is the industry-standard "Hello World" benchmark for handwritten character recognition. It is primarily used to train and test image processing systems for identifying numerical digits.
Data quality assessment
Content: 70,000 grayscale images of single handwritten digits (0–9).
Structure: Split into 60,000 training images and 10,000 testing images.
Resolution: Each image is a square 28x28-pixel grid (784 pixels per image).
Preprocessing: The digits are size-normalized and centered in a fixed-size frame, making them easier for simple algorithms to process than raw data.
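The specifications above translate directly into a preprocessing step before inference. A minimal sketch, assuming standard MNIST uint8 images (the random array below stands in for a real sample):

```python
import numpy as np

def preprocess(image_u8: np.ndarray) -> np.ndarray:
    """Scale a 28x28 uint8 MNIST image to float32 in [0, 1]
    and add batch/channel axes for a CNN input tensor."""
    assert image_u8.shape == (28, 28)
    x = image_u8.astype(np.float32) / 255.0   # [0, 255] -> [0.0, 1.0]
    return x.reshape(1, 28, 28, 1)            # NHWC layout

# Synthetic stand-in for one MNIST sample (random pixels, not real data)
sample = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)
tensor = preprocess(sample)
print(tensor.shape)  # (1, 28, 28, 1)
```

Because the MNIST digits are already size-normalized and centered, no cropping or alignment is needed before this step.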
Build and train your model
Accelerate development with CCStudio Edge AI Studio, an intuitive graphical environment for designing, training, and deploying AI models.
Explore and train multiple architectures through an easy-to-use, GUI-based workflow. Users can also explore the comprehensive CLI tools to modify and retrain the models for better performance and optimization.
Find the right model for your needs
We use the LeNet-5 model, an open-source architecture, and optimize and quantize it for our NPU, yielding 99% accuracy with a latency of just 6.05 ms.
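For reference, the classic LeNet-5 layer dimensions can be walked through with a small helper. This is a sketch of the textbook architecture (assuming the usual 32x32 padded MNIST input), not the quantized variant tuned for the NPU:

```python
def conv_out(size: int, kernel: int, stride: int = 1) -> int:
    """Output spatial size of a 'valid' convolution or pooling window."""
    return (size - kernel) // stride + 1

# Classic LeNet-5 on a 32x32 input (the 28x28 MNIST digit is zero-padded)
s = 32
s = conv_out(s, 5)        # C1: 5x5 conv, 6 feature maps  -> 28x28
s = conv_out(s, 2, 2)     # S2: 2x2 pooling               -> 14x14
s = conv_out(s, 5)        # C3: 5x5 conv, 16 feature maps -> 10x10
s = conv_out(s, 2, 2)     # S4: 2x2 pooling               -> 5x5
flat = s * s * 16          # flatten: 5*5*16 = 400 features
# Fully connected head: 400 -> 120 -> 84 -> 10 digit classes
print(flat)  # 400
```

The small footprint of this head (a few hundred features into three dense layers) is what makes the network practical to quantize and fit within MCU flash and SRAM budgets.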
Deploying your model
CCStudio™ Edge AI Studio gives a start-to-finish workflow for deploying trained models to embedded targets. For developers seeking deeper customization and control, the deployment guides offer a comprehensive framework for building and integrating Edge AI functionality into your own embedded applications:
- Deploy machine learning models on MSPM0 microcontrollers using EdgeAI Studio GUI Tools
- Deploy machine learning models on MSPM0 microcontrollers using CLI tools
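Whichever path you choose, deployment to an MCU typically involves quantizing float weights to int8 and emitting them as C arrays for the firmware build. A minimal sketch of symmetric per-tensor quantization — the helper names are illustrative, not part of TI's tools:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization: w ~ scale * q."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def to_c_array(name: str, q: np.ndarray) -> str:
    """Emit quantized weights as a C header snippet."""
    vals = ", ".join(str(v) for v in q.flatten())
    return f"const int8_t {name}[{q.size}] = {{ {vals} }};"

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(4, 4)).astype(np.float32)  # toy weight tensor
q, scale = quantize_int8(w)
header = to_c_array("conv1_weights", q)
# Dequantization error stays within half a quantization step:
assert np.abs(q.astype(np.float32) * scale - w).max() <= scale / 2 + 1e-6
```

The GUI and CLI flows automate this step; the sketch only shows why int8 storage cuts model flash usage by roughly 4x versus float32 with bounded error.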
Choosing the right device for you
| Product number | Processing core | NPU available | Clock frequency (MHz) | Latency (ms) | Flash (kB) | SRAM (kB) |
|---|---|---|---|---|---|---|
| MSPM0G5187 | Arm® Cortex®-M0+ core | Yes | 80 | 0.39 | 5.55 | 2.4 |
All the hardware, software and resources you’ll need to get started
Hardware
LP-MSPM0G5187
MSPM0G5187 LaunchPad™ development kit evaluation module. This kit is needed for Edge AI inferencing: data is captured using the ADC and passed to the AI model after feature extraction.
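As a rough illustration of the capture path above, here is a hypothetical feature-extraction sketch. The frame size, 12-bit ADC range, and the mean/min/max/std features are assumptions for illustration, not TI's actual pipeline:

```python
import statistics

def extract_features(adc_samples, frame_len=64):
    """Split a raw 12-bit ADC trace into fixed-length frames and
    compute simple per-frame statistics as model input features."""
    feats = []
    for i in range(0, len(adc_samples) - frame_len + 1, frame_len):
        frame = adc_samples[i:i + frame_len]
        feats.extend([
            sum(frame) / frame_len,      # mean level
            min(frame), max(frame),      # signal envelope
            statistics.pstdev(frame),    # activity / variance
        ])
    return feats

# Toy trace: 256 mid-scale ADC codes (12-bit full scale is 0..4095)
trace = [2048] * 256
features = extract_features(trace)
print(len(features))  # 4 frames * 4 features = 16
```

On the target, the equivalent step runs in C on the MCU between the ADC driver and the model's input buffer.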
Software & development tools
CCStudio™ Edge AI Studio
A fully integrated no-code solution for training and compiling AI models to deploy onto TI embedded microcontroller devices.
CLI tools
Use this end-to-end model development tool, which covers dataset handling, model training, and compilation.
MSPM0-SDK
The MSPM0 SDK provides the ultimate collection of software, tools, and documentation to accelerate the development of applications for the MSPM0 MCU platform under a single software package.