electronics-journal.com
Texas Instruments Introduces AI-Enabled Microcontrollers with TinyEngine NPU
New MCU families integrate a neural processing unit and development tools to enable low-latency, energy-efficient edge AI in consumer, industrial, and robotics applications.
www.ti.com

Microcontrollers integrate AI acceleration for embedded systems
Embedded systems in consumer electronics, industrial automation, and smart devices increasingly require edge AI capabilities to process data locally and respond in real time. Texas Instruments has expanded its microcontroller portfolio with two new device families that integrate the TinyEngine neural processing unit (NPU), enabling AI inference directly within microcontroller-based systems.
The new MSPM0G5187 and AM13Ex microcontrollers combine embedded processing and AI acceleration, supported by development tools that let engineers deploy neural-network workloads on resource-constrained devices. These capabilities support applications such as wearable health monitoring, home electrical systems, robotics, and motor-control equipment, where local processing improves responsiveness and reduces dependence on cloud connectivity.
Hardware accelerator reduces latency and energy consumption
At the core of the new devices is TinyEngine, a hardware accelerator designed to run neural-network inference workloads alongside the main MCU processor. The NPU executes neural-network computations in parallel with application code, allowing AI algorithms to operate without significantly affecting system performance.
Compared with microcontrollers that perform AI processing in software alone, the TinyEngine accelerator delivers up to 90 times lower latency and more than 120 times lower energy consumption per inference. These improvements allow battery-powered or resource-constrained devices to perform local AI tasks such as anomaly detection, sensor classification, and adaptive control.
The accelerator also helps minimize flash memory requirements for deployed AI models, making it suitable for compact embedded systems.
Edge AI capabilities for low-cost embedded devices
The MSPM0G5187 microcontroller targets cost-sensitive and small embedded devices that require lightweight AI processing. Built around an Arm Cortex-M0+ core, the MCU integrates the TinyEngine NPU while maintaining a compact architecture suitable for low-power systems.
This design allows edge AI capabilities to be integrated into devices such as fitness wearables, household appliances, and smart electrical equipment, where local data analysis can improve automation and system monitoring.
The device is priced below US$1 in 1,000-unit quantities, providing a low-cost entry point for incorporating machine-learning functionality in embedded designs.
Real-time motor control with integrated AI processing
The AM13Ex microcontroller family addresses applications that require both real-time control and AI-enabled decision-making. The device integrates an Arm Cortex-M33 processor, TinyEngine NPU, and dedicated real-time control architecture within a single chip.
This integration allows designers to combine motor control and AI algorithms in one system. The device can manage control loops for up to four motors while running adaptive algorithms that optimize energy usage or detect system anomalies.
The MCU also includes a trigonometric math accelerator that performs calculations up to 10 times faster than coordinate rotation digital computer (CORDIC) implementations, supporting high-speed control loops in applications such as robotics, industrial automation, and advanced home appliances.
By consolidating these capabilities into a single device, designers can reduce system complexity and lower bill-of-materials costs by up to 30% compared with multi-chip architectures.
Development ecosystem supports AI model deployment
Both MCU families are supported by Texas Instruments’ Code Composer Studio integrated development environment (CCStudio IDE), which includes generative AI features designed to assist engineers with code generation, configuration, and debugging.
The development ecosystem also includes CCStudio Edge AI Studio, a free tool that helps developers train, optimize, and deploy AI models on TI embedded processors. The platform currently provides more than 60 prebuilt models and application examples, allowing developers to quickly integrate AI features into embedded devices.
This toolchain supports both hardware-accelerated AI execution using the TinyEngine NPU and software-based implementations, giving engineers flexibility in model deployment across different microcontroller configurations.
Demonstrations at embedded world 2026
Texas Instruments is showcasing these edge AI technologies at embedded world 2026 (March 10–12) in Nuremberg, Germany, where the company is presenting demonstrations at Hall 3A, Booth 131. The demonstrations highlight applications in factories, buildings, and vehicles, as well as development tools designed to accelerate embedded AI implementation.
Production quantities of the MSPM0G5187 microcontroller are available, while the AM13E23019 MCU is currently available in preproduction, with additional package and memory variants expected by the end of 2026.
Edited by Industrial Journalist, Natania Lyngdoh.
Powered by AI.

