
    MediaTek NPUs, NeuroPilot and LiteRT are ready to power AI in millions of devices

    On-device AI is becoming increasingly important, and we continue to deliver high-performance, power-efficient intelligence for smartphones, Chromebooks, tablets, smart TVs, IoT devices, and many other products used in everyday life. Today, nearly all of our chipsets include a MediaTek Neural Processing Unit (NPU). These AI processors are designed to scale across product tiers, delivering strong on-device AI compute capabilities while maintaining industry-leading power efficiency.

    We have collaborated with Google to integrate NeuroPilot (the software layer for our NPUs) into the new LiteRT NeuroPilot Accelerator. This integration enables a unified AI runtime that simplifies deployment and allows developers to fully utilize MediaTek NPU capabilities across a broad range of chipsets that power millions of devices worldwide.

    This collaboration is another testament to our commitment to scalable silicon, open collaboration, and developer-focused tooling. LiteRT support for MediaTek NeuroPilot represents another step toward making advanced on-device AI accessible, efficient, and deployable at global scale.

    Enabling Scalable On-Device AI

    Running AI workloads locally on devices enables low-latency inference, improved privacy, reduced reliance on cloud connectivity, and greater power efficiency. As models continue to grow in size and capability, developers need better tooling to deploy these workloads consistently across diverse hardware platforms.

    LiteRT provides a modern runtime that enables MediaTek NPUs to operate within Android and cross-platform ecosystems; developers can target the NPUs through a single API, relying on LiteRT to manage hardware selection and execution. This approach enables efficient scaling of AI applications across MediaTek’s product portfolio without the need for device-specific optimization workflows.

    Key capabilities include:

    • Unified APIs that abstract hardware and SDK differences across MediaTek NPUs
    • Support for both ahead-of-time (AOT) offline compilation and on-device compilation
    • Automatic fallback to CPU or GPU execution when required
    • Reduced development complexity across multiple MediaTek SoCs
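    The single-API-with-fallback pattern described above can be sketched as follows. This is an illustrative mock of the concept, not the actual LiteRT API: the `Accelerator` enum, `select_accelerator`, and `run_inference` names are assumptions introduced here for clarity.

    ```python
    from enum import Enum


    class Accelerator(Enum):
        NPU = "npu"
        GPU = "gpu"
        CPU = "cpu"


    def select_accelerator(available: set) -> Accelerator:
        """Pick the best available backend in NPU -> GPU -> CPU priority order.

        Hypothetical sketch: a real runtime performs this selection internally,
        so application code only targets a single entry point.
        """
        for backend in (Accelerator.NPU, Accelerator.GPU, Accelerator.CPU):
            if backend in available:
                return backend
        raise RuntimeError("no execution backend available")


    def run_inference(model: str, available: set) -> str:
        """Dispatch a model to the selected backend (mock implementation)."""
        backend = select_accelerator(available)
        # A real runtime would execute the compiled model here; this mock
        # only reports which backend would run it.
        return f"{model} -> {backend.value}"
    ```

    On a chipset exposing an NPU, `run_inference("gemma3-1b", {Accelerator.NPU, Accelerator.CPU})` selects the NPU; on one without, the same call falls back to GPU or CPU with no change to application code, which is the property the unified API is meant to provide.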

    Optimized Execution for Generative and Multimodal Models

    LiteRT with MediaTek NeuroPilot supports a growing ecosystem of open-weight, on-device AI models optimized for MediaTek NPUs, including:

    • Gemma 3 (270M, 1B, and multimodal 3n E2B variants)
    • Qwen3 0.6B
    • EmbeddingGemma 300M

    When deployed on MediaTek NPUs, these models deliver substantial performance and efficiency gains, with inference up to 12x faster than CPU execution and up to 10x faster than GPU execution, while remaining suitable for sustained on-device operation.
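    As a rough illustration of what those multipliers mean in practice, the arithmetic below applies the quoted speedups to an assumed CPU baseline latency; the 600 ms figure is a hypothetical example, not a published benchmark.

    ```python
    def accelerated_latency(baseline_ms: float, speedup: float) -> float:
        """Latency after applying a relative speedup to a baseline."""
        return baseline_ms / speedup


    # Assumed 600 ms CPU latency for one inference step (illustrative only).
    cpu_ms = 600.0
    npu_ms = accelerated_latency(cpu_ms, 12.0)  # up to 12x faster than CPU
    print(npu_ms)  # 50.0
    ```

    The same arithmetic applies to the GPU comparison: at up to 10x, whatever a GPU inference step costs, the NPU path would take roughly a tenth of it, which is what makes sustained on-device operation practical.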