Interview with Mike Green from Intel on Developments in Microprocessor Design for the New AI Age
The microprocessor industry is undergoing a seismic shift as generative AI reshapes computing demands. Traditional architectures optimized for general-purpose computing are being rethought in favor of AI-first designs—chips built to handle massive parallel processing, matrix multiplications, and real-time model inference at scale. This transformation requires innovation at every level: novel transistor designs, advanced packaging, high-bandwidth memory integration, and power-efficient acceleration.
As AI workloads proliferate across cloud, edge, and personal devices, semiconductor giants are racing to redefine performance benchmarks. The future belongs to architectures that can dynamically adapt to the computational needs of AI models, balancing raw power with energy efficiency.
This is the backdrop against which industry leaders like Intel are reinventing themselves for the next era of computing. Intel, a company synonymous with the microprocessor revolution, is undergoing one of the most significant transformations in its history.
Once the undisputed leader in CPUs, Intel has faced stiff competition in recent years from rivals that have embraced new AI-optimized architectures. In response, the company is aggressively investing in AI-centric chip design, advanced manufacturing processes, and an open ecosystem approach to AI acceleration.
An NPU (Neural Processing Unit) is a specialized processor designed to accelerate AI and machine learning workloads, particularly inference tasks. NPUs are optimized for handling matrix operations and deep learning tasks efficiently while consuming less power than traditional CPUs and GPUs.
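To make the "matrix operations" concrete, here is a minimal NumPy sketch of a single dense-layer inference step, the kind of operation an NPU executes in parallel at low power. The shapes and values are toy placeholders, not taken from any real model.

```python
import numpy as np

# One dense layer's inference step is essentially a matrix multiply
# plus a bias add -- the core workload NPUs are designed to accelerate.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))   # input activations (batch of 1, toy size)
W = rng.standard_normal((4, 3))   # layer weights
b = np.zeros(3)                   # bias

y = x @ W + b                     # the matrix operation an NPU offloads
print(y.shape)                    # one output row of 3 activations
```

A real model chains thousands of such multiplies, which is why dedicated matrix hardware pays off over a general-purpose CPU.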
Intel’s NPU Offerings and Use Cases
Intel has integrated NPUs into its processor lineup, particularly for AI-powered tasks in PCs, edge devices, and data centers. Here are the key models and use cases:
1. Intel NPU in Meteor Lake (Core Ultra) - Consumer & AI PCs
Model: The Intel Core Ultra processors (Meteor Lake) include a built-in NPU.
Use Cases:
AI-enhanced video conferencing (background blur, auto-framing, noise suppression).
AI-powered creative applications like Adobe Photoshop’s AI features.
Low-power AI workloads such as speech recognition and transcription.
On-device AI processing for privacy-focused applications.
2. Intel Gaudi AI Accelerators - Data Center & AI Training
Model: Intel Gaudi (Gaudi 2 and upcoming Gaudi 3).
Use Cases:
Training and inference of large AI models, including LLMs (Large Language Models).
Cloud AI workloads competing with NVIDIA and AMD AI accelerators.
AI-driven analytics and enterprise AI applications.
3. Intel Movidius VPU (Vision Processing Unit) - Edge AI & IoT
Model: Intel Movidius Myriad X and Movidius 3700.
Use Cases:
Edge AI computing, including computer vision in drones, security cameras, and AR/VR devices.
Facial recognition and object detection in smart cameras.
AI-powered robotics and industrial automation.
Key Differentiators of Intel NPUs
Energy Efficiency: NPUs consume much less power than GPUs for AI inference tasks.
Hybrid AI Processing: Intel's Core Ultra chips combine CPU, GPU, and NPU for flexible AI workload distribution.
Open Software Support: Intel supports OpenVINO for AI model optimization across different hardware.
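As a rough illustration of that open software support, the sketch below shows how an application might pick an inference device using OpenVINO's Python API. The device names ("NPU", "GPU", "CPU") follow OpenVINO's conventions, but the preference order, the `pick_device` helper, and the model path are illustrative assumptions, not a documented Intel recipe; the code degrades gracefully when OpenVINO is not installed.

```python
def pick_device(available, preference=("NPU", "GPU", "CPU")):
    """Return the first preferred device that is present, else CPU."""
    for dev in preference:
        if dev in available:
            return dev
    return "CPU"

try:
    import openvino as ov
    core = ov.Core()
    device = pick_device(core.available_devices)
    # model = core.read_model("model.xml")          # placeholder path
    # compiled = core.compile_model(model, device)  # compile for chosen device
    print(f"Would compile for: {device}")
except ImportError:
    # OpenVINO not installed; demonstrate the selection logic alone.
    print(pick_device(["CPU", "GPU"]))
```

The fallback chain mirrors the hybrid CPU/GPU/NPU distribution described above: prefer the low-power NPU for inference, then the GPU, then the always-available CPU.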
Intel’s NPUs are positioned for AI acceleration in PCs, edge devices, and cloud infrastructure, making them a key player in the growing AI hardware ecosystem.