UL Announces the Procyon AI Image Generation Benchmark Based on Stable Diffusion

We’re excited to announce that we’re expanding our AI Inference benchmark offerings with the UL Procyon AI Image Generation Benchmark, coming Monday, 25th March. AI has the potential to be one of the most significant new technologies hitting the mainstream this decade, and many industry leaders are competing to deliver the best AI Inference performance through their hardware. Last year, we launched the first of our Procyon AI Inference Benchmarks for Windows, which measures AI Inference performance using a Computer Vision workload.

The upcoming UL Procyon AI Image Generation Benchmark provides a consistent, accurate and understandable workload for measuring the AI performance of high-end hardware, built with input from members of the industry to ensure fair and comparable results across all supported hardware.


Measuring a rapidly growing range of AI hardware
Our existing AI Inference benchmark, based on an AI Computer Vision workload, made it easy to measure the AI Inference performance of the dedicated AI accelerators found in lightweight PCs and compare it against traditional CPUs and integrated GPUs.

As new AI-capable processors allow more device form factors to run AI tasks efficiently, the performance range of consumer AI-capable hardware has become incredibly broad. As with our ray tracing gaming benchmarks, measuring AI Inference now requires a selection of benchmarks to properly cover the AI Inference performance of all available consumer hardware.

With a new AI Image Generation Benchmark joining the UL Procyon AI Inference Benchmark family, we are renaming our existing AI Inference Benchmark the UL Procyon AI Computer Vision Benchmark.


The AI Image Generation Benchmark
Built around the Stable Diffusion AI model, the AI Image Generation Benchmark is considerably more demanding than the Computer Vision Benchmark and is designed to measure and compare the AI Inference performance of modern discrete GPUs.

To better measure the performance of both mid-range and high-end discrete graphics cards, this benchmark contains two tests built using different versions of the Stable Diffusion model, and we hope to add more tests in the future to support other performance categories.
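
To give a sense of the kind of workload involved, the sketch below runs a single Stable Diffusion text-to-image inference pass using the open-source Hugging Face diffusers library. It is an illustration only: the model ID, prompt and settings are assumptions, and it does not reflect the Procyon implementation or its engine integrations.

```python
# Minimal sketch of a Stable Diffusion text-to-image inference pass using
# the Hugging Face diffusers library. Illustrative only -- the model ID,
# prompt and settings are assumptions, not the Procyon implementation.
import time

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint and move it to the discrete GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photograph of an astronaut riding a horse"

# Time a single image generation; a benchmark would run and aggregate
# many such passes into a score.
start = time.perf_counter()
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
elapsed = time.perf_counter() - start

image.save("output.png")
print(f"Generated one image in {elapsed:.1f} s")
```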

Test performance across multiple AI Inference Engines
As with our AI Computer Vision Benchmark, you can freely switch between several leading inference engines, letting you compare performance across engines, compare hardware using the same engine, or measure the best-case AI Inference performance of each device. By default, the benchmark selects the optimal inference engine for the system’s hardware.

Currently, the AI Image Generation Benchmark supports the inference engines listed below. We plan to add support for more engines in the future to provide optimal performance on all supported AI hardware.

  • Intel OpenVINO
  • NVIDIA TensorRT
  • ONNX Runtime with DirectML
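
As an illustration of what switching inference engines can look like in practice, the sketch below uses ONNX Runtime and prefers its DirectML execution provider when available, falling back to the CPU otherwise. The model path is a placeholder, and this is not how the Procyon benchmark configures its engines.

```python
# Illustrative sketch of inference-engine selection with ONNX Runtime:
# prefer the DirectML execution provider when present, otherwise fall back
# to the CPU. "unet.onnx" is a placeholder for an exported model component;
# this is not how Procyon configures its engines.
import onnxruntime as ort

available = ort.get_available_providers()
providers = (
    ["DmlExecutionProvider", "CPUExecutionProvider"]
    if "DmlExecutionProvider" in available
    else ["CPUExecutionProvider"]
)

session = ort.InferenceSession("unet.onnx", providers=providers)
print("Active execution providers:", session.get_providers())
```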

UL Procyon Benchmarking Suite
The UL Procyon benchmark suite offers flexible licensing, letting you choose the benchmarks that best meet your needs. You can buy just one benchmark or add more in any combination.

Find out more about UL Procyon benchmarks.
