
LG Releases EXAONE 4.5: Open-Source Vision-Language AI That Outscores GPT-5-mini

LG AI Research's EXAONE 4.5 is a 33B-parameter vision-language model with a Hybrid Attention architecture that outscores GPT-5-mini and Claude 4.5 Sonnet on STEM benchmarks, and it's fully open-source.

Dr. Nova Chen · Apr 9, 2026 · 5 min read

A New Contender in the Open Multimodal LLM Race

April 9 brought a compelling new entry to the open-source AI landscape: EXAONE 4.5 from LG AI Research. This 33B-parameter vision-language model earns serious attention not simply because it comes from LG, notable as that is given the company's consumer-electronics heritage, but because it genuinely outperforms several well-established frontier models on the benchmarks that matter most for STEM reasoning.

Architecture: Hybrid Attention Meets Native Multimodal Training

EXAONE 4.5's foundation is LG AI Research's proprietary Hybrid Attention architecture, which blends traditional self-attention mechanisms with an efficient local attention pattern specifically designed to reduce computational overhead in long-context reasoning tasks. The vision component is a 1.2B-parameter encoder trained end-to-end with the LLM backbone rather than attached as a post-training adapter — the same native multimodal design philosophy increasingly recognized as producing superior visual-language alignment compared to grafted vision modules.
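LG AI Research hasn't published the internals of Hybrid Attention in this announcement, but the general idea of mixing full attention with windowed attention is well established in open models. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch, not EXAONE 4.5's actual implementation; the `sliding_window_mask` helper and the global/local split are assumptions for illustration.

```python
# Illustrative sketch only: the announcement does not specify EXAONE 4.5's
# exact Hybrid Attention design. This assumes the common "hybrid" pattern of
# mixing full causal (global) attention with sliding-window (local) attention.
import torch
import torch.nn.functional as F


def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask: True where attention is allowed (causal, within window)."""
    idx = torch.arange(seq_len)
    causal = idx[None, :] <= idx[:, None]            # attend only to the past
    local = (idx[:, None] - idx[None, :]) < window   # ...within `window` tokens
    return causal & local


def attention(q, k, v, mask):
    """Plain scaled dot-product attention with a boolean allow-mask."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v


seq_len, d = 16, 32
q = k = v = torch.randn(1, seq_len, d)

global_mask = sliding_window_mask(seq_len, window=seq_len)  # full causal
local_mask = sliding_window_mask(seq_len, window=4)         # local window

# A hybrid stack would alternate these patterns across layers, paying the
# quadratic cost only in the global layers:
out_global = attention(q, k, v, global_mask)
out_local = attention(q, k, v, local_mask)
```

The efficiency argument is visible in the masks: with window size w, a local layer's score computation scales as O(n·w) instead of O(n²), which is precisely the long-context overhead the announcement says the architecture targets.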

The model supports six languages natively, a practical consideration for deployment across the global research community.

Benchmark Results Worth Taking Seriously

The headline numbers from LG AI Research's evaluation are straightforward: EXAONE 4.5 posted an average score of 77.3 across five STEM benchmarks. For context:

- **GPT-5-mini:** 73.5 average

- **Claude 4.5 Sonnet:** 74.6 average

- **Qwen-3 235B:** 77.0 average

Outperforming GPT-5-mini and Claude 4.5 Sonnet on STEM reasoning with an open-source 33B model is a meaningful result. The comparison against Qwen-3 235B is particularly striking: EXAONE 4.5 achieves comparable STEM performance with roughly a seventh of the total parameters (33B versus 235B), which speaks directly to LG AI Research's architectural efficiency decisions rather than simply scaling up compute.

Open-Source Release on Hugging Face

EXAONE 4.5 is available on Hugging Face under terms permitting research and educational use, opening the model to the global academic community immediately. Researchers can fine-tune on domain-specific datasets, study its multimodal reasoning behaviors, and build educational tools on a genuinely competitive foundation — without API dependency or cost constraints.
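For readers who want to experiment, getting started would likely follow the standard Hugging Face transformers pattern. Everything in the sketch below is an assumption to verify against the official model card: the repository id is a guess, and image inputs would go through whatever processor class the card specifies; this shows only the text path.

```python
# Hypothetical quick-start sketch. "LGAI-EXAONE/EXAONE-4.5-33B" is an assumed
# repository id; consult the Hugging Face model card for the real identifier,
# the processor class for image inputs, and the expected chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LGAI-EXAONE/EXAONE-4.5-33B"  # assumption, not confirmed
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # let transformers pick the checkpoint's dtype
    device_map="auto",       # spread the 33B weights across available GPUs
    trust_remote_code=True,  # custom architectures often ship remote code
)

inputs = tokenizer(
    "Explain the photoelectric effect briefly.", return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```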

For the broader AI research ecosystem, a capable open multimodal model from a non-US lab that demonstrably competes with commercial frontier models on key benchmarks advances the healthy diversification of global AI capability.

What This Signals for the Open AI Landscape

EXAONE 4.5 arrives at a moment when the open-weight frontier is more competitive than it has ever been. Meta's Llama 4, Qwen-3, and now EXAONE 4.5 together represent a field where serious multimodal capability is no longer the exclusive domain of US hyperscalers. The open-source community gains a model that deserves rigorous evaluation alongside the incumbent commercial leaders — and the STEM benchmark numbers suggest it will hold up under scrutiny.

Sources: PR Newswire / LG AI Research (April 9, 2026), Korea Times (April 9, 2026), Hugging Face model card (April 9, 2026)