Hugging Face Forums
Too large to be loaded automatically (16GB > 10GB) issue with QWEN 2.5 VL 7B
Inference Endpoints on the Hub
John6666
April 15, 2025, 2:41am
Same here. Maybe related to this incident.