Tags: Text Generation, PEFT, Safetensors, English, natural-language-autoencoder, nla, interpretability, mechanistic-interpretability, sparse-autoencoder, gemma, consumer-gpu, lora
Instructions for using Solshine/gemma-4-e2b-nla-L23-ar-v0_0_1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - PEFT
How to use Solshine/gemma-4-e2b-nla-L23-ar-v0_0_1 with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the NLA LoRA adapter weights on top
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-4-E2B")
model = PeftModel.from_pretrained(base_model, "Solshine/gemma-4-e2b-nla-L23-ar-v0_0_1")
```

- Notebooks
- Google Colab
- Kaggle