[Space] LoRA Lens Behavior Studio — Real-time personality steering with a slider

Hey everyone — I built a Space that takes a different approach to local LLM customization and wanted to share it here.

What it does: Instead of tweaking system prompts to get a “more creative” or “more analytical” model, LoRA Lens Behavior Studio applies behavior-level LoRA adapters extracted from contrast pairs via SVD — then lets you dial the intensity up or down with a signed alpha slider in real time.

The core pipeline:

  • Contrast pair extraction (target behavior vs. baseline)

  • Delta compression into LoRA adapters via SVD (rank 32–64)

  • Live AlphaController for non-destructive steering — nothing gets baked permanently
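To make the middle step concrete, here is a minimal sketch of delta compression into LoRA factors. This is my illustration of the general technique, not the Space's actual code — the function name `compress_delta` and the synthetic matrices are assumptions; only the idea (SVD of a weight delta, truncated to rank r) comes from the pipeline above.

```python
import numpy as np

def compress_delta(w_tuned: np.ndarray, w_base: np.ndarray, rank: int = 32):
    """Compress the weight delta (w_tuned - w_base) into rank-r LoRA factors.

    Returns A (rank, in_dim) and B (out_dim, rank) such that B @ A
    approximates the delta using its top-`rank` singular directions.
    """
    delta = w_tuned - w_base
    u, s, vt = np.linalg.svd(delta, full_matrices=False)
    B = u[:, :rank] * s[:rank]   # fold singular values into the B factor
    A = vt[:rank, :]
    return A, B

# Illustrative check: if the true delta is itself low-rank, the
# truncated SVD recovers it (near-)exactly at that rank.
rng = np.random.default_rng(0)
w_base = rng.standard_normal((64, 64))
low_rank_delta = 0.01 * rng.standard_normal((64, 32)) @ rng.standard_normal((32, 64))
w_tuned = w_base + low_rank_delta
A, B = compress_delta(w_tuned, w_base, rank=32)
approx = w_base + B @ A
```

In this framing, the rank-32-to-64 range is the knob trading adapter size against how faithfully the behavior delta is preserved.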

Demo: https://huggingface.co/spaces/Intuitivation/LoRALens-Demo

Tested across Mistral-7B, Llama-3-8B, and the Qwen2.5 series. 36 behavior packs across Communication, Reasoning, Creative, Personality, Domain-Specific, and Experimental categories.

The thing that surprised people most in testing: dragging the slider from −1.0 to +1.0 on something like the Confident Clarity pack produces a noticeably different feel in the outputs — not just word choice, but sentence structure and hedging behavior. It’s not a prompt effect.
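For anyone curious what "non-destructive" signed steering means mechanically, here is a toy sketch of a single linear layer with a scaled LoRA path. The class shares the `AlphaController` name from the post, but its internals are my assumption: the base weights are never modified, the adapter contribution is just multiplied by alpha at call time, so alpha = 0 recovers the base model exactly and negative alpha pushes along the opposite behavior direction.

```python
import numpy as np

class AlphaController:
    """Toy linear layer with a live-scalable LoRA path (shapes illustrative)."""

    def __init__(self, w_base, A, B, alpha=0.0):
        self.w_base = w_base          # frozen base weight, never overwritten
        self.A, self.B = A, B         # LoRA factors: A (r, in), B (out, r)
        self.alpha = alpha            # signed slider value, e.g. -1.0 .. +1.0

    def forward(self, x):
        # Base path untouched; adapter path scaled by alpha per call.
        return x @ self.w_base.T + self.alpha * (x @ self.A.T) @ self.B.T

rng = np.random.default_rng(1)
w = rng.standard_normal((8, 16))
A = rng.standard_normal((4, 16))      # rank-4 adapter for the sketch
B = rng.standard_normal((8, 4))
layer = AlphaController(w, A, B, alpha=0.0)
x = rng.standard_normal((2, 16))
base_out = x @ w.T                    # at alpha=0 forward() matches this
```

Because alpha only scales an additive term, flipping the slider is symmetric around the baseline: +alpha and -alpha outputs average back to the base activation, which is consistent with the sign-reversal behavior described above.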

Would love feedback from anyone who’s worked with activation-level steering or has thoughts on the SVD rank tradeoffs. Happy to answer questions about the architecture.
