# Ministral-8B-Instruct-2410 + rsLoRA: Job-Shop Scheduling

An rsLoRA (rank-stabilized LoRA) adapter fine-tuned on the Starjob job-shop scheduling problem (JSSP) dataset. rsLoRA is identical to standard LoRA except that adapter updates are scaled by alpha/√r instead of alpha/r, which stabilizes learning at higher ranks. The model takes a natural-language description of jobs and machines and generates a schedule intended to be feasible and to minimize makespan.

## Training

| Hyperparameter | Value |
|---|---|
| Method | rsLoRA (`use_rslora = true`) |
| LoRA rank `r` | 32 |
| LoRA alpha | 32 |
| Max sequence length | 8192 |
| Per-device batch size | 1 |
| Gradient accumulation steps | 8 (effective batch size 8) |
| Epochs | 1 |
| Learning rate | 2e-4 |
| Base quantization | bitsandbytes 4-bit (Unsloth) |
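
For reference, here is a minimal sketch of how these settings map onto an Unsloth + TRL run; it is not the authors' exact script. The `target_modules` list, the dataset preparation, and the trainer argument names (which shift between TRL versions) are assumptions; see the repository linked under Evaluation for the real code.

```python
from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer
from datasets import Dataset

# Load the base model with bitsandbytes 4-bit quantization via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Ministral-8B-Instruct-2410",
    max_seq_length=8192,
    load_in_4bit=True,
)

# Attach the adapter. rsLoRA differs from plain LoRA only in the
# alpha/sqrt(r) scaling, switched on by use_rslora=True.
model = FastLanguageModel.get_peft_model(
    model,
    r=32,
    lora_alpha=32,
    use_rslora=True,
    target_modules=[  # assumption: the usual Mistral projection layers
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# Placeholder; in practice this is the Starjob training split
# rendered to prompt + solution text.
train_dataset = Dataset.from_dict({"text": ["<JSSP prompt and schedule>"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # effective batch size 8
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```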

## Evaluation

Evaluated on 200 samples (seed 42) drawn from the small+medium split of Starjob, using an identical pipeline for the LoRA and rsLoRA adapters. The feasibility check validates job routing order, machine non-overlap, and operation completeness; a sketch of these three checks follows.
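
This is a minimal sketch of what such a three-part feasibility check can look like. The schedule and instance formats here are assumptions for illustration; the actual formats are defined in the repository.

```python
def is_feasible(schedule, jobs):
    """Check a predicted schedule against a JSSP instance.

    jobs:     {job: [(machine, duration), ...]} in required routing order.
    schedule: {job: [(machine, start, duration), ...]} as predicted.
    """
    # 1. Operation completeness: every operation appears exactly once,
    #    on the required machine, with the required duration.
    for job, ops in jobs.items():
        sched = schedule.get(job, [])
        if len(sched) != len(ops):
            return False
        for (m, _s, d), (m_req, d_req) in zip(sched, ops):
            if m != m_req or d != d_req:
                return False
    # 2. Routing order: each operation starts after its predecessor ends.
    for sched in schedule.values():
        for (_m1, s1, d1), (_m2, s2, _d2) in zip(sched, sched[1:]):
            if s2 < s1 + d1:
                return False
    # 3. Machine non-overlap: intervals on one machine must not intersect.
    by_machine = {}
    for sched in schedule.values():
        for m, s, d in sched:
            by_machine.setdefault(m, []).append((s, s + d))
    for intervals in by_machine.values():
        intervals.sort()
        for (_s1, e1), (s2, _e2) in zip(intervals, intervals[1:]):
            if s2 < e1:
                return False
    return True

# Tiny single-job example in the assumed format.
jobs = {"J0": [("M0", 5), ("M1", 3), ("M2", 4)]}
schedule = {"J0": [("M0", 0, 5), ("M1", 5, 3), ("M2", 8, 4)]}
assert is_feasible(schedule, jobs)
```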

| Metric | Value |
|---|---|
| Feasibility | 64.0% (128/200) |
| Exact makespan match | 24.5% (49/200) |
| Mean makespan gap | 42.93% |
| Median makespan gap | 9.25% |
| Evaluation time | 118.9 min |
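
The gap figures are presumably the relative makespan gap against the reference solution for each instance; under that assumption they correspond to:

```python
# Assumed definition (not stated on the card): percentage gap between the
# model's makespan and the reference (best-known) makespan for an instance.
def makespan_gap(predicted: float, reference: float) -> float:
    return (predicted - reference) / reference * 100.0
```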

The full head-to-head LoRA vs. rsLoRA comparison, plus training and evaluation code, is at https://github.com/tiodh/slm_jssp.

## Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Ministral-8B-Instruct-2410",
    device_map="auto",
    torch_dtype="auto",
)
tok = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")

# Attach the rsLoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base, "tiodh/ministral-8b-jssp-rslora")

# Each job lists its operations in routing order as machine:duration pairs.
prompt = (
    "Optimize schedule for 3 Jobs (denoted as J) across 3 Machines (denoted as M) "
    "to minimize makespan...\nJ0:\nM0:5 M1:3 M2:4\nJ1:\nM1:2 M0:4 M2:3\nJ2:\nM2:6 M0:1 M1:5\n"
)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,  # required for temperature/top_p to take effect
    temperature=0.1,
    top_p=0.95,
)
print(tok.decode(out[0], skip_special_tokens=True))
```
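
To ship a standalone checkpoint without the PEFT wrapper, the adapter can be merged into the base weights; the output directory below is just an example name.

```python
# Merge the adapter into the base model and save a standalone copy.
merged = model.merge_and_unload()
merged.save_pretrained("ministral-8b-jssp-rslora-merged")
tok.save_pretrained("ministral-8b-jssp-rslora-merged")
```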

## License

CC BY-SA 4.0 (inherits from the Starjob dataset).
