Hey everyone,
I’m building OpenFork because I kept running into the same boring wall when testing new AI video/audio/image workflows: CUDA issues, Python packages fighting each other, custom-node conflicts, huge model downloads, and the fear of breaking a ComfyUI setup that already works.
OpenFork is my attempt to make that part calmer.
It’s an open-source desktop client, Python client, and web workspace that runs AI media workflows in prebuilt Docker containers. The goal is simple: pick a workflow, let the client pull the right image, run it on your NVIDIA GPU, and send the result back into your project. No configuration needed.
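To make the "no configuration" part concrete, here's a rough sketch of the kind of thing the client automates for you. This is not OpenFork's actual code: the image name, workflow file, and entrypoint arguments are made up, and it assumes Docker plus the NVIDIA Container Toolkit are already installed on the host.

```python
import subprocess
from pathlib import Path

# Hypothetical names, purely illustrative (not OpenFork internals).
IMAGE = "openfork/wan-video:latest"        # prebuilt container with pinned CUDA/Python deps
WORKFLOW = Path("workflows/wan_t2v.json")  # the workflow definition to run
OUTPUT_DIR = Path("outputs")

OUTPUT_DIR.mkdir(exist_ok=True)

# Pull the prebuilt image, then run it with NVIDIA GPU access.
subprocess.run(["docker", "pull", IMAGE], check=True)
subprocess.run(
    [
        "docker", "run", "--rm",
        "--gpus", "all",                                    # expose the NVIDIA GPU to the container
        "-v", f"{WORKFLOW.resolve()}:/workspace/workflow.json:ro",
        "-v", f"{OUTPUT_DIR.resolve()}:/workspace/outputs",
        IMAGE,
        "run-workflow", "/workspace/workflow.json",         # hypothetical container entrypoint
    ],
    check=True,
)
```

The client's job is basically to pick the right image for the workflow, wire up the mounts and GPU flags, and hand the results back, so you never touch any of this by hand.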
Behind it sits a curated database of open-source AI models and automations. I’m trying to make fast-moving workflows easier to test without spending the evening fixing the environment.
Demo:
Website/download:
www.openfork.video
I’d love early testers, especially people who already run WAN/LTX/Hunyuan/HeartMuLa/Qwen/Z-Image-style workflows and can tell me where this still feels rough.
If you try it, the most helpful feedback is:
- your GPU/VRAM
- Windows/Linux setup
- workflow you tried
- where it broke, confused you, or saved time
I’ll be around in the comments and I’ll turn the first real issues into fixes/docs.