I built a Dockerized way to run open-source AI media workflows without fighting local dependencies

Hey everyone,

I’m building OpenFork because I kept running into the same boring wall when testing new AI video/audio/image workflows: CUDA issues, Python packages fighting each other, custom-node conflicts, huge model downloads, and the fear of breaking a ComfyUI setup that already works.

OpenFork is my attempt to make that part calmer.

It’s an open-source desktop client (plus a Python client and web workspace) that runs AI media workflows in prebuilt Docker containers. The goal is simple: pick a workflow, let the client pull the right image, run it on your NVIDIA GPU, and send the result back into your project. No configuration needed.
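For the curious, this is roughly the kind of thing the client does under the hood, sketched with the Docker SDK for Python. The image name, paths, and command below are hypothetical placeholders, not OpenFork’s actual internals, and GPU passthrough assumes the NVIDIA Container Toolkit is installed on the host:

```python
import docker

client = docker.from_env()

IMAGE = "example/wan-workflow:latest"  # hypothetical workflow image

# Pull the prebuilt image so the environment is fixed ahead of time
client.images.pull(IMAGE)

# Run the workflow with all host GPUs exposed (equivalent to `--gpus all`)
output = client.containers.run(
    IMAGE,
    command="python run_workflow.py --prompt 'a calm demo'",  # placeholder entrypoint
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    volumes={"/path/to/models": {"bind": "/models", "mode": "ro"}},  # share big model files
    remove=True,  # clean up the container after the run
)
print(output.decode())
```

The point of the container boundary is that CUDA, Python packages, and custom nodes live inside the image, so a broken workflow can’t take your working ComfyUI setup down with it.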

Behind it is a curated database of open-source AI models and automations. I’m trying to make fast-moving workflows easier to test without spending the whole evening fixing the environment.

Demo:

Website/download:
www.openfork.video

I’d love early testers, especially people who already run WAN/LTX/Hunyuan/HeartMuLa/Qwen/Z-Image style workflows and can tell me where this still feels rough.

If you try it, the most helpful feedback is:

  • your GPU/VRAM
  • Windows/Linux setup
  • workflow you tried
  • where it broke, confused you, or saved time

I’ll be around in the comments and I’ll turn the first real issues into fixes/docs.

"Hey besch88, good luck with OpenFork. One practical tip for laptop GPUs (like RTX 3050/4050): focus on automated VRAM offloading. On mobile chips, the 6GB limit is hit instantly by LTX/Hunyuan. If your Docker containers don’t include a rigid memory cleanup protocol (like manual gc.collect() and cuda.empty_cache() after each inference), they will crash the whole system on budget hardware. Environment isolation is great, but memory management is king here."