Serverless Model Caching for ComfyUI Use

Is there a clear guide or resource on setting up serverless model caching with ComfyUI?
There's mixed information floating around about where the model caching directory ends up, so it's hard to point an extra_model_paths.yaml at the correct directory when building a deployment image.

Take this, for example: https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI/blob/main/split_files/diffusion_models/qwen_image_fp8_e4m3fn.safetensors

If I wanted to cache that model and have ComfyUI detect it under models/unet, where would I point the detection path?
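
For context, this is roughly the shape of the entry I expect to end up with in extra_model_paths.yaml — the base_path below is just a placeholder, since the actual cache mount directory is exactly what I'm unsure about:

```yaml
# Rough sketch of the extra_model_paths.yaml entry I have in mind.
# "serverless_cache" is an arbitrary section name; base_path is a
# placeholder for wherever the serverless model cache actually mounts.
serverless_cache:
  base_path: /path/to/model-cache/        # placeholder: unknown cache mount point
  unet: split_files/diffusion_models/     # where qwen_image_fp8_e4m3fn.safetensors would sit if the HF repo layout is preserved
```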