I'd bet my bald head that he means the model keeps training on chat data instead of starting every chat session fresh (from the base model).
I can't get my gemma-7b-it finetune to upload at all (`'file' should be of valid safetensors type`), and while I can successfully upload and run inference for `mistral/mistral-7b-instruct-v0.2-lora`, the results are worse than the vanilla model. I fine-tuned `mistralai/mistral-7b-instruct-v0.2` with the AutoTrain_LLM notebook (LoRA target modules `q_proj` and `v_proj`), then cURL to `https://api.cloudflare.com/client/v4/accounts/${account_id}/ai/run/@cf/mistral/mistral-7b-instruct-v0.2-lora`.

A separate issue: the Workers types reject the model name with `Argument of type '"@hf/thebloke/deepseek-coder-6.7b-instruct-awq"' is not assignable to parameter of type 'BaseAiImageToTextModels'. typescript (2769) [84, 37]`. The workaround for now is a `// @ts-expect-error` on the call.

Note that `@cf/mistral/mistral-7b-instruct-v0.2-lora` has a context (and total) limit of ~15k tokens, and you don't need a LoRA to run it.

Passing an image to `@cf/lykon/dreamshaper-8-lcm` fails with `3010: Invalid or incomplete input for the model: model returned: [request id: 41fff8e6-617c-439f-a093-f02bfa2d45bb] unexpected shape for input 'image' for model 'dreamshaper-8-lcm'. Expected [1], got [1,1048265].` The model `@cf/lykon/dreamshaper-8-lcm` doesn't support passing an image; try `@cf/runwayml/stable-diffusion-v1-5-img2img` if you want to do image-to-image.
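For the LoRA upload/inference flow above, here is roughly what the cURL step looks like as a TypeScript `fetch` sketch. The account id, API token, and adapter name are placeholders, and the `lora` field is my assumption about how the uploaded adapter is selected on the `-lora` base model:

```typescript
// Placeholder credentials and names for illustration only.
const ACCOUNT_ID = "your-account-id";
const API_TOKEN = "your-api-token";          // token with Workers AI permissions
const ADAPTER = "my-mistral-finetune";       // name the adapter was uploaded under (assumption)

const res = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/run/@cf/mistral/mistral-7b-instruct-v0.2-lora`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      prompt: "Summarise the plot of Hamlet in two sentences.",
      lora: ADAPTER, // assumption: the uploaded adapter is referenced via a `lora` field
    }),
  },
);

console.log(await res.json());
```

If the results come back worse than the vanilla model, it's worth double-checking that the adapter name here matches the uploaded finetune and that the prompt format matches what the LoRA was trained on.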
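For the `BaseAiImageToTextModels` type error, this is the `// @ts-expect-error` workaround in context. A minimal sketch of a Worker handler, assuming the `Ai` binding type from `@cloudflare/workers-types` is available:

```typescript
export interface Env {
  AI: Ai; // Workers AI binding (assumes @cloudflare/workers-types is installed)
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // The generated model-name unions don't cover this model for the overload
    // TypeScript picks, so the call is flagged with TS2769. Suppress it until
    // the types catch up; the runtime call itself is unaffected.
    // @ts-expect-error model id not present in the generated model unions
    const result = await env.AI.run("@hf/thebloke/deepseek-coder-6.7b-instruct-awq", {
      prompt: "Write a function that reverses a string.",
    });
    return Response.json(result);
  },
};
```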
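And for the image-to-image case, a sketch using `@cf/runwayml/stable-diffusion-v1-5-img2img` instead of `@cf/lykon/dreamshaper-8-lcm`, assuming the model takes the source image as a flat array of byte values and returns the generated image as a binary stream:

```typescript
export interface Env {
  AI: Ai; // Workers AI binding, as in the previous sketch
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // Fetch a source image (placeholder URL) and flatten it into a plain array of bytes.
    const source = await fetch("https://example.com/input.png");
    const image = [...new Uint8Array(await source.arrayBuffer())];

    const result = await env.AI.run("@cf/runwayml/stable-diffusion-v1-5-img2img", {
      prompt: "turn this photo into a watercolor painting",
      image, // assumption: a flat array of byte values, not a nested array
    });

    // Assumption: the model returns the generated image as binary PNG data.
    return new Response(result, {
      headers: { "content-type": "image/png" },
    });
  },
};
```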