how to disable build caching?
@Brody I hooked up your cron template about a month ago to redeploy my repo every 8 hours:
https://railway.app/template/fwH-l3
It has worked great for a month (thank you!), however, as of a day or two ago the build step in the periodic redeploys seems to be cached, so a fresh build doesn't happen anymore. Has something changed? Is this a cron template issue, or has railway's build approach changed?
Project ID:
022b45c2-7088-4863-ac9f-236d869b1a7a
railway caches the build layers, there's nothing my template or you can do about this because there's no way to change this behaviour at the moment
@Brody so how was this working for a month? Did they just change this behavior?
build layers have always been cached
@Brody then any idea why the build logs from 3 days ago and all builds since are radically different? The ones older than 3 days ago ran fresh every time, and now they seem cached.
i honestly have no clue, if that is the case then it was incorrect behavior, build layers should be cached when you are redeploying an already built image
are you perhaps thinking that redeploying the latest deployment grabs new code from github?
@Brody no. My build step grabs data from a remote db, I migrate that data into a different shape, then save a json file of it. When the app boots up it loads that json file into memory for the app's use. For a month this redeploy approach was building a fresh json file every deploy via that build step. Now it no longer does as of about 3 days ago.
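A minimal sketch of that build-step pattern, assuming a Node/TypeScript app; the endpoint, reshape logic, and file path are hypothetical stand-ins, not this project's actual code:

```ts
// build-data.ts — run as part of the build step, e.g. "build": "tsx build-data.ts && tsc"
// Hypothetical stand-ins throughout: endpoint, reshape logic, and file path are illustrative only.
import { mkdirSync, writeFileSync } from "node:fs";

async function main() {
  // stand-in for the real remote-DB fetch
  const res = await fetch("https://example.com/api/rows");
  const rows: unknown[] = await res.json();

  // migrate the raw rows into the shape the app expects
  const snapshot = { generatedAt: new Date().toISOString(), rows };

  mkdirSync("data", { recursive: true });
  writeFileSync("data/snapshot.json", JSON.stringify(snapshot));
}

main();

// At runtime the app loads the file once at boot:
//   const snapshot = JSON.parse(readFileSync("data/snapshot.json", "utf8"));
```

If the build layer is served from cache, this script never re-runs, so the snapshot silently goes stale, which is exactly the failure being described here.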
technically it should not have worked, all build layers should have been cached, since there's no way for railway to know your build does something different each time unless files are changed
Would adding a hash onto the json file name that gets saved to file system trigger anything different?
nope, unless files change in the repo build layers are cached, but even so it doesn't help in this case since you are redeploying an already built container, the files in the repo aren't involved
I'm not seeing a resolution for you besides railway adding an environment variable to disable this behaviour, or you generating that file in a different way: at runtime, or with another service specially designed to generate the file and put it in some accessible storage where the service that needs it can read it
Man that's a huge assumption on railway's part to assume that the build step in the same branch/commit will always generate the exact same output. This prevents all kinds of build step possibilities.
yes it really does, fun fact even the team has run into this issue
My team has been trying like mad to get off heroku and over to railway with our apps, but stuff like this makes it near impossible.
In railway's UI what good does "Redeploy" do if it's all cached? Why would anyone ever redeploy something that is already deployed and cached if you won't get a fresh build?
new environment variables would be one of the reasons
True. Good point.
but yeah I really feel the difficulties here
It'd be great if they added a "Rebuild" UI action or something that could be dynamically triggered like a redeploy and then a "Restart" shortly after. Either that, or allow some way to Redeploy with a fresh build
@Angelo - we have some railway limitations here when trying to move from heroku
fwiw, you are not even remotely the first user to ask for a way to invalidate layer cache
@Angelo my org badly wants to pay railway instead of heroku. But limitations like this make it impossible to move over. I genuinely can't think of a significant non-invasive way to solve this build caching problem. Our build step is about a 45 sec fetch/migration of data. I can't add that into my start command because the app would fail to fire up, or throw on requests until 45+ secs later. So it seems my only other option is to manually push a new commit to my branch every 8 hours to trigger a new deploy (which I assume would cause a fresh build?). Super hacky. Is there another way to force a fresh build?
So it's not our intention to have cache and build layers mess up your deployment. One second, going to look internally.
side note, you can use a health check, that way railway won't switch in the deployment until it's ready to handle requests
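A minimal sketch of that health-check approach, assuming a plain node:http server with /health configured as the service's healthcheck path (the loader function is a hypothetical stand-in):

```ts
// server.ts — listen immediately, but report unhealthy until the slow
// (~45s) data load finishes; Railway won't switch in the new deployment
// until the configured healthcheck path returns 200.
import { createServer } from "node:http";

let ready = false;

// hypothetical stand-in for the 45-second fetch/migration described above
async function loadAndMigrateData(): Promise<void> {
  /* ... */
}

const server = createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(ready ? 200 : 503);
    res.end(ready ? "ok" : "warming up");
    return;
  }
  if (!ready) {
    res.writeHead(503);
    res.end("still loading data");
    return;
  }
  res.writeHead(200);
  res.end("hello");
});

server.listen(Number(process.env.PORT ?? 3000));

loadAndMigrateData().then(() => {
  ready = true;
});
```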
Yea- I think this is the big issue we have from a lot of customers moving over from Heroku who really need a “migrate” step at the surface
Just had dinner and merged my PR. Looking at the project.
Not sure if you read our whole thread Angelo, but the only scenario we could imagine where someone would want to bypass a fresh build step on a redeploy is perhaps when changing some envars that don't affect the build step. However, envars are often used in build steps (minification of assets, bundling nuances, etc.). Imagine this use case:
1. a guy tries to get his node app running on railway
2. he starts with a development envar being set
3. he gets his branch deploying
4. he then wants to switch to production mode (which will change his build step output)
5. does he have to push a new commit just to get the branch sha to change such that a fresh deploy is triggered? if so, that seems confusing and non-ideal
I get that railway doesn't want to spend needless resources on pointless build steps that have already built, but like I said above, it is a huge assumption that when the sha hasn't changed, the build output won't change either.
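For instance, a hypothetical build script (not this project's code) where the same commit produces different output purely because of an environment variable, which a sha-keyed layer cache can't see:

```ts
// build.ts — same commit, different output depending on NODE_ENV;
// nothing in the repo changes, so a file/sha-keyed layer cache
// would happily reuse the development build for production.
const isProd = process.env.NODE_ENV === "production";

const buildOptions = {
  minify: isProd, // minify assets only for production
  sourcemap: !isProd, // ship source maps only in development
};

console.log("building with", buildOptions);
```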
Have you tried setting NIXPACKS_NO_CACHE to 1?
my understanding is that they use dockerfiles?
No, this is a Nixpacks build
I have not tried the NIXPACKS_NO_CACHE envar... Trying
No short action I can do- but I can promise you that this feedback is heard. We have some long term fixes to the build experience that we wanna tackle. (Brody was actually made privy to it)
my bad, I would have suggested the same environment variable, I could have sworn they were using Dockerfiles from having helped them in previous threads
Essentially we process 1.5M+ builds a month, and even when our failure rate is a fraction of a percent, that still means a real-life workload gets impacted. So again, sorry there.
What else of yours is still on Heroku?
We have:
- a large ruby/graphQL api app
- a large fullstack rails app
- a large node app that runs off the graphQL api app
- a large postgres db
- a large redis db
- then a bunch of other misc addons for jobs/workers and things like that
an in-container REPL is needed for that project ^
Heard
Just edited the list above to be more elaborate
We begrudgingly spend several thousand/mo with them
Won't be the first I heard of that...
Where are you based? (timezone)
Org is in Minneapolis. Our team is all over U.S.
Do you want to book some time with me so we can help you out with that? Finding these blockers is helpful for the team, it lights a fire under them to prioritize your requests.
Monday is a federal holiday so I will be off, but we can chat Tuesday?
That'd be great to meet/discuss. If it's good with you, my colleague @Ben Hutton would be a huge addition to the call. We've been on Heroku for ~15 years and he's well versed in what's preventing our full move over to railway specifically re: our ruby apps and postgres, and I'm more versed on node app needs.
The more the merrier!
Okay so I tried the NIXPACKS_NO_CACHE envar and it did in fact trigger a fresh build. Now if I were to hit the Redeploy action would it build another fresh build?
try it!
Solution
It appears NIXPACKS_NO_CACHE solves the problem and freshly builds on deploy and redeploys. Thanks for the help guys!
happy to hear it, though the issue still remains in other contexts, had you been using a dockerfile this wouldn't have solved your problem
I spoke too soon. It actually didn't solve it. It busts the cache the first deploy after, but not successive ones!
@Brody can you remove the marked solution?
answer overflow does not provide a way to unmark an answer actually
Jake recommended I try NO_CACHE=1 but that doesn't work either.
secret variable eh
like I said, I can't see this working unless railway provides a way for the user to disable layer cache, or you change up how your app works, and that's far from ideal
I'd be interested to know how this was done on heroku? do they have any build cache by default? if so, do they provide a way to stop that? do they charge for build?
Looking at the build logs before our call.
Okay- zooming way out: why do you need a redeploy every 8 hours? @steadymade
Ty, I am illiterate.
My sneaking suspicion is that we just aren't respecting either var now, which is weird. Going to tag Cooper in.
@Brody, heroku caches deps (although you can even prevent that via envars) but they don't cache the build step
I also want to apologize for testing your patience with the platform behavior, I can understand how massively frustrating this can be.
It's worth mentioning that when you first add a new envar, the next deploy seems to run a fresh build step, but successive redeploys do not.
if that's the case, for a short term solution I'd be more than happy to add an option to my cron service that will set a cache busting variable before re-deploying
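A sketch of that cache-busting idea, assuming Railway's public GraphQL API still exposes the variableUpsert and serviceInstanceRedeploy mutations (verify against the current API docs) and that the token and IDs are supplied as environment variables:

```ts
// cache-bust.ts — before redeploying, upsert a throwaway variable whose
// value changes every run, so the next build can't be served entirely
// from cache. Mutation names/shapes are assumptions based on Railway's
// public GraphQL API; check the current docs before relying on this.
const API = "https://backboard.railway.app/graphql/v2";

async function gql(query: string, variables: Record<string, unknown>) {
  const res = await fetch(API, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.RAILWAY_API_TOKEN}`,
    },
    body: JSON.stringify({ query, variables }),
  });
  return res.json();
}

async function main() {
  const { PROJECT_ID, ENVIRONMENT_ID, SERVICE_ID } = process.env;

  // 1. upsert a variable whose value differs on every run
  await gql(
    `mutation($input: VariableUpsertInput!) { variableUpsert(input: $input) }`,
    {
      input: {
        projectId: PROJECT_ID,
        environmentId: ENVIRONMENT_ID,
        serviceId: SERVICE_ID,
        name: "CACHE_BUST",
        value: Date.now().toString(),
      },
    },
  );

  // 2. redeploy the service so the new variable takes effect
  await gql(
    `mutation($environmentId: String!, $serviceId: String!) {
       serviceInstanceRedeploy(environmentId: $environmentId, serviceId: $serviceId)
     }`,
    { environmentId: ENVIRONMENT_ID, serviceId: SERVICE_ID },
  );
}

main();
```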
No problem Angelo. I'm rooting for Railway and know that y'all are crazy busy with growth. Hopefully these kinds of issues help solve some fundamental features that can enable the mass exodus off heroku, which is inevitable for the first competitor that can prove a seamless transition and a comparable/better platform.
Brody, interesting idea! Hopefully it doesn't have to come to that.
agree, but if it has to, I'm down to add it
Fun platform quirk from old Railway long ago- we would use variables as a way to bust cache before NO_CACHE, which would track with what we've done.
Anyway, I notified the team in our channel, hoping we get to this before our call today.
Wait, sorry, are you expecting the thing to rebuild every time the cron happens?
Because, we haven't done that for many months now :/. I'm very confused what's changed
Yes, I was expecting every redeploy to run the build step uncached. Which somehow worked for over a month. I've got the logs to prove it 😄
Can you link said logs? We changed that like, maybe 6mo+ ago
Doesn't make sense to me
yep, it's the link that's in our email thread:
https://railway.app/project/022b45c2-7088-4863-ac9f-236d869b1a7a/service/b0f468b3-78b1-46fe-8aee-dd5912b8f12c?id=8c76a1af-722b-42f9-b459-b680d08bfd54
somehow that one was fresh
You might have gotten lucky and landed on an instance that had no cache
k, perhaps it was intermittent, such that we didn't know it was caching the build step for over a month because fresh builds still happened often enough that the data stayed fresh. not sure.
GitHub: docker build --no-cache uses cache anyway :'( · Issue #4041 · docke...
however, is there no way to redeploy where the build step can be forced fresh to be uncached?
That's what the envvar does. I've never seen it not work
So I'm perplexed
what I'm seeing is, if you add/change an envar, the immediate redeploy after does run a fresh build step, but successive redeploys after do not
Does the cron template use the API?
yes
I'm not sure why there's a cron template frankly...
it allows you to restart the deployment instead of redeploying it
Well- a restart isn't going to trigger a new build, a restart essentially says, take this image and then restart it. But they want a redeploy...
well yes, it can do both, they are using it to redeploy
I'd hope...
Oh wait
AHHH it's a nixpacks build
NO_CACHE doesn't work for that ATM
Lemme PR that up
(Just an FYI I have no idea how this ever worked, but no matter, we shall soldier on :salute: )
you wouldn't be the first to assume they weren't using nixpacks
Pushed a PR, lemme get some eyes on it or at the very least let it build
PR merged. Deploying. Please hold
We're live. It SHOULD work on your next trigger
I see another deploy- can you check if everything is working nominally? I also sent Slack invites to all team members in the project, but it seems one of the emails is on a catch-all domain and the other isn't set to the same .org domain as the admin's.
cc @steadymade
@Angelo, right after Cooper's PR merged I redeployed and it was a fresh build. The cron job just triggered and also redeployed with a fresh build. So it looks like it's working. I removed the NIXPACKS_NO_CACHE envvar in favor of NO_CACHE since I think Cooper said it was more ideal. Ticket now resolved. Thanks for the help guys!
There's no reference of NIXPACKS_NO_CACHE in the entire monorepo codebase
Where did you get that one from?
huh
Well yea that defs doesn't work on Railway
NO_CACHE will do it tho
the nixpacks no cache variable will stop nixpacks from using cache mounts in the dockerfile at least
mm
interesting
anyway
the final solution: set a service variable NO_CACHE to 1, and if also using nixpacks, set NIXPACKS_NO_CACHE to 1 as well for good measure