Filament Performance Issue on Production (Even with OPcache + filament:optimize)
Hi everyone! I’m facing a performance issue with my Laravel + Filament v3 app on the live server.
The app runs fast locally (Docker), but it feels noticeably slower in production, especially on pages with Tables and Dashboards.
What I’ve already done:
• APP_ENV=production, APP_DEBUG=false
• Ran php artisan filament:optimize
• OPcache is fully enabled (opcache.enable=1, opcache.enable_cli=1, proper memory settings)
• Ran all standard optimizations: config:cache, route:cache, view:cache
• Tried all previously suggested solutions here in Discord related to this topic
Yet, I’m still experiencing slowness.
25 Replies
Here's my network tab:

So is the database different?
Have you wrapped all the select filters and form options in closures?
Is the database indexed correctly?
@toeknee The database execution time:

42ms... that's nothing. Generally looks OK... could be worth checking that server's load
@toeknee


I wonder if your ms is for the current request... are you sure Debugbar isn't picking up the latest network request?
I’m using Clockwork, and it shows that all views are loading in 0ms.
@toeknee I’m also using a custom column in my Filament Table, and I’m wondering, does Filament cache the custom column fields, or are they re-evaluated on every request?
The only performance issues we had when moving to production came from code that loaded more data than needed.
For example, if you have a
Select::make()->options(Model::pluck('name', 'id')->toArray())
this will work fine with a few records but might end up blowing up in production.
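A deferred version of that select might look like this (a sketch, assuming Filament v3; the `Model` and the `limit(100)` cap are illustrative):

```php
use Filament\Forms\Components\Select;

// Sketch: the closure defers the pluck() until the select's options are
// actually needed, and limit() caps how many rows get hydrated.
Select::make('model_id')
    ->options(fn (): array => Model::query()
        ->limit(100) // hypothetical cap; avoid hydrating every row
        ->pluck('name', 'id')
        ->all());
```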
Debugbar really helps pick up obvious issues - eg we were hydrating 12k objects and only showing 30 of them 🤦
I would try to find one area that is noticeably slower (eg one table) and try to figure out what's different in production.
FWIW our dev is usually slower
Thanks @ChesterS, that's super helpful! I'll definitely review areas where we might be loading too much data unnecessarily, like in selects or table queries. I'll also use Clockwork to dig deeper, since Debugbar actually caused my page to crash when I enabled it in production 😅
FWIW, our dev environment is actually faster, so I'm focusing now on isolating a specific slow table to compare behavior.
Appreciate the tips!
WRT Debugbar - disable the view collectors (in debugbar.php)

Yeah, those 16k models look sus 😂
Also check your gates - you probably don't need all those checks if you limit the results in the query itself.
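One way to limit the results in the query itself (a sketch, assuming Filament v3; the `team_id` scope is purely illustrative):

```php
use Filament\Tables\Table;
use Illuminate\Database\Eloquent\Builder;

// Sketch: scope the table query so rows the user can't act on are never
// loaded, instead of running a gate/policy check per record per action.
public static function table(Table $table): Table
{
    return $table
        ->modifyQueryUsing(fn (Builder $query) => $query
            ->where('team_id', auth()->user()->team_id)); // hypothetical scope
}
```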
Anyway, gl hf 🙂
Sure, Thanks @ChesterS
@ChesterS Those 16k+ models are currently being loaded by the Select filter in the table. Is there a way to limit or defer that loading to improve performance?
Either use a ->relationship() or look into ->getSearchResultsUsing() in the documentation.
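Both approaches might look roughly like this (a sketch, assuming Filament v3 and a hypothetical Customer model):

```php
use Filament\Forms\Components\Select;

// Option 1: search the relationship instead of preloading every option.
Select::make('customer_id')
    ->relationship('customer', 'name')
    ->searchable();

// Option 2: fetch matching options on demand as the user types.
Select::make('customer_id')
    ->searchable()
    ->getSearchResultsUsing(fn (string $search) => Customer::query()
        ->where('name', 'like', "%{$search}%")
        ->limit(50) // hypothetical cap on results per search
        ->pluck('name', 'id'))
    ->getOptionLabelUsing(fn ($value): ?string => Customer::find($value)?->name);
```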
That's what I do - I don't know if there's a better/easier way
Currently I use ->relationship()
If you use relationship, it should not be pre-loading anything. Do you also have ->options()?
can you share that select?
I just reduce it using ->preload(false)
Also ensure any ->options() are closures if you are defining them and it's not a relationship.
@toeknee I just realized that each record has 9 actions, and when I removed them, the page load time dropped to 1.5 seconds. Is there a way to optimize this without removing the actions entirely?
What are the actions doing?
They open modal
Like adding notes, sending emails and changing status.
I would say ensure the actions have closures etc, but the more you add the more data that is added too. I have quite a few too and it's generally ok.
Could you explain why it needs to be a closure? I have several Selects whose options are plain arrays.
Plain arrays should be OK. Closures mean the options aren't evaluated until they're needed. If, for example, you build the options array from a DB call and it's not wrapped in a closure, that query runs eagerly every time the component is set up, which can snowball into repeated DB queries.
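That deferral can be illustrated without Filament at all (a minimal plain-PHP sketch; the "query" is faked with a counter):

```php
<?php
// Sketch: the eager array is built immediately; the closure version only
// runs its "query" when it is actually invoked.

$queryCount  = 0;
$fetchOptions = function () use (&$queryCount): array {
    $queryCount++;                        // stands in for a DB hit
    return [1 => 'Open', 2 => 'Closed'];
};

$eager = $fetchOptions();                 // query runs right now
$lazy  = fn (): array => $fetchOptions(); // query runs only when invoked

// At this point $queryCount is 1: only the eager version has hit the "DB".
// A component that never renders the lazy options never pays for the query.
```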