Database Functions vs Edge Functions
I've been learning how to write Edge Functions this last week, which has meant learning basic JavaScript and getting a CLI environment running.
I see the advantage of them for opening up APIs that I can call externally from some other app. However, all of the functions I'll want to use involve calling some other external API and pulling in fields. Given that I'm more comfortable with Python (and that Postgres stored procedures support it), I'm wondering if there's any reason not to just write all those external API calls as Postgres database functions? If the answer is that I should just use Postgres functions, then the remaining advantages of having Supabase would be:
1. Authentication
2. Native integration with WeWeb
Is there any other advantage?
9 Replies
Supabase hosted does not support python database functions as far as I know.
https://github.com/orgs/supabase/discussions/2239
Great answer thanks!
I've been running the local CLI version here for a week or so and it doesn't seem too troublesome. But then again I haven't deployed any actual production code.
Is there any reason to think that a self-hosted version (on Azure, for example) would be more troublesome than the local CLI version? I can imagine managing updates or deploying code (from local to self-hosted cloud) could be pesky. The point of this is saving time, because I find Python easier to write than JavaScript. If taking on the task of self-hosting will eat up that time, then it's not worth it. Any thoughts on this question?
I don't self-host myself. You'd be running your own server, and everything is set up in config files. It's more complex than local CLI dev, but mostly it comes down to dealing with server security, ports, etc.
You installed the Python language extension in the local CLI? Also, that link was for Postgres 15; I'm not sure whether installing it on 17 is any different. Supabase has hired a dedicated dev this month to work on making self-hosting easier. You can also look at the self-hosting tag here to get a feel for what goes on.

I suppose the natural follow-on question is: assuming I'm writing JavaScript to call an external API (rather than accept an API call), why would I want to put that into an Edge Function rather than a normal database function?
Not going to be doing that anymore either...
But very few users use anything other than SQL or plpgsql functions.
Edge Functions run near the user and are a good fit when you have a lot of work surrounding a few DB calls, other APIs to pull from, or Storage files. They don't eat up database CPU/memory, and you have lots of repository code you can pull in.
Database functions are good when you have several DB requests that need to run or interact with each other. They are not good if you have to make an API request from them. The http extension is synchronous and locks up the connection to the DB until it finishes. pg_net is async, but it's poor for fetching data or handling errors because, being async, it does not wait for the data reply.
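To make the "work around an external API" case concrete, here's a minimal sketch of what such an Edge Function might look like. This assumes a Deno runtime; the endpoint path, query parameters, and `apiKey` are hypothetical placeholders, not any real vendor's API:

```typescript
// Minimal sketch of a Supabase Edge Function that proxies an external API.
// The "applications/searchapp/" path and parameter names are hypothetical.

// URL construction is kept in a pure function so it can be tested
// without making a network call.
export function buildSearchUrl(
  base: string,
  keyword: string,
  platform: string,
  country: string,
): string {
  const url = new URL("applications/searchapp/", base);
  url.searchParams.set("keyword", keyword);
  url.searchParams.set("platform", platform);
  url.searchParams.set("country", country);
  return url.toString();
}

// In a deployed Edge Function this would be wrapped in Deno.serve, e.g.:
//
// Deno.serve(async (req) => {
//   const url = buildSearchUrl("https://example.com/api/v1/", "facebook", "ios", "us");
//   const res = await fetch(url, { headers: { Authorization: apiKey } });
//   return new Response(await res.text(), {
//     headers: { "Content-Type": "application/json" },
//   });
// });
```

The key point from the reply above: the `fetch` waits inside the function's own runtime, not while holding a database connection open.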

It does look possible to run HTTP jobs from plpgsql. For example, this function (below) returns and stores results. But the JSON parsing functionality doesn't work as documented, and I'm ready to give up on it. Without being able to parse further before storing, I end up storing a huge JSON string that I'll have to parse later in SQL, which is a pain.
Thanks!
CREATE OR REPLACE FUNCTION get_hexnode_ios_app_results() RETURNS TRIGGER AS $$
DECLARE
    http_option TEXT := 'GET';
    authorization_header TEXT := '**';
    base_url TEXT := 'https://mylizardmdm.hexnodemdm.com/api/v1/';
    keyword TEXT := 'facebook';
    platform TEXT := 'ios';
    country TEXT := 'us';
    -- String concatenation in Postgres needs the || operator
    endpoint TEXT := base_url || 'applications/searchapp/?keyword=' || keyword
        || '&platform=' || platform || '&country=' || country;
    request http_request;
    response http_response;
    results jsonb;  -- jsonb to match to_jsonb() below
    content text;
BEGIN
    request := (
        http_option,
        endpoint,
        ARRAY[('Authorization', authorization_header)]::http_header[],
        null,
        null
    )::http_request;
    response := http(request);
    results := to_jsonb(response);
    -- ->> (not ->) extracts the field as text; -> returns jsonb,
    -- which is the piece the docs make easy to miss
    content := results ->> 'content';
    UPDATE public.ios_apps
    SET h_app_name = content
    WHERE h_app_name IS NULL
      AND app_lookup = NEW.app_lookup;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
The point you make about edge function latency makes sense to me, and is an interesting one. The primary latency issue I have is that users will send me screenshots of their list of apps, then I'll need to parse the screenshots for text and send a series of slow API calls to look up the categories of those apps. I'd been assuming they'd just have to sit there till that's done and they can work with those categories. But it seems possible that I could parse the images and send the API calls as the text comes in, so that the user's page can begin to populate with categories before they're all parsed/determined. I don't know yet how to set up a table data entry form where I pre-populate data before the data is available in the database, but it seems like that could be possible. For a user with a lot of apps it could save a minute or two. A problem for another day 🙂
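That "populate as results arrive" idea can be sketched without any Supabase specifics. In this sketch, `lookupCategory` and `saveRow` are hypothetical stand-ins for the slow external API call and the database write; the point is only that the lookups run concurrently and each row is written as soon as its own lookup resolves, rather than after all of them finish:

```typescript
// Sketch: fire all category lookups concurrently and persist each result
// as it arrives, so the user's table starts filling in immediately.
// lookupCategory and saveRow are hypothetical placeholders for the real
// external API call and database insert.
export async function populateIncrementally(
  apps: string[],
  lookupCategory: (app: string) => Promise<string>,
  saveRow: (app: string, category: string) => Promise<void>,
): Promise<void> {
  // Each lookup/save pair runs independently; fast lookups are saved
  // without waiting for the slowest one.
  await Promise.all(
    apps.map(async (app) => {
      const category = await lookupCategory(app);
      await saveRow(app, category);
    }),
  );
}
```

On the client side, the page could then subscribe to inserts on that table (e.g. via Supabase Realtime) and render categories as the rows land, instead of blocking on the full batch.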
Yes, as I said, plpgsql can make HTTP calls, BUT they are synchronous, and you will lock up a DB connection (with all its overhead) while waiting for the response. This should be done sparingly, and only when there is an overwhelming benefit to using a DB function over an Edge Function.
Right. Such as if you have downstream triggers that need to wait for the dependency.