Ash TypeScript: Rich Phoenix Frontends, Simplified

Code and Stuff
YouTube
In this video, I explore Ash TypeScript, a new library that automatically generates TypeScript client code from your Ash resources. We'll build a stock explorer app using SEC data to demonstrate how this eliminates the complexity of designing APIs and building client libraries when you need JavaScript frontends for your Phoenix apps. What's co...
75 Replies
ZachDaniel
ZachDaniel3mo ago
LFGGGGGGGG Watching soon ❤️ I think the nested load not passing your types might be a bug actually @Torkan around minute 40
Torkan
Torkan3mo ago
I’ll check it out tomorrow! 😅
ZachDaniel
ZachDaniel3mo ago
Well done!! I watched on 1.25 since I'm in the car 😆 no popcorn for me this time 😭
RootCA
RootCAOP3mo ago
Yea sorry for the bug report via video 🙃
ZachDaniel
ZachDaniel3mo ago
Haha all good Means we have a fun update to post once fixed haha
RootCA
RootCAOP3mo ago
Storytelling and engagement! There’s always round two!
Torkan
Torkan3mo ago
Actually @Christian Alexander, the reason you're not getting autocomplete on the exchange is that the exchange resource isn't exposed in the rpc block 😅 You're getting the data since there is no policy blocking you from seeing it. We need both a better API and docs around this; open to suggestions, and there are probably some conventions we should follow as well. Maybe use the same pattern as ash_graphql and require resources to use the AshTypescript.Resource extension, @Zach? Then we can reject any loading etc. of resources that are not exposed entirely
RootCA
RootCAOP3mo ago
Honestly I loved how easy it was to access the data, in the same way I’d be able to from the Elixir side. I bet it would be hard to determine which fields and relationships are public, but it’d be incredible if that was the trigger for field type generation But if more ceremony would lead to a more secure system, I’d understand
ZachDaniel
ZachDaniel3mo ago
Yeah @Torkan, it should have prevented loading anything whose destination didn't have a type configured or wasn't in the RPC. I mean, good call. Posted the video on the orange site btw
Torkan
Torkan3mo ago
Yeah, ok, let’s do it the same as ash_graphql et al
Shahryar
Shahryar3mo ago
Thank you @Christian Alexander 🙏🏻♥️
RootCA
RootCAOP3mo ago
lol let’s see how much hate I get for throwing shade at Node backends
Torkan
Torkan3mo ago
Awesome video btw, forgot to mention that! 🙈❤️
ZachDaniel
ZachDaniel3mo ago
It's Hacker News, hate abounds. Sorry for subjecting you to it lol
RootCA
RootCAOP3mo ago
Haha no such thing as bad publicity
Torkan
Torkan3mo ago
Shouldn’t be that much work to short circuit the code generation for relationships that link to resources that aren’t exposed fortunately
ZachDaniel
ZachDaniel3mo ago
I think the code already isn't generated right?
Torkan
Torkan3mo ago
Yeah sorry, I meant the loading 😂
RootCA
RootCAOP3mo ago
By the way, YouTube is rolling out a new feature called Hype. Most users get 3 hypes per week, and it really helps new videos get discovered. If you have some hype points lying around, I’d appreciate your vote
Torkan
Torkan3mo ago
Hm, I looked around in the mobile app now, but I didn’t see anything about that feature, probably only out in the US for now?
ZachDaniel
ZachDaniel3mo ago
hyped it thrice
RootCA
RootCAOP3mo ago
I’m not sure where they’re at on the regional rollout, but thanks for looking! The Zach so nice, he hypes it thrice!
Shahryar
Shahryar3mo ago
Sorry, I think it is not active outside the US. I tested with an Iceland VPN; it is not active
Torkan
Torkan3mo ago
No hype to give then 😭 Ok, I've pushed a fix to main now for this, it'll give an unknown field error if you try to load a relationship that is not using the AshTypescript.Resource extension
RootCA
RootCAOP3mo ago
I suppose I’ll have to update the demo repo now that it’ll be broken on the latest ash_typescript. I get why it’s preferable to have it defined in the resource, but it definitely adds complexity.
Torkan
Torkan3mo ago
Yeah, that's a bummer But the sad part of life with immature libs
RootCA
RootCAOP3mo ago
I think it’s an example of the friction between rigid structures and desire paths, and an application of Hyrum’s Law
Torkan
Torkan3mo ago
Yeah, we already have existing conventions in place as well, so overall it's most likely better to have them all behave as similarly as possible. But I do see the argument that this could be viewed as a policy thing though
RootCA
RootCAOP3mo ago
Is there something special that has to be done to resources for regular Ash code interface functions to load related resources and fields? Or is the existing convention more around extensions like the JSON API and GraphQL libraries?
Torkan
Torkan3mo ago
I'm sure the others on the core team have chewed on this dilemma quite a bit already; I guess there are just as many arguments for as against letting policies be the single source of truth.
No, if you're on the "inside", it's just the policies that will deny you access.
I'm assuming that at some point someone wanted to be able to say "we just want to expose these things and only these things through this external form of communication with our system"; anything else should never leak
RootCA
RootCAOP3mo ago
Totally, I thought that’s what public was for
Torkan
Torkan3mo ago
Yeah, for fields that is always respected, there is no similar functionality for entire resources though
RootCA
RootCAOP3mo ago
public? exists on all relationship DSL options I could find (belongs_to, has_many, has_one, many_to_many)
Torkan
Torkan3mo ago
Ah yes, I meant even outside relationships: resources themselves don't have a feature similar to public? for attributes.
And then there was probably a need for "we want these things to be accessible through our internal JSON API that is for admins, but not GraphQL, which is meant for public consumption" etc. Zach or any of the others that have been around the block a bit longer than me can probably give some good reasons for how it came to be this way 😅
Ok, pushed the last thing now that is required for this to work: a verifier that all resources using AshTypescript.Resource need to pick a unique type_name as well.
Another thing that is also coming soon is the option to expose a resource field under a different name, which is needed if a field has a name that cannot be used in TypeScript, for example a calculation named is_admin? on a User resource, which is allowed in Elixir but doesn't work in TS.
That doesn't affect your demo though, fortunately 😅
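For illustration only (the renaming API mentioned above wasn't finalized at this point, and this helper is a made-up name, not ash_typescript code), a mapping like `is_admin?` to `isAdmin` could be sketched as:

```typescript
// Hypothetical sketch: turning an Elixir-style field name into a valid
// TypeScript identifier. Not the actual ash_typescript implementation.
function toTsIdentifier(elixirName: string): string {
  // Drop the trailing "?" or "!" that Elixir allows but TypeScript does not
  const base = elixirName.replace(/[?!]$/, "");
  // snake_case → camelCase
  return base.replace(/_([a-z0-9])/g, (_, c: string) => c.toUpperCase());
}

toTsIdentifier("is_admin?"); // "isAdmin"
toTsIdentifier("full_name"); // "fullName"
```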
ZachDaniel
ZachDaniel3mo ago
The reasoning behind this is that it's one thing to decide to expose a resource and its fields, and another thing for "adding one resource to an interface" to implicitly also add its entire graph of public relationships. So the pattern is always "all public fields, and all public relationships whose destination is also explicitly added to this interface"
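That rule can be sketched in a few lines; the types and resource names below are made up for illustration, not the library's internals:

```typescript
// Hedged sketch of the exposure rule described above: a relationship is
// generated only when it is public AND its destination resource is itself
// explicitly exposed in the interface.
type Relationship = { name: string; destination: string; public: boolean };
type Resource = { name: string; relationships: Relationship[] };

function exposedRelationships(resource: Resource, exposed: Set<string>): string[] {
  return resource.relationships
    .filter((r) => r.public && exposed.has(r.destination))
    .map((r) => r.name);
}

const company: Resource = {
  name: "Company",
  relationships: [
    { name: "exchange", destination: "Exchange", public: true },
    { name: "auditLogs", destination: "AuditLog", public: true },
  ],
};

// Only Exchange is exposed, so only the "exchange" relationship is generated.
exposedRelationships(company, new Set(["Company", "Exchange"])); // ["exchange"]
```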
Geril
Geril2mo ago
In ash_graphql, when exposing a read action, it can be configured as either get or list, which allows the built-in read action to be utilized in both contexts. Is there an equivalent approach available in typescript_rpc?
Torkan
Torkan2mo ago
Not currently, but you can add a filter on the id, then you'd get a list with just the one entry or an empty list
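The workaround above could look roughly like this on the client side; `listSpaces` is a stand-in for a generated list RPC function with illustrative data, not the real generated API:

```typescript
// Hedged sketch: emulating a "get" on top of a list action by filtering on
// id, then unwrapping the one-or-zero element list into Space | null.
type Space = { id: string; name: string };

// Pretend backend data, so the sketch is self-contained.
const spaces: Space[] = [{ id: "s1", name: "Lobby" }];

async function listSpaces(filter: { id: { eq: string } }): Promise<Space[]> {
  // Stand-in for the real generated function; the filter shape is illustrative.
  return spaces.filter((s) => s.id === filter.id.eq);
}

// A filtered list yields exactly one entry or an empty list.
async function getSpace(id: string): Promise<Space | null> {
  const results = await listSpaces({ id: { eq: id } });
  return results[0] ?? null;
}
```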
Geril
Geril2mo ago
Thanks for clarifying! 🙏 Do you know if native support for this (similar to get/list in ash_graphql) is planned for typescript_rpc, or should I handle this distinction on the frontend for now?
ZachDaniel
ZachDaniel2mo ago
@Torkan we should support that 👍
Torkan
Torkan2mo ago
Yeah, adding it to my (ever-growing) todo-list 😂
Geril
Geril2mo ago
I also tried defining a simple get action for the resource:
read :get do
get? true
end
and exposing it like this:
typescript_rpc do
resource Hatatitla.Spaces.Space do
rpc_action :space_by_id, :get
...
which generates the following client function:
export async function spaceById<Fields extends SpaceByIdFields>(
config: {
fields: Fields;
headers?: Record<string, string>;
fetchOptions?: RequestInit;
customFetch?: (input: RequestInfo | URL, init?: RequestInit) => Promise<Response>;
}
): Promise<SpaceByIdResult<Fields>> {
From what I can tell, it doesn't seem possible to actually use get actions in this setup. Is this the case, or have I misunderstood how it's supposed to work?
Torkan
Torkan2mo ago
Ah right, you need to either use get_by [:id] or manually add argument :id, :uuid, allow_nil?: false for that action Then the typescript generation will see the arguments, and your function will start accepting input: {id: 'blabla'} as well 😎
Geril
Geril2mo ago
oh I see, added get_by [:id] and it added input: SpaceByIdInput; looking like:
export type SpaceByIdInput = {
id?: UUID;
};
into spaceById(...
Torkan
Torkan2mo ago
Yeah, just adding get? true doesn't add any way of filtering, you have to add that manually, since you might want to filter on something else than the id
ZachDaniel
ZachDaniel2mo ago
@Torkan we can also let someone else take a stab. "PRs welcome" 😂
Torkan
Torkan2mo ago
get_by is the convenience version that also sets up the argument etc. for you. But like you noticed, get_by's arguments are optional 😅 I don't think you can set them to mandatory, so if you want the id to be required on the front-end, you have to add the argument yourself and set it with allow_nil?: false.
Yeah, adding that particular feature is probably a bit complicated actually; it requires you to grok the entire logic flow in order to see where and how it should be done 😅
ZachDaniel
ZachDaniel2mo ago
We should make those arguments required... It doesn't make sense for them to be optional 🤔 it's a breaking change though unfortunately
Torkan
Torkan2mo ago
Yeah, I reckon it's because you can declare multiple args, and all of them might do the job on their own if you have several unique values on each record. We can maybe start accepting a keyword list? get_by [:email, id: [allow_nil?: false]] for example. Then apply the arg options if it's a tuple when we're iterating
ZachDaniel
ZachDaniel2mo ago
Hm... Get by isn't multiple args that might do it on their own though is it? The idea is that both args together identify a resource
Torkan
Torkan2mo ago
Ah, that could be the case, I just assumed that was the rationale for why the args weren't configured with allow_nil? false 😅
Geril
Geril2mo ago
Circling back to your proposal of adding something to your "todo list" 😄, I wanted to propose a few potential improvements that could make it even more flexible for frontend integrations.

For my startup, we're currently using Graphism, and I've built a custom GraphQL Codegen plugin that generates client code very similar to what ash_typescript does, but through a graphql layer. That package uses gql-query-builder under the hood, so I already have a client with a separated UI layer and strongly typed functions for my "actions." The fields syntax you've chosen in ash_typescript is almost identical to the one I implemented, making it immediately feel natural and very appealing to give it a test. Right now, I'm testing Ash for a new backend and trying out this package as part of my frontend client (a separate app written in Next.js).

One challenge I've hit is that the generated client assumes a single static URL or a custom client per function. I'd love to have a way to dynamically resolve the base URL on the frontend, for example by reading from environment variables. Something like the syntax used in ex_aws: {:system, "NAME"} could work well. A possible approach could be allowing configuration like:
base_url {:env, "BACKEND_URL"}
which would then translate in the generated TypeScript client to something like:
process.env.BACKEND_URL
Alternatively, a more flexible mechanism could be to allow a custom function for resolving the base URL (e.g., base_url: () => process.env.BACKEND_URL), which would give more control in multi-environment setups. Another idea is about error handling. It might be useful to introduce an optional config flag that throws on failed responses instead of returning { success: false }. I mean this section:
if (!response.ok) {
return {
success: false,
errors: [{ type: "network", message: response.statusText, details: {} }],
};
}
This behavior is fine for client-side React components, but in Next.js server components, throwing errors is often more natural and leads to better integration with React's built-in error boundaries and type narrowing. A flag like throw_on_error: true could improve developer ergonomics in such cases.

To take this a step further, perhaps a configurable "transformer" or "link" mechanism could be added, similar to how Apollo or TanStack Query use middleware. This would allow defining global request/response transformations or even injecting a custom fetch client at the configuration level. It would make it easier to maintain a shared "global" client in larger apps, or to publish the generated SDK as a standalone NPM package decoupled from the backend's codebase.

Finally, as the API grows, the generated TypeScript file can become quite large, which not only affects IDE responsiveness but can also slow down build and load times in larger codebases, especially when Babel or tree-shaking isn't optimally configured. It might be helpful to introduce a flag to split the generated output by domain or resource, while still generating index.ts files for developers who prefer to import everything from a single global entry point. This approach would maintain backward compatibility and allow projects with proper tree-shaking to continue working efficiently, while also improving scalability for larger or more modular codebases. A possible folder structure could look like this:
sdk/
actions/
users.ts
projects.ts
spaces.ts
index.ts
types/
shared.ts
users.ts
projects.ts
spaces.ts
index.ts
validations/
users.ts
projects.ts
spaces.ts
index.ts
utils/
buildCSRFHeaders.ts
index.ts
index.ts
This would make the SDK more modular and manageable for large codebases, while still keeping type safety and clear boundaries between domains.

Just wanted to share these thoughts in case they align with your roadmap. I think having this flexibility would make ash_typescript incredibly useful not only for integrated apps but also for standalone SDK generation and multi-environment setups.

Thanks again for all the work. This package has huge potential, and I'm happy to contribute ideas and try some code. So far I have very limited knowledge of Ash and Elixir in general, so I think it will take me some time to actually open a PR 🙁
Torkan
Torkan2mo ago
Yeah, escape hatches for both the endpoints and how to handle unsuccessful actions are good ideas 🤩 I'll try to get them added soon, as functions you can write yourself that the generated code can import and then use.

Regarding file-splitting, I'd be open to someone putting in the effort to make it happen, but right now it doesn't pass the cost/value check for me to start looking into 😅

Just added support in main now for adding your own custom functions for setting the endpoints, and for handling error responses. Check out the updated README; I'd love some feedback if you take it for a spin 🙏
Geril
Geril2mo ago
I just tested the new dynamic endpoint configuration and it works great; a great improvement, especially for standalone frontends! 🙌 The approach with {:imported_ts_func, ...} and import_into_generated feels flexible and clean. One small idea that came to mind: it might also be worth supporting a direct environment-variable-based mechanism, like:
run_endpoint: {:env, "import.meta.env.VITE_RUN_ENDPOINT"}
# or for Next.js / generic node.js:
run_endpoint: {:env, "process.env.NEXT_PUBLIC_RUN_ENDPOINT"}
That would avoid the need for the extra TypeScript function and import_into_generated block when the value can just come straight from the environment. Not a blocker at all (I'm perfectly fine with the current setup!), just something that could simplify configs even further.

I am messing around with a custom error handler function as well, but still trying to wrap my head around the types, so I will get back to that a bit later today
Torkan
Torkan2mo ago
Nice, glad to hear it worked 😎 Yeah, an :env option also makes sense.

Technically, as of right now, the error handler must either throw, or return {success: false, errors: <data matching the error type>[]}
Geril
Geril2mo ago
Regarding the error handling, I've been thinking a bit about how it works right now. At the moment, the generated RPC functions only handle "hard" network failures (response.ok === false). But backend errors like forbidden access, invalid arguments, etc., still bubble up from response.json() and aren't caught by the rpc_error_response_handler.

One idea I had was to have a global response handler that could be imported at the top of the generated file and used automatically by all RPC functions. That way:
- Errors (network or API) could be thrown centrally, so frontend code wouldn't need to check if (success) everywhere
- The return type could be inferred from the handler, keeping things fully type-safe
- It could also allow any additional processing (logging, transforming results, enriching errors) all in one place
Torkan
Torkan2mo ago
Yep, basically like a middleware that gets the rpc results and returns the rpc results. It wouldn't be able to alter the results in a way that breaks with the typing though. Then you can throw if you want to in the error response handler, and then do whatever you want for all rpc results, both successful or not. If it makes sense to throw when the rpc result is unsuccessful due to being unauthorized, for example, you can do that 😎

I pushed an update to main just now: instead of :imported_ts_func, you now have :runtime_expr, which allows you to insert any code that will be evaluated, allowing for both function calls and process.env and so on. Also updated the usage instructions in the readme
Geril
Geril2mo ago
Using :runtime_expr instead of :imported_ts_func opens up a lot more possibilities. Being able to directly evaluate expressions (like process.env access or inline function calls) makes this much more powerful and adaptable for different setups 💪 Also, great job on the README update; the explanation is pretty clear and easy to follow, even for frontend devs setting this up on the backend 😄

I finally found some time to explore the concept of the "custom handler" a bit further and built a small example to see how flexible it could get. I started by defining two shared types (that could be used in generated code):
export type ApiSuccessResponse<T> = {
success: true
data: T
}

export type ApiErrorResponse = {
success: false
errors: {
type: string
message: string
fieldPath?: string
details: Record<string, string>
}[]
}

export type ApiResponse<T> = ApiSuccessResponse<T> | ApiErrorResponse
Then, in the generated function, instead of returning Promise<SpaceByIdResult<Fields>> I switched it to:
Promise<ReturnType<typeof apiClient.customHandler<InferSpaceByIdResult<Fields>>>>
which lets me do:
const response = await fetchFunction(apiClient.getRunEndpoint(), fetchOptions)
return await apiClient.customHandler<InferSpaceByIdResult<Fields>>(response)
The custom handler implementing current behaviour looks something like this:
export async function customHandler<T>(response: Response) {
if (!response.ok) {
return {
success: false,
errors: [{ type: 'network', message: response.statusText, details: {} }]
} as ApiErrorResponse
}

const result = (await response.json()) as ApiResponse<T>
return result
}
So the component side can still do the usual check:
const response = await spaceById({ fields: ['id', 'name'], input: { id } })

if (response.success) {
console.log(response.data?.id)
} else {
console.log(response.errors)
}
But when I update the handler to something like:
export async function customHandler<T>(response: Response) {
const result = (await response.json()) as ApiResponse<T>
if (response.ok && result.success) {
return result.data
}
throw new Error('RPC call failed')
}
I can simplify the component usage to:
const response = await spaceById({ fields: ['id', 'name'], input: { id } })
console.log(response?.id)
This way, the middleware (or handler) becomes a customization point, so each app or SDK setup can decide whether to return { success: false } or throw an error, depending on its preferred flow. It should also be fully type-safe, since the handler knows both the success and error shapes (hopefully I haven't missed something). Basically, this makes the middleware a great "post-processor", allowing developers to choose how they want RPC errors to be handled globally without changing the generated client code.

I was thinking about the custom handler idea a bit more last night and tried to push it further in terms of flexibility, mainly so the middleware can be partially controlled from the caller side (for example, to disable logging or pass through a request ID). I added a shared type for that:
export type ClientContext = Record<string, unknown>
Then I extended the generated function config to include a context, which automatically adapts its type based on the handler definition:
context?: Parameters<typeof apiClient.customHandler<InferSpaceByIdResult<Fields>>>[1] extends undefined
? ClientContext
: Parameters<typeof apiClient.customHandler<InferSpaceByIdResult<Fields>>>[1]
And in the generated call:
return await apiClient.customHandler<InferSpaceByIdResult<Fields>>(response, config.context)
This way, the handler becomes a lot more flexible. For example, the generic version (that relies on generated "loosely typed" context):
export async function customHandler<T>(response: Response, ctx?: ClientContext) {
const result = (await response.json()) as ApiResponse<T>
if (response.ok && result.success) return result.data
throw new Error('RPC call failed')
}
Or a "stricter" version with a defined (per app/sdk) context type:
export type CustomHandlerContext = {
requestId?: string
disableLogging?: boolean
}

export async function customHandler<T>(response: Response, ctx?: CustomHandlerContext) {
const result = (await response.json()) as ApiResponse<T>
if (response.ok && result.success) {
if (!ctx?.disableLogging) console.log('Response received', ctx?.requestId)
return result.data
}
throw new Error('RPC call failed')
}
Now I can call it like this:
const response = await spaceById({
fields: ['id', 'name', 'isPublic'],
input: { id },
context: { requestId: 'my-request-id-123', disableLogging: true }
})
(I could call it with the previous one as well, but I have much more control over the context type this way)

This gives a nice level of runtime control while keeping things strongly typed. Apps using the SDK can fine-tune behavior per request without touching global logic.

And since we're here, it would also be great to allow a customFetch to be provided globally for the whole generated client (not just per function). Per-function overrides are still great, but having a client-wide default would make configuration much cleaner for most use cases.
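The client-wide default proposed here could be sketched like this; all names are illustrative and not part of the generated client:

```typescript
// Hedged sketch of a client-wide customFetch default with the existing
// per-call override still taking precedence.
type FetchLike = typeof fetch;

let globalCustomFetch: FetchLike | undefined;

// Configure once, e.g. in app bootstrap code.
function setGlobalCustomFetch(fn: FetchLike): void {
  globalCustomFetch = fn;
}

// Resolution order: per-call override, then the global default, then fetch.
function resolveFetch(perCall?: FetchLike): FetchLike {
  return perCall ?? globalCustomFetch ?? fetch;
}
```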
Torkan
Torkan2mo ago
Yes, this is good stuff, agreed 😎 It's technically possible to add a global replacement for fetch, but several frameworks that have a custom fetch function don't expose that fetch function in a global way; you will often receive it as a function argument for the specific routes you define. And then it is "pre-augmented" with stuff relevant for that particular route etc.
Geril
Geril2mo ago
I would "vote" for having both options, to be able to provide it globally or per function as is done now
Torkan
Torkan2mo ago
But yeah, for those that actually have a straightforward global replacement, having the ability to configure it directly makes sense 😛 I probably won't have time to get started on this until sometime next week though; my life's backlog has been piling up a little lately 😂
Geril
Geril2mo ago
That sounds amazing (I mean the part about implementing it - not about the piling backlog 😄 ), I’d be super grateful if it even makes it in at some point 🙌 I’m genuinely impressed by how fast things around Ash are evolving, I have no idea when you guys even sleep if features keep popping up at this pace 😄
Torkan
Torkan2mo ago
Haha, in terms of AshTypescript, all the hard stuff (mapping ash resources, actions and fields, and generating the code for those things) is really done at this point And adding convenience functionality on top of it isn't that hard fortunately But yes, the overall velocity of progress in the ecosystem is very cool, even when it comes to harder challenges 😅
Geril
Geril2mo ago
I would bet writing types for field selection was a nightmare 😄
Torkan
Torkan2mo ago
Haha, it wasn't completely without its challenges 😅 The code is a little bit messy atm, I will organize it a bit better soon, but if you're curious most of the nitty-gritty stuff is inside the two codegen.ex-files in the repo
Geril
Geril2mo ago
I am already trying to follow the changes you're implementing on GitHub to understand it a bit better 😄

Oh, and one more thing I just noticed that could be a nice little improvement 😄 It might be worth adding support for something like a default_scalar_type option (similar to what the GraphQL codegen supports). Right now, the generated client uses a lot of anys, which makes stricter setups (or picky ESLint configs 😅) complain "a bit". Allowing this to be configured, e.g., defaulting to unknown, would make it more flexible and keep the type safety tighter out of the box.
Torkan
Torkan2mo ago
Ah right, can you show me some concrete examples of what ends up typed as any? In terms of functionality similar to default_scalar_type, using type_mapping_overrides should do the trick, I think
Geril
Geril2mo ago
for example, inside of filters:
desc?: {
  eq?: Record<string, any>;
  notEq?: Record<string, any>;
  in?: Array<Record<string, any>>;
};
or:
type InferFieldValue<T extends TypedSchema, Field> = Field extends T['__primitiveFields']
  ? Field extends keyof T
    ? { [K in Field]: T[Field] }
    : never
  : Field extends Record<string, any>
Torkan
Torkan2mo ago
Right, how is the desc attribute defined in your ash resource?
Geril
Geril2mo ago
attribute :desc, :map do
description "Description stored as structured JSON"
allow_nil? true
public? true
end
(for storing TipTap's json tree) I tried:
type_mapping_overrides: [
{"any", "unknown"}
]
but still had any at same places
Torkan
Torkan2mo ago
Ah right, since that is a map without any field constraints, it will get set as any, or Record<string, any> to be precise. TipTap uses some specific fields at the root if I recall correctly? So you can probably add some field constraints that match the top-level fields. The most idiomatic approach is to create a NewType that is a subtype of map, add the field constraint spec etc., and then add def typescript_type_name, do: "TipTapContent" for example
Geril
Geril2mo ago
yeah they got that "document" node there, but it's regular json.
{
"type": "doc",
"content": [
{
but I am trying to go after all "any" types in generated code to be able to flip them to "unknown"
Torkan
Torkan2mo ago
Yes, that should be an option as well: being able to toggle between any & unknown. But for this case, since you know the structure TipTap uses, adding a type would be the optimal way
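On the TS side, a dedicated type for that attribute could be sketched like this; it matches the { type: "doc", content: [...] } shape shown above, with the type name mirroring the typescript_type_name idea (everything here is an assumption, not generated code):

```typescript
// Hedged sketch of typing the TipTap "desc" attribute. The guard also shows
// why unknown beats any: an unknown value must be narrowed before use,
// whereas any would silently pass through.
type TipTapNode = {
  type: string;
  content?: TipTapNode[];
  [key: string]: unknown;
};

type TipTapContent = TipTapNode & { type: "doc" };

// Narrows an unknown payload to TipTapContent by checking the root node type.
function isTipTapContent(value: unknown): value is TipTapContent {
  return (
    typeof value === "object" &&
    value !== null &&
    (value as { type?: unknown }).type === "doc"
  );
}
```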
ZachDaniel
ZachDaniel2mo ago
You guys should move over to #ash_typescript 🙂
Torkan
Torkan2mo ago
Yes, good point, this is useful for everyone checking out the extension 😅