Exploring `ExecutionPlan` for Reliability and Correctness in Effects
Morning folks!
I'm playing around with `ExecutionPlan`. I'd like to share how I'm thinking about it, and if anybody has any experience with it, I'd love to hear your thoughts.

- `ExecutionPlan` is for adding reliability to some effect which would otherwise be unreliable. You could add retries into it directly, but that would be more imperative and distract from the business logic.
- I'm sure in some cases inlining the retries is still the right choice, say if the prompt you send OpenAI is slightly different from the prompt you send Anthropic, for the same piece of logic.
- It adds reliability to the unreliable effect by working through a sequence of fallback strategies, each providing its own dependencies.
- This probably shouldn't be used for business logic.
- One interesting thought I'm having is: what if the results of an LLM are returned, but incorrect? (Say you have a way to roughly verify, e.g. "find me an article about TS not in this list of already-read articles" - you can check whether the returned article is in the list of already-read articles.) Should I use `ExecutionPlan` to increase correctness, not just reliability?
- I like how this is not in `@effect/ai`, but in `effect` itself. (It could be that I'm posting in the wrong channel.. if so, sorry!)
- One of the strengths of this is that it can be used more ergonomically with another AI framework, like LangChain, if someone wants to use Effect + LangChain.
- Details in the thread on this one - I'm losing type safety on the `while` condition for the error parameter. I'm hoping this is a skill issue on my part.