AI Gateway + Vertex AI Context Caching
Thanks for sharing. So the ask here is for AI Gateway to support tracking the costs of context caching through providers, including Google.
What does the response look like from Google when using context caching? Curious to see how it splits out input, output, and context caching tokens, because that is how we would track tokens to then calculate costs.
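For reference, a Gemini / Vertex AI generateContent response reports usage in a usageMetadata block, and when a cached content resource is referenced it includes a cachedContentTokenCount alongside promptTokenCount and candidatesTokenCount. Below is a rough sketch of splitting those counts into billable buckets; the per-token rates are placeholders (not real pricing), and it assumes promptTokenCount includes the cached tokens.

```python
# Minimal sketch: split usageMetadata into cached vs. uncached input and
# output tokens, then estimate cost. Rates are illustrative placeholders.

# Example usageMetadata payload, roughly the shape returned when a
# cachedContents resource is used in the request:
usage_metadata = {
    "promptTokenCount": 12000,         # total input tokens (assumed to include cached)
    "cachedContentTokenCount": 10000,  # portion of the prompt served from the cache
    "candidatesTokenCount": 350,       # output tokens
    "totalTokenCount": 12350,
}

# Hypothetical per-1k-token rates, for illustration only.
RATE_INPUT = 0.000125
RATE_CACHED = 0.00003125
RATE_OUTPUT = 0.000375


def split_and_price(usage: dict) -> dict:
    """Split prompt tokens into cached vs. non-cached buckets and price each."""
    cached = usage.get("cachedContentTokenCount", 0)
    prompt = usage.get("promptTokenCount", 0)
    output = usage.get("candidatesTokenCount", 0)

    uncached_input = prompt - cached
    return {
        "uncached_input_tokens": uncached_input,
        "cached_input_tokens": cached,
        "output_tokens": output,
        "estimated_cost": (
            uncached_input / 1000 * RATE_INPUT
            + cached / 1000 * RATE_CACHED
            + output / 1000 * RATE_OUTPUT
        ),
    }


if __name__ == "__main__":
    print(split_and_price(usage_metadata))
```

Note that cached content also carries a separate storage charge billed per token-hour while the cache exists, which would need to be tracked outside the per-request response.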

