Return detailed error output from Gemini API calls. #510
Conversation
LGTM. This structure looks good to me, feel free to proceed! Will leave to @vivtsai to merge.
@@ -310,17 +311,22 @@ export async function getAgentChatAPIResponse(
     promptConfig.structuredOutputConfig,
   );

   if (response.status !== ModelResponseStatus.OK) {
     // TODO: Surface the error to the experimenter.
     return null;
Some ways I could see this being surfaced:
- An error icon on both the participant (in the cohort view) and the participant chat panel, with the latest error text shown on hover.
- An entry in the alerts panel (perhaps with a notification icon on the panel).
For starters, we'll include the error in a log message (and then yes, we can decide if we want an "error" log message to trigger some sort of visual or other behavior on the frontend).
@@ -49,10 +49,9 @@ function makeStructuredOutputSchema(
   };
   const type = typeMap[schema.type];
   if (!type) {
-    console.error(
+    throw new Error(
Do we want to throw here or should we just return an empty response with a structured output error?
Here and elsewhere
We'll catch the error in getGeminiApiResponse() and return an error response from there. Directly returning the error from here would be awkward, since this function only builds the schema, so throwing seemed cleaner to me.
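The catch-and-wrap pattern described here can be sketched roughly as follows. All names and types below are illustrative stand-ins, not the repo's exact code: the schema builder throws on an unsupported type, and the API wrapper converts that throw into an error response at the boundary.

```typescript
// Illustrative union type standing in for the real ModelResponse.
type ModelResponse =
  | {status: 'OK'; text: string}
  | {status: 'INTERNAL_ERROR'; errorMessage: string};

function makeStructuredOutputSchema(schemaType: string): string {
  const typeMap: Record<string, string> = {
    STRING: 'string',
    NUMBER: 'number',
    BOOLEAN: 'boolean',
  };
  const type = typeMap[schemaType];
  if (!type) {
    // Throwing keeps this function focused on building the schema;
    // the caller decides how to report the failure.
    throw new Error(`Unsupported structured output type: ${schemaType}`);
  }
  return type;
}

function getGeminiApiResponse(schemaType: string): ModelResponse {
  try {
    const schema = makeStructuredOutputSchema(schemaType);
    return {status: 'OK', text: `schema type: ${schema}`};
  } catch (e) {
    // The throw is converted into an error response at the API boundary,
    // so downstream callers never need a try/catch of their own.
    return {status: 'INTERNAL_ERROR', errorMessage: (e as Error).message};
  }
}
```

The benefit of this split is that only one function (the API wrapper) needs to know how errors are represented; helpers deeper in the call stack can simply throw.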
functions/src/api/gemini.api.ts (outdated)
if (!response.candidates) {
  return {
    status: ModelResponseStatus.UNKNOWN_ERROR,
    errorMessage: `Response unexpectedly had no candidates: ${response}`,
Nit: Make this error more clear (experimenters might not know what candidates are)
How about this? Experimenters probably won't be able to act on this error anyway, since it means the provider isn't returning the expected format, but hopefully the new wording makes it clearer that this is likely a provider/platform problem.
Restructure the ModelResponse interface to return error information:
- The `status` field contains an enum indicating whether the response was OK.
- `text` is populated only if the model returned a text response. (It will be empty for most error cases, but some errors, such as running out of output tokens, may leave partial text populated.)
- The `errorMessage` field contains the error message.
- The `parsedResponse` field isn't populated yet, but it will let us move structured output parsing closer to the model call and return a parsing error if the model returns badly-formatted data.

Errors are reported through the prompt testing endpoint, but are otherwise not used yet.
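The fields described above could be sketched roughly like this. The shape follows the description; the specific error status values and the `describe` helper are assumptions for illustration, not the repo's exact code:

```typescript
// Sketch of the restructured ModelResponse (names per the description;
// error status values are assumed for illustration).
enum ModelResponseStatus {
  OK = 'OK',
  UNKNOWN_ERROR = 'UNKNOWN_ERROR',
}

interface ModelResponse {
  status: ModelResponseStatus;
  // Populated only when the model returned text; may hold partial text
  // for some errors (e.g. running out of output tokens).
  text?: string;
  // Populated when status is not OK.
  errorMessage?: string;
  // Not populated yet; reserved for parsed structured output.
  parsedResponse?: unknown;
}

// Hypothetical helper showing how callers can branch on the status enum.
function describe(response: ModelResponse): string {
  return response.status === ModelResponseStatus.OK
    ? (response.text ?? '')
    : `error: ${response.errorMessage ?? 'unknown'}`;
}
```

Keeping the status as an enum (rather than a boolean) leaves room to distinguish error classes later, e.g. quota errors versus parsing errors.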
Only the Gemini API is fully supported so far. The OpenAI-compatible API will report UNKNOWN_ERROR for some errors, and the Ollama API will always report OK.