add gemini support and remove default models #20
Conversation
2025-04-15_23-01-47_automated_concept_sae_eval_attempt_0_reflection1.pdf
Thank you. Would it be possible to make this a more minimal addition, focused on adding Gemini support and changing nothing else?
Adding Gemini support means generalising it more: the backend search only checks for OpenAI or Claude models, so it will fail for Llama, DeepSeek, or the other examples given in the LLM list here.
@conglu1997 I assume you mean to remove the examples? I deleted everything but the Gemini part, plus some bugs I found while running end-to-end.
@RichardScottOZ Yes, there are some hardcoded models/clients in the code. I think the best way would be to add something like https://github.com/BerriAI/litellm, but that was out of scope; I just wanted to test one run.
Yes, or note that the Llama ideation already uses OpenRouter, so that is another option for something like litellm.
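The hardcoded check described above could be generalised with a single prefix-based dispatch. A minimal sketch in plain Python follows; the function name, provider labels, and prefix table are illustrative assumptions, not taken from the AI Scientist v2 codebase:

```python
# Hypothetical sketch: replace scattered "is it OpenAI or Claude?" checks
# with one prefix-to-provider lookup. All names here are illustrative.

PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "o1": "openai",
    "claude-": "anthropic",
    "gemini-": "gemini",
    "llama": "openrouter",
    "deepseek": "openrouter",
}

def provider_for(model: str) -> str:
    """Map a model name to a backend provider instead of branching
    on a closed list of known model names."""
    name = model.lower()
    for prefix, provider in PROVIDER_PREFIXES.items():
        if name.startswith(prefix):
            return provider
    raise ValueError(f"Unknown model {model!r}; add a prefix mapping.")
```

With a table like this, supporting a new provider is one dictionary entry rather than edits at every call site; a library such as litellm does the same routing at much larger scale.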
Anyway, minor conveniences aside, AI Scientist v2 is very impressive. Well done.
ptal @conglu1997 |
See #37 |
@conglu1997 that PR only changes the models defined in llm.py; your codebase also uses ad-hoc clients defined in several other places.
Would it be possible to make a similar-style PR? I'm concerned that loads of new env variables, new imports, etc. are being introduced. The ideal would be the minimal change that makes this work.
Tried running it end-to-end using Gemini, ran into many errors and hidden default models, and tried to fix them here. I can split out or remove things like the example project. PTAL!
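The "hidden default models" problem mentioned above can be reduced to one configurable resolution point. A minimal sketch, assuming an environment variable override; the variable name and fallback value are made up for illustration and do not come from the repository:

```python
import os

# Hypothetical: a single place that resolves the default model, overridable
# via an environment variable instead of literals scattered through the code.
DEFAULT_MODEL_ENV = "AI_SCIENTIST_DEFAULT_MODEL"  # assumed name

def default_model(fallback: str = "gemini-2.0-flash") -> str:
    """Return the configured default model, or one documented fallback."""
    return os.environ.get(DEFAULT_MODEL_ENV, fallback)
```

Call sites would then use `default_model()` instead of hardcoding a model string, so switching providers end-to-end needs only one env variable.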