Description
When communicating with qwen3-32b via streaming, if the agent calls a tool that takes no arguments and the model returns the arguments as '', sending the tool result back to the model after the call completes raises an error.
I think there are 3 main reasons for this problem:
the model itself (the issue does not occur with the deepseek-chat API, but is triggered by a locally deployed qwen3-32b)
streaming (not triggered when stream=False)
a tool that requires no parameters
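To see why streaming matters, here is a minimal sketch (not agno code; the chunk shape is a simplified assumption) of how a streaming client accumulates tool-call argument deltas. For a parameterless call, the model may never emit an "arguments" fragment, so the accumulated string stays '' instead of '{}':

```python
def accumulate_arguments(deltas):
    """Concatenate the "arguments" fragments from streamed tool-call chunks."""
    return "".join(d.get("arguments", "") for d in deltas)

# Hypothetical chunk sequence for a parameterless call: the model names the
# function but never sends an "arguments" delta, so nothing is accumulated.
chunks = [{"name": "get_number"}]
print(repr(accumulate_arguments(chunks)))  # ''

# By contrast, a call with arguments streams JSON fragments that join cleanly.
chunks_with_args = [{"name": "add"}, {"arguments": '{"a": 1,'}, {"arguments": ' "b": 2}'}]
print(accumulate_arguments(chunks_with_args))  # {"a": 1, "b": 2}
```

In the non-streaming case the server returns the full arguments field in one piece, which may explain why stream=False does not trigger the bug.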
Steps to Reproduce
Select qwen3-32b as the model
Use stream=True
Select a tool that requires no parameters
Agent Configuration (if applicable)
Minimum Implementation Code:
import random

from agno.agent import Agent
from agno.models.openai.like import OpenAILike
from agno.tools import tool


@tool(show_result=True)
def get_number() -> str:
    """Get the number."""
    # In a real implementation, this would call a number API
    random_number = random.randint(1, 100)
    return f"The number is {random_number}."


model = OpenAILike(
    id="qwen3-32b", base_url="http://127.0.0.1:8080", api_key="sk-xxxxx"
)
agent = Agent(model=model, tools=[get_number], debug_mode=True)
agent.print_response("Give me a random number.", stream=True)
Expected Behavior
The agent calls the tool and outputs its result.
Actual Behavior
PS D:\Code\agent\agno_bug> & D:/Code/agent/agno_bug/.venv/Scripts/python.exe d:/Code/agent/agno_bug/main.py
DEBUG ******************************************* Agent ID: d70c8231-afc4-4770-8ff7-812fc329179d *******************************************
DEBUG ****************************************** Session ID: 4a93145c-b17e-4f0a-aace-08780e3fe147 ******************************************
DEBUG *************************************** Agent Run Start: 4d03b13d-f073-4f03-b938-ffa1b1a242e9 ****************************************
DEBUG Processing tools for model
DEBUG Added tool get_number
DEBUG ---------------------------------------------------- OpenAI Response Stream Start ----------------------------------------------------
DEBUG ---------------------------------------------------------- Model: qwen3-32b ----------------------------------------------------------
DEBUG ================================================================ user ================================================================
DEBUG Give me a random number.
DEBUG messages in stream: [{'role': 'user', 'content': 'Give me a random number.'}]
DEBUG ============================================================= assistant ==============================================================
DEBUG <think>
Okay, the user is asking for a random number. Let me check the tools available. There's a function called get_number with no parameters required. Since the user wants a random number, I should call that function. But wait, the function's description just says "Get the number." Does it generate
a random one or return a fixed value? The parameters are empty, so maybe it's designed to return a random number by default. I'll proceed to call
get_number without any arguments. I need to make sure the tool call is correctly formatted in JSON within the XML tags. Alright, that should do it.
</think>
DEBUG Tool Calls:
- ID: 'chatcmpl-tool-1077bda4db6941d1936130b9f9d57102'
Name: 'get_number'
DEBUG ************************************************************* METRICS **************************************************************
DEBUG * Tokens: input=139, output=144, total=283
DEBUG * Time: 3.3514s
DEBUG * Tokens per second: 42.9673 tokens/s
DEBUG * Time to first token: 0.9840s
DEBUG ************************************************************* METRICS **************************************************************
DEBUG Getting function get_number
DEBUG Running: get_number()
DEBUG ================================================================ tool ================================================================
DEBUG Tool call Id: chatcmpl-tool-1077bda4db6941d1936130b9f9d57102
DEBUG The number is 67.
DEBUG *********************************************************** TOOL METRICS ***********************************************************
DEBUG * Time: 0.0022s
DEBUG *********************************************************** TOOL METRICS ***********************************************************
DEBUG messages in stream: [{'role': 'user', 'content': 'Give me a random number.'}, {'role': 'assistant', 'content': '<think>\nOkay, the user is asking for a random number. Let me check the tools available. There\'s a function called get_number with no parameters required. Since the user wants a random number, I should call that function. But wait, the function\'s description just says "Get the number." Does it generate a random one or return a fixed value? The parameters are empty, so maybe it\'s designed to return a random number by default. I\'ll proceed to call get_number without any arguments. I need to make sure the tool call is correctly formatted in JSON within the XML tags. Alright, that should do it.\n</think>\n\n', 'tool_calls': [{'id': 'chatcmpl-tool-1077bda4db6941d1936130b9f9d57102', 'type': 'function', 'function': {'name': 'get_number', 'arguments': ''}}]}, {'role': 'tool', 'content': 'The number is 67.', 'tool_call_id': 'chatcmpl-tool-1077bda4db6941d1936130b9f9d57102'}]
ERROR API status error from OpenAI API: Error code: 400 - {'object': 'error', 'message': 'Expecting value: line 1 column 1 (char 0)', 'type': 'BadRequestError', 'param': None, 'code': 400}
┏━ Message ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┃
┃ Give me a random number. ┃
┃ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┏━ Tool Calls ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┃
┃ • get_number() ┃
┃ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
┏━ Response (3.4s) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ┃
┃ <think> ┃
┃ Okay, the user is asking for a random number. Let me check the tools available. There's a function called get_number with no parameters required.     ┃
┃ Since the user wants a random number, I should call that function. But wait, the function's description just says "Get the number." Does it generate a ┃
┃ random one or return a fixed value? The parameters are empty, so maybe it's designed to return a random number by default. I'll proceed to call ┃
┃ get_number without any arguments. I need to make sure the tool call is correctly formatted in JSON within the XML tags. Alright, that should do it. ┃
┃ </think> ┃
┃ ┃
┃ The number is 67. ┃
┃ ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
Traceback (most recent call last):
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\agno\models\openai\chat.py", line 455, in invoke_stream
    yield from self.get_client().chat.completions.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\openai\_utils\_utils.py", line 287, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\openai\resources\chat\completions\completions.py", line 925, in create
    return self._post(
           ^^^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\openai\_base_client.py", line 1239, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\openai\_base_client.py", line 1034, in request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': 'Expecting value: line 1 column 1 (char 0)', 'type': 'BadRequestError', 'param': None, 'code': 400}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "d:\Code\agent\agno_bug\main.py", line 24, in <module>
    agent.print_response("Give me a random number.", stream=True)
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\agno\agent\agent.py", line 4545, in print_response
    for resp in self.run(
                ^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\agno\agent\agent.py", line 655, in _run
    for model_response_chunk in self.model.response_stream(messages=run_messages.messages):
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\agno\models\base.py", line 520, in response_stream
    yield from self.process_response_stream(
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\agno\models\base.py", line 492, in process_response_stream
    for response_delta in self.invoke_stream(messages=messages):
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Code\agent\agno_bug\.venv\Lib\site-packages\agno\models\openai\chat.py", line 490, in invoke_stream
    raise ModelProviderError(
agno.exceptions.ModelProviderError: Unknown model error
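The 400 message returned by the server matches exactly what Python's json module raises when parsing an empty string, which supports the theory that the backend calls json.loads('') on the empty arguments field. A quick check:

```python
import json

# Reproduce the server-side error message by parsing an empty string,
# which is presumably what happens to the '' arguments field.
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)

# An empty JSON object, by contrast, parses fine.
print(json.loads("{}"))  # {}
```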
Screenshots or Logs (if applicable)
Environment
OS: Windows 11
Agno Version: v1.4.5
External Dependency Versions: openai-v1.78.0
Additional Environment Details: Python 3.12.8
Possible Solutions (optional)
# add code at _format_messages in agno.models.openai.chat.OpenAIChat
if "tool_calls" in message_dict:
    for tool in message_dict["tool_calls"]:
        if tool["type"] == "function":
            if not tool["function"]["arguments"]:
                tool["function"]["arguments"] = '{}'
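The same fix can be written as a standalone helper, shown here as a self-contained sketch (normalize_tool_call_arguments is a hypothetical name, not an agno API) that normalizes an assistant message dict before the history is re-sent:

```python
import json


def normalize_tool_call_arguments(message_dict):
    """Replace empty tool-call argument strings with '{}' so that they
    round-trip as valid JSON when the message history is re-sent."""
    for tool in message_dict.get("tool_calls", []):
        if tool.get("type") == "function" and not tool["function"]["arguments"]:
            tool["function"]["arguments"] = "{}"
    return message_dict


# Example: the malformed message qwen3-32b produced in the logs above.
msg = {
    "role": "assistant",
    "tool_calls": [{
        "id": "chatcmpl-tool-1",
        "type": "function",
        "function": {"name": "get_number", "arguments": ""},
    }],
}
normalize_tool_call_arguments(msg)
print(msg["tool_calls"][0]["function"]["arguments"])  # {}
assert json.loads(msg["tool_calls"][0]["function"]["arguments"]) == {}
```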
Additional Context
A similar problem arises in pydantic-ai, and they also think it should be fixed in the agent framework.
Hello @wusskk ! Looks like what is happening here is an incompatibility between the model qwen3-32b and the OpenAI spec for function calling.
OpenAI expects the arguments for a tool call with no params to be {} whereas qwen3-32b returns '' resulting in the error. Your suggestion works but is tied to just qwen3-32b. I think a better solution would be to have a separate Qwen model class if this behaviour is consistent across all qwen models. Can you please share how you are hosting this model?
Hi @ysolanky , I am using the model qwen3-32b, but it is an API provided by others. As far as I know, they use vllm for deployment, and I am not clear about the specific commands.
I have now switched to other API providers (SiliconFlow and Aliyun) and tested with the qwen3-series models there; the issue did not occur, so I suspect it is related to the deployment method.
Moreover, the vLLM-deployed qwen3 only hits this issue when streaming output is enabled; it runs normally when streaming is disabled.