vllm.entrypoints.openai.parser.harmony_utils ¶
MCP_BUILTIN_TOOLS module-attribute ¶
REASONING_EFFORT module-attribute ¶
_parse_browser_tool_call ¶
_parse_browser_tool_call(
message: Message, recipient: str
) -> ResponseOutputItem
Parse browser tool calls (search, open, find) into web search items.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
_parse_final_message ¶
Parse final channel messages into output message items.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
_parse_function_call ¶
Parse function calls into function tool call items.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
_parse_mcp_call ¶
Parse MCP calls into MCP call items.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
_parse_mcp_recipient ¶
Parse MCP recipient into (server_label, tool_name).
For dotted recipients like "repo_browser.list":
- server_label: "repo_browser" (namespace/server)
- tool_name: "list" (specific tool)

For simple recipients like "filesystem":
- server_label: "filesystem"
- tool_name: "filesystem"
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
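The split rule above can be sketched as a small standalone helper (a hypothetical illustration, not the actual vLLM source):

```python
def split_mcp_recipient(recipient: str) -> tuple[str, str]:
    """Split an MCP recipient into (server_label, tool_name).

    Dotted recipients like "repo_browser.list" name a server and a tool;
    simple recipients like "filesystem" use the same value for both.
    """
    if "." in recipient:
        server_label, tool_name = recipient.split(".", 1)
        return server_label, tool_name
    return recipient, recipient
```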
_parse_reasoning_content ¶
_parse_reasoning_content(
message: Message,
) -> list[ResponseOutputItem]
Parse reasoning/analysis content into reasoning items.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
auto_drop_analysis_messages ¶
Harmony models expect the analysis messages (representing raw chain of thought) to be dropped once an assistant message to the final channel has been produced from that reasoning.
The openai-harmony library handles this when the very last assistant message is to the final channel, but it does not handle longer multi-turn conversations where the client supplied reasoning content from previous turns alongside multiple final-channel assistant messages.
So, we find the index of the last assistant message to the final channel and drop all analysis messages that precede it, leaving only the analysis messages relevant to the current part of the conversation.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
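The drop rule described above can be sketched with plain dicts standing in for openai-harmony Message objects (an assumed shape with "role", "channel", and "content" keys; this is not the vLLM implementation):

```python
def auto_drop_analysis_messages(messages: list[dict]) -> list[dict]:
    """Drop analysis messages that precede the last final-channel assistant message."""
    # Find the index of the last assistant message on the final channel.
    last_final = -1
    for i, msg in enumerate(messages):
        if msg["role"] == "assistant" and msg.get("channel") == "final":
            last_final = i
    if last_final == -1:
        # No final message yet: all analysis messages are still relevant.
        return messages
    # Keep everything after that index, plus earlier non-analysis messages.
    return [
        msg
        for i, msg in enumerate(messages)
        if i > last_final or msg.get("channel") != "analysis"
    ]
```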
construct_harmony_previous_input_messages ¶
construct_harmony_previous_input_messages(
request: ResponsesRequest,
) -> list[Message]
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
create_tool_definition ¶
create_tool_definition(
tool: ChatCompletionToolsParam | Tool,
)
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
flatten_chat_text_content ¶
Extract the text parts from a chat message content field and flatten them into a single string.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
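The flattening behavior can be sketched as follows, assuming the Chat Completion content-part shape ({"type": "text", "text": ...}) and simple concatenation of the text parts; the actual join behavior in vLLM may differ:

```python
def flatten_chat_text_content(content) -> str:
    """Flatten a chat message `content` field into a single string.

    `content` may already be a string, or a list of content parts where
    text parts look like {"type": "text", "text": "..."}. Non-text parts
    (e.g. image parts) are ignored.
    """
    if isinstance(content, str):
        return content
    parts = []
    for part in content or []:
        if isinstance(part, dict) and part.get("type") == "text":
            parts.append(part.get("text", ""))
    return "".join(parts)
```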
get_developer_message ¶
get_developer_message(
instructions: str | None = None,
tools: list[Tool | ChatCompletionToolsParam]
| None = None,
) -> Message
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
get_encoding ¶
get_stop_tokens_for_assistant_actions ¶
get_streamable_parser_for_assistant ¶
get_system_message ¶
get_system_message(
model_identity: str | None = None,
reasoning_effort: Literal["high", "medium", "low"]
| None = None,
start_date: str | None = None,
browser_description: str | None = None,
python_description: str | None = None,
container_description: str | None = None,
instructions: str | None = None,
with_custom_tools: bool = False,
) -> Message
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
has_custom_tools ¶
Checks whether the given tool types include custom tools (i.e. any tool other than the MCP builtin tools).
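The check can be sketched as a set difference against the builtin-tool names; the contents of MCP_BUILTIN_TOOLS below are an assumption for illustration, not the module's actual value:

```python
# Assumed set of MCP builtin tool names (illustrative only).
MCP_BUILTIN_TOOLS = {"web_search_preview", "code_interpreter", "container"}


def has_custom_tools(tool_types: set[str]) -> bool:
    """Return True if any tool type falls outside the MCP builtin set."""
    return bool(tool_types - MCP_BUILTIN_TOOLS)
```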
parse_chat_input_to_harmony_message ¶
parse_chat_input_to_harmony_message(
chat_msg, tool_id_names: dict[str, str] | None = None
) -> list[Message]
Parse a message from request.messages in the Chat Completion API to Harmony messages.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
parse_chat_inputs_to_harmony_messages ¶
Parse a list of messages from request.messages in the Chat Completion API to Harmony messages.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
parse_chat_output ¶
Parse the output of a Harmony chat completion into reasoning and final content. Note that when the openai tool parser is used, serving_chat only uses this for the reasoning content and gets the final content from the tool call parser.
When the openai tool parser is not enabled, or when GptOssReasoningParser is in use, this needs to return the final content without any tool calls parsed.
Empty reasoning or final content is returned as None instead of an empty string.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
parse_input_to_harmony_message ¶
parse_input_to_harmony_message(chat_msg) -> list[Message]
Parse a message from request.previous_input_messages in the Responses API to Harmony messages.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
parse_output_into_messages ¶
parse_output_message ¶
parse_output_message(
message: Message,
) -> list[ResponseOutputItem]
Parse a Harmony message into a list of output response items.
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
parse_remaining_state ¶
parse_remaining_state(
parser: StreamableParser,
) -> list[ResponseOutputItem]
Source code in vllm/entrypoints/openai/parser/harmony_utils.py
parse_response_input ¶
parse_response_input(
response_msg: ResponseInputOutputItem,
prev_responses: list[
ResponseOutputItem | ResponseReasoningItem
],
) -> Message