Compare commits

...

18 Commits

Author SHA1 Message Date
abhinav-aegis 1130eb3eec
Merge c8bdd31936 into 71a4eaedf9 2025-04-15 11:05:33 +05:30
Sungjun.Kim 71a4eaedf9
Bump up json-schema-to-pydantic from v0.2.3 to v0.2.4 (#6300)
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-14 21:50:49 -07:00
Victor Dibia 95b1ed5d81
Update AutoGen dependency range in AGS (#6298) 2025-04-15 04:26:30 +00:00
Eric Zhu c75515990e
Update website for 0.5.2 (#6299) 2025-04-14 20:51:45 -07:00
Eric Zhu 3500170be1
update version 0.5.2 (#6296)
Update version
2025-04-14 18:03:44 -07:00
Eric Zhu c8bdd31936
Merge branch 'main' into aegis-structure-message 2025-04-14 13:50:19 -07:00
abhinav-aegis 6d5cb42a97 Support for Structured Message Factory in Assistant agent and GroupChat 2025-04-14 05:12:54 +00:00
abhinav-aegis a93e463ad6 Added test for combining MessageFactory with StructuredMessageFactory 2025-04-14 02:10:42 +00:00
abhinav-aegis 3bf46e031a Typo fixed in doc string 2025-04-14 01:45:20 +00:00
abhinav-aegis 65569f89a1 Fixed doc strings, change _to_config and _from_config to dump_component and load_component 2025-04-14 00:29:32 +00:00
abhinav-aegis 8699b27830 Fixed Typing hints. Added additional tests for format strings after fixing typing strings 2025-04-14 00:05:45 +00:00
larry 3425d7dc2c
Fix typo in multi-agent-debate.ipynb (#6288) 2025-04-13 03:59:37 +00:00
Eric Zhu ec6f19c329
Merge branch 'main' into aegis-structure-message 2025-04-12 20:39:38 -07:00
masquerlin eca80ff663
Update discover.md with adding email agent package (#6274)
adding email agent

## Why are these changes needed?

This PR introduces an AI-powered email assistant that can generate
images, attach files, draft reports, and send emails to multiple
recipients or specific users based on their queries. This feature is
highly beneficial for customer management and email marketing, enhancing
automation and improving efficiency.

## Related issue number

Open #6228

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
2025-04-11 09:35:18 -04:00
Ricky Loynd 92df415edf
Expose TCM TypedDict classes for apps to use (#6269)

An app can pass untyped dicts to set configuration options of various
Task-Centric Memory classes. But tools like pyright can complain about
the loose typing. This PR exposes 4 TypedDict classes that apps can
optionally use.
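The four TypedDict class names are not visible in this compare view, so the snippet below only illustrates the pattern with a hypothetical `MemoryControllerConfig` stand-in for one of the exposed Task-Centric Memory config types:

```python
from typing import TypedDict


class MemoryControllerConfig(TypedDict, total=False):
    # Hypothetical stand-in; the actual class and key names are not shown in this compare view.
    max_train_trials: int
    max_test_trials: int


# A plain dict gives pyright nothing to check against:
loose_config = {"max_train_trials": 2, "max_test_trials": "oops"}  # wrong value type goes unnoticed

# The TypedDict lets the same options be checked statically:
typed_config: MemoryControllerConfig = {"max_train_trials": 2, "max_test_trials": 10}
```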

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-04-10 15:55:21 -07:00
湛露先生 973774b27f
Fix publish_message() method notes (#6250)

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-04-10 18:51:05 +00:00
Shyam Sathish d70cdf8223
Fix ValueError: Dataclass has a union type error (#6266)
Closes #6265

Convert the `Message` and `Resource` dataclasses to Pydantic models in
the `llamaindex-agent` cookbook.

* Replace `dataclass` with `BaseModel` for `Message` and `Resource`
classes.
* Update imports to use `BaseModel` from `pydantic`, as sketched below.
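A minimal sketch of the conversion described above (the single `content` field is an assumption; the cookbook's actual `Message` and `Resource` fields may differ):

```python
from dataclasses import dataclass

from pydantic import BaseModel


# Before: a plain dataclass, which triggered "ValueError: Dataclass has a union type"
# during message serialization in the cookbook.
@dataclass
class MessageAsDataclass:
    content: str


# After: the same shape as a Pydantic model, which serializes cleanly.
class Message(BaseModel):
    content: str
```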

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-04-10 11:38:13 -07:00
Macon Pegram 196be34cb6
[Bugfix] Fix for Issue #6241 - ChromaDB removed IncludeEnum (#6260)

## Why are these changes needed?

`IncludeEnum` was removed when ChromaDB was updated to `1.0.0`, which broke `ChromaDBVectorMemory`. This PR fixes those issues.
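For context, this is the kind of call-site change the ChromaDB `1.0.0` upgrade forces (illustrative only, not the actual patch in this PR):

```python
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection("memories")

# Before (chromadb < 1.0.0): the enum lived in chromadb.api.types.
#   from chromadb.api.types import IncludeEnum
#   collection.query(query_texts=["hello"], include=[IncludeEnum.documents, IncludeEnum.metadatas])

# After (chromadb >= 1.0.0): pass plain string literals instead of IncludeEnum members.
results = collection.query(
    query_texts=["hello"],
    include=["documents", "metadatas", "distances"],
)
```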

## Related issue number

Closes #6241

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-04-10 09:41:41 -07:00
19 changed files with 528 additions and 144 deletions

View File

@@ -90,6 +90,7 @@ body:
       multiple: false
       options:
         - "Python dev (main branch)"
+        - "Python 0.5.2"
         - "Python 0.5.1"
        - "Python 0.4.9"
         - "Python 0.4.8"

View File

@@ -33,7 +33,7 @@ jobs:
         [
           # For main use the workflow target
           { ref: "${{github.ref}}", dest-dir: dev, uv-version: "0.5.13", sphinx-release-override: "dev" },
-          { ref: "python-v0.5.1", dest-dir: stable, uv-version: "0.5.13", sphinx-release-override: "stable" },
+          { ref: "python-v0.5.2", dest-dir: stable, uv-version: "0.5.13", sphinx-release-override: "stable" },
           { ref: "v0.4.0.post1", dest-dir: "0.4.0", uv-version: "0.5.13", sphinx-release-override: "" },
           { ref: "v0.4.1", dest-dir: "0.4.1", uv-version: "0.5.13", sphinx-release-override: "" },
           { ref: "v0.4.2", dest-dir: "0.4.2", uv-version: "0.5.13", sphinx-release-override: "" },
@@ -45,6 +45,7 @@ jobs:
           { ref: "python-v0.4.8", dest-dir: "0.4.8", uv-version: "0.5.13", sphinx-release-override: "" },
           { ref: "python-v0.4.9-website", dest-dir: "0.4.9", uv-version: "0.5.13", sphinx-release-override: "" },
           { ref: "python-v0.5.1", dest-dir: "0.5.1", uv-version: "0.5.13", sphinx-release-override: "" },
+          { ref: "python-v0.5.2", dest-dir: "0.5.2", uv-version: "0.5.13", sphinx-release-override: "" },
         ]
     steps:
       - name: Checkout

View File

@@ -5,11 +5,16 @@
     "url": "/autogen/dev/"
   },
   {
-    "name": "0.5.1 (stable)",
+    "name": "0.5.2 (stable)",
     "version": "stable",
     "url": "/autogen/stable/",
     "preferred": true
   },
+  {
+    "name": "0.5.1",
+    "version": "0.5.1",
+    "url": "/autogen/0.5.1/"
+  },
   {
     "name": "0.4.9",
     "version": "0.4.9",

View File

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

 [project]
 name = "autogen-agentchat"
-version = "0.5.1"
+version = "0.5.2"
 license = {file = "LICENSE-CODE"}
 description = "AutoGen agents and teams library"
 readme = "README.md"
@@ -15,7 +15,7 @@ classifiers = [
     "Operating System :: OS Independent",
 ]
 dependencies = [
-    "autogen-core==0.5.1",
+    "autogen-core==0.5.2",
 ]

 [tool.ruff]

View File

@@ -46,6 +46,7 @@ from ..messages import (
     MemoryQueryEvent,
     ModelClientStreamingChunkEvent,
     StructuredMessage,
+    StructuredMessageFactory,
     TextMessage,
     ThoughtEvent,
     ToolCallExecutionEvent,
@@ -74,6 +75,7 @@ class AssistantAgentConfig(BaseModel):
     reflect_on_tool_use: bool
     tool_call_summary_format: str
     metadata: Dict[str, str] | None = None
+    structured_message_factory: ComponentModel | None = None


 class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
@@ -183,6 +185,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
            This will be used with the model client to generate structured output.
            If this is set, the agent will respond with a :class:`~autogen_agentchat.messages.StructuredMessage` instead of a :class:`~autogen_agentchat.messages.TextMessage`
            in the final response, unless `reflect_on_tool_use` is `False` and a tool call is made.
+        format_string (str | None, optional): The format string used to create the content for a :class:`~autogen_agentchat.messages.StructuredMessage` response.
         tool_call_summary_format (str, optional): The format string used to create the content for a :class:`~autogen_agentchat.messages.ToolCallSummaryMessage` response.
            The format string is used to format the tool call summary for every tool call result.
            Defaults to "{result}".
@@ -635,6 +638,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
         reflect_on_tool_use: bool | None = None,
         tool_call_summary_format: str = "{result}",
         output_content_type: type[BaseModel] | None = None,
+        format_string: str | None = None,
         memory: Sequence[Memory] | None = None,
         metadata: Dict[str, str] | None = None,
     ):
@@ -643,6 +647,13 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
         self._model_client = model_client
         self._model_client_stream = model_client_stream
         self._output_content_type: type[BaseModel] | None = output_content_type
+        self._format_string = format_string
+        self._structured_message_factory: StructuredMessageFactory | None = None
+        if output_content_type is not None:
+            self._structured_message_factory = StructuredMessageFactory(
+                input_model=output_content_type, format_string=format_string
+            )
+
         self._memory = None
         if memory is not None:
             if isinstance(memory, list):
@@ -771,6 +782,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
         reflect_on_tool_use = self._reflect_on_tool_use
         tool_call_summary_format = self._tool_call_summary_format
         output_content_type = self._output_content_type
+        format_string = self._format_string

         # STEP 1: Add new user/handoff messages to the model context
         await self._add_messages_to_context(
@@ -840,6 +852,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
             reflect_on_tool_use=reflect_on_tool_use,
             tool_call_summary_format=tool_call_summary_format,
             output_content_type=output_content_type,
+            format_string=format_string,
         ):
             yield output_event
@@ -942,6 +955,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
         reflect_on_tool_use: bool,
         tool_call_summary_format: str,
         output_content_type: type[BaseModel] | None,
+        format_string: str | None = None,
     ) -> AsyncGenerator[BaseAgentEvent | BaseChatMessage | Response, None]:
         """
         Handle final or partial responses from model_result, including tool calls, handoffs,
@@ -957,6 +971,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
                     content=content,
                     source=agent_name,
                     models_usage=model_result.usage,
+                    format_string=format_string,
                 ),
                 inner_messages=inner_messages,
             )
@@ -1277,9 +1292,6 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
     def _to_config(self) -> AssistantAgentConfig:
         """Convert the assistant agent to a declarative config."""
-        if self._output_content_type:
-            raise ValueError("AssistantAgent with output_content_type does not support declarative config.")
-
         return AssistantAgentConfig(
             name=self.name,
             model_client=self._model_client.dump_component(),
@@ -1294,12 +1306,23 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
             model_client_stream=self._model_client_stream,
             reflect_on_tool_use=self._reflect_on_tool_use,
             tool_call_summary_format=self._tool_call_summary_format,
+            structured_message_factory=self._structured_message_factory.dump_component()
+            if self._structured_message_factory
+            else None,
             metadata=self._metadata,
         )

     @classmethod
     def _from_config(cls, config: AssistantAgentConfig) -> Self:
         """Create an assistant agent from a declarative config."""
+        if config.structured_message_factory:
+            structured_message_factory = StructuredMessageFactory.load_component(config.structured_message_factory)
+            format_string = structured_message_factory.format_string
+            output_content_type = structured_message_factory.ContentModel
+        else:
+            format_string = None
+            output_content_type = None
+
         return cls(
             name=config.name,
@@ -1313,5 +1336,7 @@ class AssistantAgent(BaseChatAgent, Component[AssistantAgentConfig]):
             model_client_stream=config.model_client_stream,
             reflect_on_tool_use=config.reflect_on_tool_use,
             tool_call_summary_format=config.tool_call_summary_format,
+            output_content_type=output_content_type,
+            format_string=format_string,
             metadata=config.metadata,
         )
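Taken together, these hunks let `AssistantAgent` carry its structured-output settings through `dump_component()`/`load_component()`. A minimal usage sketch, modeled on the `test_structured_message_factory_serialization` test further down this page (the model client construction is omitted and left as a placeholder):

```python
from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_core.models import ChatCompletionClient


class AgentResponse(BaseModel):
    result: str
    status: str


def build_agent(model_client: ChatCompletionClient) -> AssistantAgent:
    # model_client is any chat completion client (the tests below use a
    # ReplayChatCompletionClient); its construction is intentionally omitted here.
    return AssistantAgent(
        name="structured_agent",
        model_client=model_client,
        output_content_type=AgentResponse,    # final response becomes StructuredMessage[AgentResponse]
        format_string="{result} - {status}",  # used when rendering the message as text
    )


# The structured settings now survive a declarative config round trip, which
# _to_config previously rejected with a ValueError:
#   config = build_agent(client).dump_component()
#   restored = AssistantAgent.load_component(config)
```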

View File

@@ -5,15 +5,15 @@ class and includes specific fields relevant to the type of message being sent.
 """

 from abc import ABC, abstractmethod
-from typing import Any, Dict, Generic, List, Literal, Mapping, TypeVar, Optional, Type
+from typing import Any, Dict, Generic, List, Literal, Mapping, Optional, Type, TypeVar

-from autogen_core import FunctionCall, Image
+from autogen_core import Component, ComponentBase, FunctionCall, Image
 from autogen_core.memory import MemoryContent
 from autogen_core.models import FunctionExecutionResult, LLMMessage, RequestUsage, UserMessage
+from autogen_core.utils import schema_to_pydantic_model
 from pydantic import BaseModel, Field, computed_field
 from typing_extensions import Annotated, Self
-from autogen_core import Component, ComponentBase
-from autogen_core.utils import schema_to_pydantic_model


 class BaseMessage(BaseModel, ABC):
     """Abstract base class for all message types in AgentChat.
@@ -180,6 +180,7 @@ class StructuredMessage(BaseChatMessage, Generic[StructuredContentType]):
             from pydantic import BaseModel
             from autogen_agentchat.messages import StructuredMessage

+
             class MyMessageContent(BaseModel):
                 text: str
                 number: int
@@ -199,6 +200,13 @@ class StructuredMessage(BaseChatMessage, Generic[StructuredContentType]):
    `Pydantic BaseModel <https://docs.pydantic.dev/latest/concepts/models/>`_."""

     format_string: Optional[str] = None
+    """
+    An optional format string to render the content into a human-readable format.
+    The format string can use the fields of the content model as placeholders.
+    For example, if the content model has a field `name`, you can use
+    `{name}` in the format string to include the value of that field.
+    The format string is used in the :meth:`to_text` method to create a
+    human-readable representation of the message."""

     @computed_field
     def type(self) -> str:
@@ -222,13 +230,16 @@
             source=self.source,
         )


 class StructureMessageConfig(BaseModel):
     """The declarative configuration for the structured input."""

-    json_schema: dict
+    json_schema: Dict[str, Any]
     format_string: Optional[str] = None
     content_model_name: str


-class StructuredMessageComponent(ComponentBase[StructureMessageConfig], Component[StructureMessageConfig]):
+class StructuredMessageFactory(ComponentBase[StructureMessageConfig], Component[StructureMessageConfig]):
     """
     A component that creates structured chat messages from Pydantic models or JSON schemas.
@@ -243,28 +254,33 @@ class StructuredMessageComponent(ComponentBase[StructureMessageConfig], Component[StructureMessageConfig]):
        .. code-block:: python

            from pydantic import BaseModel
-           from autogen_agentchat.messages import StructuredMessageComponent
+           from autogen_agentchat.messages import StructuredMessageFactory

            class TestContent(BaseModel):
                field1: str
                field2: int

            format_string = "This is a string {field1} and this is an int {field2}"
-           sm_component = StructuredMessageComponent(input_model=TestContent, format_string=format_string)
+           sm_component = StructuredMessageFactory(input_model=TestContent, format_string=format_string)

            message = sm_component.StructuredMessage(
-               source="test_agent",
-               content=TestContent(field1="Hello", field2=42),
-               format_string=format_string
+               source="test_agent", content=TestContent(field1="Hello", field2=42), format_string=format_string
            )

            print(message.to_model_text())  # Output: This is a string Hello and this is an int 42

-           config = sm_component._to_config()
+           config = sm_component.dump_component()

-           s_m_dyn = StructuredMessageComponent._from_config(config)
-           message = s_m_dyn.StructuredMessage(source="test_agent", content=s_m_dyn.ContentModel(field1="dyn agent", field2=43), format_string=s_m_dyn.format_string)
+           s_m_dyn = StructuredMessageFactory.load_component(config)
+           message = s_m_dyn.StructuredMessage(
+               source="test_agent",
+               content=s_m_dyn.ContentModel(field1="dyn agent", field2=43),
+               format_string=s_m_dyn.format_string,
+           )

            print(type(message))  # StructuredMessage[GeneratedModel]
            print(message.to_model_text())  # Output: This is a string dyn agent and this is an int 43

    Attributes:
        component_config_schema (StructureMessageConfig): Defines the configuration structure for this component.
@@ -282,35 +298,42 @@ class StructuredMessageComponent(ComponentBase[StructureMessageConfig], Component[StructureMessageConfig]):
     """

     component_config_schema = StructureMessageConfig
-    component_provider_override = "autogen_agentchat.messages.StructuredMessageComponent"
+    component_provider_override = "autogen_agentchat.messages.StructuredMessageFactory"
     component_type = "structured_message"

-    def __init__(self, json_schema: Optional[str]=None, input_model: Optional[Type[BaseModel]] = None, format_string: Optional[str] = None, content_model_name: Optional[str] = None) -> None:
+    def __init__(
+        self,
+        json_schema: Optional[Dict[str, Any]] = None,
+        input_model: Optional[Type[BaseModel]] = None,
+        format_string: Optional[str] = None,
+        content_model_name: Optional[str] = None,
+    ) -> None:
         self.format_string = format_string

-        if not json_schema and not input_model:
-            raise ValueError("Either `input_json_schema` or `input_model` must be provided.")
-
-        if input_model:
+        if json_schema:
+            self.ContentModel = schema_to_pydantic_model(
+                json_schema, model_name=content_model_name or "GeneratedContentModel"
+            )
+        elif input_model:
             self.ContentModel = input_model
         else:
-            self.ContentModel = schema_to_pydantic_model(json_schema, model_name=content_model_name or "GeneratedContentModel")
+            raise ValueError("Either `json_schema` or `input_model` must be provided.")

-        self.StructuredMessage = StructuredMessage[self.ContentModel]
+        self.StructuredMessage = StructuredMessage[self.ContentModel]  # type: ignore[name-defined]

     def _to_config(self) -> StructureMessageConfig:
         return StructureMessageConfig(
             json_schema=self.ContentModel.model_json_schema(),
             format_string=self.format_string,
-            content_model_name=self.ContentModel.__name__
+            content_model_name=self.ContentModel.__name__,
         )

     @classmethod
-    def _from_config(cls, config: StructureMessageConfig) -> "StructuredMessageComponent":
+    def _from_config(cls, config: StructureMessageConfig) -> "StructuredMessageFactory":
         return cls(
             json_schema=config.json_schema,
             format_string=config.format_string,
-            content_model_name=config.content_model_name
+            content_model_name=config.content_model_name,
         )
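For completeness, a short sketch of driving the factory from a raw JSON schema rather than a Pydantic model, which the reworked `__init__` above accepts (the schema and field names here are made up for illustration):

```python
from autogen_agentchat.messages import StructuredMessageFactory

schema = {
    "type": "object",
    "properties": {"field1": {"type": "string"}, "field2": {"type": "integer"}},
    "required": ["field1", "field2"],
}

# The factory generates a Pydantic content model from the schema via schema_to_pydantic_model.
factory = StructuredMessageFactory(
    json_schema=schema,
    format_string="{field1} / {field2}",
    content_model_name="GeneratedContent",
)

msg = factory.StructuredMessage(
    source="schema_agent",
    content=factory.ContentModel(field1="hi", field2=7),
    format_string=factory.format_string,
)
print(msg.to_model_text())  # "hi / 7"
```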

View File

@@ -21,6 +21,7 @@ from ...messages import (
     MessageFactory,
     ModelClientStreamingChunkEvent,
     StopMessage,
+    StructuredMessage,
     TextMessage,
 )
 from ...state import TeamState
@@ -68,6 +69,15 @@ class BaseGroupChat(Team, ABC, ComponentBase[BaseModel]):
             for message_type in custom_message_types:
                 self._message_factory.register(message_type)

+        for agent in participants:
+            for message_type in agent.produced_message_types:
+                try:
+                    if issubclass(message_type, StructuredMessage):
+                        self._message_factory.register(message_type)  # type: ignore[reportUnknownArgumentType]
+                except TypeError:
+                    # Not a class or not a valid subclassable type (skip)
+                    pass
+
         # The team ID is a UUID that is used to identify the team and its participants
         # in the agent runtime. It is used to create unique topic types for each participant.
         # Currently, team ID is binded to an object instance of the group chat class.

View File

@@ -36,7 +36,7 @@ from autogen_core.models._model_client import ModelFamily
 from autogen_core.tools import BaseTool, FunctionTool
 from autogen_ext.models.openai import OpenAIChatCompletionClient
 from autogen_ext.models.replay import ReplayChatCompletionClient
-from pydantic import BaseModel
+from pydantic import BaseModel, ValidationError
 from utils import FileLogHandler

 logger = logging.getLogger(EVENT_LOGGER_NAME)
@@ -1104,3 +1104,103 @@ async def test_model_client_stream_with_tool_calls() -> None:
         elif isinstance(message, ModelClientStreamingChunkEvent):
             chunks.append(message.content)
     assert "".join(chunks) == "Example response 2 to task"
+
+
+@pytest.mark.asyncio
+async def test_invalid_structured_output_format() -> None:
+    class AgentResponse(BaseModel):
+        response: str
+        status: str
+
+    model_client = ReplayChatCompletionClient(
+        [
+            CreateResult(
+                finish_reason="stop",
+                content='{"response": "Hello"}',
+                usage=RequestUsage(prompt_tokens=10, completion_tokens=5),
+                cached=False,
+            ),
+        ]
+    )
+
+    agent = AssistantAgent(
+        name="assistant",
+        model_client=model_client,
+        output_content_type=AgentResponse,
+    )
+
+    with pytest.raises(ValidationError):
+        await agent.run()
+
+
+@pytest.mark.asyncio
+async def test_structured_message_factory_serialization() -> None:
+    class AgentResponse(BaseModel):
+        result: str
+        status: str
+
+    model_client = ReplayChatCompletionClient(
+        [
+            CreateResult(
+                finish_reason="stop",
+                content=AgentResponse(result="All good", status="ok").model_dump_json(),
+                usage=RequestUsage(prompt_tokens=10, completion_tokens=5),
+                cached=False,
+            )
+        ]
+    )
+
+    agent = AssistantAgent(
+        name="structured_agent",
+        model_client=model_client,
+        output_content_type=AgentResponse,
+        format_string="{result} - {status}",
+    )
+
+    dumped = agent.dump_component()
+    restored_agent = AssistantAgent.load_component(dumped)
+    result = await restored_agent.run()
+
+    assert isinstance(result.messages[0], StructuredMessage)
+    assert result.messages[0].content.result == "All good"  # type: ignore[reportUnknownMemberType]
+    assert result.messages[0].content.status == "ok"  # type: ignore[reportUnknownMemberType]
+
+
+@pytest.mark.asyncio
+async def test_structured_message_format_string() -> None:
+    class AgentResponse(BaseModel):
+        field1: str
+        field2: str
+
+    expected = AgentResponse(field1="foo", field2="bar")
+    model_client = ReplayChatCompletionClient(
+        [
+            CreateResult(
+                finish_reason="stop",
+                content=expected.model_dump_json(),
+                usage=RequestUsage(prompt_tokens=10, completion_tokens=5),
+                cached=False,
+            )
+        ]
+    )
+
+    agent = AssistantAgent(
+        name="formatted_agent",
+        model_client=model_client,
+        output_content_type=AgentResponse,
+        format_string="{field1} - {field2}",
+    )
+
+    result = await agent.run()
+
+    assert len(result.messages) == 1
+    message = result.messages[0]
+
+    # Check that it's a StructuredMessage with the correct content model
+    assert isinstance(message, StructuredMessage)
+    assert isinstance(message.content, AgentResponse)  # type: ignore[reportUnknownMemberType]
+    assert message.content == expected
+
+    # Check that the format_string was applied correctly
+    assert message.to_model_text() == "foo - bar"

View File

@@ -1441,3 +1441,87 @@ async def test_declarative_groupchats_with_config(runtime: AgentRuntime | None)
     assert selector.dump_component().provider == "autogen_agentchat.teams.SelectorGroupChat"
     assert swarm.dump_component().provider == "autogen_agentchat.teams.Swarm"
     assert magentic.dump_component().provider == "autogen_agentchat.teams.MagenticOneGroupChat"
+
+
+class _StructuredContent(BaseModel):
+    message: str
+
+
+class _StructuredAgent(BaseChatAgent):
+    def __init__(self, name: str, description: str) -> None:
+        super().__init__(name, description)
+        self._message = _StructuredContent(message="Structured hello")
+
+    @property
+    def produced_message_types(self) -> Sequence[type[BaseChatMessage]]:
+        return (StructuredMessage[_StructuredContent],)
+
+    async def on_messages(self, messages: Sequence[BaseChatMessage], cancellation_token: CancellationToken) -> Response:
+        return Response(
+            chat_message=StructuredMessage[_StructuredContent](
+                source=self.name,
+                content=self._message,
+                format_string="Structured says: {message}",
+            )
+        )
+
+    async def on_reset(self, cancellation_token: CancellationToken) -> None:
+        pass
+
+
+@pytest.mark.asyncio
+async def test_message_type_auto_registration(runtime: AgentRuntime | None) -> None:
+    agent1 = _StructuredAgent("structured", description="emits structured messages")
+    agent2 = _EchoAgent("echo", description="echoes input")
+
+    team = RoundRobinGroupChat(participants=[agent1, agent2], max_turns=2, runtime=runtime)
+
+    result = await team.run(task="Say something structured")
+
+    assert len(result.messages) == 3
+    assert isinstance(result.messages[0], TextMessage)
+    assert isinstance(result.messages[1], StructuredMessage)
+    assert isinstance(result.messages[2], TextMessage)
+    assert result.messages[1].to_text() == "Structured says: Structured hello"
+
+
+@pytest.mark.asyncio
+async def test_structured_message_state_roundtrip(runtime: AgentRuntime | None) -> None:
+    agent1 = _StructuredAgent("structured", description="sends structured")
+    agent2 = _EchoAgent("echo", description="echoes")
+
+    team1 = RoundRobinGroupChat(
+        participants=[agent1, agent2],
+        termination_condition=MaxMessageTermination(2),
+        runtime=runtime,
+    )
+
+    await team1.run(task="Say something structured")
+    state1 = await team1.save_state()
+
+    # Recreate team without needing custom_message_types
+    agent3 = _StructuredAgent("structured", description="sends structured")
+    agent4 = _EchoAgent("echo", description="echoes")
+    team2 = RoundRobinGroupChat(
+        participants=[agent3, agent4],
+        termination_condition=MaxMessageTermination(2),
+        runtime=runtime,
+    )
+    await team2.load_state(state1)
+
+    state2 = await team2.save_state()
+
+    # Assert full state equality
+    assert state1 == state2
+
+    # Assert message thread content match
+    manager1 = await team1._runtime.try_get_underlying_agent_instance(  # pyright: ignore
+        AgentId(f"{team1._group_chat_manager_name}_{team1._team_id}", team1._team_id),  # pyright: ignore
+        RoundRobinGroupChatManager,
+    )
+    manager2 = await team2._runtime.try_get_underlying_agent_instance(  # pyright: ignore
+        AgentId(f"{team2._group_chat_manager_name}_{team2._team_id}", team2._team_id),  # pyright: ignore
+        RoundRobinGroupChatManager,
+    )
+
+    assert manager1._message_thread == manager2._message_thread  # pyright: ignore

View File

@@ -11,7 +11,7 @@ from autogen_agentchat.messages import (
     MultiModalMessage,
     StopMessage,
     StructuredMessage,
-    StructuredMessageComponent,
+    StructuredMessageFactory,
     TextMessage,
     ToolCallExecutionEvent,
     ToolCallRequestEvent,
@@ -52,18 +52,21 @@ def test_structured_message() -> None:
     assert dumped_message["content"]["field2"] == 42
     assert dumped_message["type"] == "StructuredMessage[TestContent]"


 def test_structured_message_component() -> None:
     # Create a structured message with the test contentformat_string="this is a string {field1} and this is an int {field2}"
-    format_string="this is a string {field1} and this is an int {field2}"
-    s_m = StructuredMessageComponent(input_model=TestContent, format_string=format_string)
-    config = s_m._to_config()
-    s_m_dyn = StructuredMessageComponent._from_config(config)
-    message = s_m_dyn.StructuredMessage(source="test_agent", content=s_m_dyn.ContentModel(field1="test", field2=42), format_string=s_m_dyn.format_string)
+    format_string = "this is a string {field1} and this is an int {field2}"
+    s_m = StructuredMessageFactory(input_model=TestContent, format_string=format_string)
+    config = s_m.dump_component()
+    s_m_dyn = StructuredMessageFactory.load_component(config)
+    message = s_m_dyn.StructuredMessage(
+        source="test_agent", content=s_m_dyn.ContentModel(field1="test", field2=42), format_string=s_m_dyn.format_string
+    )

     assert isinstance(message.content, s_m_dyn.ContentModel)
     assert not isinstance(message.content, TestContent)
-    assert message.content.field1 == "test"
-    assert message.content.field2 == 42
+    assert message.content.field1 == "test"  # type: ignore[attr-defined]
+    assert message.content.field2 == 42  # type: ignore[attr-defined]

     dumped_message = message.model_dump()
     assert dumped_message["source"] == "test_agent"
@@ -71,6 +74,7 @@ def test_structured_message_component() -> None:
     assert dumped_message["content"]["field2"] == 42
     assert message.to_model_text() == format_string.format(field1="test", field2=42)
+

 def test_message_factory() -> None:
     factory = MessageFactory()
@@ -128,6 +132,22 @@ def test_message_factory() -> None:
     assert structured_message.content.field2 == 42
     assert structured_message.type == "StructuredMessage[TestContent]"  # type: ignore[comparison-overlap]

+    sm_factory = StructuredMessageFactory(input_model=TestContent, format_string=None, content_model_name="TestContent")
+    config = sm_factory.dump_component()
+    config.config["content_model_name"] = "DynamicTestContent"
+    sm_factory_dynamic = StructuredMessageFactory.load_component(config)
+    factory.register(sm_factory_dynamic.StructuredMessage)
+
+    msg = sm_factory_dynamic.StructuredMessage(
+        content=sm_factory_dynamic.ContentModel(field1="static", field2=123), source="static_agent"
+    )
+
+    restored = factory.create(msg.dump())
+
+    assert isinstance(restored, StructuredMessage)
+    assert isinstance(restored.content, sm_factory_dynamic.ContentModel)  # type: ignore[reportUnkownMemberType]
+    assert restored.source == "static_agent"
+    assert restored.content.field1 == "static"  # type: ignore[attr-defined]
+    assert restored.content.field2 == 123  # type: ignore[attr-defined]

 class TestContainer(BaseModel):
     chat_messages: List[ChatMessage]

View File

@@ -38,7 +38,6 @@
    "outputs": [],
    "source": [
     "import os\n",
-    "from pydantic import BaseModel\n",
     "from typing import List, Optional\n",
     "\n",
     "from autogen_core import AgentId, MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler\n",
@@ -57,7 +56,8 @@
     "from llama_index.embeddings.openai import OpenAIEmbedding\n",
     "from llama_index.llms.azure_openai import AzureOpenAI\n",
     "from llama_index.llms.openai import OpenAI\n",
-    "from llama_index.tools.wikipedia import WikipediaToolSpec"
+    "from llama_index.tools.wikipedia import WikipediaToolSpec\n",
+    "from pydantic import BaseModel"
    ]
   },
   {

View File

@@ -291,7 +291,7 @@
     "A --- B\n",
     "|     |\n",
     "|     |\n",
-    "C --- D\n",
+    "D --- C\n",
     "```\n",
     "\n",
     "Each solver agent is connected to two other solver agents. \n",

View File

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

 [project]
 name = "autogen-core"
-version = "0.5.1"
+version = "0.5.2"
 license = {file = "LICENSE-CODE"}
 description = "Foundational interfaces and agent runtime implementation for AutoGen"
 readme = "README.md"
@@ -69,7 +69,7 @@ dev = [
     "pygments",
     "sphinxext-rediraffe",
-    "autogen_ext==0.5.1",
+    "autogen_ext==0.5.2",

     # Documentation tooling
     "diskcache",

View File

@@ -1,5 +1,6 @@
 import datetime
-from typing import Annotated, Any, Dict, ForwardRef, List, Literal, Optional, Type, Union
+from ipaddress import IPv4Address, IPv6Address
+from typing import Annotated, Any, Dict, ForwardRef, List, Literal, Optional, Type, Union, cast

 from pydantic import (
     UUID1,
@@ -10,7 +11,6 @@ from pydantic import (
     BaseModel,
     EmailStr,
     Field,
-    IPvAnyAddress,
     conbytes,
     confloat,
     conint,
@@ -18,6 +18,7 @@ from pydantic import (
     constr,
     create_model,
 )
+from pydantic.fields import FieldInfo


 class SchemaConversionError(Exception):
@@ -44,14 +45,14 @@ class UnsupportedKeywordError(SchemaConversionError):
     pass


-TYPE_MAPPING: Dict[str, Any] = {
+TYPE_MAPPING: Dict[str, Type[Any]] = {
     "string": str,
     "integer": int,
     "boolean": bool,
     "number": float,
     "array": List,
     "object": dict,
-    "null": None,
+    "null": type(None),
 }

 FORMAT_MAPPING: Dict[str, Any] = {
@@ -64,10 +65,10 @@ FORMAT_MAPPING: Dict[str, Any] = {
     "email": EmailStr,
     "uri": AnyUrl,
     "hostname": constr(strict=True),
-    "ipv4": IPvAnyAddress,
-    "ipv6": IPvAnyAddress,
-    "ipv4-network": IPvAnyAddress,
-    "ipv6-network": IPvAnyAddress,
+    "ipv4": IPv4Address,
+    "ipv6": IPv6Address,
+    "ipv4-network": IPv4Address,
+    "ipv6-network": IPv6Address,
     "date-time": datetime.datetime,
     "date": datetime.date,
     "time": datetime.time,
@@ -84,13 +85,28 @@ FORMAT_MAPPING: Dict[str, Any] = {
 }


+def _make_field(
+    default: Any,
+    *,
+    title: Optional[str] = None,
+    description: Optional[str] = None,
+) -> Any:
+    """Construct a Pydantic Field with proper typing."""
+    field_kwargs: Dict[str, Any] = {}
+    if title is not None:
+        field_kwargs["title"] = title
+    if description is not None:
+        field_kwargs["description"] = description
+    return Field(default, **field_kwargs)
+
+
 class _JSONSchemaToPydantic:
-    def __init__(self):
-        self._model_cache = {}
+    def __init__(self) -> None:
+        self._model_cache: Dict[str, Optional[Union[Type[BaseModel], ForwardRef]]] = {}

     def _resolve_ref(self, ref: str, schema: Dict[str, Any]) -> Dict[str, Any]:
         ref_key = ref.split("/")[-1]
-        definitions = schema.get("$defs", {})
+        definitions = cast(dict[str, dict[str, Any]], schema.get("$defs", {}))

         if ref_key not in definitions:
             raise ReferenceNotFoundError(
@@ -110,7 +126,7 @@ class _JSONSchemaToPydantic:
         return self._model_cache[ref_name]

-    def _process_definitions(self, root_schema: Dict[str, Any]):
+    def _process_definitions(self, root_schema: Dict[str, Any]) -> None:
         if "$defs" in root_schema:
             for model_name in root_schema["$defs"]:
                 if model_name not in self._model_cache:
@@ -132,7 +148,7 @@ class _JSONSchemaToPydantic:
             schema = {**resolved, **{k: v for k, v in schema.items() if k != "$ref"}}

         if "allOf" in schema:
-            merged = {"type": "object", "properties": {}, "required": []}
+            merged: Dict[str, Any] = {"type": "object", "properties": {}, "required": []}
             for s in schema["allOf"]:
                 part = self._resolve_ref(s["$ref"], root_schema) if "$ref" in s else s
                 merged["properties"].update(part.get("properties", {}))
@@ -146,7 +162,7 @@ class _JSONSchemaToPydantic:
         return self._json_schema_to_model(schema, model_name, root_schema)

     def _resolve_union_types(self, schemas: List[Dict[str, Any]]) -> List[Any]:
-        types = []
+        types: List[Any] = []
         for s in schemas:
             if "$ref" in s:
                 types.append(self.get_ref(s["$ref"].split("/")[-1]))
@@ -167,7 +183,7 @@ class _JSONSchemaToPydantic:
             )

         base_type = TYPE_MAPPING[json_type]
-        constraints = {}
+        constraints: Dict[str, Any] = {}

         if json_type == "string":
             if "minLength" in value:
@@ -219,7 +235,7 @@ class _JSONSchemaToPydantic:
                 )
             item_type = TYPE_MAPPING[item_type_name]

-            base_type = conlist(item_type, **constraints) if constraints else List[item_type]
+            base_type = conlist(item_type, **constraints) if constraints else List[item_type]  # type: ignore[valid-type]

         if "format" in value:
             format_type = FORMAT_MAPPING.get(value["format"])
@@ -237,7 +253,7 @@ class _JSONSchemaToPydantic:
         self, schema: Dict[str, Any], model_name: str, root_schema: Dict[str, Any]
     ) -> Type[BaseModel]:
         if "allOf" in schema:
-            merged = {"type": "object", "properties": {}, "required": []}
+            merged: Dict[str, Any] = {"type": "object", "properties": {}, "required": []}
             for s in schema["allOf"]:
                 part = self._resolve_ref(s["$ref"], root_schema) if "$ref" in s else s
                 merged["properties"].update(part.get("properties", {}))
@@ -248,7 +264,7 @@ class _JSONSchemaToPydantic:
             merged["required"] = list(set(merged["required"]))
             schema = merged

-        fields = {}
+        fields: Dict[str, tuple[Any, FieldInfo]] = {}
         required_fields = set(schema.get("required", []))

         for key, value in schema.get("properties", {}).items():
@@ -299,9 +315,16 @@ class _JSONSchemaToPydantic:
             if "description" in value:
                 field_args["description"] = value["description"]

-            fields[key] = (field_type, Field(**field_args))
+            fields[key] = (
+                field_type,
+                _make_field(
+                    default_value if not is_required else ...,
+                    title=value.get("title"),
+                    description=value.get("description"),
+                ),
+            )

-        model = create_model(model_name, **fields)
+        model: Type[BaseModel] = create_model(model_name, **cast(dict[str, Any], fields))
         model.model_rebuild()
         return model

View File

@@ -1,12 +1,15 @@
-from typing import Any, Dict, List, Literal, Optional
+import types
+from typing import Any, Dict, List, Literal, Optional, Type, get_args, get_origin
 from uuid import UUID, uuid4

 import pytest
 from autogen_core.utils._json_to_pydantic import (
+    FORMAT_MAPPING,
+    TYPE_MAPPING,
     FormatNotSupportedError,
     ReferenceNotFoundError,
     UnsupportedKeywordError,
-    _JSONSchemaToPydantic,
+    _JSONSchemaToPydantic,  # pyright: ignore[reportPrivateUsage]
 )
 from pydantic import BaseModel, EmailStr, Field, ValidationError
@@ -44,31 +47,31 @@ class ComplexModel(BaseModel):

 @pytest.fixture
-def converter():
+def converter() -> _JSONSchemaToPydantic:
     """Fixture to create a fresh instance of JSONSchemaToPydantic for every test."""
     return _JSONSchemaToPydantic()


 @pytest.fixture
-def sample_json_schema():
+def sample_json_schema() -> Dict[str, Any]:
     """Fixture that returns a JSON schema dynamically using model_json_schema()."""
     return User.model_json_schema()


 @pytest.fixture
-def sample_json_schema_recursive():
+def sample_json_schema_recursive() -> Dict[str, Any]:
     """Fixture that returns a self-referencing JSON schema."""
     return Employee.model_json_schema()


 @pytest.fixture
-def sample_json_schema_nested():
+def sample_json_schema_nested() -> Dict[str, Any]:
     """Fixture that returns a nested schema with arrays of objects."""
     return Department.model_json_schema()


 @pytest.fixture
-def sample_json_schema_complex():
+def sample_json_schema_complex() -> Dict[str, Any]:
     """Fixture that returns a complex schema with multiple structures."""
     return ComplexModel.model_json_schema()
@@ -82,7 +85,13 @@ def sample_json_schema_complex():
         (sample_json_schema_complex, "ComplexModel", ["user", "extra_info", "sub_items"]),
     ],
 )
-def test_json_schema_to_pydantic(converter, schema_fixture, model_name, expected_fields, request):
+def test_json_schema_to_pydantic(
+    converter: _JSONSchemaToPydantic,
+    schema_fixture: Any,
+    model_name: str,
+    expected_fields: List[str],
+    request: Any,
+) -> None:
     """Test conversion of JSON Schema to Pydantic model using the class instance."""
     schema = request.getfixturevalue(schema_fixture.__name__)
     Model = converter.json_schema_to_pydantic(schema, model_name)
@@ -155,7 +164,13 @@ def test_json_schema_to_pydantic(
         ),
     ],
 )
-def test_valid_data_model(converter, schema_fixture, model_name, valid_data, request):
+def test_valid_data_model(
+    converter: _JSONSchemaToPydantic,
+    schema_fixture: Any,
+    model_name: str,
+    valid_data: Dict[str, Any],
+    request: Any,
+) -> None:
     """Test that valid data is accepted by the generated model."""
     schema = request.getfixturevalue(schema_fixture.__name__)
     Model = converter.json_schema_to_pydantic(schema, model_name)
@@ -230,7 +245,13 @@ def test_valid_data_model(
         ),
     ],
 )
-def test_invalid_data_model(converter, schema_fixture, model_name, invalid_data, request):
+def test_invalid_data_model(
+    converter: _JSONSchemaToPydantic,
+    schema_fixture: Any,
+    model_name: str,
+    invalid_data: Dict[str, Any],
+    request: Any,
+) -> None:
     """Test that invalid data raises ValidationError."""
     schema = request.getfixturevalue(schema_fixture.__name__)
     Model = converter.json_schema_to_pydantic(schema, model_name)
@@ -258,19 +279,19 @@ class NestedListModel(BaseModel):

 @pytest.fixture
-def sample_json_schema_list_dict():
+def sample_json_schema_list_dict() -> Dict[str, Any]:
     """Fixture for `List[Dict[str, Any]]`"""
     return ListDictModel.model_json_schema()


 @pytest.fixture
-def sample_json_schema_dict_list():
+def sample_json_schema_dict_list() -> Dict[str, Any]:
     """Fixture for `Dict[str, List[Any]]`"""
     return DictListModel.model_json_schema()


 @pytest.fixture
-def sample_json_schema_nested_list():
+def sample_json_schema_nested_list() -> Dict[str, Any]:
     """Fixture for `List[List[str]]`"""
     return NestedListModel.model_json_schema()
@@ -283,7 +304,13 @@ def sample_json_schema_nested_list():
         (sample_json_schema_nested_list, "NestedListModel", ["matrix"]),
     ],
 )
-def test_json_schema_to_pydantic_nested(converter, schema_fixture, model_name, expected_fields, request):
+def test_json_schema_to_pydantic_nested(
+    converter: _JSONSchemaToPydantic,
+    schema_fixture: Any,
+    model_name: str,
+    expected_fields: list[str],
+    request: Any,
+) -> None:
     """Test conversion of JSON Schema to Pydantic model using the class instance."""
     schema = request.getfixturevalue(schema_fixture.__name__)
     Model = converter.json_schema_to_pydantic(schema, model_name)
@@ -324,7 +351,13 @@ def test_json_schema_to_pydantic_nested(
         ),
     ],
 )
-def test_valid_data_model_nested(converter, schema_fixture, model_name, valid_data, request):
+def test_valid_data_model_nested(
+    converter: _JSONSchemaToPydantic,
+    schema_fixture: Any,
+    model_name: str,
+    valid_data: Dict[str, Any],
+    request: Any,
+) -> None:
     """Test that valid data is accepted by the generated model."""
     schema = request.getfixturevalue(schema_fixture.__name__)
     Model = converter.json_schema_to_pydantic(schema, model_name)
@@ -364,7 +397,13 @@ def test_valid_data_model_nested(
         ),
     ],
 )
-def test_invalid_data_model_nested(converter, schema_fixture, model_name, invalid_data, request):
+def test_invalid_data_model_nested(
+    converter: _JSONSchemaToPydantic,
+    schema_fixture: Any,
+    model_name: str,
+    invalid_data: Dict[str, Any],
+    request: Any,
+) -> None:
     """Test that invalid data raises ValidationError."""
     schema = request.getfixturevalue(schema_fixture.__name__)
     Model = converter.json_schema_to_pydantic(schema, model_name)
@@ -373,25 +412,25 @@ def test_invalid_data_model_nested(
         Model(**invalid_data)


-def test_reference_not_found(converter):
+def test_reference_not_found(converter: _JSONSchemaToPydantic) -> None:
     schema = {"type": "object", "properties": {"manager": {"$ref": "#/$defs/MissingRef"}}}
     with pytest.raises(ReferenceNotFoundError):
         converter.json_schema_to_pydantic(schema, "MissingRefModel")


-def test_format_not_supported(converter):
+def test_format_not_supported(converter: _JSONSchemaToPydantic) -> None:
     schema = {"type": "object", "properties": {"custom_field": {"type": "string", "format": "unsupported-format"}}}
     with pytest.raises(FormatNotSupportedError):
         converter.json_schema_to_pydantic(schema, "UnsupportedFormatModel")


-def test_unsupported_keyword(converter):
+def test_unsupported_keyword(converter: _JSONSchemaToPydantic) -> None:
     schema = {"type": "object", "properties": {"broken_field": {"title": "Missing type"}}}
     with pytest.raises(UnsupportedKeywordError):
         converter.json_schema_to_pydantic(schema, "MissingTypeModel")


-def test_enum_field_schema():
+def test_enum_field_schema() -> None:
     schema = {
         "type": "object",
         "properties": {
@@ -401,18 +440,24 @@ def test_enum_field_schema():
         "required": ["status"],
     }

-    converter = _JSONSchemaToPydantic()
+    converter: _JSONSchemaToPydantic = _JSONSchemaToPydantic()
     Model = converter.json_schema_to_pydantic(schema, "Task")

-    assert Model.model_fields["status"].annotation == Literal["pending", "approved", "rejected"]
-    assert Model.model_fields["priority"].annotation == Optional[Literal[1, 2, 3]]
+    status_ann = Model.model_fields["status"].annotation
+    assert get_origin(status_ann) is Literal
+    assert set(get_args(status_ann)) == {"pending", "approved", "rejected"}
+
+    priority_ann = Model.model_fields["priority"].annotation
+    args = get_args(priority_ann)
+    assert type(None) in args
+    assert Literal[1, 2, 3] in args

     instance = Model(status="approved", priority=2)
-    assert instance.status == "approved"
-    assert instance.priority == 2
+    assert instance.status == "approved"  # type: ignore[attr-defined]
+    assert instance.priority == 2  # type: ignore[attr-defined]


-def test_metadata_title_description(converter):
+def test_metadata_title_description(converter: _JSONSchemaToPydantic) -> None:
     schema = {
         "title": "CustomerProfile",
         "description": "A profile containing personal and contact info",
@@ -437,7 +482,7 @@ def test_metadata_title_description(converter):
         "required": ["first_name"],
     }

-    Model = converter.json_schema_to_pydantic(schema, "CustomerProfile")
+    Model: Type[BaseModel] = converter.json_schema_to_pydantic(schema, "CustomerProfile")
     generated_schema = Model.model_json_schema()

     assert generated_schema["title"] == "CustomerProfile"
@@ -460,7 +505,7 @@ def test_metadata_title_description(converter):
     assert email["description"] == "Primary email"


-def test_oneof_with_discriminator(converter):
+def test_oneof_with_discriminator(converter: _JSONSchemaToPydantic) -> None:
     schema = {
         "title": "PetWrapper",
         "type": "object",
@@ -491,11 +536,11 @@ def test_oneof_with_discriminator(converter):
     # Instantiate with a Cat
     cat = Model(pet={"pet_type": "cat", "hunting_skill": "expert"})
-    assert cat.pet.pet_type == "cat"
+    assert cat.pet.pet_type == "cat"  # type: ignore[attr-defined]

     # Instantiate with a Dog
     dog = Model(pet={"pet_type": "dog", "pack_size": 4})
-    assert dog.pet.pet_type == "dog"
+    assert dog.pet.pet_type == "dog"  # type: ignore[attr-defined]

     # Check round-trip schema includes discriminator
     model_schema = Model.model_json_schema()
@@ -503,7 +548,7 @@ def test_oneof_with_discriminator(converter):
     assert model_schema["properties"]["pet"]["discriminator"]["propertyName"] == "pet_type"


-def test_allof_merging_with_refs(converter):
+def test_allof_merging_with_refs(converter: _JSONSchemaToPydantic) -> None:
     schema = {
         "title": "EmployeeWithDepartment",
         "allOf": [{"$ref": "#/$defs/Employee"}, {"$ref": "#/$defs/Department"}],
@@ -525,15 +570,15 @@ def test_allof_merging_with_refs(converter):
     Model = converter.json_schema_to_pydantic(schema, "EmployeeWithDepartment")

     instance = Model(id="123", name="Alice", department="Engineering")
-    assert instance.id == "123"
-    assert instance.name == "Alice"
+    assert instance.id == "123"  # type: ignore[attr-defined]
+    assert instance.name == "Alice"  # type: ignore[attr-defined]
assert instance.department == "Engineering" assert instance.department == "Engineering" # type: ignore[attr-defined]
dumped = instance.model_dump() dumped = instance.model_dump()
assert dumped == {"id": "123", "name": "Alice", "department": "Engineering"} assert dumped == {"id": "123", "name": "Alice", "department": "Engineering"}
def test_nested_allof_merging(converter): def test_nested_allof_merging(converter: _JSONSchemaToPydantic) -> None:
schema = { schema = {
"title": "ContainerModel", "title": "ContainerModel",
"type": "object", "type": "object",
@ -565,8 +610,8 @@ def test_nested_allof_merging(converter):
Model = converter.json_schema_to_pydantic(schema, "ContainerModel") Model = converter.json_schema_to_pydantic(schema, "ContainerModel")
instance = Model(nested={"data": {"base_field": "abc", "extra": "xyz"}}) instance = Model(nested={"data": {"base_field": "abc", "extra": "xyz"}})
assert instance.nested.data.base_field == "abc" assert instance.nested.data.base_field == "abc" # type: ignore[attr-defined]
assert instance.nested.data.extra == "xyz" assert instance.nested.data.extra == "xyz" # type: ignore[attr-defined]
@pytest.mark.parametrize( @pytest.mark.parametrize(
@ -620,12 +665,15 @@ def test_nested_allof_merging(converter):
), ),
], ],
) )
def test_field_constraints(schema, field_name, valid_values, invalid_values): def test_field_constraints(
schema: Dict[str, Any],
field_name: str,
valid_values: List[Any],
invalid_values: List[Any],
) -> None:
converter = _JSONSchemaToPydantic() converter = _JSONSchemaToPydantic()
Model = converter.json_schema_to_pydantic(schema, "ConstraintModel") Model = converter.json_schema_to_pydantic(schema, "ConstraintModel")
import json
for value in valid_values: for value in valid_values:
instance = Model(**{field_name: value}) instance = Model(**{field_name: value})
assert getattr(instance, field_name) == value assert getattr(instance, field_name) == value
@ -650,7 +698,60 @@ def test_field_constraints(schema, field_name, valid_values, invalid_values):
}, },
], ],
) )
def test_unknown_type_raises(schema): def test_unknown_type_raises(schema: Dict[str, Any]) -> None:
converter = _JSONSchemaToPydantic() converter = _JSONSchemaToPydantic()
with pytest.raises(UnsupportedKeywordError): with pytest.raises(UnsupportedKeywordError):
converter.json_schema_to_pydantic(schema, "UnknownTypeModel") converter.json_schema_to_pydantic(schema, "UnknownTypeModel")
@pytest.mark.parametrize("json_type, expected_type", list(TYPE_MAPPING.items()))
def test_basic_type_mapping(json_type: str, expected_type: type) -> None:
schema = {
"type": "object",
"properties": {"field": {"type": json_type}},
"required": ["field"],
}
converter = _JSONSchemaToPydantic()
Model = converter.json_schema_to_pydantic(schema, f"{json_type.capitalize()}Model")
assert "field" in Model.__annotations__
field_type = Model.__annotations__["field"]
# For array/object/null we check the outer type only
if json_type == "null":
assert field_type is type(None)
elif json_type == "array":
assert getattr(field_type, "__origin__", None) is list
elif json_type == "object":
assert field_type in (dict, Dict) or getattr(field_type, "__origin__", None) in (dict, Dict)
else:
assert field_type == expected_type
@pytest.mark.parametrize("format_name, expected_type", list(FORMAT_MAPPING.items()))
def test_format_mapping(format_name: str, expected_type: Any) -> None:
schema = {
"type": "object",
"properties": {"field": {"type": "string", "format": format_name}},
"required": ["field"],
}
converter = _JSONSchemaToPydantic()
Model = converter.json_schema_to_pydantic(schema, f"{format_name.capitalize()}Model")
assert "field" in Model.__annotations__
field_type = Model.__annotations__["field"]
if isinstance(expected_type, types.FunctionType): # if it's a constrained constructor (e.g., conint)
assert callable(field_type)
else:
assert field_type == expected_type
def test_unknown_format_raises() -> None:
schema = {
"type": "object",
"properties": {"bad_field": {"type": "string", "format": "definitely-not-a-format"}},
}
converter = _JSONSchemaToPydantic()
with pytest.raises(FormatNotSupportedError):
converter.json_schema_to_pydantic(schema, "UnknownFormatModel")


@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
[project]
name = "autogen-ext"
-version = "0.5.1"
+version = "0.5.2"
license = {file = "LICENSE-CODE"}
description = "AutoGen extensions library"
readme = "README.md"
@@ -15,7 +15,7 @@ classifiers = [
    "Operating System :: OS Independent",
]
dependencies = [
-    "autogen-core==0.5.1",
+    "autogen-core==0.5.2",
]
[project.optional-dependencies]
@@ -31,7 +31,7 @@ docker = ["docker~=7.0", "asyncio_atexit>=1.0.1"]
ollama = ["ollama>=0.4.7", "tiktoken>=0.8.0"]
openai = ["openai>=1.66.5", "tiktoken>=0.8.0", "aiofiles"]
file-surfer = [
-    "autogen-agentchat==0.5.1",
+    "autogen-agentchat==0.5.2",
    "magika>=0.6.1rc2",
    "markitdown[all]~=0.1.0a3",
]
@@ -43,21 +43,21 @@ llama-cpp = [
graphrag = ["graphrag>=1.0.1"]
chromadb = ["chromadb>=1.0.0"]
web-surfer = [
-    "autogen-agentchat==0.5.1",
+    "autogen-agentchat==0.5.2",
    "playwright>=1.48.0",
    "pillow>=11.0.0",
    "magika>=0.6.1rc2",
    "markitdown[all]~=0.1.0a3",
]
magentic-one = [
-    "autogen-agentchat==0.5.1",
+    "autogen-agentchat==0.5.2",
    "magika>=0.6.1rc2",
    "markitdown[all]~=0.1.0a3",
    "playwright>=1.48.0",
    "pillow>=11.0.0",
]
video-surfer = [
-    "autogen-agentchat==0.5.1",
+    "autogen-agentchat==0.5.2",
    "opencv-python>=4.5",
    "ffmpeg-python",
    "openai-whisper",
@@ -137,7 +137,7 @@ rich = ["rich>=13.9.4"]
mcp = [
    "mcp>=1.6.0",
-    "json-schema-to-pydantic>=0.2.3"
+    "json-schema-to-pydantic>=0.2.4"
]
[tool.hatch.build.targets.wheel]
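As a quick sanity check outside this PR, one way to confirm an environment actually resolved the bumped pins above (assuming the packages and packaging are installed) is:

# Hypothetical check, not part of the PR: verify the 0.5.2 pin and the raised
# json-schema-to-pydantic floor (>=0.2.4) are what ended up installed.
from importlib.metadata import version

from packaging.version import Version

assert Version(version("autogen-core")) == Version("0.5.2")
assert Version(version("autogen-ext")) == Version("0.5.2")
assert Version(version("json-schema-to-pydantic")) >= Version("0.2.4")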


@@ -1,4 +1,4 @@
-from .memory_controller import MemoryController, MemoryControllerConfig
from ._memory_bank import MemoryBankConfig
+from .memory_controller import MemoryController, MemoryControllerConfig
__all__ = ["MemoryController", "MemoryControllerConfig", "MemoryBankConfig"]


@@ -32,9 +32,9 @@ dependencies = [
    "loguru",
    "pyyaml",
    "html2text",
-    "autogen-core>=0.4.9.2,<0.5",
-    "autogen-agentchat>=0.4.9.2,<0.5",
-    "autogen-ext[magentic-one, openai, azure]>=0.4.2,<0.5",
+    "autogen-core>=0.4.9.2,<0.6",
+    "autogen-agentchat>=0.4.9.2,<0.6",
+    "autogen-ext[magentic-one, openai, azure]>=0.4.2,<0.6",
    "anthropic",
]
optional-dependencies = {web = ["fastapi", "uvicorn"], database = ["psycopg"]}


@@ -1,4 +1,5 @@
version = 1
+revision = 1
requires-python = ">=3.10, <3.13"
resolution-markers = [
    "python_full_version >= '3.12.4' and sys_platform == 'darwin'",
@@ -90,7 +91,6 @@ wheels = [
[[package]]
name = "agbench"
-version = "0.0.1a1"
source = { editable = "packages/agbench" }
dependencies = [
    { name = "azure-identity" },
@@ -452,7 +452,7 @@ wheels = [
[[package]]
name = "autogen-agentchat"
-version = "0.5.1"
+version = "0.5.2"
source = { editable = "packages/autogen-agentchat" }
dependencies = [
    { name = "autogen-core" },
@@ -463,7 +463,7 @@ requires-dist = [{ name = "autogen-core", editable = "packages/autogen-core" }]
[[package]]
name = "autogen-core"
-version = "0.5.1"
+version = "0.5.2"
source = { editable = "packages/autogen-core" }
dependencies = [
    { name = "jsonref" },
@@ -582,7 +582,7 @@ dev = [
[[package]]
name = "autogen-ext"
-version = "0.5.1"
+version = "0.5.2"
source = { editable = "packages/autogen-ext" }
dependencies = [
    { name = "autogen-core" },
@@ -745,7 +745,7 @@ requires-dist = [
    { name = "httpx", marker = "extra == 'http-tool'", specifier = ">=0.27.0" },
    { name = "ipykernel", marker = "extra == 'jupyter-executor'", specifier = ">=6.29.5" },
    { name = "json-schema-to-pydantic", marker = "extra == 'http-tool'", specifier = ">=0.2.0" },
-    { name = "json-schema-to-pydantic", marker = "extra == 'mcp'", specifier = ">=0.2.3" },
+    { name = "json-schema-to-pydantic", marker = "extra == 'mcp'", specifier = ">=0.2.4" },
    { name = "langchain-core", marker = "extra == 'langchain'", specifier = "~=0.3.3" },
    { name = "llama-cpp-python", marker = "extra == 'llama-cpp'", specifier = ">=0.3.8" },
    { name = "magika", marker = "extra == 'file-surfer'", specifier = ">=0.6.1rc2" },
@@ -780,6 +780,7 @@ requires-dist = [
    { name = "tiktoken", marker = "extra == 'ollama'", specifier = ">=0.8.0" },
    { name = "tiktoken", marker = "extra == 'openai'", specifier = ">=0.8.0" },
]
+provides-extras = ["anthropic", "langchain", "azure", "docker", "ollama", "openai", "file-surfer", "llama-cpp", "graphrag", "chromadb", "web-surfer", "magentic-one", "video-surfer", "diskcache", "redis", "grpc", "jupyter-executor", "task-centric-memory", "semantic-kernel-core", "gemini", "semantic-kernel-google", "semantic-kernel-hugging-face", "semantic-kernel-mistralai", "semantic-kernel-ollama", "semantic-kernel-onnx", "semantic-kernel-anthropic", "semantic-kernel-pandas", "semantic-kernel-aws", "semantic-kernel-dapr", "http-tool", "semantic-kernel-all", "rich", "mcp"]
[package.metadata.requires-dev]
dev = [
@@ -808,7 +809,6 @@ requires-dist = [
[[package]]
name = "autogenstudio"
-version = "0.4.2"
source = { editable = "packages/autogen-studio" }
dependencies = [
    { name = "aiofiles" },
@@ -862,6 +862,7 @@ requires-dist = [
    { name = "uvicorn", marker = "extra == 'web'" },
    { name = "websockets" },
]
+provides-extras = ["web", "database"]
[[package]]
name = "autograd"
@@ -3044,14 +3045,14 @@ wheels = [
[[package]]
name = "json-schema-to-pydantic"
-version = "0.2.3"
+version = "0.2.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "pydantic" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/f2/8d/da0e791baf63a957ff67e0706d59386b72ab87858e616b6fcfc9b58cd910/json_schema_to_pydantic-0.2.3.tar.gz", hash = "sha256:c76db1f6001996895328e7aa174aae201d85d1f5e79d592c272ea03c8586e453", size = 35305 }
+sdist = { url = "https://files.pythonhosted.org/packages/0e/5a/82ce52917b4b021e739dc02384bb3257b5ddd04e40211eacdc32c88bdda5/json_schema_to_pydantic-0.2.4.tar.gz", hash = "sha256:c24060aa7694ae7be0465ce11339a6d1cc8a72cd8f4378c889d19722fa7da1ee", size = 37816 }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/4a/55/81bbfbc806aab8dc4a21ad1c9c7fd61f94f2b4076ea64f1730a0368831a2/json_schema_to_pydantic-0.2.3-py3-none-any.whl", hash = "sha256:fe0c04357aa8d27ad5a46e54c2d6a8f35ca6c10b36e76a95c39827e38397f427", size = 11699 },
+    { url = "https://files.pythonhosted.org/packages/2e/86/35135e8e4b1da50e6e8ed2afcacce589e576f3460c892d5e616390a4eb71/json_schema_to_pydantic-0.2.4-py3-none-any.whl", hash = "sha256:5c46675df0ab2685d92ed805da38348a34488654cb95ceb1a564dda23dcc3a89", size = 11940 },
]
[[package]]
@@ -4690,7 +4691,6 @@ name = "nvidia-cublas-cu12"
version = "12.4.5.8"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/7f/7f/7fbae15a3982dc9595e49ce0f19332423b260045d0a6afe93cdbe2f1f624/nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_aarch64.whl", hash = "sha256:0f8aa1706812e00b9f19dfe0cdb3999b092ccb8ca168c0db5b8ea712456fd9b3", size = 363333771 },
    { url = "https://files.pythonhosted.org/packages/ae/71/1c91302526c45ab494c23f61c7a84aa568b8c1f9d196efa5993957faf906/nvidia_cublas_cu12-12.4.5.8-py3-none-manylinux2014_x86_64.whl", hash = "sha256:2fc8da60df463fdefa81e323eef2e36489e1c94335b5358bcb38360adf75ac9b", size = 363438805 },
]
@@ -4699,7 +4699,6 @@ name = "nvidia-cuda-cupti-cu12"
version = "12.4.127"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/93/b5/9fb3d00386d3361b03874246190dfec7b206fd74e6e287b26a8fcb359d95/nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:79279b35cf6f91da114182a5ce1864997fd52294a87a16179ce275773799458a", size = 12354556 },
    { url = "https://files.pythonhosted.org/packages/67/42/f4f60238e8194a3106d06a058d494b18e006c10bb2b915655bd9f6ea4cb1/nvidia_cuda_cupti_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:9dec60f5ac126f7bb551c055072b69d85392b13311fcc1bcda2202d172df30fb", size = 13813957 },
]
@@ -4708,7 +4707,6 @@ name = "nvidia-cuda-nvrtc-cu12"
version = "12.4.127"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/77/aa/083b01c427e963ad0b314040565ea396f914349914c298556484f799e61b/nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:0eedf14185e04b76aa05b1fea04133e59f465b6f960c0cbf4e37c3cb6b0ea198", size = 24133372 },
    { url = "https://files.pythonhosted.org/packages/2c/14/91ae57cd4db3f9ef7aa99f4019cfa8d54cb4caa7e00975df6467e9725a9f/nvidia_cuda_nvrtc_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:a178759ebb095827bd30ef56598ec182b85547f1508941a3d560eb7ea1fbf338", size = 24640306 },
]
@@ -4717,7 +4715,6 @@ name = "nvidia-cuda-runtime-cu12"
version = "12.4.127"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/a1/aa/b656d755f474e2084971e9a297def515938d56b466ab39624012070cb773/nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:961fe0e2e716a2a1d967aab7caee97512f71767f852f67432d572e36cb3a11f3", size = 894177 },
    { url = "https://files.pythonhosted.org/packages/ea/27/1795d86fe88ef397885f2e580ac37628ed058a92ed2c39dc8eac3adf0619/nvidia_cuda_runtime_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:64403288fa2136ee8e467cdc9c9427e0434110899d07c779f25b5c068934faa5", size = 883737 },
]
@@ -4740,7 +4737,6 @@ dependencies = [
    { name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
-    { url = "https://files.pythonhosted.org/packages/7a/8a/0e728f749baca3fbeffad762738276e5df60851958be7783af121a7221e7/nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_aarch64.whl", hash = "sha256:5dad8008fc7f92f5ddfa2101430917ce2ffacd86824914c82e28990ad7f00399", size = 211422548 },
    { url = "https://files.pythonhosted.org/packages/27/94/3266821f65b92b3138631e9c8e7fe1fb513804ac934485a8d05776e1dd43/nvidia_cufft_cu12-11.2.1.3-py3-none-manylinux2014_x86_64.whl", hash = "sha256:f083fc24912aa410be21fa16d157fed2055dab1cc4b6934a0e03cba69eb242b9", size = 211459117 },
]
@@ -4749,7 +4745,6 @@ name = "nvidia-curand-cu12"
version = "10.3.5.147"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/80/9c/a79180e4d70995fdf030c6946991d0171555c6edf95c265c6b2bf7011112/nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_aarch64.whl", hash = "sha256:1f173f09e3e3c76ab084aba0de819c49e56614feae5c12f69883f4ae9bb5fad9", size = 56314811 },
    { url = "https://files.pythonhosted.org/packages/8a/6d/44ad094874c6f1b9c654f8ed939590bdc408349f137f9b98a3a23ccec411/nvidia_curand_cu12-10.3.5.147-py3-none-manylinux2014_x86_64.whl", hash = "sha256:a88f583d4e0bb643c49743469964103aa59f7f708d862c3ddb0fc07f851e3b8b", size = 56305206 },
]
@@ -4763,7 +4758,6 @@ dependencies = [
    { name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
-    { url = "https://files.pythonhosted.org/packages/46/6b/a5c33cf16af09166845345275c34ad2190944bcc6026797a39f8e0a282e0/nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_aarch64.whl", hash = "sha256:d338f155f174f90724bbde3758b7ac375a70ce8e706d70b018dd3375545fc84e", size = 127634111 },
    { url = "https://files.pythonhosted.org/packages/3a/e1/5b9089a4b2a4790dfdea8b3a006052cfecff58139d5a4e34cb1a51df8d6f/nvidia_cusolver_cu12-11.6.1.9-py3-none-manylinux2014_x86_64.whl", hash = "sha256:19e33fa442bcfd085b3086c4ebf7e8debc07cfe01e11513cc6d332fd918ac260", size = 127936057 },
]
@@ -4775,7 +4769,6 @@ dependencies = [
    { name = "nvidia-nvjitlink-cu12", marker = "(platform_machine != 'aarch64' and sys_platform == 'linux') or (sys_platform != 'darwin' and sys_platform != 'linux')" },
]
wheels = [
-    { url = "https://files.pythonhosted.org/packages/96/a9/c0d2f83a53d40a4a41be14cea6a0bf9e668ffcf8b004bd65633f433050c0/nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_aarch64.whl", hash = "sha256:9d32f62896231ebe0480efd8a7f702e143c98cfaa0e8a76df3386c1ba2b54df3", size = 207381987 },
    { url = "https://files.pythonhosted.org/packages/db/f7/97a9ea26ed4bbbfc2d470994b8b4f338ef663be97b8f677519ac195e113d/nvidia_cusparse_cu12-12.3.1.170-py3-none-manylinux2014_x86_64.whl", hash = "sha256:ea4f11a2904e2a8dc4b1833cc1b5181cde564edd0d5cd33e3c168eff2d1863f1", size = 207454763 },
]
@@ -4792,7 +4785,6 @@ name = "nvidia-nvjitlink-cu12"
version = "12.4.127"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/02/45/239d52c05074898a80a900f49b1615d81c07fceadd5ad6c4f86a987c0bc4/nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:4abe7fef64914ccfa909bc2ba39739670ecc9e820c83ccc7a6ed414122599b83", size = 20552510 },
    { url = "https://files.pythonhosted.org/packages/ff/ff/847841bacfbefc97a00036e0fce5a0f086b640756dc38caea5e1bb002655/nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:06b3b9b25bf3f8af351d664978ca26a16d2c5127dbd53c0497e28d1fb9611d57", size = 21066810 },
]
@@ -4801,7 +4793,6 @@ name = "nvidia-nvtx-cu12"
version = "12.4.127"
source = { registry = "https://pypi.org/simple" }
wheels = [
-    { url = "https://files.pythonhosted.org/packages/06/39/471f581edbb7804b39e8063d92fc8305bdc7a80ae5c07dbe6ea5c50d14a5/nvidia_nvtx_cu12-12.4.127-py3-none-manylinux2014_aarch64.whl", hash = "sha256:7959ad635db13edf4fc65c06a6e9f9e55fc2f92596db928d169c0bb031e88ef3", size = 100417 },
    { url = "https://files.pythonhosted.org/packages/87/20/199b8713428322a2f22b722c62b8cc278cc53dffa9705d744484b5035ee9/nvidia_nvtx_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:781e950d9b9f60d8241ccea575b32f5105a5baf4c2351cab5256a24869f12a1a", size = 99144 },
]