## Description
This PR pins the opentelemetry-proto version to >=1.28.0, which uses
protobuf > 5.0, < 6.0 to generate protobuf files.
## Related issue number
Closes #6304
An app can pass untyped dicts to set configuration options of various
Task-Centric Memory classes. But tools like pyright can complain about
the loose typing. This PR exposes 4 TypedDict classes that apps can
optionally use.
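As an illustration only (the class and field names below are hypothetical, not the actual exported names), a typed config lets pyright check keys and value types where an untyped dict would pass silently:

```python
from typing import TypedDict


# Hypothetical config class for illustration; the PR exports four such
# TypedDicts from the Task-Centric Memory package.
class MemoryControllerConfig(TypedDict, total=False):
    max_memos_to_retrieve: int
    relevance_conversion_threshold: float


# pyright now validates this dict against the declared keys and types.
config: MemoryControllerConfig = {"max_memos_to_retrieve": 5}
```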
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
`IncludeEnum` was removed in ChromaDB when it was updated to `1.0.0`.
This caused issues when using `ChromaDBVectorMemory`. This PR fixes
those issues.
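For context, a minimal sketch of the ChromaDB 1.0 calling convention the fix targets (illustrative, not the exact patch): include fields are now plain strings, since `IncludeEnum` is gone.

```python
import chromadb

client = chromadb.Client()
collection = client.get_or_create_collection("memories")
collection.add(ids=["1"], documents=["example memory"])

# In chromadb >= 1.0.0, `include` takes plain strings; `IncludeEnum` no longer exists.
results = collection.query(
    query_texts=["example"],
    n_results=1,
    include=["documents", "metadatas", "distances"],
)
```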
## Related issue number
Closes #6241
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
Bug fix: add a `get_embedding()` implementation.
## Related issue number
"Closes #6240 " -->
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
- Exposes a few optional memory controller parameters for more detailed
control and evaluation.
- Fixes a couple formatting issues in the documentation.
## Related issue number
None
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
This pull request introduces a new feature to the
`DockerCommandLineCodeExecutor` class that allows temporary files
generated by code execution to be deleted once execution completes. The
most important changes include adding a new configuration option,
updating the class to handle this option, and adding tests to verify the
new functionality.
### New Feature: Temporary File Deletion
* `python/packages/autogen-ext/src/autogen_ext/code_executors/docker/_docker_code_executor.py`:
Added a `delete_tmp_files` attribute to the
`DockerCommandLineCodeExecutorConfig` class and updated the
`DockerCommandLineCodeExecutor` class to handle this attribute. This
includes initializing the attribute, adding it to the configuration
methods, and implementing the file deletion logic in the
`_execute_code_dont_check_setup` method.
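A minimal usage sketch of the new option (the surrounding setup is illustrative):

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor


async def main() -> None:
    # With delete_tmp_files=True, generated code files are removed after execution.
    executor = DockerCommandLineCodeExecutor(delete_tmp_files=True)
    await executor.start()
    result = await executor.execute_code_blocks(
        [CodeBlock(language="python", code="print('hello')")],
        cancellation_token=CancellationToken(),
    )
    print(result.output)
    await executor.stop()


asyncio.run(main())
```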
### Testing
* `python/packages/autogen-ext/tests/code_executors/test_docker_commandline_code_executor.py`:
Added a new test, `test_delete_tmp_files`, to verify the behavior of the
`delete_tmp_files` attribute. This test checks that temporary files are
correctly deleted or retained based on the configuration.
This PR improves fallback safety when an invalid `model_family` is
supplied to `get_transformer()`. Previously, if a user passed an
arbitrary or incorrect `family` string in `model_info`, the lookup could
fail without falling back to `ModelFamily.UNKNOWN`.
Now, we explicitly check whether `model_family` is a valid value in
`ModelFamily.ANY`. If not, we fall back to `_find_model_family()` as
intended.
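A sketch of the check (illustrative; the real `_find_model_family()` is a private helper whose signature is assumed here, so a stand-in is defined):

```python
from typing import get_args

from autogen_core.models import ModelFamily


def _find_model_family(api: str, model: str) -> str:
    # Stand-in for the existing private helper that infers family from the name.
    return ModelFamily.UNKNOWN


def _resolve_family(model: str, model_family: str) -> str:
    # Accept the supplied family only if it is a known ModelFamily value;
    # otherwise fall back to inferring the family from the model name.
    if model_family in get_args(ModelFamily.ANY):
        return model_family
    return _find_model_family("openai", model)
```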
## Related issue number
Related: https://github.com/microsoft/autogen/issues/6011#issuecomment-2779957730
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Messages sent as part of `LLMCallEvent` for Anthropic were not fully serializable.
The example below shows `TextBlock` and `ToolUseBlock` objects inside the content of messages; these throw downstream errors in apps like AGS (or event sinks) that expect serializable dicts inside the `LLMCallEvent`.
```
[
{'role': 'user', 'content': 'What is the weather in New York?'},
{'role': 'assistant', 'content': [TextBlock(citations=None, text='I can help you find the weather in New York. Let me check that for you.', type='text'), ToolUseBlock(id='toolu_016W8g55GejYGBzRRrcsnt7M', input={'city': 'New York'}, name='get_weather', type='tool_use')]},
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'content': 'The weather in New York is 73 degrees and Sunny.'}]}
]
```
This PR serializes the content of Anthropic messages before it is passed to `LLMCallEvent`:
```
[
{'role': 'user', 'content': 'What is the weather in New York?'},
{'role': 'assistant', 'content': [{'citations': None, 'text': 'I can help you find the weather in New York. Let me check that for you.', 'type': 'text'}, {'id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'input': {'city': 'New York'}, 'name': 'get_weather', 'type': 'tool_use'}]},
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'content': 'The weather in New York is 73 degrees and Sunny.'}]}
]
```
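A minimal sketch of the approach (not the exact patch): Anthropic content blocks are pydantic models, so they can be dumped to plain dicts before the event is logged.

```python
from typing import Any


def _to_serializable(message: dict[str, Any]) -> dict[str, Any]:
    # TextBlock / ToolUseBlock are pydantic models; model_dump() converts
    # them to plain dicts that event sinks can serialize.
    content = message.get("content")
    if isinstance(content, list):
        content = [b.model_dump() if hasattr(b, "model_dump") else b for b in content]
    return {**message, "content": content}
```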
# Azure AI Search Tool Implementation
This PR adds a new tool for Azure AI Search integration to autogen-ext,
enabling agents to search and retrieve information from Azure AI Search
indexes.
## Why Are These Changes Needed?
AutoGen currently lacks native integration with Azure AI Search, which
is a powerful enterprise search service that supports semantic, vector,
and hybrid search capabilities. This integration enables agents to:
1. Retrieve relevant information from large document collections
2. Perform semantic search with AI-powered ranking
3. Execute vector similarity search using embeddings
4. Combine text and vector approaches for optimal results
This tool complements existing retrieval capabilities and provides a
seamless way to integrate with Azure's search infrastructure.
## Features
- **Multiple Search Types**: Support for text, semantic, vector, and
hybrid search
- **Flexible Configuration**: Customizable search parameters and fields
- **Robust Error Handling**: User-friendly error messages with
actionable guidance
- **Performance Optimizations**: Configurable caching and retry
mechanisms
- **Vector Search Support**: Built-in embedding generation with
extensibility
## Usage Example
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.azure import AzureAISearchTool

# Create the search tool
search_tool = AzureAISearchTool.load_component({
    "provider": "autogen_ext.tools.azure.AzureAISearchTool",
    "config": {
        "name": "DocumentSearch",
        "description": "Search for information in the knowledge base",
        "endpoint": "https://your-service.search.windows.net",
        "index_name": "your-index",
        "credential": {"api_key": "your-api-key"},
        "query_type": "semantic",
        "semantic_config_name": "default",
    },
})

# Create an agent that can call the search tool.
# (The original example mixed the AutoGen 0.2 API with autogen-ext; this
# sketch uses the AgentChat API that autogen-ext tools plug into.)
assistant = AssistantAgent(
    "assistant",
    model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    tools=[search_tool],
)


async def main() -> None:
    result = await assistant.run(
        task="What information do we have about quantum computing in our knowledge base?"
    )
    print(result.messages[-1])


asyncio.run(main())
```
## Testing
- Added unit tests for all search types (text, semantic, vector, hybrid)
- Added tests for error handling and cancellation
- All tests pass locally
## Documentation
- Added comprehensive docstrings with examples
- Included warnings about placeholder embedding implementation
- Added links to Azure AI Search documentation
## Related issue number
Closes #5419
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Just a simple change.
```python
messages: List[LLMMessage] = [UserMessage(content="Call the pass tool with input 'task'", source="user")]
```
to
```python
messages: List[LLMMessage] = [UserMessage(content="Call the pass tool with input 'task' and talk result", source="user")]
```
With this change, Claude (Anthropic) models are now able to pass the
test case `test_model_client_with_function_calling`. Before this fix,
Claude failed to interpret the intent correctly; now it can infer both
tool usage and follow-up generation.
This change is backward-compatible with other models (e.g., GPT-4) and
improves cross-model consistency for function-calling tests.
This PR improves how model_family is resolved when selecting a
transformer from the registry.
Previously, model families were inferred using a simple prefix-based
match like:
```python
if model.startswith(family): ...
```
This works for cleanly prefixed models (e.g., `gpt-4o`, `claude-3`) but
fails for models like `mistral-large-latest`, `codestral-latest`, etc.,
where prefix-based matching is ambiguous or misleading.
To address this:
- `model_family` can now be passed explicitly (e.g., via `ModelInfo`)
- `_find_model_family()` is only used as a fallback when the value is
"unknown"
- Transformer lookup is now more robust and predictable
- An example integration in `to_oai_type()` demonstrates this pattern
using `self._model_info["family"]`
This change is required for safe support of models like Mistral and
other future models that do not follow standard naming conventions.
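A sketch of passing the family explicitly (the endpoint and capability flags below are placeholders for illustration):

```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="mistral-large-latest",
    base_url="https://api.mistral.ai/v1",  # placeholder endpoint
    api_key="...",
    model_info={
        "family": "mistral",  # explicit family; no prefix guessing needed
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "structured_output": True,
    },
)
```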
Linked to discussion in
[#6151](https://github.com/microsoft/autogen/issues/6151)
Related: #6011
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
The initializer for ACADynamicSessionsCodeExecutor creates a new GUID to
use as the session ID for dynamic sessions.
In some scenarios it is desirable to be able to re-create the agent
group chat from saved state. In this case, the
ACADynamicSessionsCodeExecutor needs to be associated with a previous
instance (so that any execution state is still valid)
This PR adds a new argument to the initializer to allow a session ID to
be passed in (defaulting to the current behaviour of creating a GUID if
absent).
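A sketch of the intended use (the keyword name `session_id` follows the PR description; the endpoint is a placeholder):

```python
from azure.identity import DefaultAzureCredential
from autogen_ext.code_executors.azure import ACADynamicSessionsCodeExecutor

saved_session_id = "..."  # loaded from previously saved state

# Re-create the executor against a previously saved session so earlier
# execution state remains valid.
executor = ACADynamicSessionsCodeExecutor(
    pool_management_endpoint="https://example.region.dynamicsessions.io/...",  # placeholder
    credential=DefaultAzureCredential(),
    session_id=saved_session_id,  # omitted -> a new GUID, as before
)
```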
Closes #6119
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
This PR fixes a `400 - invalid_request_error` that occurs when using
Anthropic models and the **final message is from the assistant and ends
with trailing whitespace**.
Example error:
```
Error code: 400 - {'error': {'code': 'invalid_request_error', 'message': 'messages: final assistant content cannot end with trailing whitespace', ...}}
```
To unblock ongoing internal usage, this patch introduces an **ad-hoc
fix** that strips trailing whitespace if the model is Anthropic and the
last message is from the assistant.
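A sketch of the ad-hoc fix (illustrative, not the exact patch):

```python
from typing import Any


def _strip_final_assistant_whitespace(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # Anthropic rejects a final assistant message that ends with whitespace,
    # so strip it just before sending the request.
    if messages and messages[-1].get("role") == "assistant":
        content = messages[-1].get("content")
        if isinstance(content, str):
            messages[-1] = {**messages[-1], "content": content.rstrip()}
    return messages
```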
## Related issue number
Ad-hoc fix for the issue discussed here:
https://github.com/microsoft/autogen/issues/6167
Follow-up structural proposal here:
https://github.com/microsoft/autogen/issues/6167#issuecomment-2768592840
## Why are these changes needed?
Changed the default working directory of code executors from the current
directory `"."` to a Python
[`tempfile.TemporaryDirectory`](https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory).
These changes simplify file cleanup and prevent the model from accessing
code files or other sensitive data that should not be accessible.
Changes made:
- The default `work_dir` parameter in code executors is changed to
`None`; when invoking the `start()` method, if no `work_dir` was
specified (`None`), a temporary directory is created (see the sketch
after this list).
- The `start()` and `stop()` methods of code executors handle the
creation and cleanup of the working directory for the default temporary
directory.
- For maintaining backward compatibility:
- A `DeprecationWarning` is emitted when the current dir, `"."`, is used
as `work_dir`, as it is in the current code executor implementation. The
deprecation warning is tested in `test_deprecated_warning()`.
- For existing implementations that do not call the `start()` method and
do not specify a `work_dir`, the executors will continue using the
current directory `"."` as the working directory, maintaining backward
compatibility.
- Updated test suites:
- Added tests to confirm that by default code executors use a temporary
directory as their working directory: `test_default_work_dir_is_temp()`;
- Implemented a test to ensure that a `DeprecationWarning` is raised when
the current directory is used as the default directory:
`test_deprecated_warning()`;
- Added tests to ensure that errors arise when invalid paths (ones that
don't exist or that the user lacks permissions for) are provided:
`test_error_wrong_path()`.
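A sketch of the default behaviour after this change (illustrative):

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor


async def main() -> None:
    executor = LocalCommandLineCodeExecutor()  # no work_dir given
    await executor.start()  # creates a temporary working directory
    result = await executor.execute_code_blocks(
        [CodeBlock(language="python", code="print('hi')")],
        cancellation_token=CancellationToken(),
    )
    print(result.output)
    await executor.stop()  # cleans up the temporary directory


asyncio.run(main())
```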
Feel free to suggest any additions or improvements!
## Related issue number
Closes #6041
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR adds a module-level docstring to `_message_transform.py`, as
requested in the review for [PR
#6063](https://github.com/microsoft/autogen/pull/6063).
The documentation includes:
- Background and motivation behind the modular transformer design
- Key concepts such as transformer functions, pipelines, and maps
- Examples of how to define, register, and use transformers
- Design principles to guide future contributions and extensions
By embedding this explanation directly into the module, contributors and
maintainers can more easily understand the structure, purpose, and usage
of the transformer pipeline without needing to refer to external
documents.
## Related issue number
Follow-up to [PR #6063](https://github.com/microsoft/autogen/pull/6063)
## Why are these changes needed?
This change addresses a compatibility issue when using Google Gemini
models with AutoGen. Specifically, Gemini returns a 400 INVALID_ARGUMENT
error when a request contains an empty "text" parameter: Gemini does not
accept empty string values (e.g., "") as valid inputs in the
conversation history.
To fix this, if the content field is falsy (e.g., None, "", etc.), it is
explicitly replaced with a single whitespace (" "), which prevents the
Gemini model from rejecting the request.
- **Gemini API compatibility:** Gemini models reject empty assistant
messages (e.g., `""`), causing runtime errors. This PR ensures such
messages are safely replaced with whitespace where appropriate.
- **Avoiding regressions:** Applying the empty content workaround **only
to Gemini**, and **only to valid message types**, avoids breaking OpenAI
or other models.
- **Reducing duplication:** Previously, message transformation logic was
scattered and repeated across different message types and models.
Modularizing this pipeline removes that redundancy.
- **Improved maintainability:** With future model variants likely to
introduce more constraints, this modular structure makes it easier to
adapt transformations without writing ad-hoc code each time.
- **Testing for correctness:** The new structure is verified with tests,
ensuring the bug fix is effective and non-intrusive.
## Summary
This PR introduces a **modular transformer pipeline** for message
conversion and **fixes a Gemini-specific bug** related to empty
assistant message content.
### Key Changes
- **[Refactor]** Extracted message transformation logic into a unified
pipeline to:
- Reduce code duplication
- Improve maintainability
- Simplify debugging and extension for future model-specific logic
- **[BugFix]** Gemini models do not accept empty assistant message
content.
- Introduced `_set_empty_to_whitespace` transformer to replace empty
strings with `" "` only where needed
- Applied it **only** to `"text"` and `"thought"` message types, not to
`"tools"` to avoid serialization errors
- **Improved structure for model-specific handling**
- Transformer functions are now grouped and conditionally applied based
on message type and model family
- This design makes it easier to support future models or combinations
(e.g., Gemini + R1)
- **Test coverage added**
- Added dedicated tests to verify that empty assistant content causes
errors for Gemini
- Ensured the fix resolves the issue without affecting OpenAI models
---
## Motivation
Originally, Gemini-compatible endpoints would fail when receiving
assistant messages with empty content (`""`).
This issue required special handling without introducing brittle, ad-hoc
patches.
In addressing this, I also saw an opportunity to **modularize** the
message transformation logic across models.
This improves clarity, avoids duplication, and simplifies future
adaptations (e.g., different constraints across model families).
---
## 📘 AutoGen Modular Message Transformer: Design & Usage Guide
This document introduces the **new modular transformer system** used in
AutoGen for converting `LLMMessage` instances to SDK-specific message
formats (e.g., OpenAI-style `ChatCompletionMessageParam`).
The design improves **reusability, extensibility**, and
**maintainability** across different model families.
---
### 🚀 Overview
Instead of scattering model-specific message conversion logic across the
codebase, the new design introduces:
- Modular transformer **functions** for each message type
- Per-model **transformer maps** (e.g., for OpenAI-compatible models)
- Optional **conditional transformers** for multimodal/text hybrid
models
- Clear separation between **message adaptation logic** and
**SDK-specific builder** (e.g., `ChatCompletionUserMessageParam`)
---
### 🧱 1. Define Transform Functions
Each transformer function takes:
- `LLMMessage`: a structured AutoGen message
- `context: dict`: metadata passed through the builder pipeline
And returns:
- A dictionary of keyword arguments for the target message constructor
(e.g., `{"content": ..., "name": ..., "role": ...}`)
```python
def _set_thought_as_content_gemini(message: LLMMessage, context: Dict[str, Any]) -> Dict[str, str | None]:
assert isinstance(message, AssistantMessage)
return {"content": message.thought or " "}
```
---
### 🪢 2. Compose Transformer Pipelines
Multiple transformer functions are composed into a pipeline using
`build_transformer_func()`:
```python
base_user_transformer_funcs: List[Callable[[LLMMessage, Dict[str, Any]], Dict[str, Any]]] = [
_assert_valid_name,
_set_name,
_set_role("user"),
]
user_transformer = build_transformer_func(
funcs=base_user_transformer_funcs,
message_param_func=ChatCompletionUserMessageParam
)
```
- The `message_param_func` is the actual constructor for the target
message class (usually from the SDK).
- The pipeline is **ordered** — each function adds or overrides keys in
the builder kwargs.
---
### 🗂️ 3. Register Transformer Map
Each model family maintains a `TransformerMap`, which maps `LLMMessage`
types to transformers:
```python
__BASE_TRANSFORMER_MAP: TransformerMap = {
SystemMessage: system_transformer,
UserMessage: user_transformer,
AssistantMessage: assistant_transformer,
}
register_transformer("openai", model_name_or_family, __BASE_TRANSFORMER_MAP)
```
- `"openai"` is currently required (as only OpenAI-compatible format is
supported now).
- Registration ensures AutoGen knows how to transform each message type
for that model.
---
### 🔁 4. Conditional Transformers (Optional)
When message construction depends on runtime conditions (e.g., `"text"`
vs. `"multimodal"`), use:
```python
conditional_transformer = build_conditional_transformer_func(
funcs_map=user_transformer_funcs_claude,
message_param_func_map=user_transformer_constructors,
condition_func=user_condition,
)
```
Where:
- `funcs_map`: maps condition label → list of transformer functions
```python
user_transformer_funcs_claude = {
"text": text_transformers + [_set_empty_to_whitespace],
"multimodal": multimodal_transformers + [_set_empty_to_whitespace],
}
```
- `message_param_func_map`: maps condition label → message builder
```python
user_transformer_constructors = {
"text": ChatCompletionUserMessageParam,
"multimodal": ChatCompletionUserMessageParam,
}
```
- `condition_func`: determines which transformer to apply at runtime
```python
def user_condition(message: LLMMessage, context: Dict[str, Any]) -> str:
if isinstance(message.content, str):
return "text"
return "multimodal"
```
---
### 🧪 Example Flow
```python
llm_message = AssistantMessage(content="", thought="let's go", source="a")
model_family = "openai"
model_name = "claude-3-opus"
transformer = get_transformer(model_family, model_name, type(llm_message))
sdk_message = transformer(llm_message, context={})
```
---
### 🎯 Design Benefits
| Feature | Benefit |
|--------|---------|
| 🧱 Function-based modular design | Easy to compose and test |
| 🧩 Per-model registry | Clean separation across model families |
| ⚖️ Conditional support | Allows multimodal / dynamic adaptation |
| 🔄 Reuse-friendly | Shared logic (e.g., `_set_name`) is DRY |
| 📦 SDK-specific | Keeps message adaptation aligned to the builder interface |
---
### 🔮 Future Direction
- Support more SDKs and formats by introducing new `message_param_func` implementations
- Global registry integration (currently `"openai"`-scoped)
- Class-based transformer variant if complexity grows
---
## Related issue number
Closes #5762
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Rename the `ChatMessage` and `AgentEvent` base classes to `BaseChatMessage` and `BaseAgentEvent`.
Bring back `ChatMessage` and `AgentEvent` as unions of built-in concrete types to avoid breaking existing applications that depend on Pydantic serialization.
Why?
Much existing code uses containers like this:
```python
class AppMessage(BaseModel):
    name: str
    message: ChatMessage


# Serialization looks like this:
m = AppMessage(...)
m.model_dump_json()
# Fields like HandoffMessage.target would be lost if ChatMessage were a
# base class without content or target fields.
```
The assumption that `ChatMessage` or `AgentEvent` is a union of concrete types may be baked into many existing code bases. So this PR brings back the union types, while keeping method type hints, such as those on `on_messages`, on the `BaseChatMessage` and `BaseAgentEvent` base classes for flexibility.
The Anthropic SDK cannot accept multiple system messages, but some
AutoGen agents (e.g., `SocietyOfMindAgent`) produce multiple system
messages. Gemini through the OpenAI SDK does not raise an error, but
multiple system messages do not work there either (only the last one
takes effect).
So this is a simple change: merge multiple system messages in these
cases, as sketched below.
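A sketch of the merge (illustrative, not the exact patch):

```python
from autogen_core.models import LLMMessage, SystemMessage


def _merge_system_messages(messages: list[LLMMessage]) -> list[LLMMessage]:
    # The Anthropic SDK accepts a single system prompt, so fold all system
    # messages into one, preserving their order.
    system_contents = [m.content for m in messages if isinstance(m, SystemMessage)]
    others = [m for m in messages if not isinstance(m, SystemMessage)]
    if not system_contents:
        return others
    return [SystemMessage(content="\n".join(system_contents)), *others]
```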
## Related issue number
Closes #6116, Closes #6117
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
When using the `ACADynamicSessionsCodeExecutor` it includes the stdout
from the execution but also the `results` property from the call to
dynamic sessions. In some situations, when the executed code results in
a file being saved this is included in the result:
```console
Plot saved as 'results_by_date.png'
{'type': 'image', 'format': 'png', 'base64_data': 'iVBORw0KGgoAAAANSUhEUgAAA90AAAJOCAYAAACqS2TfAAAAOXRFWHRTb2Z0d2FyZQ...
```
In some situations, this additional output is not desirable:
- when displaying the code output to a user: the stdout content is
dwarfed by the base64-encoded file content
- when an LLM agent is going to evaluate the code output to determine
next steps: the base64 content will be included in the message history
sent to the LLM, increasing the prompt token cost
To handle these cases, this PR adds a new (optional) argument to the
`ACADynamicSessionsCodeExecutor` constructor that allows suppressing the
result content, defaulting to `False` to preserve the current behaviour
(see the sketch below).
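A sketch with a hypothetical keyword name (this description names the option but not its final spelling):

```python
from azure.identity import DefaultAzureCredential
from autogen_ext.code_executors.azure import ACADynamicSessionsCodeExecutor

executor = ACADynamicSessionsCodeExecutor(
    pool_management_endpoint="https://example.region.dynamicsessions.io/...",  # placeholder
    credential=DefaultAzureCredential(),
    suppress_result_output=True,  # hypothetical name: drop `results`, keep stdout
)
```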
(from #6042)
Closes #6042
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR adds missing model entries for OpenAI-compatible endpoints,
including gpt-4.5-turbo, gpt-4.5-turbo-preview, and claude-3.5-sonnet.
This improves coverage and avoids potential fallback or mismatch issues
when initializing clients.
This PR refactored `AgentEvent` and `ChatMessage` union types to
abstract base classes. This allows for user-defined message types that
subclass one of the base classes to be used in AgentChat.
To support a unified interface for working with the messages, the base
classes added abstract methods for:
- Convert content to string
- Convert content to a `UserMessage` for model client
- Convert content for rendering in console.
- Dump into a dictionary
- Load and create a new instance from a dictionary
This way, all agents such as `AssistantAgent` and `SocietyOfMindAgent`
can utilize the unified interface to work with any built-in and
user-defined message type.
This PR also introduces a new message type, `StructuredMessage`, for
AgentChat (Resolves #5131), which is a generic type that requires a
user-specified content type.
You can create a `StructuredMessage` as follows:
```python
from typing import List

from pydantic import BaseModel

from autogen_agentchat.messages import StructuredMessage


class MessageType(BaseModel):
    data: str
    references: List[str]


message = StructuredMessage[MessageType](
    content=MessageType(data="data", references=["a", "b"]),
    source="user",
)

# message.content is of type `MessageType`.
```
This PR addresses the receiving side of this message type. Producing
this message type from `AssistantAgent` continues in #5934.
Added unit tests to verify this message type works with agents and
teams.
This makes the behavior of the hosted R1 model the same as the locally hosted R1 model.
Addresses: #5989
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Add UTF-8 encoding to file reading.
Without this, a default system encoding will be used. On Windows
machines this can default to any local encoding, causing errors.
```python
import os

with open(
    os.path.join(os.path.abspath(os.path.dirname(__file__)), "page_script.js"), "rt", encoding="utf-8"
) as fh:
    ...  # body elided in the PR description
```
## Related issue number
Closes #6093
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Added support for the thought process in tool calls for
`OpenAIChatCompletionClient`, allowing additional text produced by a
model alongside tool calls to be preserved in the `thought` field of
`CreateResult`. This PR extends the same functionality to
`AzureAIChatCompletionClient` for consistency across model clients.
#5650
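A sketch of what the change enables (the client, messages, and tools are assumed to be prepared elsewhere):

```python
from autogen_core.models import CreateResult


async def run_with_thought(client, messages, tools) -> None:
    # `client` is an AzureAIChatCompletionClient; messages/tools assumed prepared.
    result: CreateResult = await client.create(messages=messages, tools=tools)
    if isinstance(result.content, list):  # the model requested tool calls
        print("thought:", result.thought)  # extra text preserved alongside the calls
```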
Co-authored-by: Jay Prakash Thakur <jathakur@microsoft.com>
## Why are these changes needed?
This PR fixes a `TypeError: Cannot instantiate typing.Union` that occurs
when using the `MultimodalWebSurfer` agent with Anthropic models. The
error was caused by the incorrect usage of `typing.Union` as a class
constructor instead of a type hint within the `_anthropic_client.py`
file. The code was attempting to instantiate `typing.Union`, which is
not allowed. The fix correctly uses `typing.Union` within type hints
and uses the correct `Base64ImageSourceParam` type. It also updates the
`pyproject.toml` dependency.
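For illustration, the difference the fix amounts to:

```python
from typing import Union

# The bug: calling typing.Union like a constructor raises at runtime.
# Union[str, bytes]("data")  # TypeError: Cannot instantiate typing.Union


# The fix: use Union only in type hints.
def load_image_source(source: Union[str, bytes]) -> None:
    ...
```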
## Related issue number
Closes #6035
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Optionally limit what files and folders FileSurfer can access
(constraining it to a subtree of the FS).
This is not a replacement for Docker sandboxing, but can be used in
conjunction with sandboxing to help prevent FileSurfer from accessing
sensitive files.
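A sketch of the intended use (the keyword name below is assumed for illustration, not taken from the PR):

```python
from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient

surfer = FileSurfer(
    "file_surfer",
    model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    base_path="/workspace/project",  # assumed kwarg: constrain access to this subtree
)
```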
## Why are these changes needed?
When creating an `ACASessionsExecutor` instance and executing some code,
the default token scope does not work. It always returns:
"ClientAuthenticationError: Authentication failed: AADSTS70011: The
provided request must include a 'scope' input parameter. The provided
value for the input parameter 'scope' is not valid. The scope
https://dynamicsessions.io/ is not valid. Trace ID:
d75efa58-8be7-44ef-8839-aacfdc850600 Correlation ID:
a8e4d859-92da-4fbe-a8e0-05116323ab55 Timestamp: 2025-03-14 14:15:09Z"
After changing the scope in `_ensure_access_token` to
"https://dynamicsessions.io/.default" rather than
"https://dynamicsessions.io/", it worked (see the sketch below).
## Related issue number
issue #5946
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: edwinwu <edwin@Edwin-MBA.local>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Resolves #5982
This PR adds support for `json_schema` as a `response_format` type in
`OpenAIChatCompletionClient`. This is necessary because it allows the
client to be serialized along with the schema. If the user uses
`response_format=SomeBaseModel`, the client cannot be serialized.
Usage:
```python
# Structured output response, with a pre-defined JSON schema.
OpenAIChatCompletionClient(...,
response_format = {
"type": "json_schema",
"json_schema": {
"name": "name of the schema, must be an identifier.",
"description": "description for the model.",
# You can convert a Pydantic (v2) model to JSON schema
# using the `model_json_schema()` method.
"schema": "<the JSON schema itself>",
# Whether to enable strict schema adherence when
# generating the output. If set to true, the model will
# always follow the exact schema defined in the
# `schema` field. Only a subset of JSON Schema is
# supported when `strict` is `true`.
# To learn more, read
# https://platform.openai.com/docs/guides/structured-outputs.
"strict": False, # or True
},
},
)
```
R1 reasoning tokens from the hosted R1 model were not parsed correctly by the OpenAI client.
Resolves #5941
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>