## Description
This PR pins opentelemetry-proto version to >=1.28.0, which uses
protobuf > 5.0, < 6.0 to generate protobuf files.
## Related issue number
Closes #6304
## Why are these changes needed?
- To add support for code generation, execution and reflection to
`CodeExecutorAgent`.
## Related issue number
Closes #5824
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Signed-off-by: Abhijeetsingh Meena <abhijeet040403@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
adding email agent
## Why are these changes needed?
This PR introduces an AI-powered email assistant that can generate
images, attach files, draft reports, and send emails to multiple
recipients or specific users based on their queries. This feature is
highly beneficial for customer management and email marketing, enhancing
automation and improving efficiency.
## Related issue number
Open #6228
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
An app can pass untyped dicts to set configuration options of various
Task-Centric Memory classes. But tools like pyright can complain about
the loose typing. This PR exposes 4 TypedDict classes that apps can
optionally use.
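For illustration, a hedged sketch of the pattern (the class name and keys below are hypothetical stand-ins, not the actual exported TypedDicts):
```python
from typing import TypedDict


# Hypothetical example of the pattern; the real TypedDicts live in the
# Task-Centric Memory package and may use different names and keys.
class MemoryControllerConfig(TypedDict, total=False):
    max_train_trials: int
    max_test_trials: int


config: MemoryControllerConfig = {"max_train_trials": 2, "max_test_trials": 1}
# Pyright can now check the keys and value types instead of accepting an untyped dict.
```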
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Closes #6265
Convert the `Message` and `Resource` dataclasses to Pydantic models in
the `llamaindex-agent` cookbook.
* Replace `dataclass` with `BaseModel` for `Message` and `Resource`
classes.
* Update imports to use `BaseModel` from `pydantic`
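A minimal sketch of the conversion (field names are illustrative; see the cookbook for the actual definitions):
```python
from typing import Optional

from pydantic import BaseModel


class Message(BaseModel):  # previously a @dataclass
    content: str


class Resource(BaseModel):  # previously a @dataclass
    content: str
    node_id: str
    score: Optional[float] = None
```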
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
`IncludeEnum` was removed in ChromaDB when it was updated to `1.0.0`.
This caused issues when using `ChromaDBVectorMemory`. This PR fixes
those issues.
## Related issue number
Closes #6241
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
Add note on how to update modelinfo for new models.
## Related issue number
Closes #6258
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
https://github.com/user-attachments/assets/e160f16d-f42d-49e2-a6c6-687e4e6786f4
Enable file upload/paste as a task in AGS. This enables tasks like:
- Can you research and fact-check the ideas in this screenshot?
- Summarize this file
Only text and images are supported for now. Underneath, it constructs `TextMessage` and multimodal messages as the task.
## Related issue number
Closes #5773
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
## Why are these changes needed?
Bug fix: add the `get_embedding()` implementation.
## Related issue number
"Closes #6240 " -->
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
- Exposes a few optional memory controller parameters for more detailed
control and evaluation.
- Fixes a couple formatting issues in the documentation.
## Related issue number
None
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Fixes the sha256_hash docstring to refer to SHA-256 and not MD5.
## Why are these changes needed?
The docstring refers to (presumably) a previous implementation that was
using MD5.
## Related issue number
N/A
## Checks
- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [N/A] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [N/A] I've made sure all auto checks have passed.
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
This pull request introduces a new feature to the `DockerCommandLineCodeExecutor` class that allows temporary files generated during code execution to be deleted once execution completes. The most important changes include adding a new configuration option, updating the class to handle this option, and adding tests to verify the new functionality.
### New Feature: Temporary File Deletion
* `python/packages/autogen-ext/src/autogen_ext/code_executors/docker/_docker_code_executor.py`:
Added a `delete_tmp_files` attribute to the `DockerCommandLineCodeExecutorConfig` class and updated the `DockerCommandLineCodeExecutor` class to handle this attribute. This includes initializing the attribute, adding it to the configuration methods, and implementing the file deletion logic in the `_execute_code_dont_check_setup` method.
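A minimal usage sketch, assuming the option is exposed as the `delete_tmp_files` constructor keyword described above:
```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor


async def main() -> None:
    # delete_tmp_files=True removes the generated code files once execution finishes.
    async with DockerCommandLineCodeExecutor(delete_tmp_files=True) as executor:
        result = await executor.execute_code_blocks(
            [CodeBlock(language="python", code="print('hello')")],
            cancellation_token=CancellationToken(),
        )
        print(result.output)


asyncio.run(main())
```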
### Testing
* `python/packages/autogen-ext/tests/code_executors/test_docker_commandline_code_executor.py`:
Added a new test, `test_delete_tmp_files`, to verify the behavior of the `delete_tmp_files` attribute. This test checks that temporary files are
correctly deleted or retained based on the configuration.
This PR improves fallback safety when an invalid `model_family` is
supplied to `get_transformer()`. Previously, if a user passed an
arbitrary or incorrect `family` string in `model_info`, the lookup could
fail without falling back to `ModelFamily.UNKNOWN`.
Now, we explicitly check whether `model_family` is a valid value in
`ModelFamily.ANY`. If not, we fallback to `_find_model_family()` as
intended.
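A simplified sketch of the fallback logic (the names below are illustrative stand-ins, not the registry's internals):
```python
from autogen_core.models import ModelFamily

# Illustrative set of known family values; the real check covers all ModelFamily members.
KNOWN_FAMILIES = {ModelFamily.GPT_4O, ModelFamily.CLAUDE_3_5_SONNET, ModelFamily.UNKNOWN}


def _find_model_family(api: str, model: str) -> str:
    # Stand-in for the prefix-based inference used as the fallback.
    return ModelFamily.GPT_4O if model.startswith("gpt-4o") else ModelFamily.UNKNOWN


def resolve_family(model: str, family: str) -> str:
    # Trust an explicitly supplied family only if it is a known value; otherwise infer it.
    if family in KNOWN_FAMILIES:
        return family
    return _find_model_family("openai", model)


print(resolve_family("mistral-large-latest", "not-a-real-family"))  # -> "unknown"
```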
## Related issue number
Related: #6011 (issuecomment-2779957730)
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Please refer to #6123 for full context.
That issue outlines several design and behavioral problems with
`SocietyOfMindAgent`.
This DRAFT PR focuses on resolving the most critical and broken
behaviors first.
Here is the error list
🔍 SocietyOfMindAgent: Design Issues and Historical Comparison (v0.2 vs
v0.4+)
### ✅ P1–P4 Regression Issue Table (Updated with Fixes in PR #6142)
| ID | Description | Current v0.4+ Issue | Resolution in PR #6142 | Was it a problem in v0.2? | Notes |
|-----|-------------|----------------------|--------------------------|----------------------------|-------|
| **P1** | `inner_messages` leaks into outer team termination evaluation | `Response.inner_messages` is appended to the outer team's `_message_thread`, affecting termination conditions. Violates encapsulation. | ✅ `inner_messages` is excluded from `_message_thread`, avoiding contamination of outer termination logic. | ❌ No | Structural boundary is now enforced |
| **P2** | Inner team does not execute when outer message history is empty | In chained executions, if no new outer message exists, no task is created and the inner team is skipped entirely | ✅ Detects absence of a new outer message and reuses the previous task, passing it via a handoff message. This ensures the inner team always receives a valid task to execute | ❌ No | The issue was silent task omission, not summary failure. Summary succeeds as a downstream effect |
| **P3** | Summary LLM prompt is built from external input only | Prompt is constructed using external message history, ignoring internal reasoning | ✅ Prompt construction now uses `final_response.inner_messages`, restoring internal reasoning as the source of summarization | ❌ No | Matches v0.2 internal monologue behavior |
| **P4** | External input is included in summary prompt (possibly incorrectly) | Outer messages are used in the final LLM summarization prompt | ✅ Resolved via the same fix as P3; outer messages are no longer used for summary | ❌ No | Redundant with P3, now fully addressed |
## Why are these changes needed?
## Related issue number
Resolves #6123
Blocked #6168 (sometimes SoMA sends a trailing-whitespace message)
Related: #6187
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR defines the missing `LLMCallEventMessage` class to resolve an
instantiation error that occurs when using custom messages (e.g., via
AgentStudio).
> **Discord Report**
> சravanaன் — 6:40 PM
> _“i updated agentchat and agentcore and tried running the config from
agentstudio and it is now not running the agent and is throwing error
`TypeError: Can't instantiate abstract class LLMCallEventMessage with
abstract methods to_model_message, to_model_text, to_text`”_
The issue stems from `LLMCallEventMessage` being an abstract class that
lacks required methods from `BaseChatMessage`.
This PR implements the missing methods.
Since `LLMCallEventMessage` is intended for **logging/UI use only**, and
not to be sent to LLMs, the `to_model_message()` method raises
`NotImplementedError` by design.
## Related issue number
Reported in Discord
Closes #6206
Messages sent as part of `LLMCallEvent` for Anthropic were not fully serializable.
The example below shows `TextBlock` and `ToolUseBlock` objects inside the content of messages; these throw downstream errors in apps like AGS (or event sinks) that expect serializable dicts inside the `LLMCallEvent`.
```
[
{'role': 'user', 'content': 'What is the weather in New York?'},
{'role': 'assistant', 'content': [TextBlock(citations=None, text='I can help you find the weather in New York. Let me check that for you.', type='text'), ToolUseBlock(id='toolu_016W8g55GejYGBzRRrcsnt7M', input={'city': 'New York'}, name='get_weather', type='tool_use')]},
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'content': 'The weather in New York is 73 degrees and Sunny.'}]}
]
```
This PR serializes the content of Anthropic messages before they are passed to `LLMCallEvent`:
```
[
{'role': 'user', 'content': 'What is the weather in New York?'},
{'role': 'assistant', 'content': [{'citations': None, 'text': 'I can help you find the weather in New York. Let me check that for you.', 'type': 'text'}, {'id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'input': {'city': 'New York'}, 'name': 'get_weather', 'type': 'tool_use'}]},
{'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_016W8g55GejYGBzRRrcsnt7M', 'content': 'The weather in New York is 73 degrees and Sunny.'}]}
]
```
# Azure AI Search Tool Implementation
This PR adds a new tool for Azure AI Search integration to autogen-ext,
enabling agents to search and retrieve information from Azure AI Search
indexes.
## Why Are These Changes Needed?
AutoGen currently lacks native integration with Azure AI Search, which
is a powerful enterprise search service that supports semantic, vector,
and hybrid search capabilities. This integration enables agents to:
1. Retrieve relevant information from large document collections
2. Perform semantic search with AI-powered ranking
3. Execute vector similarity search using embeddings
4. Combine text and vector approaches for optimal results
This tool complements existing retrieval capabilities and provides a
seamless way to integrate with Azure's search infrastructure.
## Features
- **Multiple Search Types**: Support for text, semantic, vector, and
hybrid search
- **Flexible Configuration**: Customizable search parameters and fields
- **Robust Error Handling**: User-friendly error messages with
actionable guidance
- **Performance Optimizations**: Configurable caching and retry
mechanisms
- **Vector Search Support**: Built-in embedding generation with
extensibility
## Usage Example
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.azure import AzureAISearchTool

# Create the search tool
search_tool = AzureAISearchTool.load_component(
    {
        "provider": "autogen_ext.tools.azure.AzureAISearchTool",
        "config": {
            "name": "DocumentSearch",
            "description": "Search for information in the knowledge base",
            "endpoint": "https://your-service.search.windows.net",
            "index_name": "your-index",
            "credential": {"api_key": "your-api-key"},
            "query_type": "semantic",
            "semantic_config_name": "default",
        },
    }
)

# Create an agent that can call the search tool
model_client = OpenAIChatCompletionClient(model="gpt-4o")
assistant = AssistantAgent(
    "assistant",
    model_client=model_client,
    tools=[search_tool],
)

# Ask a question that requires searching the index
await Console(
    assistant.run_stream(task="What information do we have about quantum computing in our knowledge base?")
)
```
## Testing
- Added unit tests for all search types (text, semantic, vector, hybrid)
- Added tests for error handling and cancellation
- All tests pass locally
## Documentation
- Added comprehensive docstrings with examples
- Included warnings about placeholder embedding implementation
- Added links to Azure AI Search documentation
## Related issue number
Closes #5419
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Just a simple change.
```python
messages: List[LLMMessage] = [UserMessage(content="Call the pass tool with input 'task'", source="user")]
```
to
```python
messages: List[LLMMessage] = [UserMessage(content="Call the pass tool with input 'task' and talk result", source="user")]
```
With this change, Claude (Anthropic) models can now pass the `test_model_client_with_function_calling` test case; before this fix, Claude failed to interpret the intent correctly, but it can now infer both the tool usage and the follow-up generation.
This change is backward-compatible with other models (e.g., GPT-4) and improves cross-model consistency for function-calling tests.
This PR improves how model_family is resolved when selecting a
transformer from the registry.
Previously, model families were inferred using a simple prefix-based
match like:
```
if model.startswith(family): ...
```
This works for cleanly prefixed models (e.g., `gpt-4o`, `claude-3`) but
fails for models like `mistral-large-latest`, `codestral-latest`, etc.,
where prefix-based matching is ambiguous or misleading.
To address this:
- `model_family` can now be passed explicitly (e.g., via `ModelInfo`)
- `_find_model_family()` is only used as a fallback when the value is "unknown"
- Transformer lookup is now more robust and predictable
- Example integration in `to_oai_type()` demonstrates this pattern using `self._model_info["family"]`
This change is required for safe support of models like Mistral and
other future models that do not follow standard naming conventions.
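A hedged sketch of passing the family explicitly for a non-standard model name (the endpoint and key are placeholders):
```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="mistral-large-latest",
    base_url="https://api.mistral.ai/v1",
    api_key="YOUR_API_KEY",
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": True,
        "family": "mistral",  # explicit family, so transformer lookup does not rely on prefix matching
        "structured_output": True,
    },
)
```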
Linked to discussion in
[#6151](https://github.com/microsoft/autogen/issues/6151)
Related: #6011
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
The initializer for ACADynamicSessionsCodeExecutor creates a new GUID to
use as the session ID for dynamic sessions.
In some scenarios it is desirable to be able to re-create the agent
group chat from saved state. In this case, the
ACADynamicSessionsCodeExecutor needs to be associated with a previous
instance (so that any execution state is still valid)
This PR adds a new argument to the initializer to allow a session ID to
be passed in (defaulting to the current behaviour of creating a GUID if
absent).
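A hedged usage sketch; the keyword name `session_id` is an assumption based on the description, so check the class signature for the exact argument:
```python
from azure.identity import DefaultAzureCredential

from autogen_ext.code_executors.azure import ACADynamicSessionsCodeExecutor

executor = ACADynamicSessionsCodeExecutor(
    pool_management_endpoint="https://YOUR_REGION.dynamicsessions.io/YOUR_POOL",  # placeholder endpoint
    credential=DefaultAzureCredential(),
    session_id="previously-saved-session-id",  # reuse an earlier session so execution state persists
)
```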
Closes #6119
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
This PR fixes a `400 - invalid_request_error` that occurs when using
Anthropic models and the **final message is from the assistant and ends
with trailing whitespace**.
Example error:
```
Error code: 400 - {'error': {'code': 'invalid_request_error', 'message': 'messages: final assistant content cannot end with trailing whitespace', ...}}
```
To unblock ongoing internal usage, this patch introduces an **ad-hoc
fix** that strips trailing whitespace if the model is Anthropic and the
last message is from the assistant.
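A minimal sketch of the workaround (illustrative only, not the client's internal code):
```python
def strip_final_assistant_whitespace(messages: list[dict], is_anthropic: bool) -> list[dict]:
    # Anthropic rejects a final assistant message that ends with trailing whitespace.
    if is_anthropic and messages and messages[-1].get("role") == "assistant":
        content = messages[-1].get("content")
        if isinstance(content, str):
            messages[-1]["content"] = content.rstrip()
    return messages
```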
## Related issue number
Ad-hoc fix for issue discussed here:
https://github.com/microsoft/autogen/issues/6167
Follow-up structural proposal here:
https://github.com/microsoft/autogen/issues/6167#issuecomment-2768592840
Resolves #5851
* Added GroupChatError event type and terminate a run when an error
occurs in either a participant or the group chat manager
* Raise a RuntimeError from the error message within the group chat run
Resolves #5934
This PR adds ability for `AssistantAgent` to generate a
`StructuredMessage[T]` where `T` is the content type in base model.
How to use?
```python
from typing import Literal

from pydantic import BaseModel

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import StructuredMessage
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
# The response format for the agent as a Pydantic base model.
class AgentResponse(BaseModel):
    thoughts: str
    response: Literal["happy", "sad", "neutral"]
# Create an agent that uses the OpenAI GPT-4o model which supports structured output.
model_client = OpenAIChatCompletionClient(model="gpt-4o")
agent = AssistantAgent(
    "assistant",
    model_client=model_client,
    system_message="Categorize the input as happy, sad, or neutral following the JSON format.",
    # Setting the output format to AgentResponse to force the agent to produce a JSON string as response.
    output_content_type=AgentResponse,
)
result = await Console(agent.run_stream(task="I am happy."))
# Check the last message in the result, validate its type, and print the thoughts and response.
assert isinstance(result.messages[-1], StructuredMessage)
assert isinstance(result.messages[-1].content, AgentResponse)
print("Thought: ", result.messages[-1].content.thoughts)
print("Response: ", result.messages[-1].content.response)
await model_client.close()
```
```
---------- user ----------
I am happy.
---------- assistant ----------
{
"thoughts": "The user explicitly states they are happy.",
"response": "happy"
}
Thought: The user explicitly states they are happy.
Response: happy
```
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
Changed the default working directory of code executors from the current directory `"."` to Python's [`tempfile`](https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryDirectory).
These changes simplify file cleanup and prevent the model from accessing code files or other sensitive data that should not be accessible.
Changes made:
- The default `work_dir` parameter in code executors is changed to `None`; when the `start()` method is invoked and no `work_dir` was specified (`None`), a temporary directory is created.
- The `start()` and `stop()` methods of code executors handle the creation and cleanup of the working directory for the default temporary directory.
- For maintaining backward compatibility:
- A `DeprecationWarning` is emitted when the current dir, `"."`, is used as `work_dir`, as it is in the current code executor implementation. The deprecation warning is tested in `test_deprecated_warning()`.
- For existing implementations that do not call the `start()` method and do not specify a `work_dir`, the executors will continue using the current directory `"."` as the working directory, maintaining backward compatibility.
- Updated test suites:
- Added tests to confirm that by default code executors use a temporary
directory as their working directory: `test_default_work_dir_is_temp()`;
- Implemented test to ensure that a `DeprecationWarning` is raised when
the current directory is used as the default directory:
`test_deprecated_warning()`;
- Added tests to ensure that errors arise when invalid paths are provided (the path doesn't exist or the user lacks the required permissions): `test_error_wrong_path()`.
Feel free to suggest any additions or improvements!
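A usage sketch of the new default, per the description above (no `work_dir` given, so `start()` creates a temporary directory and `stop()` cleans it up):
```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor


async def main() -> None:
    executor = LocalCommandLineCodeExecutor()  # work_dir defaults to a temporary directory
    await executor.start()
    result = await executor.execute_code_blocks(
        [CodeBlock(language="python", code="print('hi')")],
        cancellation_token=CancellationToken(),
    )
    print(result.output)
    await executor.stop()  # removes the temporary directory


asyncio.run(main())
```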
## Related issue number
Closes #6041
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR adds a module-level docstring to `_message_transform.py`, as
requested in the review for [PR
#6063](https://github.com/microsoft/autogen/pull/6063).
The documentation includes:
- Background and motivation behind the modular transformer design
- Key concepts such as transformer functions, pipelines, and maps
- Examples of how to define, register, and use transformers
- Design principles to guide future contributions and extensions
By embedding this explanation directly into the module, contributors and
maintainers can more easily understand the structure, purpose, and usage
of the transformer pipeline without needing to refer to external
documents.
## Related issue number
Follow-up to [PR #6063](https://github.com/microsoft/autogen/pull/6063)
## Why are these changes needed?
This change addresses a compatibility issue when using Google Gemini
models with AutoGen. Specifically, Gemini returns a 400 INVALID_ARGUMENT
error when receiving a response with an empty "text" parameter.
The root cause is that Gemini does not accept empty string values (e.g.,
"") as valid inputs in the history of the conversation.
To fix this, if the content field is falsy (e.g., None, "", etc.), it is
explicitly replaced with a single whitespace (" "), which prevents the
Gemini model from rejecting the request.
- **Gemini API compatibility:** Gemini models reject empty assistant
messages (e.g., `""`), causing runtime errors. This PR ensures such
messages are safely replaced with whitespace where appropriate.
- **Avoiding regressions:** Applying the empty content workaround **only
to Gemini**, and **only to valid message types**, avoids breaking OpenAI
or other models.
- **Reducing duplication:** Previously, message transformation logic was
scattered and repeated across different message types and models.
Modularizing this pipeline removes that redundancy.
- **Improved maintainability:** With future model variants likely to
introduce more constraints, this modular structure makes it easier to
adapt transformations without writing ad-hoc code each time.
- **Testing for correctness:** The new structure is verified with tests,
ensuring the bug fix is effective and non-intrusive.
## Summary
This PR introduces a **modular transformer pipeline** for message
conversion and **fixes a Gemini-specific bug** related to empty
assistant message content.
### Key Changes
- **[Refactor]** Extracted message transformation logic into a unified
pipeline to:
- Reduce code duplication
- Improve maintainability
- Simplify debugging and extension for future model-specific logic
- **[BugFix]** Gemini models do not accept empty assistant message
content.
- Introduced `_set_empty_to_whitespace` transformer to replace empty
strings with `" "` only where needed
- Applied it **only** to `"text"` and `"thought"` message types, not to
`"tools"` to avoid serialization errors
- **Improved structure for model-specific handling**
- Transformer functions are now grouped and conditionally applied based
on message type and model family
- This design makes it easier to support future models or combinations
(e.g., Gemini + R1)
- **Test coverage added**
- Added dedicated tests to verify that empty assistant content causes
errors for Gemini
- Ensured the fix resolves the issue without affecting OpenAI models
---
## Motivation
Originally, Gemini-compatible endpoints would fail when receiving
assistant messages with empty content (`""`).
This issue required special handling without introducing brittle, ad-hoc
patches.
In addressing this, I also saw an opportunity to **modularize** the
message transformation logic across models.
This improves clarity, avoids duplication, and simplifies future
adaptations (e.g., different constraints across model families).
---
## 📘 AutoGen Modular Message Transformer: Design & Usage Guide
This document introduces the **new modular transformer system** used in
AutoGen for converting `LLMMessage` instances to SDK-specific message
formats (e.g., OpenAI-style `ChatCompletionMessageParam`).
The design improves **reusability, extensibility**, and
**maintainability** across different model families.
---
### 🚀 Overview
Instead of scattering model-specific message conversion logic across the
codebase, the new design introduces:
- Modular transformer **functions** for each message type
- Per-model **transformer maps** (e.g., for OpenAI-compatible models)
- Optional **conditional transformers** for multimodal/text hybrid
models
- Clear separation between **message adaptation logic** and
**SDK-specific builder** (e.g., `ChatCompletionUserMessageParam`)
---
### 🧱 1. Define Transform Functions
Each transformer function takes:
- `LLMMessage`: a structured AutoGen message
- `context: dict`: metadata passed through the builder pipeline
And returns:
- A dictionary of keyword arguments for the target message constructor
(e.g., `{"content": ..., "name": ..., "role": ...}`)
```python
def _set_thought_as_content_gemini(message: LLMMessage, context: Dict[str, Any]) -> Dict[str, str | None]:
    assert isinstance(message, AssistantMessage)
    return {"content": message.thought or " "}
```
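As a companion example, a hedged sketch of the `_set_empty_to_whitespace` helper used later in this guide (the actual implementation may differ):
```python
from typing import Any, Dict

from autogen_core.models import LLMMessage


def _set_empty_to_whitespace(message: LLMMessage, context: Dict[str, Any]) -> Dict[str, Any]:
    # Gemini rejects empty string content, so replace it with a single space;
    # return no kwargs when the content is non-empty so earlier values are kept.
    content = getattr(message, "content", None)
    if isinstance(content, str) and not content.strip():
        return {"content": " "}
    return {}
```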
---
### 🪢 2. Compose Transformer Pipelines
Multiple transformer functions are composed into a pipeline using
`build_transformer_func()`:
```python
base_user_transformer_funcs: List[Callable[[LLMMessage, Dict[str, Any]], Dict[str, Any]]] = [
    _assert_valid_name,
    _set_name,
    _set_role("user"),
]
user_transformer = build_transformer_func(
    funcs=base_user_transformer_funcs,
    message_param_func=ChatCompletionUserMessageParam,
)
```
- The `message_param_func` is the actual constructor for the target
message class (usually from the SDK).
- The pipeline is **ordered** — each function adds or overrides keys in
the builder kwargs.
---
### 🗂️ 3. Register Transformer Map
Each model family maintains a `TransformerMap`, which maps `LLMMessage`
types to transformers:
```python
__BASE_TRANSFORMER_MAP: TransformerMap = {
    SystemMessage: system_transformer,
    UserMessage: user_transformer,
    AssistantMessage: assistant_transformer,
}
register_transformer("openai", model_name_or_family, __BASE_TRANSFORMER_MAP)
```
- `"openai"` is currently required (as only OpenAI-compatible format is
supported now).
- Registration ensures AutoGen knows how to transform each message type
for that model.
---
### 🔁 4. Conditional Transformers (Optional)
When message construction depends on runtime conditions (e.g., `"text"`
vs. `"multimodal"`), use:
```python
conditional_transformer = build_conditional_transformer_func(
    funcs_map=user_transformer_funcs_claude,
    message_param_func_map=user_transformer_constructors,
    condition_func=user_condition,
)
```
Where:
- `funcs_map`: maps condition label → list of transformer functions
```python
user_transformer_funcs_claude = {
    "text": text_transformers + [_set_empty_to_whitespace],
    "multimodal": multimodal_transformers + [_set_empty_to_whitespace],
}
```
- `message_param_func_map`: maps condition label → message builder
```python
user_transformer_constructors = {
    "text": ChatCompletionUserMessageParam,
    "multimodal": ChatCompletionUserMessageParam,
}
```
- `condition_func`: determines which transformer to apply at runtime
```python
def user_condition(message: LLMMessage, context: Dict[str, Any]) -> str:
    if isinstance(message.content, str):
        return "text"
    return "multimodal"
```
---
### 🧪 Example Flow
```python
llm_message = AssistantMessage(content="hello", thought="let's go", source="a")
model_family = "openai"
model_name = "claude-3-opus"
transformer = get_transformer(model_family, model_name, type(llm_message))
sdk_message = transformer(llm_message, context={})
```
---
### 🎯 Design Benefits
| Feature | Benefit |
|--------|---------|
| 🧱 Function-based modular design | Easy to compose and test |
| 🧩 Per-model registry | Clean separation across model families |
| ⚖️ Conditional support | Allows multimodal / dynamic adaptation |
| 🔄 Reuse-friendly | Shared logic (e.g., `_set_name`) is DRY |
| 📦 SDK-specific | Keeps message adaptation aligned to builder interface
|
---
### 🔮 Future Direction
- Support more SDKs and formats by introducing new message_param_func
- Global registry integration (currently `"openai"`-scoped)
- Class-based transformer variant if complexity grows
---
## Related issue number
Closes #5762
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Rename the `ChatMessage` and `AgentEvent` base classes to `BaseChatMessage` and `BaseAgentEvent`.
Bring back the `ChatMessage` and `AgentEvent` as union of built-in concrete types to avoid breaking existing applications that depends on Pydantic serialization.
Why?
Much existing code uses containers like this:
```python
class AppMessage(BaseModel):
    name: str
    message: ChatMessage
# Serialization is this:
m = AppMessage(...)
m.model_dump_json()
# Fields like HandoffMessage.target will be lost because it is now treated as a base class without content or target fields.
```
The assumption that `ChatMessage` or `AgentEvent` is a union of concrete types may exist in many code bases, so this PR brings back the union types, while keeping method type hints such as those on `on_messages` on the `BaseChatMessage` and `BaseAgentEvent` base classes for flexibility.
Token limited model context is currently broken because it is importing
from extensions.
This fix removed the imports and updated the model context
implementation to use model client directly.
In the future, the model client's token counting should cache results
from model API to provide accurate counting.
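A hedged usage sketch of the updated context (constructor details may differ slightly):
```python
from autogen_core.model_context import TokenLimitedChatCompletionContext
from autogen_ext.models.openai import OpenAIChatCompletionClient

# The context now counts tokens through the model client itself,
# with no import from autogen-ext inside autogen-core.
model_client = OpenAIChatCompletionClient(model="gpt-4o")
model_context = TokenLimitedChatCompletionContext(model_client, token_limit=4096)
```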
The Anthropic SDK cannot accept multiple system messages, but some AutoGen agents (e.g., `SocietyOfMindAgent`) produce multiple system messages. Gemini via the OpenAI SDK does not raise an error, but multiple system messages do not work either (only the last one takes effect). So this PR simply merges multiple system messages into one in these cases.
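An illustrative sketch of the merge (not the client's internal implementation):
```python
def merge_system_messages(messages: list[dict]) -> list[dict]:
    # Collapse all system messages into a single one, preserving their order and the rest of the chat.
    system_contents = [m["content"] for m in messages if m.get("role") == "system"]
    others = [m for m in messages if m.get("role") != "system"]
    if not system_contents:
        return others
    return [{"role": "system", "content": "\n".join(system_contents)}] + others
```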
## Related issue number
Closes #6116, Closes #6117
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
When using the `ACADynamicSessionsCodeExecutor` it includes the stdout
from the execution but also the `results` property from the call to
dynamic sessions. In some situations, when the executed code results in
a file being saved this is included in the result:
```console
Plot saved as 'results_by_date.png'
{'type': 'image', 'format': 'png', 'base64_data': 'iVBORw0KGgoAAAANSUhEUgAAA90AAAJOCAYAAACqS2TfAAAAOXRFWHRTb2Z0d2FyZQ...
```
In some situations, this additional output is not desirable:
- when displaying the code output to a user - in this case, the stdout
content is dwarfed by the base64 encoded file content
- when an LLM agent is going to evaluate the code output to determine
next steps - in this case, the base64 content will be included in the
message history sent to the LLM increasing the prompt token cost
To handle these cases, this PR adds a new (optional) argument to the
`ACADynamicSessionsCodeExecutor` constructor that would allow
suppressing the result content (but default to False to preserve the
current behaviour in the default case)
(from #6042)
Closes #6042
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR adds missing model entries for OpenAI-compatible endpoints,
including gpt-4.5-turbo, gpt-4.5-turbo-preview, and claude-3.5-sonnet.
This improves coverage and avoids potential fallback or mismatch issues
when initializing clients.
This PR refactored `AgentEvent` and `ChatMessage` union types to
abstract base classes. This allows for user-defined message types that
subclass one of the base classes to be used in AgentChat.
To support a unified interface for working with the messages, the base
classes added abstract methods for:
- Convert content to string
- Convert content to a `UserMessage` for model client
- Convert content for rendering in console.
- Dump into a dictionary
- Load and create a new instance from a dictionary
This way, all agents such as `AssistantAgent` and `SocietyOfMindAgent`
can utilize the unified interface to work with any built-in and
user-defined message type.
This PR also introduces a new message type, `StructuredMessage` for
AgentChat (Resolves #5131), which is a generic type that requires a
user-specified content type.
You can create a `StructuredMessage` as follow:
```python
from typing import List

from pydantic import BaseModel

from autogen_agentchat.messages import StructuredMessage


class MessageType(BaseModel):
    data: str
    references: List[str]


message = StructuredMessage[MessageType](content=MessageType(data="data", references=["a", "b"]), source="user")
# message.content is of type `MessageType`.
```
This PR addresses the receiving side of this message type. To produce this message type from `AssistantAgent`, the work continues in #5934.
Added unit tests to verify this message type works with agents and
teams.
This makes the behavior of the hosted R1 model the same as a locally hosted R1 model.
Addresses: #5989
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Take the output of the tool and use that to create the HandoffMessage.
[discussion is
here](https://github.com/microsoft/autogen/discussions/6067#discussion-8117177)
Supports agents to carry specific instructions when performing handoff
operations
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Add utf encoding to file reading.
Without this, a default system encoding will be used. On Windows
machines this can default to any local encoding causing errors.
```python
with open(
    os.path.join(os.path.abspath(os.path.dirname(__file__)), "page_script.js"), "rt", encoding="utf-8"
) as fh:
```
## Related issue number
Closes #6093
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
added support for the thought process in tool calls for
`OpenAIChatCompletionClient`, allowing additional text produced by a
model alongside tool calls to be preserved in the thought field of
`CreateResult`. This PR extends the same functionality to
`AzureAIChatCompletionClient` for consistency across model clients.
#5650
Co-authored-by: Jay Prakash Thakur <jathakur@microsoft.com>
This PR introduces a metadata field in AssistantAgentConfig, allowing
applications to assign and track identity information for agents.
The metadata field is a Dict[str, str] and is included in the
configuration for proper serialization.
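A hedged sketch, assuming the constructor mirrors the new config field:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

agent = AssistantAgent(
    "support_agent",
    model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    metadata={"team": "support", "deployment": "prod"},  # Dict[str, str] identity information
)
```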
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
This PR fixes a `TypeError: Cannot instantiate typing.Union` that occurs
when using the `MultimodalWebSurfer_agent` with Anthropic models. The
error was caused by the incorrect usage of `typing.Union` as a class
constructor instead of a type hint within the `_anthropic_client.py`
file. The code was attempting to instantiate `typing.Union`, which is
not allowed. The fix correctly uses `typing.Union` within type hints,
and uses the correct `Base64ImageSourceParam` type. It also updates the
`pyproject.toml` dependency.
## Related issue number
Closes #6035
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
- Adds tracing docs page for AgentChat with Jaeger example
- [x] Runtime tracing: Example code where tracing is done with the
SingleThreaded Runtime, logging all events
- [x] Custom event tracing: Example code logging messages returned from
`team.run_stream()`
- [ ] LLM span tracing .. depends on
https://github.com/microsoft/autogen/issues/5895
- [ ] [TBD] Distributed tracing
See
[tracing.ipynb](bdb6ac5315/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tracing.ipynb)
here
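A condensed sketch of the runtime-tracing setup shown in the notebook (the exporter endpoint is a local Jaeger/OTLP collector):
```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

from autogen_core import SingleThreadedAgentRuntime

provider = TracerProvider(resource=Resource.create({"service.name": "autogen-tracing-demo"}))
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317")))
trace.set_tracer_provider(provider)

# The runtime emits spans for all events it processes.
runtime = SingleThreadedAgentRuntime(tracer_provider=provider)
```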
## Related issue number
#5992
## Open Questions
@ekzhu
- What is the recommended way to directly log custom events like LLMCallEvents and ToolCallEvents? Log event handlers in user code that become traced spans?
- Currently tool calls and their args are already logged (not sure where this is done), but LLM call events are not. Should we include samples on this?
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Fix the accessibility issue where the screen reader doesn't announce the theme when it changes.
## Related issue number
#5631 (13) (31) (59)
---------
Co-authored-by: peterychang <49209570+peterychang@users.noreply.github.com>
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
This PR allows docker-out-of-docker scenarios to be run in agbench
(e.g., agent teams that rely on the DockerCommandLineExecutor)
This is becoming increasingly important for benchmarking and testing,
since the behaviors of running local executors can diverge in important
ways.
## Why are these changes needed?
If a user tabs to a code block's copy button and then hits Enter, the screen reader doesn't announce "Copied". This PR fixes that bug.
## Related issue number
#5631 (8)
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
Fixes item (53) of the screen reader issues. Special thanks to @sjay8 for starting the work on this task.
## Related issue number
https://github.com/microsoft/autogen/issues/5631
This pull request introduces a new linting feature to the benchmark
configuration in the `agbench` package. The main changes include adding
a new command to the CLI, implementing the linter functionality, and
integrating it with the existing codebase.
### New Linting Feature:
* `python/packages/agbench/src/agbench/cli.py`: Added the `lint_cli` import and integrated the new "lint" command into the `main` function.
### Linter Implementation:
* `python/packages/agbench/src/agbench/linter/__init__.py`: Added the necessary imports to initialize the linter module.
* `python/packages/agbench/src/agbench/linter/_base.py`: Defined core classes such as `Document`, `Code`, `CodeExample`, `CodedDocument`, and the `BaseQualitativeCoder` protocol.
* `python/packages/agbench/src/agbench/linter/cli.py`: Implemented the `lint_cli` function, which includes loading log files, coding them, and printing the results.
* `python/packages/agbench/src/agbench/linter/coders/oai_coder.py`: Implemented the `OAIQualitativeCoder` class to interact with OpenAI for coding documents and caching results.
Example usage:
<img width="997" alt="image"
src="https://github.com/user-attachments/assets/6718688e-9917-4a43-a2f1-1105b030528d"
/>
<img width="999" alt="image"
src="https://github.com/user-attachments/assets/7fcb9c43-70f2-4fe7-ae29-5ad6a4ef2a16"
/>
> If you are in VSCode Terminal, you can click on the links in the
terminal output to jump to the exact error.
---------
Co-authored-by: afourney <adamfo@microsoft.com>
## Why are these changes needed?
Fixes Screen Reader issue (58)
## Related issue number
https://github.com/microsoft/autogen/issues/5631
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Optionally limit what files and folders FileSurfer can access
(constraining it to a subtree of the FS).
This is not a replacement for Docker sandboxing, but can be used in
conjunction with sandboxing to help prevent FileSurfer from accessing
sensitive files.
## Why are these changes needed?
When I want to create an `ACASessionsExecutor` instance and execute some code, the default configuration does not work. It always returns:
"ClientAuthenticationError: Authentication failed: AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope https://dynamicsessions.io/ is not valid. Trace ID: d75efa58-8be7-44ef-8839-aacfdc850600 Correlation ID: a8e4d859-92da-4fbe-a8e0-05116323ab55 Timestamp: 2025-03-14 14:15:09Z"
After changing the scope in `_ensure_access_token` to "https://dynamicsessions.io/.default" rather than "https://dynamicsessions.io/", it worked.
## Related issue number
issue #5946
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: edwinwu <edwin@Edwin-MBA.local>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
## Why are these changes needed?
Resolves#5953
## Related issue number
#5953
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
I have run all [common
tasks](https://github.com/microsoft/autogen/blob/main/python/README.md#common-tasks)
and got the errors below, which I think are due to no OpenAI API key being set in my
environment variables. Can we ignore them, or do I need to buy one?
```
=================================================== short test summary info ===================================================
ERROR tests/test_db_manager.py::TestDatabaseOperations::test_basic_entity_creation - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_db_manager.py::TestDatabaseOperations::test_upsert_operations - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_db_manager.py::TestDatabaseOperations::test_delete_operations - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_load_from_file - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_load_from_directory - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_create_team - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
ERROR tests/test_team_manager.py::TestTeamManager::test_run_stream - openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_...
=========================================== 3 passed, 5 warnings, 7 errors in 9.07s ===========================================
```
Co-authored-by: Leonardo Pinheiro <leosantospinheiro@gmail.com>
For some reason --locked is causing the CI to fall over in the codesign
pipeline.
Unclear why it is not happening on the GitHub Actions side - though it
may be an artifact of the way we have the uv cache set up (and the uv
install version does not match between the two). Getting to a building
state, then will investigate further.
Resolves#5982
This PR adds support for `json_schema` as a `response_format` type in
`OpenAIChatCompletionClient`. This is necessary because it allows the
client to be serialized along with the schema. If a user uses
`response_format=SomeBaseModel`, the client cannot be serialized.
Usage:
```python
# Structured output response, with a pre-defined JSON schema.
OpenAIChatCompletionClient(...,
response_format = {
"type": "json_schema",
"json_schema": {
"name": "name of the schema, must be an identifier.",
"description": "description for the model.",
# You can convert a Pydantic (v2) model to JSON schema
# using the `model_json_schema()` method.
"schema": "<the JSON schema itself>",
# Whether to enable strict schema adherence when
# generating the output. If set to true, the model will
# always follow the exact schema defined in the
# `schema` field. Only a subset of JSON Schema is
# supported when `strict` is `true`.
# To learn more, read
# https://platform.openai.com/docs/guides/structured-outputs.
"strict": False, # or True
},
},
)
```
## Summary of Changes
- Added 'candidate_func' to 'SelectorGroupChat' to narrow-down the pool
of candidate speakers.
- Introduced a test in tests/test_group_chat_endpoint.py to validate its
functionality.
- Updated the selector group chat user guide with an example
demonstrating 'candidate_func'.
## Why are these changes needed?
- These changes adds a new parameter `candidate_func` to
`SelectorGroupChat` that helps user narrow-down the set of agents for
speaker selection, allowing users to automatically select next speaker
from a smaller pool of agents.
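A rough sketch of how a `candidate_func` might be wired up (the agent names and the selection logic are illustrative, not taken from this PR):
```python
def candidate_func(messages):
    # Illustrative logic: let the planner speak right after the user,
    # otherwise restrict speaker selection to the worker agents.
    if messages and messages[-1].source == "user":
        return ["planner"]
    return ["coder", "reviewer"]

# Hypothetical wiring:
# team = SelectorGroupChat(
#     [planner, coder, reviewer],
#     model_client=model_client,
#     candidate_func=candidate_func,
# )
```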
## Related issue number
Closes#5828
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Signed-off-by: Abhijeetsingh Meena <abhijeet040403@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
R1 reasoning tokens from the hosted R1 model were not parsed correctly by the OpenAI client.
Resolves#5941
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
https://github.com/user-attachments/assets/b649053b-c377-40c7-aa51-ee64af766fc2
<img width="100%" alt="image"
src="https://github.com/user-attachments/assets/03ba1df5-c9a2-4734-b6a2-0eb97ec0b0e0"
/>
## Authentication
This PR implements an experimental authentication feature to enable
personalized experiences (multiple users). Currently, only GitHub
authentication is supported. You can extend the base authentication
class to add support for other authentication methods.
By default, authentication is disabled and is only enabled when you pass
the `--auth-config` argument when running the application.
### Enable GitHub Authentication
To enable GitHub authentication, create an `auth.yaml` file in your app
directory:
```yaml
type: github
jwt_secret: "your-secret-key"
token_expiry_minutes: 60
github:
client_id: "your-github-client-id"
client_secret: "your-github-client-secret"
callback_url: "http://localhost:8081/api/auth/callback"
scopes: ["user:email"]
```
Please see the documentation on [GitHub
OAuth](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authenticating-to-the-rest-api-with-an-oauth-app)
for more details on obtaining the `client_id` and `client_secret`.
To pass in this configuration you can use the `--auth-config` argument
when running the application:
```bash
autogenstudio ui --auth-config /path/to/auth.yaml
```
Or set the environment variable:
```bash
export AUTOGENSTUDIO_AUTH_CONFIG="/path/to/auth.yaml"
```
```{note}
- Authentication is currently experimental and may change in future releases
- User data is stored in your configured database
- When enabled, all API endpoints require authentication except for the authentication endpoints
- WebSocket connections require the token to be passed as a query parameter (`?token=your-jwt-token`)
```
## Related issue number
Closes#4350
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
`poe check` fails on main on Windows due to a combination of line ending
mismatches, Unix-specific commands, and Windows-specific `asyncio`
behavior. This PR attempts to fix this (so that `poe check` on a
freshly-pulled `main` passes on Windows 11).
Currently we have the SecretStr type for model clients to promote security
best practices.
- When we dump_component, keys are serialized as SecretStr.
- When we load_component, the SecretStr type is passed to the client in
the api_key field. This causes type problems, as the clients expect a
plain string.
This PR updates the from_config method for model clients to ensure we
get the value from SecretStr.
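A minimal sketch of the pattern (Pydantic's `SecretStr.get_secret_value()` is the relevant call; the config model below is illustrative, not the actual client config):
```python
from pydantic import BaseModel, SecretStr

class ClientConfigSketch(BaseModel):  # illustrative stand-in for a client config
    api_key: SecretStr

config = ClientConfigSketch(api_key=SecretStr("sk-..."))
print(config.model_dump())  # {'api_key': SecretStr('**********')} - the key stays masked
# from_config must unwrap the secret before handing it to the underlying client:
api_key: str = config.api_key.get_secret_value()
```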
Closes#5944
## Why are these changes needed?
<img width="1151" alt="image"
src="https://github.com/user-attachments/assets/98bc91ee-749c-4831-b36f-10322979883b"
/>
- Update migration guide to cover teachability/rag agents (mention how
similar functionality can be accomplished with AssistantAgent + Memory)
- Update memory docs to explicitly add a text chunking example and a rag
agent
## Related issue number
Closes#5772 Closes#4742
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Fixes issues like the following trace:
```
packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 39, in __init__
self.set_path(self._base_path)
File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 67, in set_path
self._open_path(path)
File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 210, in _open_path
io.StringIO(self._fetch_local_dir(path)), file_extension=".txt"
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 248, in _fetch_local_dir
mtime = datetime.datetime.fromtimestamp(os.path.getmtime(full_path)).strftime("%Y-%m-%d %H:%M")
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen genericpath>", line 67, in getmtime
PermissionError: [Errno 13] Permission denied: '/home/hmozannar/webby/autogen-studio/frontend/readme.txt'
```
## Why are these changes needed?
This reverts the base image in AutoGen Studio Dockerfile to `FROM
python:3.10-slim`. This fixes the Docker image build failure due to
conflicting UID with Dev Container's `vscode` user.
## Related issue number
Fixes#5929
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.

## Why are these changes needed?
The [agentchat teams
docs](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/teams.html)
page did not list out the teams currently supported. This is confusing
for readers/users, as they have to search around to discover that
selector group chat, swarm, and Magentic-One are available.
This PR adds a list of supported teams to the top of the teams page and
links to the relevant tutorials.
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Fixed a typo: `chroma_user_memory` instead of `user_memory`.
## Why are these changes needed?
There's a confusing typo in the documentation.
## Related issue number
None
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
During the recent fix to SendMessageAsync's recurrence, we added code to ensure blocking on Publish. This adds additional resilience to the Publish delivery, ensuring that every subscribed agent receives the message, regardless of errors in the middle.
* Also adds --locked to the `uv sync` call in the Integration Tests project; this will hopefully reduce how much the uv.lock file is mangled during .NET development
Use of `SKChatCompletionAdapter` reliably fails with "'MockValSer'
object cannot be converted to 'SchemaSerializer'"; can repro with this
example:
https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/components/model-clients.html#semantic-kernel-adapter
This appears to be related to
https://github.com/pydantic/pydantic/issues/7713 - commit uses
workaround from
https://github.com/pydantic/pydantic/issues/7713#issuecomment-2604574418
## Why are these changes needed?
This unblocks use of the Semantic Kernel integration by addressing the
above-referenced error, enabling the integration to perform as expected.
## Related issue number
N/A, see https://github.com/pydantic/pydantic/issues/7713 for context,
though.
## Checks
- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- None needed, internal only change.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- None added; this works on my machine, but I'm not clear on the root
cause of the issue and have no strong opinion on whether this is the
ideal way to fix it long term; simply leaning towards PR'ing a tentative
fix instead of raising an issue.
- [ ] I've made sure all auto checks have passed.
- I am not familiar with these, but assume they will be run during CI.
---------
Co-authored-by: Leonardo Pinheiro <leosantospinheiro@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
In the .NET InProcessRuntime we tried to minimize the number of tasks
created. Unfortunately, this results in a deadlock when a send message
handler attempts to SendMessage while handling the incoming message.
The fix is to create a new task for delivering the message in the
message processing loop, which creates a new logical stack to run the
delivery, freeing the loop to process the next delivery request.
* Also fixes passing exceptions and cancellations back to the waiting
task.
* Also fixes a build slowdown on Windows due to uv and llama-cpp
Closes#5915
1. Add `on_pause` and `on_resume` API to `ChatAgent` to support pausing
behavior when running `on_message` concurrently.
2. Add `GroupChatPause` and `GroupChatResume` RPC events and handle them
in `ChatAgentContainer`.
3. Add `pause` and `resume` API to `BaseGroupChat` to allow for this
behavior accessible from the public API.
4. Improve `SequentialRoutedAgent` class to customize which message
types are sequentially handled, making it possible to have concurrent
handling for some messages (e.g., `GroupChatPause`).
5. Added unit tests.
See `test_group_chat_pause_resume.py` for how to use this feature.
What is the difference between pause/resume vs. termination and restart?
- Pause and resume issue direct RPC calls to the participating agents
of a team while they are running, allowing the ongoing generation or
actions to be put on hold. This is useful when an agent's turn takes
a long time and multiple steps to complete, and the user/application wants
to re-evaluate whether it is worth continuing the step or cancelling. It
also allows the user/application to pause individual agents and resume
them independently from the team API.
- Termination and restart require the whole team to come to a
full stop, and termination conditions are checked in between agents'
turns. So termination can only happen when no agent is working on its
turn. It is possible that a termination condition has been reached well
before the team is terminated, if an agent is taking a long time to
generate a response.
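A rough usage sketch (the task and timing are illustrative; see `test_group_chat_pause_resume.py` for the authoritative usage):
```python
import asyncio

async def run_with_pause(team) -> None:
    # Start the team in the background so it can be paused mid-run.
    run_task = asyncio.create_task(team.run(task="Work on a long multi-step task."))
    await asyncio.sleep(5)   # let the agents make some progress
    await team.pause()       # direct RPC to the participating agents to hold work
    # ... inspect progress, decide whether to continue ...
    await team.resume()      # let the agents carry on
    print(await run_task)
```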
Resolves: #5881
These changes enable two important use cases:
1. Add a span for tool calls, which enables tracing of all tool calls
in agent_chat.
2. Allow runtimes to pick up global `tracer_providers` if they are
available. This is very helpful because it allows nested teams/agents
to all use the same tracer.
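A minimal sketch of registering a global OpenTelemetry tracer provider that a runtime can then pick up (the console exporter is just for illustration):
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Register a global tracer provider; runtimes that look up the global provider
# (as described above) will emit their spans through it.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
```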
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
These changes are needed because there is currently no way to get
logging information about Streaming LLM requests/responses.
I decided to put the StreamStart event AFTER the first chunk so there
aren't false positives about connections/auth.
Closes#5730
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This pull request introduces the integration of the `llama-cpp` library
into the `autogen-ext` package, with significant changes to the project
dependencies and the implementation of a new chat completion client. The
most important changes include updating the project dependencies, adding
a new module for the `LlamaCppChatCompletionClient`, and implementing
the client with various functionalities.
### Project Dependencies:
*
[`python/packages/autogen-ext/pyproject.toml`](diffhunk://#diff-095119d4420ff09059557bd25681211d1772c2be0fbe0ff2d551a3726eff1b4bR34-R38):
Added `llama-cpp-python` as a new dependency under the `llama-cpp`
section.
### New Module:
*
[`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py`](diffhunk://#diff-42ae3ba17d51ca917634c4ea3c5969cf930297c288a783f8d9c126f2accef71dR1-R8):
Introduced the `LlamaCppChatCompletionClient` class and handled import
errors with a descriptive message for missing dependencies.
### Implementation of `LlamaCppChatCompletionClient`:
*
`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py`:
- Added the `LlamaCppChatCompletionClient` class with methods to
initialize the client, create chat completions, detect and execute
tools, and handle streaming responses.
- Included detailed logging for debugging purposes and implemented
methods to count tokens, track usage, and provide model information and
chat capabilities.
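A rough usage sketch (the `model_path` argument and model file are assumptions; check the client's docstring for the exact constructor parameters):
```python
import asyncio
from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient

async def main() -> None:
    # Assumes a local GGUF model file; extra kwargs are forwarded to llama-cpp-python.
    client = LlamaCppChatCompletionClient(model_path="/path/to/model.gguf")
    result = await client.create([UserMessage(content="Hello!", source="user")])
    print(result.content)

asyncio.run(main())
```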
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: aribornstein <x@x.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
This pull request introduces a new feature to the `FileSurfer` agent and
`MarkdownFileBrowser` by adding support for specifying a base path for
file browsing.
*
`python/packages/autogen-ext/src/autogen_ext/agents/file_surfer/_file_surfer.py`:
* Added `base_path` parameter to `FileSurfer` class and its
initialization method, with a default value of the current working
directory (`os.getcwd()`).
[[1]](diffhunk://#diff-084847b5e64c659c9aff0bd2d05bbcd0fff2c819a4b91bbe65fa0566054c0972R58)
[[2]](diffhunk://#diff-084847b5e64c659c9aff0bd2d05bbcd0fff2c819a4b91bbe65fa0566054c0972R80-R85)
* Updated `MarkdownFileBrowser` initialization within `FileSurfer` to
use the `base_path` parameter.
*
`python/packages/autogen-ext/src/autogen_ext/agents/file_surfer/_markdown_file_browser.py`:
* Added `base_path` parameter to `MarkdownFileBrowser` class and its
initialization method, with a default value of the current working
directory (`os.getcwd()`).
* Updated `MarkdownFileBrowser` to use the `base_path` for setting the
initial path and returning the current page path.
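A short sketch of constraining `FileSurfer` to a subtree (the model and path are illustrative):
```python
from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(model="gpt-4o")
# Restrict browsing to a project directory instead of the whole filesystem.
surfer = FileSurfer(
    "file_surfer",
    model_client=model_client,
    base_path="/home/me/projects/docs",  # illustrative path
)
```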
Modify `BaseGroupChat.save_state` to not require the team to be stopped
first. The `save_state` method is read-only. While it may retrieve an
inconsistent state when the team is running, we added a note about this
to its API doc.
Resolves: #5880
## Why are these changes needed?
This PR makes it clear which agent is speaking per message in the
Chainlit team sample. Previously, messages would be exchanged without
showing who is communicating.
## Related issue number
Closes#5609
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Add anthropic docs
- Add api docs
- Add sample code + usage in agent chat user guide
## Related issue number
Closes#5856
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Resolves#5745
Also made sure to log LLMCallEvent from all builtin model clients, and
added unit test for coverage.
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Fixes#4821 by adding a `close()` method to all clients.
Additionally:
* The m1 CLI is updated to close the client before exiting.
* The PlaywrightController is updated to suppress some other unrelated
chatty warnings (e.g., produced by markitdown when encountering
conversions that require external utilities).
## Why are these changes needed?
Fixes accessibility issue (34)
## Related issue number
#5634
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Factors out RunContext management into separate classes:
* Defines a LifecycleObject abstraction to manage initialization /
deinitialization
* Defines RunContextStack to enable a stack of init/deinit layers
* Defines a RunManager that owns enforcing "single execution at a time"
semantics while enabling a polymorphic base.
Implements Reset
Closes#5799
Unblocks #5800 (needs RunContext refactor too)
## Why are these changes needed?
(Partially?) fixes accessibility issue (19). A question is out to the
accessibility team on whether it's enough.
Migrating to 16.0 for accessibility fixes. Not moving to 16.1 yet
because of a weird change to the 'Show Source' link's appearance
## Related issue number
#5630
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Fix issue here in this discussion -
https://github.com/microsoft/autogen/discussions/4208#discussioncomment-12394408
## Why are these changes needed?
Fix bug in AGS UI where frontend crashes because the default team config
is null
- update /teams endpoint to always return a default team if none is
found for the user
- update UI to check for team before rendering
- also update run_id type to be an autoincrement int (similar to team id)
instead of a UUID. This helps sidestep the "migration failed" errors
related to the UUID type when using a SQLite backend
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Resolves#4075
1. Introduce custom runtime parameter for all AgentChat teams
(RoundRobinGroupChat, SelectorGroupChat, etc.). This is done by making
sure each team's topics are isolated from other teams, and decoupling
state from agent identities. Also, I removed the closure agent from the
BaseGroupChat and use the group chat manager agent to relay messages to
the output message queue.
2. Added unit tests to test scenarios with custom runtimes by using
pytest fixture
3. Refactored existing unit tests to use ReplayChatCompletionClient with
a few improvements to the client.
4. Fix a one-liner bug in AssistantAgent that caused deserialized agent
to have handoffs.
How to use it?
```python
import asyncio
from autogen_core import SingleThreadedAgentRuntime
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.replay import ReplayChatCompletionClient
async def main() -> None:
# Create a runtime
runtime = SingleThreadedAgentRuntime()
runtime.start()
# Create a model client.
model_client = ReplayChatCompletionClient(
["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"],
)
# Create agents
agent1 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
agent2 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")
# Create a termination condition
termination_condition = TextMentionTermination("10", sources=["assistant1", "assistant2"])
# Create a team
team = RoundRobinGroupChat([agent1, agent2], runtime=runtime, termination_condition=termination_condition)
# Run the team
stream = team.run_stream(task="Count to 10.")
async for message in stream:
print(message)
# Save the state.
state = await team.save_state()
# Load the state to an existing team.
await team.load_state(state)
# Run the team again
model_client.reset()
stream = team.run_stream(task="Count to 10.")
async for message in stream:
print(message)
# Create a new team, with the same agent names.
agent3 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
agent4 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")
new_team = RoundRobinGroupChat([agent3, agent4], runtime=runtime, termination_condition=termination_condition)
# Load the state to the new team.
await new_team.load_state(state)
# Run the new team
model_client.reset()
new_stream = new_team.run_stream(task="Count to 10.")
async for message in new_stream:
print(message)
# Stop the runtime
await runtime.stop()
asyncio.run(main())
```
TODOs as future PRs:
1. Documentation.
2. How to handle errors in the custom runtime when an agent raises an exception?
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
## Why are these changes needed?
Keyboard focus location was being lost after a copy event. The header anchor
was also not selectable while hidden.
Fixes (2), (4), (11), (35)
## Related issue number
#5630
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
_(EXPERIMENTAL, RESEARCH IN PROGRESS)_
In 2023 AutoGen introduced [Teachable
Agents](https://microsoft.github.io/autogen/0.2/blog/2023/10/26/TeachableAgent/)
that users could teach new facts, preferences and skills. But teachable
agents were limited in several ways: They could only be
`ConversableAgent` subclasses, they couldn't learn a new skill unless
the user stated (in a single turn) both the task and how to solve it,
and they couldn't learn on their own. **Task-Centric Memory** overcomes
these limitations, allowing users to teach arbitrary agents (or teams)
more flexibly and reliably, and enabling agents to learn from their own
trial-and-error experiences.
This PR is large and complex. All of the files are new, and most of the
added components depend on the others to run at all. But the review
process can be accelerated if approached in the following order.
1. Start with the [Task-Centric Memory
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/packages/autogen-ext/src/autogen_ext/task_centric_memory).
1. Install the memory extension locally, since it won't be in pypi until
it's merged. In the `agentic_memory` branch, and the `python/packages`
directory:
- `pip install -e autogen-agentchat`
- `pip install -e autogen-ext[openai]`
- `pip install -e autogen-ext[task-centric-memory]`
2. Run the Quickstart sample code, then immediately open the
`./pagelogs/quick/0 Call Tree.html` file in a browser to view the work
in progress.
3. Click through the web page links to see the details.
2. Continue through the rest of the main README to get a high-level
overview of the architecture.
3. Read through the [code samples
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/samples/task_centric_memory),
running each of the 4 code samples while viewing their page logs.
4. Skim through the 4 code samples, along with their corresponding yaml
config files:
1. `chat_with_teachable_agent.py`
2. `eval_retrieval.py`
3. `eval_teachability.py`
4. `eval_learning_from_demonstration.py`
5. `eval_self_teaching.py`
6. Read `task_centric_memory_controller.py`, referring back to the
previously generated page logs as needed. This is the most important and
complex file in the PR.
7. Read the remaining core files.
1. `_task_centric_memory_bank.py`
2. `_string_similarity_map.py`
3. `_prompter.py`
8. Read the supporting files in the utils dir.
1. `teachability.py`
2. `apprentice.py`
3. `grader.py`
4. `page_logger.py`
5. `_functions.py`
## Why are these changes needed?
Adds sidebar and breadcrumb selection and focus indicators to high
contrast modes.
Fixes (46), (55), (56)
## Related issue number
#5633
This change has no effect on normal color modes, but adds selection and
focus indicators to high contrast modes. I'm not sure how to get rid of
the double bars on nested links, but that's a minor issue.
before:

after:

## Why are these changes needed?
Current webpage theme does not highlight code output boxes. Issues (5)
and (29)
## Related issue number
#5630
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Resolves#5786
Also updated the termination tutorial to include an example of a custom
termination condition.
Also added to guide about FunctionTool and MCP tools.
## Why are these changes needed?
The current installation command fails in certain shells (e.g., `zsh`,
`fish`) because brackets (`[]`) are interpreted as special characters.
Adding quotes ensures compatibility across different environments,
including Linux, macOS, and Windows.
## Related issue number
No related issue, but this fixes an installation issue encountered by
multiple users.
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
* `python3` to `python`: Windows uses `python` for Python 3 by default,
not `python3`.
* `bin` to `scripts`: Windows virtual environments use `Scripts` instead
of `bin`.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
AutoGen was passing raw dictionaries to functions instead of
constructing Pydantic model or dataclass instances. If a tool function’s
parameter was a Pydantic BaseModel or a dataclass, the function would
receive a dict and likely throw an error or behave incorrectly (since it
expected an object of that type).
This PR addresses problem in AutoGen where tool functions expecting
structured inputs (Pydantic models or dataclasses) were receiving raw
dictionaries. It ensures that structured inputs are automatically
validated and instantiated before function calls. Complete details are
in Issue #5736
[Reproducible Example Code - Failing
Case](https://colab.research.google.com/drive/1hgoP-cGdSZ1-OqQLpwYmlmcExgftDqlO?usp=sharing)
## Changes Made:
- Inspect function signatures for Pydantic BaseModel and dataclass
annotations.
- Convert input dictionaries into properly instantiated objects using
BaseModel.model_validate() for Pydantic models or standard instantiation
for dataclasses.
- Raise descriptive errors when validation or instantiation fails.
- Unit tests have been added to cover all scenarios
Now structured inputs are automatically validated and instantiated
before function calls.
- **Updated Conversion Logic:**
In the `run()` method, we now inspect the function’s signature and
convert input dictionaries to structured objects. For parameters
annotated with a Pydantic model, we use `model_validate()` to create an
instance; for those annotated with a dataclass, we instantiate the
object using the dataclass constructor. For example:
```python
# Excerpt from the tool's run() method; assumes `inspect`, `is_dataclass`,
# `BaseModel`, and `ValidationError` are imported in the enclosing module.
# Get the function signature.
sig = inspect.signature(self._func)
raw_kwargs = args.model_dump()
kwargs = {}
# Iterate over the parameters expected by the function.
for name, param in sig.parameters.items():
    if name in raw_kwargs:
        expected_type = param.annotation
        value = raw_kwargs[name]
        # If expected type is a subclass of BaseModel, perform conversion.
        if inspect.isclass(expected_type) and issubclass(expected_type, BaseModel):
            try:
                kwargs[name] = expected_type.model_validate(value)
            except ValidationError as e:
                raise ValueError(
                    f"Error validating parameter '{name}' for function '{self._func.__name__}': {e}"
                ) from e
        # If it's a dataclass, instantiate it.
        elif is_dataclass(expected_type):
            try:
                cls = expected_type if isinstance(expected_type, type) else type(expected_type)
                kwargs[name] = cls(**value)
            except Exception as e:
                raise ValueError(
                    f"Error instantiating dataclass parameter '{name}' for function '{self._func.__name__}': {e}"
                ) from e
        else:
            kwargs[name] = value
```
- **Error Handling Improvements:**
Conversion steps are wrapped in try/except blocks to raise descriptive
errors when instantiation fails, aiding in debugging invalid inputs.
- **Testing:**
Unit tests have been added to simulate tool calls (e.g., an `add` tool)
to ensure that with input like:
```json
{"input": {"x": 2, "y": 3}}
```
The tool function receives an instance of the expected type and returns
the correct result.
## Related issue number
Closes#5736
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Correcting an error: If the CodeReviewResult is not approved, the coder
agent sends a CodeReviewTask back to the reviewer agent, not a
CodeWritingTask.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Starting out this draft PR to add documentation for the
`with_requirements` decorator in the `autogen-core` package.
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
## Why are these changes needed?
The PR introduces two changes.
The first change is adding a name attribute to
`FunctionExecutionResult`. The motivation is that Semantic Kernel
requires it for their function result interface, and it seemed like an
easy modification as `FunctionExecutionResult` is always created in the
context of a `FunctionCall`, which contains the name. I'm unsure if
there was a motivation to keep it out, but this change makes it easier to
trace which tool the result refers to and also increases API
compatibility with SK (see the sketch below).
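A small sketch of what the added field looks like in use (the values are illustrative):
```python
from autogen_core.models import FunctionExecutionResult

# The new `name` field ties the result back to the tool that produced it.
result = FunctionExecutionResult(
    call_id="call_123",            # illustrative id
    name="get_weather",            # name of the tool that was called
    content="23 °C and Sunny",
    is_error=False,
)
```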
The second change is an update to how messages are mapped from autogen
to semantic kernel, which includes an update/fix in the processing of
function results.
## Related issue number
Related to #5675 but wont fix the underlying issue of anthropic
requiring tools during AssistantAgent reflection.
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
## Why are these changes needed?
(I'm not familiar with English, so I use Google Translate a lot. So
please forgive me if I say something rude.)
When I started autogen-studio and opened the page in the browser, nothing
was rendered.
I checked using the developer tools, and the following error appeared:
```
TypeError: Cannot read properties of null (reading 'label')
at editor.tsx:114:42
```
I checked the implementation; it looks like `team.component` is `null`,
which seems to be caused during the initialization process when the team
is not registered.
[source](78ff883d24/python/packages/autogen-studio/frontend/src/components/views/playground/manager.tsx (L199))
So I fixed the issue where the gallery wasn't being retrieved, which was
causing the crash.
### Reproduce bug
1. clone this repository
2. open devcontainer(`python/packages/autogen-studio`)
3. Running the application
```sh
cd frontend
yarn build
cd -
OPENAI_API_KEY="" autogenstudio ui --port 8081
```
4. Open `localhost:8081` in browser.
## Related issue number
Probably none. (Sorry, I probably should have raised an issue before this PR.)
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [-] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
stream_options are not part of the model classes, so they won't get
serialized when calling dump_component. Adding them to the model allows
us to store the stream options when the component is serialized (see the sketch below).
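A short sketch of the kind of configuration this makes serializable (`include_usage` is the common OpenAI streaming option; treat the exact wiring as illustrative):
```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(
    model="gpt-4o",
    stream_options={"include_usage": True},  # now survives dump_component()
)
config = client.dump_component()   # stream_options is part of the serialized config
restored = OpenAIChatCompletionClient.load_component(config)
```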
---------
Signed-off-by: Peter Jausovec <peter.jausovec@solo.io>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
> Hey Victor, this is maybe a bug, but when a session is delete, runs
and messages for that session are not deleted, any reason why to keep
them?
@husseinmozannar
The main fix is to add a pragma that ensures SQLite enforces foreign
key constraints (a minimal sketch of the pattern follows the list below).
Also updated error messages for auto-upgrade of databases, and added a
test for cascade deletes and for parts of TeamManager.
With this fix,
- Messages get deleted when the run is deleted
- Runs get deleted when sessions are deleted
- Sessions get deleted when a team is deleted
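A minimal sketch of the standard SQLAlchemy pattern for turning on SQLite foreign key enforcement (illustrative; the actual hook lives in the AGS database manager):
```python
from sqlalchemy import create_engine, event
from sqlalchemy.engine import Engine

@event.listens_for(Engine, "connect")
def _enable_sqlite_fks(dbapi_connection, connection_record):
    # SQLite ships with foreign key enforcement off; enable it per connection
    # so ON DELETE CASCADE actually removes dependent runs and messages.
    cursor = dbapi_connection.cursor()
    cursor.execute("PRAGMA foreign_keys=ON")
    cursor.close()

engine = create_engine("sqlite:///autogen.db")
```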
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
- AGS contains image files which are required for local build/dev
- These files are stored using Git LFS, which needs to be run before the
files are downloaded
- This PR adds a reminder to the AGS readme to run `git lfs checkout`
after installing Git LFS to download the image files
## Related issue number
Close#3418
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
The CodeExecutorAgent documentation needs to be updated to explicitly
mention that it only processes code properly formatted in markdown code
blocks with triple backticks. This change adds a clear note with
examples to help users understand the required format for code
execution.
## Related issue number
Closes#5771
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Make FileSurfer and CodeExecAgent declarative.
These agent presets are used as part of Magentic-One, and having them
declarative is a precursor to their use in AGS.
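A small sketch of what being declarative enables, using the component `dump_component`/`load_component` round trip (constructor arguments are illustrative):
```python
from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient

surfer = FileSurfer("file_surfer", model_client=OpenAIChatCompletionClient(model="gpt-4o"))
config = surfer.dump_component()          # serializable spec, e.g. for storing in AGS
restored = FileSurfer.load_component(config)
```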
## Related issue number
Closes#5607
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
Shows an example of how to use the `Memory` interface to implement a
just-in-time vector memory based on chromadb.
```python
import os
from pathlib import Path
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.chromadb import ChromaDBVectorMemory, PersistentChromaDBVectorMemoryConfig
from autogen_ext.models.openai import OpenAIChatCompletionClient
# Initialize ChromaDB memory with custom config
chroma_user_memory = ChromaDBVectorMemory(
config=PersistentChromaDBVectorMemoryConfig(
collection_name="preferences",
persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
k=2, # Return top k results
score_threshold=0.4, # Minimum similarity score
)
)
# a HttpChromaDBVectorMemoryConfig is also supported for connecting to a remote ChromaDB server
# Add user preferences to memory
await chroma_user_memory.add(
MemoryContent(
content="The weather should be in metric units",
mime_type=MemoryMimeType.TEXT,
metadata={"category": "preferences", "type": "units"},
)
)
await chroma_user_memory.add(
MemoryContent(
content="Meal recipe must be vegan",
mime_type=MemoryMimeType.TEXT,
metadata={"category": "preferences", "type": "dietary"},
)
)
# Create assistant agent with ChromaDB memory
assistant_agent = AssistantAgent(
name="assistant_agent",
model_client=OpenAIChatCompletionClient(
model="gpt-4o",
),
    tools=[get_weather],  # assumes a `get_weather` tool function is defined elsewhere
    memory=[chroma_user_memory],
)
stream = assistant_agent.run_stream(task="What is the weather in New York?")
await Console(stream)
await chroma_user_memory.close()
```
```txt
---------- user ----------
What is the weather in New York?
---------- assistant_agent ----------
[MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'type': 'units', 'score': 0.4342913043162201, 'id': '8a8d683c-5866-41e1-ac17-08c4fda6da86'}), MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'type': 'units', 'score': 0.4342913043162201, 'id': 'f27af42c-cb63-46f0-b26b-ffcc09955ca1'})]
---------- assistant_agent ----------
[FunctionCall(id='call_a8U3YEj2dxA065vyzdfXDtNf', arguments='{"city":"New York","units":"metric"}', name='get_weather')]
---------- assistant_agent ----------
[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_a8U3YEj2dxA065vyzdfXDtNf', is_error=False)]
---------- assistant_agent ----------
The weather in New York is 23 °C and Sunny.
```
Note that the MemoryContent objects in the MemoryQueryEvent have useful
metadata, like the score and id of the retrieved memories.
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
The Team DB class doesn't have the "config" field anymore; it has
"component".
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
## Why are these changes needed?
Add support for a default model client in AGS; updates to the settings UI.
A default model client will be used for subsequent background tasks
(e.g., automatically naming sessions with meaningful names).
<img width="1441" alt="image"
src="https://github.com/user-attachments/assets/31675ef4-8f80-4a8c-9762-3d6bcbc6d33d"
/>
## Related issue number
Closes#5685
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
Typo in example text
## Related issue number
## Checks
- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [X] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [X] I've made sure all auto checks have passed.
## Why are these changes needed?
- Running a team generates a MemoryQueryEvent, which has a MemoryContent
field and is part of the model context.
- Saving team state (`team.save_state()`) includes serializing the model
context.
- MemoryContent has a mime_type field, which was not being properly
serialized, producing "Object of type MemoryMimeType is not JSON serializable".
This PR adds an explicit serialization instruction for the mime_type
field (see the sketch after the example below).
```python
import asyncio
import json

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import MaxMessageTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_core.memory import ListMemory, MemoryContent, MemoryMimeType
from autogen_ext.models.openai import OpenAIChatCompletionClient

state_path = "team_state.json"


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    max_msg_termination = MaxMessageTermination(max_messages=3)

    # Initialize user memory and add user preferences.
    user_memory = ListMemory()
    await user_memory.add(
        MemoryContent(content="The weather should be in metric units", mime_type=MemoryMimeType.TEXT)
    )

    # Create the team.
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
        memory=[user_memory],
    )
    yoda = AssistantAgent(
        name="yoda",
        model_client=model_client,
        system_message="Repeat the same message in the tone of Yoda.",
    )
    team = RoundRobinGroupChat([agent, yoda], termination_condition=max_msg_termination)

    await Console(team.run_stream(task="Hi, how are you?"))

    # Save team state to file.
    state = await team.save_state()
    with open(state_path, "w") as f:
        json.dump(state, f)

    # Load team state from file.
    with open(state_path, "r") as f:
        team_state = json.load(f)
    await team.load_state(team_state)

    await Console(team.run_stream(task="What was the last thing that was said in this conversation?"))


asyncio.run(main())
```
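For illustration, a minimal stand-alone sketch (using hypothetical class names, not the actual `MemoryContent` model) of the kind of explicit serializer this change adds:
```python
# Illustrative only: an explicit Pydantic serializer for an enum-valued mime_type
# field, in the spirit of the fix. These classes are sketches, not the library's.
import json
from enum import Enum

from pydantic import BaseModel, field_serializer


class MimeTypeSketch(Enum):
    TEXT = "text/plain"
    JSON = "application/json"


class MemoryContentSketch(BaseModel):
    content: str
    mime_type: MimeTypeSketch

    @field_serializer("mime_type")
    def serialize_mime_type(self, value: MimeTypeSketch) -> str:
        # Emit the enum's string value so the dumped dict is plain JSON-serializable.
        return value.value


item = MemoryContentSketch(content="hi", mime_type=MimeTypeSketch.TEXT)
print(json.dumps(item.model_dump()))  # no "not JSON serializable" error
```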
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5688
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Closes#4904
Does not change default behavior in core.
In agentchat, this change will mean that exceptions that used to be
ignored and result in bugs like the group chat stopping are now reported
out to the user application.
---------
Co-authored-by: Ben Constable <benconstable@microsoft.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
@ekzhu should likely be assigned as reviewer
## Why are these changes needed?
These changes address the bug reported in #5663. Prevents TypeError from
being thrown at inference time by ollama AsyncClient when `host` (and
other) kwargs are passed to autogen OllamaChatCompletionClient
constructor.
It also adds ollama as a named optional extra so that the ollama
requirements can be installed alongside autogen-ext (e.g. `pip install
autogen-ext[ollama]`).
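A minimal sketch of the constructor call this fixes (the model name and host are placeholders):
```python
from autogen_ext.models.ollama import OllamaChatCompletionClient

# With this fix, extra kwargs such as `host` are forwarded to ollama's AsyncClient
# instead of raising a TypeError at inference time.
model_client = OllamaChatCompletionClient(
    model="llama3.1",
    host="http://localhost:11434",
)
```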
@ekzhu, I will need some help or guidance to ensure that the associated
test (which requires ollama and tiktoken as dependencies of the
OllamaChatCompletionClient) can run successfully in autogen's test
execution environment.
I have also left the "I've made sure all auto checks have passed" check
below unchecked as this PR is coming from my fork. (UPDATE: auto checks
appear to have passed after opening PR, so I have checked box below)
## Related issue number
Intended to close#5663
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Co-authored-by: peterychang <49209570+peterychang@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
Claude 3.7 just came out. It's a pretty capable model and it would be
great to support it in AutoGen.
This could augment the already excellent support we have for
Anthropic via the SKAdapters in the following ways:
- Based on the ChatCompletion API, similar to the ollama and openai
clients
- Configurable/serializable (can be dumped), which means it can be used
easily in AGS.
## What is Supported
(video below shows the client being used in autogen studio)
https://github.com/user-attachments/assets/8fb7c17c-9f9c-4525-aa9c-f256aad0f40b
- streaming
- tool calling / function calling
- drop-in integration with the assistant agent
- multimodal support
```python
import asyncio

from dotenv import load_dotenv

load_dotenv()

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.anthropic import AnthropicChatCompletionClient

model_client = AnthropicChatCompletionClient(model="claude-3-7-sonnet-20250219")


async def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    return f"The weather in {city} is 73 degrees and Sunny."


agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You are a helpful assistant.",
    # model_client_stream=True,
)


# Run the agent and stream the messages to the console.
async def main() -> None:
    await Console(agent.run_stream(task="What is the weather in New York?"))


asyncio.run(main())
```
A related example that streams raw model output with `create_stream`, using the same client:
```python
import asyncio

from autogen_core.models import UserMessage


async def stream_example() -> None:
    messages = [
        UserMessage(content="Write a very short story about a dragon.", source="user"),
    ]
    # Create a stream.
    stream = model_client.create_stream(messages=messages)
    # Iterate over the stream and print the responses.
    print("Streamed responses:")
    async for response in stream:  # type: ignore
        if isinstance(response, str):
            # A partial response is a string.
            print(response, flush=True, end="")
        else:
            # The last response is a CreateResult object with the complete message.
            print("\n\n------------\n")
            print("The complete response:", flush=True)
            print(response.content, flush=True)
            print("\n\n------------\n")
            print("The token usage was:", flush=True)
            print(response.usage, flush=True)


asyncio.run(stream_example())
```
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes #5205, Closes #5708
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
cc @rohanthacker
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
Update the human-in-the-loop docs for AgentChat.
> Outputs in docs are outdated as summary is not printed out by default
when using Console
[#5590](https://github.com/microsoft/autogen/issues/5590)
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5590
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
This pull request includes updates to the `README.md` file for the
`autogen-studio` package to improve clarity and accuracy. The most
important changes include a minor rewrite of the project structure
section and updates to the installation instructions (better tabbing).
The `SemanticKernelAgent` class has been updated to include an optional
`modelServiceId` parameter, allowing the specification of a service ID
for the model.
## Why are these changes needed?
Currently, `SemanticKernelAgent` uses the parameterless method for
resolving `IChatCompletionService`. This will fail when multiple models
are registered in the Kernel.
To support different models registered in the Kernel, I adapted the
resolution of the `IChatCompletionService` within the
`SemanticKernelAgent` to take an optional parameter. When it is not set, I
resolve the default instance; otherwise, I use the optional parameter as
a service ID for resolving the `IChatCompletionService` service.
## Related issue number
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Xiaoyun Zhang <bigmiao.zhang@gmail.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Fixes `(14) Ensures the contrast between foreground and background
colors meets WCAG 2 AA minimum contrast ratio thresholds`
Note: the color values don't output at the exact values on the
stylesheet. For example, the value `#1774E5` evaluates to `#2274E0` by
the Accessibility Insights app.
## Related issue number
https://github.com/microsoft/autogen/issues/5633
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
I'm unsure if everyone will agree, but I started to look into adding new
logic and found that refactoring into smaller functions would make it
more maintainable.
There is no change in functionality, only a breakdown into smaller
methods to make it more modular and improve readability. There is a lot
of logic in the method, and this refactor breaks it down into context
management, the LLM call, and result processing.
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
## Why are these changes needed?
To help future readers of the code and to follow naming conventions,
variable names are updated to reflect how they are actually used. This
mainly affects the `_self_text` variable in `TextMentionTermination`.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
This PR has 3 main improvements.
- Token streaming
- Adds support for environment variables in the app settings
- Updates AGS to persist Gallery entry in db.
## Adds Token Streaming in AGS.
AgentChat now supports streaming of tokens via
`ModelClientStreamingChunkEvent`. This PR is to track progress on
supporting that in the AutoGen Studio UI.
If `model_client_stream` is enabled in an assistant agent, then tokens
will be streamed in the UI.
```python
streaming_assistant = AssistantAgent(
name="assistant",
model_client=model_client,
system_message="You are a helpful assistant.",
model_client_stream=True, # Enable streaming tokens.
)
```
https://github.com/user-attachments/assets/74d43d78-6359-40c3-a78e-c84dcb5e02a1
## Env Variables
Also adds support for env variables in AGS Settings.
You can set env variables that are loaded just before a team is run.
This is handy for setting variables to be used by tools etc. (see the sketch below).
<img width="1291" alt="image"
src="https://github.com/user-attachments/assets/437b9d90-ccee-42f7-be5d-94ab191afd67"
/>
> Note: the set variables are available to the server process.
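For example, a tool could read a variable that AGS injects before the run (the variable name here is hypothetical):
```python
import os

from autogen_core.tools import FunctionTool


def get_service_url() -> str:
    # MY_SERVICE_URL is a hypothetical variable set via AGS Settings.
    return os.environ.get("MY_SERVICE_URL", "not set")


service_url_tool = FunctionTool(get_service_url, description="Return the configured service URL.")
```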
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes #5627, Closes #5662, Closes #5619
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Add metadata field to BaseMessage.
Why?
- An additional metadata field can track 1) a timestamp if needed, and 2) flags
about the message. For instance, a use case is a metadata field
`{"internal": "yes"}` that would hide messages from being displayed in an
application or studio.
As long as an extra field is added to BaseMessage that is not consumed
by existing agents, I am happy.
Notes:
- We can also add it only to BaseChatMessage; that would be fine
- I don't care what the extra field is called as long as there is an
extra field somewhere
- I don't have a preference for the type; a str could work, but a dict
would be more useful.
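As a rough sketch of how an application might use such a field (assuming the final design is an optional string-to-string dict on chat messages):
```python
from autogen_agentchat.messages import TextMessage

msg = TextMessage(
    content="Intermediate planning step",
    source="planner",
    metadata={"internal": "yes"},  # an app or studio could hide messages flagged this way
)
print(msg.metadata)
```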
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Convenience: allows you to just run `agenthost`.
```
dotnet pack --no-build --configuration Release --output './output/release' -bl
dotnet tool install --add-source ./output/release Microsoft.AutoGen.AgentHost
agenthost
```
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5646
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Replace the undefined `tools` variable with the `tool_schema` parameter.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
This change keeps the documentation up to date:
https://microsoft.github.io/autogen/stable//user-guide/core-user-guide/components/tools.html
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
Removes unneeded deps.
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5644
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
Fix minor issues in docs:
- probably editing left over
- typo
- code formatting inconsistency
## Related issue number
N/A
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
completes the table on the readme with the .NET links to docs and
packages.
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
The HttpTool in AutoGen accepts a headers parameter, but it is not being
used in the actual request. This fix ensures that the headers provided
by users are correctly included in HTTP requests. This resolves issues
where authentication or other custom headers are required but currently
ignored.
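A rough sketch of the intended usage (parameter names are an assumption of a typical `HttpTool` configuration, not verified against the exact signature):
```python
from autogen_ext.tools.http import HttpTool

# Hypothetical configuration: with the fix, the Authorization header below is
# actually sent with the request instead of being silently dropped.
weather_tool = HttpTool(
    name="get_weather",
    description="Fetch weather data from an internal API.",
    scheme="https",
    host="api.example.com",
    port=443,
    path="/weather",
    method="GET",
    headers={"Authorization": "Bearer <token>"},
    json_schema={"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]},
)
```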
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5638
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
## Why are these changes needed?
See issue for a bug description.
The problem was that a lot of OpenRouter models return `""` as
`tool_call.arguments`, which caused `json.loads` to fail.
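The fix amounts to parsing defensively; a generic sketch (not the library's exact code):
```python
import json


def parse_tool_arguments(raw: str) -> dict:
    # Some OpenRouter-served models return "" instead of "{}" for no-argument calls.
    return json.loads(raw) if raw.strip() else {}


assert parse_tool_arguments("") == {}
assert parse_tool_arguments('{"city": "Paris"}') == {"city": "Paris"}
```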
## Related issue number
https://github.com/microsoft/autogen/issues/5666
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
This PR makes `ChatCompletionCache` support component config.
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Ensures we have a path to serializing ChatCompletionCache, similar to
the chat completion client that it wraps.
This PR does the following:
- Makes CacheStore serializable first (part of this includes converting
from Protocol to base class). Makes its derivatives serializable as
well (diskcache, redis)
- Makes ChatCompletionCache serializable
- Adds some tests
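A rough sketch of what this enables (module paths assumed from autogen-ext's cache extras):
```python
from diskcache import Cache

from autogen_ext.cache_store.diskcache import DiskCacheStore
from autogen_ext.models.cache import ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient

client = OpenAIChatCompletionClient(model="gpt-4o-mini")
cache_store = DiskCacheStore(Cache("./model_cache"))
cached_client = ChatCompletionCache(client, cache_store)

# With this PR the wrapper can be dumped and reloaded like other components.
config = cached_client.dump_component()
restored = ChatCompletionCache.load_component(config)
```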
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5141
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
cc @nour-bouzid
Resolves#5192
Test
```python
import asyncio
import os
from random import randint
from typing import List
from autogen_core.tools import BaseTool, FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
async def get_current_time(city: str) -> str:
return f"The current time in {city} is {randint(0, 23)}:{randint(0, 59)}."
tools: List[BaseTool] = [
FunctionTool(
get_current_time,
name="get_current_time",
description="Get current time for a city.",
),
]
model_client = OpenAIChatCompletionClient(
model="anthropic/claude-3.5-haiku-20241022",
base_url="https://openrouter.ai/api/v1",
api_key=os.environ["OPENROUTER_API_KEY"],
model_info={
"family": "claude-3.5-haiku",
"function_calling": True,
"vision": False,
"json_output": False,
}
)
agent = AssistantAgent(
name="Agent",
model_client=model_client,
tools=tools,
system_message= "You are an assistant with some tools that can be used to answer some questions",
)
async def main() -> None:
await Console(agent.run_stream(task="What is current time of Paris and Toronto?"))
asyncio.run(main())
```
```
---------- user ----------
What is current time of Paris and Toronto?
---------- Agent ----------
I'll help you find the current time for Paris and Toronto by using the get_current_time function for each city.
---------- Agent ----------
[FunctionCall(id='toolu_01NwP3fNAwcYKn1x656Dq9xW', arguments='{"city": "Paris"}', name='get_current_time'), FunctionCall(id='toolu_018d4cWSy3TxXhjgmLYFrfRt', arguments='{"city": "Toronto"}', name='get_current_time')]
---------- Agent ----------
[FunctionExecutionResult(content='The current time in Paris is 1:10.', call_id='toolu_01NwP3fNAwcYKn1x656Dq9xW', is_error=False), FunctionExecutionResult(content='The current time in Toronto is 7:28.', call_id='toolu_018d4cWSy3TxXhjgmLYFrfRt', is_error=False)]
---------- Agent ----------
The current time in Paris is 1:10.
The current time in Toronto is 7:28.
```
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
Update AGS, Remove Numpy Dep
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5639
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Fixes the problem where Gallery items could only be modified via JSON.
This PR does the following:
- Refactor TeamBuilder to have modular component editor UI primarily
focused on editing each component type.
- Refactor the Gallery UX
- improve layout to use tabs for each component type
- enable editing of each component item by reusing the component editor
- Enable switching between form editing and UI editing for the component
editor view
This way, gallery items can be readily modified and then reused in the
component library in team builder.
It also implements an update to the Gallery data structure to make it
more intuitive: it has a components field that contains teams, agents,
models ...
<img width="1598" alt="image"
src="https://github.com/user-attachments/assets/3c3a228a-0bd2-4fc1-85ec-c9685c80bf72"
/>
<img width="1614" alt="image"
src="https://github.com/user-attachments/assets/5b6ed840-9c48-47bc-8c17-2aa50c7dcb99"
/>
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes #5465, Closes #5047
cc @nour-bouzid @balakreshnan @EItanya @joslat @IustinT @leonG7
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Fix typo in doc
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Cleaning up the dotnet docs site, improving formatting, and making the
landing page look more like the Python one.
## Why are these changes needed?
---------
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Fixing grammar issues
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Adds ollama client documentation to the docs page
## Related issue number
https://github.com/microsoft/autogen/issues/5604
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Initial commit's docstrings were incorrect, which would be confusing for
a user
## Related issue number
https://github.com/microsoft/autogen/issues/5595
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
<img width="1560" alt="image"
src="https://github.com/user-attachments/assets/da3d781f-2572-4bd7-802d-1d3900f6c6d9"
/>
## Why are these changes needed?
Improves the editing UI for tools in AGS:
- add/remove tools
- add/remove imports
- show the source code section of FunctionTool in the code editor
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
- Add a section on how to use a model client with tools (see the sketch below)
- Replace the obscure pattern of using ToolAgent with a simpler
implementation of calling the tool within the agent itself.
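A hedged sketch of the simpler pattern the new section describes (the model and tool names are illustrative):
```python
import asyncio
import json

from autogen_core import CancellationToken
from autogen_core.models import UserMessage
from autogen_core.tools import FunctionTool
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def get_weather(city: str) -> str:
    return f"The weather in {city} is 72 degrees and sunny."


async def main() -> None:
    weather_tool = FunctionTool(get_weather, description="Get the weather for a city.")
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")

    result = await model_client.create(
        messages=[UserMessage(content="What is the weather in Paris?", source="user")],
        tools=[weather_tool],
    )
    # If the model requested a tool call, run the tool directly; no ToolAgent needed.
    if isinstance(result.content, list):
        for call in result.content:
            arguments = json.loads(call.arguments)
            output = await weather_tool.run_json(arguments, CancellationToken())
            print(weather_tool.return_value_as_string(output))


asyncio.run(main())
```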
Sometimes a client will subscribe, but the message it was hoping for has
already been published and delivered to one of its peers, so it missed it.
This adds a five-second (default) buffer and will deliver buffered
messages to new subscribers. Messages are removed from the buffer after
5 seconds.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Right now if a remote agent sends a message that local agents do not
listen to (and thus will never configure a deserializer for), or if the
first agent in the iteration is one such, the runtime will throw an
unnecessary exception and come down, even though the deserialized message
will never actually be needed before a deserializer is registered.
The fix will downgrade that to a warning.
* Also updates the HelloAgent sample to be more amenable to being used
with gRPC directly, without configuring environment variables.
* Integration tests used to use the samples; now they are separate
* Patch dictionary problem in serializer
* Add Message Registry with dead-letter queue that gets checked on new
subs.
This pull request includes a small change to the
`python/packages/magentic-one-cli/src/magentic_one_cli/_m1.py` file. The
change modifies the help message for the `--config` argument to remove
the part about leaving it empty to print a sample configuration. This is
already handled by `--sample-config`.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Make
[PythonCodeExecutionTool](https://github.com/microsoft/autogen/blob/main/python/packages/autogen-ext/src/autogen_ext/tools/code_execution/_code_execution.py)
declarative so it can be used in tools like AGS.
Summary of changes:
- Make CodeExecutor declarative (convert from Protocol to ABC, inherit
from ComponentBase)
- Make LocalCommandLineCodeExecutor, JupyterCodeExecutor and
DockerCommandLineCodeExecutor declarative, best effort. Not all fields
are serialized, and warnings are shown where appropriate.
- Make PythonCodeExecutionTool declarative.
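A hedged sketch of what "declarative" enables here (module paths and constructor arguments assumed from the current autogen-ext layout):
```python
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor
from autogen_ext.tools.code_execution import PythonCodeExecutionTool

tool = PythonCodeExecutionTool(LocalCommandLineCodeExecutor(work_dir="coding"))
config = tool.dump_component()  # serializable component config, usable from tools like AGS
restored = PythonCodeExecutionTool.load_component(config)
```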
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
Closes#5526
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Pulls the .NET Docs a bit more into the documentation website proper:
* Add a link from the 0.4-series Python docs to the .NET docs
* Adds a transient redirect from /dotnet/ to /dotnet/dev/ to forward to
the dev docs until we do a non "-dev" release
The `uv sync` logging in the test build was really noisy and is no longer
needed.
## Why are these changes needed?
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
This finishes the fix for the race condition between opening a
GrpcWorkerConnection and registering agent types on that worker. Now,
instead of failing to register, we return from the call (with the
expectation that we will finish registration as we set up the
connection).
Part 1: #5494
Part 2: #5514
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
* There was an inconsistency between the package names that prevented
xlang from working, even though the actual types were 1:1 compatible
* Also moves the HelloAgent aspire project into the Hello solution
folder
Fixed a wrong class name style in the document and removed trailing whitespace.
Co-authored-by: Wei Jen Lu <weijenlu@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
Adds the following blurb to the model clients docs
## API Keys From Environment Variables
In the examples above, we show that you can provide the API key through
the `api_key` argument. Importantly, the OpenAI and Azure OpenAI clients
use the [openai
package](3f8d8205ae/src/openai/__init__.py (L260)),
which will automatically read an API key from the environment variable
if one is not provided.
- For OpenAI, you can set the `OPENAI_API_KEY` environment variable.
- For Azure OpenAI, you can set the `AZURE_OPENAI_API_KEY` environment
variable.
This is a good practice to explore, as it avoids including sensitive API
keys in your code.
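A small sketch of the environment-variable path (the model name is illustrative):
```python
import os

from autogen_ext.models.openai import OpenAIChatCompletionClient

# Assumes `export OPENAI_API_KEY=...` has been run in the shell.
assert "OPENAI_API_KEY" in os.environ
model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")  # no api_key argument needed
```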
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Right now we rely on opening the channel to associate a ClientId with an
entry on the gateway side. This causes a race when the channel is being
opened in the background while an RPC (e.g. MyAgent.register()) is
invoked.
If the RPC is processed first, the gateway rejects it due to "invalid"
clientId.
This fix makes this condition less likely to trigger, but there is still
a piece of the puzzle that needs to be solved on the Gateway side.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
It is useful to rapidly validate any changes to a team structure as
teams are built either via drag and drop or by modifying the underlying
spec
You can now “validate” your team. The key ideas are as follows
- Each team is based on some Component Config specification which is a
pedantic model underneath.
- Validation is 3 pronged based on a ValidatorService class
- Data model validation (validate component schema)
- Instantiation validation (validate component can be instantiated)
- Provider validation, component_type validation (validate that provider
exists and can be imported)
- UX: each time a component is **loaded or saved**, it is automatically
validated and any errors shown (via a server endpoint). This way, the
developer immediately knows if updates to the configuration is wrong or
has errors.
> Note: this is different from actually running the component against a
task. Currently you can run the entire team. In a separate PR we will
implement ability to run/test other components.
<img width="1360" alt="image"
src="https://github.com/user-attachments/assets/d61095b7-0b07-463a-b4b2-5c50ded750f6"
/>
<img width="1368" alt="image"
src="https://github.com/user-attachments/assets/09a1677e-76e8-44a4-9749-15c27457efbb"
/>
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
Closes#4616
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
We were registering the serializers when we already had a concrete type
on the way into Publish or Send on the .NET side. However, in an xLang
scenario, messages could originate from e.g. Python before being
sent/published from .NET, resulting in no serializer being found.
This change adds a second-chance registration and lookup when the agent
is instantiated based on the IHandle<T> implementations.
We don't provide links to the dev releases on the site anymore, so let's
save some CI time and clutter.
---------
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Closes#5297
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Jacob Alber <jaalber@microsoft.com>
Co-authored-by: Jacob Alber <jacob.alber@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Unlike with the InProcessRuntime, there is a two-phase initialization:
first when AgentsApp is built (when initial agents are registered), and
then when it StartAsync()s and connects to the Gateway. Unfortunately, it is
possible to attempt to send direct RPC calls to the Gateway before the
message channel is opened; in this case, the Gateway has no connected
client corresponding to the RPC's clientId, and falls over.
The fix is to defer registering agents and subscriptions with the
gateway until after the connection is established after .StartAsync() is
called.
Add the following additional configuration options to
DockerCommandLineCodeExecutor:
- **extra_volumes** (Optional[Dict[str, Dict[str, str]]], optional): A
dictionary of extra volumes (beyond the work_dir) to mount to the
container. Defaults to None.
- **extra_hosts** (Optional[Dict[str, str]], optional): A dictionary of
host mappings to add to the container. (See Docker docs on extra_hosts)
Defaults to None.
- **init_command** (Optional[str], optional): A shell command to run
before each shell operation execution. Defaults to None.
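A minimal sketch of the new options (the paths, host mapping, and init command are illustrative):
```python
from autogen_ext.code_executors.docker import DockerCommandLineCodeExecutor

# Mount an extra project directory, add a host mapping, and run a setup command
# before each execution. See the Docker docs for the volume and extra_hosts formats.
executor = DockerCommandLineCodeExecutor(
    work_dir="coding",
    extra_volumes={"/home/user/project": {"bind": "/workspace/project", "mode": "rw"}},
    extra_hosts={"host.docker.internal": "host-gateway"},
    init_command="pip install requests",
)
```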
## Why are these changes needed?
See linked issue below.
In summary: Enable the agents to:
- work with a richer set of sys admin tools on top of code execution
- add support for a 'project' directory the agents can interact with
that's accessible by bash tools and custom scripts
## Related issue number
Closes#5363
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
This PR improves documentation on custom agents:
- Shows an example of how to create a custom agent that directly uses a
model client. In this case, an example of a GeminiAssistantAgent that
directly uses the Gemini SDK model client.
- Shows that the CustomAgent can be easily added to any AgentChat team
- Shows how the same CustomAgent can be made declarative by inheriting
the Component interface and implementing the required methods.
Closes#5450
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Fix#4731
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
These changes are needed because currently there's no generic way to add
`tools` to autogen studio workflows using the existing DSL and schema
other than inline python.
This API will be quite verbose, and lacks a discovery mechanism, but it
unlocks a lot of programmatic use-cases.
## Related issue number
https://github.com/microsoft/autogen/issues/5170
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Have seen discussion on Discord regarding confusion about multi-modal
support in v0.4. This change adds a small note on how to use multi-modal
messages with agents.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
The current implementation tries to recreate the metadata, but it does so
in an incomplete way. This PR uses SK's built-in kernel function
decorator to infer the callable from `run_json`, and makes better use
of the Pydantic schemas for the input and output to infer the schema of
the kernel function.
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5458
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
Add clear notes on how to specify api key and modifying specifications
in AGS.
Add diagrams explaining how to switch between visual builder and JSON
mode
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->

## Related issue number
<!-- For example: "Closes #1234" -->
@nour-bouzid
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
It is often helpful to inspect the raw request and response to/from an
LLM as agents act.
This PR does the following:
- Updates TeamManager to yield LLMCallEvents from the core library
- Runs an async background task to listen for LLMCallEvents JIT style
when a team is run
- Adds events to an async queue and
- yields those events in addition to whatever the actual agentchat
team.run_stream yields.
- Updates the AGS UI to show those LLMCallEvents in the messages section
as a team runs
- Adds a settings panel to show/hide LLM call events in messages.
- Minor updates to the default team
<img width="1539" alt="image"
src="https://github.com/user-attachments/assets/bfbb19fe-3560-4faa-b600-7dd244e0e974"
/>
<img width="1554" alt="image"
src="https://github.com/user-attachments/assets/775624f5-ba83-46e8-81ff-512bfeed6bab"
/>
<img width="1538" alt="image"
src="https://github.com/user-attachments/assets/3becbf85-b75a-4506-adb7-e5575ebbaec4"
/>
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5440
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
The current stream processing of the SK model adapter returns on the first
function call chunk, but this behavior is incorrect and ends up returning
an incomplete function call. The observed behavior is that the
function name and arguments are split across different chunks, and this
update processes the chunks accordingly.
## Related issue number
<!-- For example: "Closes #1234" -->
Fixes the reply in #5420
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Don't throw an exception when the model makes a mistake. Use retries, and if
not succeeding after a fixed number of attempts, fall back to the previous
speaker if available, or the first participant.
Resolves#5453
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
Semantic Kernel prepends the plugin name to the tool name when passing
the tools to model clients, and this causes a mismatch between tool
names in SK and the AssistantAgent. Since plugin names are optional, we
have opted to remove it.
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5420
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Restoring the grpc + Orleans server into the project
## Why are these changes needed?
This is the distributed agent runtime for .NET that can manage routing
messages amongst a fleet of grpc agent runtimes.
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: Jack Gerrits <jack@jackgerrits.com>
Co-authored-by: Jacob Alber <jaalber@microsoft.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
- Add ability to test teams in Team Builder view
- Update Gallery (add deep research default team, fix bug with gallery
serialization)
- UI fixes
- improve drag-and-drop component experience
- improve new session experience (single click rather than 3 clicks to
create a session)
- fix bug with stop reason not being shown in some cases
<img width="1738" alt="Image"
src="https://github.com/user-attachments/assets/4b895df2-3bad-474e-bec6-4fbcbf1c4346"
/>
<img width="1761" alt="Image"
src="https://github.com/user-attachments/assets/65f52eb9-e926-4168-88fb-d2496c159474"
/>
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
Closes#5392
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Introduce a sample chat application using AgentChat and FastAPI,
demonstrating single-agent and team chat functionalities, along with
state persistence and conversation history management.
Resolves#5423
---------
Co-authored-by: Victor Dibia <victor.dibia@gmail.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Presently MagenticOne and the m1 CLI use the LocalCommandLineCodeExecutor
(presumably copied from the agbench code, which already runs in Docker).
This PR defaults m1 to Docker, and adds a code_executor parameter to
MagenticOne, which defaults to local for now to maintain backward
compatibility -- but this behavior is immediately deprecated.
- Updated HumanEval template to use AgentChat
- Update templates to use config.yaml for model and other configuration
- Read environment from ENV.yaml (ENV.json still supported but
deprecated)
- Temporarily removed WebArena and AssistantBench. Neither had viable
Templates after `autogen_magentic_one` was removed. Templates need to be
update to AgentChat (in a future PR, but this PR is getting big enough
already)
A series of changes to the
`python/packages/autogen-ext/src/autogen_ext/agents/web_surfer/_multimodal_web_surfer.py`
file have been made to better support smaller models.
This includes changes to the prompts, state descriptions, and ordering
of messages.
Regression tasks with OpenAI models shows no change in GAIA scores,
while scores for Llama are significantly improved.
Gets SelectorGroupChat working for Llama by:
1. Using a UserMessage rather than a SystemMessage
2. Normalizing how roles are presented (one agent per line)
3. Normalizing how the transcript is constructed (a blank line between
every message)
This PR fixes:
A prompting bug when no control had focus.
Awkward prompt phrasing.
Renamed page_down to scroll_down to better match other prompting and
agent descriptions.
Some agent descriptions were split over multiple lines in the M1
orchestrator. This PR ensures that each description appears on one, and
only one, line. This makes it easier for smaller models to understand.
The Tuple class is never used in the CountDownAgent class.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Currently, the way to accomplish RAG behavior with AgentChat, specifically
with assistant agents, is through the memory interface; however,
there is no way to configure it via the declarative API.
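As a rough sketch of what declarative configuration enables (class and parameter names are assumptions, not code from this PR), a memory-backed agent can be dumped to and restored from a component config:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_core.memory import ListMemory
from autogen_ext.models.openai import OpenAIChatCompletionClient

memory = ListMemory()  # entries would be added at runtime via await memory.add(...)
agent = AssistantAgent(
    "assistant",
    model_client=OpenAIChatCompletionClient(model="gpt-4o"),
    memory=[memory],
)
config = agent.dump_component()  # the memory component is now part of the config
restored = AssistantAgent.load_component(config)
```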
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Victor Dibia <chuvidi2003@gmail.com>
Just fixes a typo.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
To allow serialization of OAI Assistant Agent.
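A minimal sketch of the intended round trip (the constructor arguments shown are assumptions, not taken from this PR):
```python
from openai import AsyncOpenAI

from autogen_ext.agents.openai import OpenAIAssistantAgent

agent = OpenAIAssistantAgent(
    name="assistant",
    description="An OpenAI Assistants API agent.",
    client=AsyncOpenAI(),
    model="gpt-4o",
    instructions="You are a helpful assistant.",
)
config = agent.dump_component()  # serialize to a declarative config
restored = OpenAIAssistantAgent.load_component(config)  # rebuild the agent from it
```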
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5130
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Fix permission issue in AGS on Windows (env files are now stored in a user
directory)
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
Closes#5355
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Allow AssistantAgent to drop images when not equipped with a multi-modal model.
Adds a corresponding utility function, which can be used in autogen-ext and teams, to accomplish the same.
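A minimal sketch of the intended use; the helper's name and location are assumptions based on this description:
```python
from PIL import Image as PILImage

from autogen_agentchat.utils import remove_images  # assumed name/location of the new utility
from autogen_core import Image
from autogen_core.models import SystemMessage, UserMessage

screenshot = Image.from_pil(PILImage.new("RGB", (8, 8)))  # placeholder image
messages = [
    SystemMessage(content="You are a helpful assistant."),
    UserMessage(content=["What is in this screenshot?", screenshot], source="user"),
]
# For a model without vision support, drop the image parts before calling the model.
text_only = remove_images(messages)
```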
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Wrong file names in the README.md
## Related issue number
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Mohammad Mazraeh <Mazraeh.Mohammad@Gmail.com>
This PR does the following:
- Fix warning messages in AGS on launch.
- Improve CLI message to include the app URL on startup from the command line
- Minor improvements to the default gallery generator (add more default tools)
- Improve new session behaviour.
## Related issue number
Closes#5097
## Checks
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
Fix bug where the `model_info` field is not serialized for the
`OpenAIChatCompletionClient` class. This was because the `_raw_config`
field was based on a version of the args that had been sanitized
(model_info removed). We need the full `model_info` field for non-OpenAI
models.
```python
from autogen_core.models import ModelInfo
from autogen_ext.models.openai import OpenAIChatCompletionClient

# A non-OpenAI model served behind an OpenAI-compatible endpoint.
mistral_vllm_model = OpenAIChatCompletionClient(
    model="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    base_url="http://localhost:1234/v1",
    api_key="empty",
    model_info=ModelInfo(vision=False, function_calling=True, json_output=False, family="unknown"),
)
# Serialize the component config; model_info must be included for non-OpenAI models.
print(mistral_vllm_model.dump_component().model_dump_json())
```
Before
```
{
  "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
  "component_type": "model",
  "version": 1,
  "component_version": 1,
  "description": "Chat completion client for OpenAI hosted models.",
  "label": "OpenAIChatCompletionClient",
  "config": {
    "model": "TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    "api_key": "empty",
    "base_url": "http://localhost:1234/v1"
  }
}
```
After
```
{
  "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
  "component_type": "model",
  "version": 1,
  "component_version": 1,
  "description": "Chat completion client for OpenAI hosted models.",
  "label": "OpenAIChatCompletionClient",
  "config": {
    "model": "TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    "api_key": "empty",
    "model_info": {
      "vision": false,
      "function_calling": true,
      "json_output": false,
      "family": "unknown"
    },
    "base_url": "http://localhost:1234/v1"
  }
}
```
<!-- Please give a short summary of the change and the problem this
solves. -->
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
## Why are these changes needed?
This pull request introduces the 'o3' model family and adds support for
the 'o3-mini' model.
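A minimal sketch of selecting the newly supported model (surrounding usage assumed; the model name is taken from this PR):
```python
from autogen_ext.models.openai import OpenAIChatCompletionClient

# The client now recognizes "o3-mini" and maps it to the new o3 model family.
client = OpenAIChatCompletionClient(model="o3-mini")
```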
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
<!-- Thank you for your contribution! Please review
https://microsoft.github.io/autogen/docs/Contribute before opening a
pull request. -->
<!-- Please add a reviewer to the assignee section when you create a PR.
If you don't have the access to it, we will shortly find a reviewer and
assign them to your PR. -->
## Why are these changes needed?
<!-- Please give a short summary of the change and the problem this
solves. -->
We are seeing this issue more often now, probably related to the load on
the API servers. Hence this PR:
1. Demotes the `max_consecutive_empty_chunk_tolerance` parameter from a
function parameter to an inline threshold
2. Changes the exception to a one-time warning
## Related issue number
<!-- For example: "Closes #1234" -->
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Signed-off-by: Mohammad Mazraeh <mazraeh.mohammad@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This PR adds a method that approximately extracts the text visible in
the viewport of the web browser (as opposed to always printing the first
50 lines, or relying entirely on OCR).
- Some other users might have the same issue as yours.
### Are you familiar with GitHub Markdown Syntax?
Please use [GitHub Markdown](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax)
syntax to format your input.
Pay attention to code blocks. Use "```" blocks for code and output.
For examples:
````
```python
# your Python code here.
```
````
````
```bash
# your bash shell command here.
```
````
If your output contains "```", use "````" (four "`") to escape them.
You can use the "Preview" switcher to check your formatted output.
- type: textarea
attributes:
label: What happened?
description: Please provide as much information as possible; this helps us address the issue. Use Markdown to format your text.
value: |
**Describe the bug**
A clear and concise description of what the bug is.
If it is a question or suggestion, please use [Discussions](https://github.com/microsoft/autogen/discussions)
instead.
**To Reproduce**
Steps to reproduce the behavior. Please include code and outputs such as stacktrace.
- If your input is just "I tried X, and it didn't work" or
"X is not working",your issue will be ignored.
- If your input is not well formatted, it will hurt readability and
may be ignored as well.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here.
validations:
required: true
- type: dropdown
attributes:
label: Which packages was the bug in?
multiple: true
options:
- Python Core (autogen-core)
- Python AgentChat (autogen-agentchat>=0.4.0)
- Python Extensions (autogen-ext)
- .NET Core (Microsoft.AutoGen.Core)
- AutoGen Studio (autogenstudio)
- AutoGen Bench (agbench)
- Magentic One CLI (magentic-one-cli)
- V0.2 (autogen-agentchat==0.2.*)
validations:
required: true
- type: dropdown
attributes:
label: AutoGen library version.
description: What version of the library was used?
multiple: false
options:
- "Python dev (main branch)"
- "Python 0.5.2"
- "Python 0.5.1"
- "Python 0.4.9"
- "Python 0.4.8"
- "Python 0.4.7"
- "Python 0.4.6"
- "Python 0.4.5"
- "Python 0.4.4"
- "Python 0.4.3"
- "Python 0.4.2"
- "Python 0.4.1"
- "Python 0.4.0"
- ".NET dev (main branch)"
- "Studio 0.4.1"
- "Studio 0.4.0"
- "Other (please specify)"
validations:
required: true
- type: input
attributes:
label: Other library version.
description: "Please specify if you selected 'Other' above."
- type: input
attributes:
label: Model used
description: If a model was used, please name it here. Use the full model name with the version number.
placeholder: "e.g., gpt-4o-2024-11-20"
- type: dropdown
attributes:
label: Model provider
description: The provider or hosting service that runs the model.
options:
- "Anthropic"
- "AWS Bedrock"
- "Azure OpenAI"
- "Azure AI Foundary (Azure AI Studio)"
- "DeepSeek (Hosted)"
- "GitHub Models"
- "Google Gemini"
- "Google Vertex AI"
- "HuggingFace Models (Hosted)"
- "HuggingFace Transformers (Local)"
- "LlamaCpp"
- "Mistral AI"
- "Ollama"
- "OpenAI"
- "OpenRouter"
- "Together AI"
- "vLLM"
- "Other (please specify below)"
- type: input
attributes:
label: Other model provider
description: "Other provider if not found above."
- type: dropdown
attributes:
label: Python version
options:
- "3.10"
- "3.11"
- "3.12"
- "3.13"
- Other (please note we only support Python 3.10+)
description: Only use this template if you are a maintainer.
body:
- type: checkboxes
attributes:
label: Confirmation
description: Please only use this template if you are a maintainer. Thanks for helping us keep the issue tracker organized!
options:
- label: I confirm that I am a maintainer and so can use this template. If I am not, I understand this issue will be closed and I will be asked to use a different template.
required: true
- type: textarea
id: body
attributes:
label: Issue body
description: "How do you trigger this bug? Please walk us through it step by step."
- [ ] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
- [ ] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
@@ -34,7 +34,7 @@ For common tasks that are helpful during development and run in CI, see [here](.
## Roadmap
We use GitHub issues and milestones to track our roadmap. You can view the upcoming milestones [here]([Roadmap Issues](https://aka.ms/autogen-roadmap).
We use GitHub issues and milestones to track our roadmap. You can view the upcoming milestones [here]([Roadmap Issues](https://aka.ms/autogen-roadmap)).
## Versioning
@@ -48,11 +48,11 @@ We will update version numbers according to the following rules:
## Release process
1. Create a PR that updates the version numbers across the codebase ([example](https://github.com/microsoft/autogen/pull/4359))
2. The docs CI will fail for the PR, but this is expected and will be resolved in the next step
2. After merging the PR, create and push a tag that corresponds to the new version. For example, for `0.4.0.dev13`:
2. The docs CI will fail for the PR, but this is expected and will be resolved in the next step
3. After merging the PR, create and push a tag that corresponds to the new version. For example, for `0.4.0.dev13`:
- `git tag v0.4.0.dev13 && git push origin v0.4.0.dev13`
3. Restart the docs CI by finding the failed [job corresponding to the `push` event](https://github.com/microsoft/autogen/actions/workflows/docs.yml) and restarting all jobs
4. Run [this](https://github.com/microsoft/autogen/actions/workflows/single-python-package.yml) workflow for each of the packages that need to be released and get an approval for the release for it to run
4. Restart the docs CI by finding the failed [job corresponding to the `push` event](https://github.com/microsoft/autogen/actions/workflows/docs.yml) and restarting all jobs
5. Run [this](https://github.com/microsoft/autogen/actions/workflows/single-python-package.yml) workflow for each of the packages that need to be released and get an approval for the release for it to run
<strong>Important:</strong> This is the official project. We are not affiliated with any fork or startup. See our <a href="https://x.com/pyautogen/status/1857264760951296210">statement</a>.
</div>
# AutoGen
**AutoGen** is a framework for creating multi-agent AI applications that can act autonomously or work alongside humans.
@@ -42,22 +47,24 @@ from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
    # Web surfer and user proxy take turns in a round-robin fashion.
    team = RoundRobinGroupChat([web_surfer, user_proxy], termination_condition=termination)
    try:
        # Start the team and wait for it to terminate.
        await Console(team.run_stream(task="Find information about AutoGen and write a short summary."))
    finally:
        await web_surfer.close()
        await model_client.close()


asyncio.run(main())
```
@@ -96,7 +112,7 @@ The AutoGen ecosystem provides everything you need to create AI agents, especial
The _framework_ uses a layered and extensible design. Layers have clearly divided responsibilities and build on top of layers below. This design enables you to use the framework at different levels of abstraction, from high-level APIs to low-level components.
- [Core API](./python/packages/autogen-core/) implements message passing, event-driven agents, and local and distributed runtimes for flexibility and power. It also provides cross-language support for .NET and Python.
- [AgentChat API](./python/packages/autogen-agentchat/) implements a simpler but opinionated API rapid for prototyping. This API is built on top of the Core API and is closest to what users of v0.2 are familiar with and supports familiar multi-agent patterns such as two-agent chat or group chats.
- [AgentChat API](./python/packages/autogen-agentchat/) implements a simpler but opinionated API for rapid prototyping. This API is built on top of the Core API and is closest to what users of v0.2 are familiar with and supports common multi-agent patterns such as two-agent chat or group chats.
- [Extensions API](./python/packages/autogen-ext/) enables first- and third-party extensions that continuously expand framework capabilities. It supports specific implementations of LLM clients (e.g., OpenAI, AzureOpenAI) and capabilities such as code execution.
The ecosystem also supports two essential _developer tools_:
@@ -108,7 +124,7 @@ The ecosystem also supports two essential _developer tools_:
- [AutoGen Studio](./python/packages/autogen-studio/) provides a no-code GUI for building multi-agent applications.
- [AutoGen Bench](./python/packages/agbench/) provides a benchmarking suite for evaluating agent performance.
You can use the AutoGen framework and developer tools to create applications for your domain. For example, [Magentic-One](./python/packages/magentic-one-cli/) is a state-of-art multi-agent team built using AgentChat API and Extensions API that can handle variety of tasks that require web browsing, code execution, and file handling.
You can use the AutoGen framework and developer tools to create applications for your domain. For example, [Magentic-One](./python/packages/magentic-one-cli/) is a state-of-the-art multi-agent team built using AgentChat API and Extensions API that can handle a variety of tasks that require web browsing, code execution, and file handling.
With AutoGen you get to join and contribute to a thriving ecosystem. We host weekly office hours and talks with maintainers and community. We also have a [Discord server](https://aka.ms/autogen-discord) for real-time chat, GitHub Discussions for Q&A, and a blog for tutorials and updates.
@@ -118,19 +134,18 @@ With AutoGen you get to join and contribute to a thriving ecosystem. We host wee
Interested in contributing? See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines on how to get started. We welcome contributions of all kinds, including bug fixes, new features, and documentation improvements. Join our community and help us make AutoGen better!
Have questions? Check out our [Frequently Asked Questions (FAQ)](./FAQ.md) for answers to common queries. If you don't find what you're looking for, feel free to ask in our [GitHub Discussions](https://github.com/microsoft/autogen/discussions) or join our [Discord server](https://aka.ms/autogen-discord) for real-time support.
Have questions? Check out our [Frequently Asked Questions (FAQ)](./FAQ.md) for answers to common queries. If you don't find what you're looking for, feel free to ask in our [GitHub Discussions](https://github.com/microsoft/autogen/discussions) or join our [Discord server](https://aka.ms/autogen-discord) for real-time support. You can also read our [blog](https://devblogs.microsoft.com/autogen/) for updates.
@@ -11,7 +11,7 @@ Each event in the system is defined using the [CloudEvents Specification](https:
1. *id* - A unique id (e.g., a UUID).
2. *source* - A URI or URN indicating the event's origin.
3. *type* - The namespace of the event - prefixed with a reverse-DNS name.
- The prefixed domain dictates the organization which defines the semantics of this event type: e.g `com.github.pull_request.opened` or `com.example.object.deleted.v2`), and optionally fields describing the data schema/content-type or extensions.
- The prefixed domain dictates the organization which defines the semantics of this event type: e.g (`com.github.pull_request.opened` or `com.example.object.deleted.v2`), and optionally fields describing the data schema/content-type or extensions.
@@ -62,7 +62,7 @@ For this subscription, the source should map directly to the agent key.
This subscription will therefore receive all events for the following well known topics:
- `{AgentType}:` - General purpose direct messages. These should be routed to the approriate message handler.
- `{AgentType}:rpc_request={RequesterAgentType}` - RPC request messages. These should be routed to the approriate RPC handler, and RequesterAgentType used to publish the response
- `{AgentType}:` - General purpose direct messages. These should be routed to the appropriate message handler.
- `{AgentType}:rpc_request={RequesterAgentType}` - RPC request messages. These should be routed to the appropriate RPC handler, and RequesterAgentType used to publish the response
- `{AgentType}:rpc_response={RequestId}` - RPC response messages. These should be routed back to the response future of the caller.
- `{AgentType}:error={RequestId}` - Error message that corresponds to the given request.
AutoGen Core for .NET follows the same concepts and conventions of its Python counterpart. In fact, in order to understand the concepts in the .NET version, we recommend reading the Python documentation first. Unless otherwise stated, the concepts in the Python version map to .NET.
AutoGen Core for .NET follows the same concepts and conventions of its Python counterpart. In fact, in order to understand the concepts in the .NET version, we recommend reading the [Python documentation](https://microsoft.github.io/autogen/stable/) first. Unless otherwise stated, the concepts in the Python version map to .NET.
Any important differences between the language versions are documented in the [Differences from Python](./differences-from-python.md) section. For things that only affect a given language, such as dependency injection or host builder patterns, these will not be specified in the differences document.
For .NET we are starting with the core functionality and will be expanding support progressively. So far the core abstractions of Agent and Runtime are available. The InProcessRuntime is the only runtime available at this time. We will be expanding to cross language support in upcoming releases.
## Getting Started
You can obtain the SDK as a nuget package or by cloning the repository. The SDK is available on [NuGet](https://www.nuget.org/packages/Microsoft.AutoGen).
Minimally you will need the following:
```bash
dotnet add package Microsoft.AutoGen.Contracts
dotnet add package Microsoft.AutoGen.Core
```
See [Installation](./installation.md) for more detailed notes on installing all the related packages.
You can quickly get started by looking at the samples in the [samples](https://github.com/microsoft/autogen/tree/main/dotnet/samples) directory of the repository.
### Creating an Agent
To create an agent, you can inherit from BaseAgent and implement event handlers for the events you care about. Here is a minimal example demonstrating how to inherit from BaseAgent and implement an event handler:
```csharp
public class MyAgent : BaseAgent, IHandle<MyMessage>
{
    // ...
    public async ValueTask HandleAsync(MyMessage item, MessageContext context)
    {
        // ...logic here...
    }
}
```
By inheriting from BaseAgent, you gain access to the runtime and logging utilities, and by implementing IHandle<T>, you can easily define event-handling methods for your custom messages.
### Running an Agent in an Application
To run your agent in an application, you can use the `AgentsAppBuilder` class. Here is an example of how to run an agent 'HelloAgent' in an application:
```csharp
AgentsAppBuilder appBuilder = new AgentsAppBuilder()
    .UseInProcessRuntime(deliverToSelf: true)
    .AddAgent<HelloAgent>("HelloAgent");
var app = await appBuilder.BuildAsync();
// start the app by publishing a message to the runtime (the message and topic types below are illustrative)
await app.PublishMessageAsync(new NewMessageReceived { Message = "Hello World!" }, new TopicId("HelloTopic"));
await app.WaitForShutdownAsync();
```
The .NET SDK includes both an InMemory Single Process Runtime and a Remote, Distributed Runtime meant for running your agents in the cloud. The Distributed Runtime supports running agents in Python and in .NET, allowing those agents to talk to one another. The distributed runtime uses Microsoft Orleans to provide resilience, persistence, and integration with messaging services such as Azure Event Hubs. The cross-language functionality requires that your agent's messages are serializable as CloudEvents. The messages are exchanged as CloudEvents over gRPC, and the runtime takes care of ensuring that the messages are delivered to the correct agents.
To use the Distributed Runtime, you will need to add the following package to your project:
```bash
dotnet add package Microsoft.AutoGen.Core.Grpc
```
This is the package that runs in the application with your agent(s) and connects to the distributed system.
### Running Multiple Agents and the Runtime in separate processes with .NET Aspire
The [Hello.AppHost project](https://github.com/microsoft/autogen/blob/50d7587a4649504af3bb79ab928b2a3882a1a394/dotnet/samples/Hello/Hello.AppHost/Program.cs#L4) illustrates how to orchestrate a distributed system with multiple agents and the runtime in separate processes using .NET Aspire. It also points to a [python agent that illustrates how to run agents in different languages in the same distributed system](https://github.com/microsoft/autogen/blob/50d7587a4649504af3bb79ab928b2a3882a1a394/python/samples/core_xlang_hello_python_agent/README.md#L1).
```csharp
// Copyright (c) Microsoft Corporation. All rights reserved.
// Program.cs
using Microsoft.Extensions.Hosting;
var builder = DistributedApplication.CreateBuilder(args);
var backend = builder.AddProject<Projects.Microsoft_AutoGen_AgentHost>("backend").WithExternalHttpEndpoints();
var client = builder.AddProject<Projects.HelloAgent>("HelloAgentsDotNET")
    .WithReference(backend)  // wire the agent app to the runtime gateway (remaining configuration abbreviated)
    .WaitFor(backend);

builder.Build().Run();
```
You can find more examples of how to use Aspire and XLang agents in the [Microsoft.AutoGen.Integration.Tests.AppHost](https://github.com/microsoft/autogen/blob/acd7e864300e24a3ee67a89a916436e8894bb143/dotnet/test/Microsoft.AutoGen.Integration.Tests.AppHosts/) directory.
### Configuring Logging
The SDK uses the Microsoft.Extensions.Logging framework for logging. Here is an example appsettings.json file with some useful defaults:
```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft.Hosting.Lifetime": "Information",
      "Microsoft.AspNetCore": "Information",
      "Microsoft": "Information",
      "Microsoft.Orleans": "Warning",
      "Orleans.Runtime": "Error",
      "Grpc": "Information"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "EndpointDefaults": {
      "Protocols": "Http2"
    }
  }
}
```
### Defining Message Types in Protocol Buffers
A convenient way to define common event or message types that can be used by both Python and .NET agents is to define them in Protocol Buffers. This is covered in [Using Protocol Buffers to Define Message Types](./protobuf-message-types.md).
To run a system with agents in different processes, with cross-language communication between Python and .NET agents, there are additional packages:
- *Microsoft.AutoGen.Core.Grpc* - the .NET client runtime for agents in a distributed system. It has the same API as *Microsoft.AutoGen.Core*.
- *Microsoft.AutoGen.RuntimeGateway.Grpc* - the .NET server side of the distributed system that allows you to run multiple gateways to manage fleets of agents and enables cross-language interoperability.
- *Microsoft.AutoGen.AgentHost* - a .NET Aspire project that hosts the gRPC service
public static SemanticKernelAgent ToSemanticKernelAgent(this Kernel kernel, string name, string systemMessage = "You are a helpful AI assistant", PromptExecutionSettings? settings = null)
public static SemanticKernelAgent ToSemanticKernelAgent(this Kernel kernel, string name, string systemMessage = "You are a helpful AI assistant", string? modelServiceId = null, PromptExecutionSettings? settings = null)
/// The name of the agent. This is used by the team to uniquely identify the agent. It should be unique within the team.
/// </summary>
public AgentName Name { get; }
/// <summary>
/// The description of the agent. This is used by the team to make decisions about which agents to use. The description
/// should describe the agent's capabilities and how to interact with it.
/// </summary>
public string Description { get; }
/// <summary>
/// The types of messages that the agent produces.
/// </summary>
public IEnumerable<Type> ProducedMessageTypes { get; } // TODO: Is there a way to make this part of the type somehow? Annotations, or IProduce<>? Do we ever actually access this?
throw new ArgumentException("The number of elements in the source is greater than the available space from arrayIndex to the end of the destination array.");
throw new Exception("Could not create chat manager", tie.InnerException);
}
catch (Exception ex)
{
throw new Exception("Could not create chat manager", ex);
}
throw new Exception("Could not create chat manager; make sure that it contains a ctor() or ctor(GroupChatOptions), or override the CreateChatManager method");
# See https://aka.ms/customizecontainer to learn how to customize your debug container and how Visual Studio uses this Dockerfile to build your images for faster debugging.
# This stage is used when running from VS in fast mode (Default for Debug configuration)