update dev8 (#4417)

This commit is contained in:
Eric Zhu 2024-11-27 14:39:31 -08:00 committed by GitHub
parent 7c8d25c448
commit f70869f236
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
17 changed files with 468 additions and 457 deletions


@ -40,6 +40,7 @@ jobs:
{ ref: "v0.4.0.dev5", dest-dir: "0.4.0.dev5" },
{ ref: "v0.4.0.dev6", dest-dir: "0.4.0.dev6" },
{ ref: "v0.4.0.dev7", dest-dir: "0.4.0.dev7" },
{ ref: "v0.4.0.dev8", dest-dir: "0.4.0.dev8" },
]
steps:
- name: Checkout


@ -49,8 +49,8 @@ We will update version numbers according to the following rules:
1. Create a PR that updates the version numbers across the codebase ([example](https://github.com/microsoft/autogen/pull/4359))
2. The docs CI will fail for the PR, but this is expected and will be resolved in the next step
2. After merging the PR, create and push a tag that corresponds to the new version. For example, for `0.4.0.dev7`:
- `git tag 0.4.0.dev7 && git push origin 0.4.0.dev7`
2. After merging the PR, create and push a tag that corresponds to the new version. For example, for `0.4.0.dev8`:
- `git tag 0.4.0.dev8 && git push origin 0.4.0.dev8`
3. Restart the docs CI by finding the failed [job corresponding to the `push` event](https://github.com/microsoft/autogen/actions/workflows/docs.yml) and restarting all jobs
4. Run [this](https://github.com/microsoft/autogen/actions/workflows/single-python-package.yml) workflow for each of the packages that need to be released and get an approval for the release for it to run
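The tag-and-push step above can be sketched as a small shell helper. This is a sketch, not part of the release tooling: the version value is an example, and the actual `git` commands are left commented out so the script is safe to run as-is.

```shell
# Sketch of the tag-and-push release step.
# VERSION is an example; substitute the version set by the release PR.
VERSION="0.4.0.dev8"
TAG="$VERSION"
echo "would run: git tag $TAG && git push origin $TAG"
# git tag "$TAG" && git push origin "$TAG"   # uncomment to actually tag
```

After pushing the tag, restart the failed docs CI job as described in step 3.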
@ -59,27 +59,27 @@ We will update version numbers according to the following rules:
To help ensure the health of the project and community, the AutoGen committers have a weekly triage process to ensure that all issues and pull requests are reviewed and addressed in a timely manner. The following documents the responsibilities while on triage duty:
- Issues
- Review all new issues - these will be tagged with [`needs-triage`](https://github.com/microsoft/autogen/issues?q=is%3Aissue%20state%3Aopen%20label%3Aneeds-triage).
- Apply appropriate labels:
- One of the `proj-*` labels, based on the project the issue relates to
- `documentation`: related to documentation
- `x-lang`: related to cross language functionality
- `dotnet`: related to .NET
- Add the issue to a relevant milestone if necessary
- If you can resolve the issue or reply to the OP, please do.
- If you cannot resolve the issue, assign it to the appropriate person.
- If awaiting a reply, add the tag `awaiting-op-response` (this will be auto-removed when the OP replies).
- Bonus: there is a backlog of old issues that need to be reviewed - if you have time, review these as well and close or refresh as many as you can.
- PRs
- The UX on GH flags all recently updated PRs. Draft PRs can be ignored; otherwise, review all recently updated PRs.
- If a PR is ready for review and you can provide one, please go ahead. If you can't, please assign someone. You can quickly spin up a codespace with the PR to test it out.
- If a PR needs a reply from the OP, please tag it `awaiting-op-response`.
- If a PR is approved and passes CI, it's ready to merge; please do so.
- If it looks like there is a possibly transient CI failure, re-run failed jobs.
- Discussions
- Look for recently updated discussions and reply as needed or find someone on the team to reply.
- Security
- Look through any security alerts and file issues or dismiss as needed.
## Becoming a Reviewer


@ -4,14 +4,14 @@
<img src="https://microsoft.github.io/autogen/0.2/img/ag.svg" alt="AutoGen Logo" width="100">
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Follow%20%40pyautogen)](https://twitter.com/pyautogen) [![GitHub Discussions](https://img.shields.io/badge/Discussions-Q%26A-green?logo=github)](https://github.com/microsoft/autogen/discussions) [![0.2 Docs](https://img.shields.io/badge/Docs-0.2-blue)](https://microsoft.github.io/autogen/0.2/) [![0.4 Docs](https://img.shields.io/badge/Docs-0.4-blue)](https://microsoft.github.io/autogen/dev/)
[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev7/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev7/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev7/)
[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev8/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev8/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev8/)
</div>
# AutoGen
> [!IMPORTANT]
>
> - (11/14/24) ⚠️ In response to a number of asks to clarify and distinguish between official AutoGen and its forks that created confusion, we issued a [clarification statement](https://github.com/microsoft/autogen/discussions/4217).
> - (10/13/24) Interested in the standard AutoGen as a prior user? Find it at the actively-maintained *AutoGen* [0.2 branch](https://github.com/microsoft/autogen/tree/0.2) and `autogen-agentchat~=0.2` PyPi package.
> - (10/02/24) [AutoGen 0.4](https://microsoft.github.io/autogen/dev) is a from-the-ground-up rewrite of AutoGen. Learn more about the history, goals, and future at [this blog post](https://microsoft.github.io/autogen/blog). We're excited to work with the community to gather feedback, refine, and improve the project before we officially release 0.4. This is a big change, so AutoGen 0.2 is still available, maintained, and developed in the [0.2 branch](https://github.com/microsoft/autogen/tree/0.2).
@ -104,7 +104,7 @@ We look forward to your contributions!
First install the packages:
```bash
pip install 'autogen-agentchat==0.4.0.dev7' 'autogen-ext[openai]==0.4.0.dev7'
pip install 'autogen-agentchat==0.4.0.dev8' 'autogen-ext[openai]==0.4.0.dev8'
```
The following code uses OpenAI's GPT-4o model and you need to provide your


@ -46,7 +46,12 @@
{
"name": "0.4.0.dev7",
"version": "0.4.0.dev7",
"url": "/autogen/0.4.0.dev7/",
"url": "/autogen/0.4.0.dev7/"
},
{
"name": "0.4.0.dev8",
"version": "0.4.0.dev8",
"url": "/autogen/0.4.0.dev8/",
"preferred": true
}
]


@ -1,8 +1,7 @@
# AutoGen Python packages
[![0.4 Docs](https://img.shields.io/badge/Docs-0.4-blue)](https://microsoft.github.io/autogen/dev/)
[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev7/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev7/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev7/)
[![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/0.4.0.dev8/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/0.4.0.dev8/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/0.4.0.dev8/)
This directory works as a single `uv` workspace containing all project packages. See [`packages`](./packages/) to discover all project packages.
@ -17,10 +16,13 @@ poe check
```
### Setup
`uv` is a package manager that assists in creating the necessary environment and installing packages to run AutoGen.
- [Install `uv`](https://docs.astral.sh/uv/getting-started/installation/).
### Virtual Environment
During development, you may need to test changes made to any of the packages.\
To do so, create a virtual environment where the AutoGen packages are installed based on the current state of the directory.\
Run the following commands at the root level of the Python directory:
@ -29,11 +31,14 @@ Run the following commands at the root level of the Python directory:
uv sync --all-extras
source .venv/bin/activate
```
- `uv sync --all-extras` will create a `.venv` directory at the current level and install packages from the current directory along with any other dependencies. The `all-extras` flag adds optional dependencies.
- `source .venv/bin/activate` activates the virtual environment.
### Common Tasks
To create a pull request (PR), ensure the following checks are met. You can run each check individually:
- Format: `poe format`
- Lint: `poe lint`
- Test: `poe test`
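Assuming an activated `.venv` with `poe` available (see the setup above), the three checks can be chained in a small script. This is a sketch, with the actual `poe` calls commented out so it runs without a configured environment:

```shell
# Run each pre-PR check in order; `set -e` stops at the first failure.
set -e
for task in format lint test; do
    echo "checking: poe $task"
    # poe "$task"   # uncomment inside an activated virtual environment
done
```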


@ -4,7 +4,7 @@ build-backend = "hatchling.build"
[project]
name = "autogen-agentchat"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
license = {file = "LICENSE-CODE"}
description = "AutoGen agents and teams library"
readme = "README.md"
@ -15,7 +15,7 @@ classifiers = [
"Operating System :: OS Independent",
]
dependencies = [
"autogen-core==0.4.0.dev7",
"autogen-core==0.4.0.dev8",
]
[tool.uv]


@ -61,7 +61,7 @@ AgentChat </div>
High-level API that includes preset agents and teams for building multi-agent systems.
```sh
pip install 'autogen-agentchat==0.4.0.dev7'
pip install 'autogen-agentchat==0.4.0.dev8'
```
💡 *Start here if you are looking for an API similar to AutoGen 0.2*
@ -82,7 +82,7 @@ Get Started
Provides building blocks for creating asynchronous, event-driven multi-agent systems.
```sh
pip install 'autogen-core==0.4.0.dev7'
pip install 'autogen-core==0.4.0.dev8'
```
+++


@ -31,10 +31,10 @@ myst:
Library that is at a similar level of abstraction as AutoGen 0.2, including default agents and group chat.
```sh
pip install 'autogen-agentchat==0.4.0.dev7'
pip install 'autogen-agentchat==0.4.0.dev8'
```
[{fas}`circle-info;pst-color-primary` User Guide](/user-guide/agentchat-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_agentchat/autogen_agentchat.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-agentchat/0.4.0.dev7/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-agentchat)
[{fas}`circle-info;pst-color-primary` User Guide](/user-guide/agentchat-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_agentchat/autogen_agentchat.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-agentchat/0.4.0.dev8/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-agentchat)
:::
(pkg-info-autogen-core)=
@ -46,10 +46,10 @@ pip install 'autogen-agentchat==0.4.0.dev7'
Implements the core functionality of the AutoGen framework, providing basic building blocks for creating multi-agent systems.
```sh
pip install 'autogen-core==0.4.0.dev7'
pip install 'autogen-core==0.4.0.dev8'
```
[{fas}`circle-info;pst-color-primary` User Guide](/user-guide/core-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_core/autogen_core.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-core/0.4.0.dev7/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core)
[{fas}`circle-info;pst-color-primary` User Guide](/user-guide/core-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_core/autogen_core.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-core/0.4.0.dev8/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core)
:::
(pkg-info-autogen-ext)=
@ -61,7 +61,7 @@ pip install 'autogen-core==0.4.0.dev7'
Implementations of core components that interface with external services or use extra dependencies, for example Docker-based code execution.
```sh
pip install 'autogen-ext==0.4.0.dev7'
pip install 'autogen-ext==0.4.0.dev8'
```
Extras:
@ -71,7 +71,7 @@ Extras:
- `docker` needed for {py:class}`~autogen_ext.code_executors.DockerCommandLineCodeExecutor`
- `openai` needed for {py:class}`~autogen_ext.models.OpenAIChatCompletionClient`
[{fas}`circle-info;pst-color-primary` User Guide](/user-guide/extensions-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_ext/autogen_ext.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-ext/0.4.0.dev7/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-ext)
[{fas}`circle-info;pst-color-primary` User Guide](/user-guide/extensions-user-guide/index.md) | [{fas}`file-code;pst-color-primary` API Reference](/reference/python/autogen_ext/autogen_ext.rst) | [{fab}`python;pst-color-primary` PyPI](https://pypi.org/project/autogen-ext/0.4.0.dev8/) | [{fab}`github;pst-color-primary` Source](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-ext)
:::
(pkg-info-autogen-magentic-one)=


@ -61,7 +61,7 @@ Install the `autogen-agentchat` package using pip:
```bash
pip install 'autogen-agentchat==0.4.0.dev7'
pip install 'autogen-agentchat==0.4.0.dev8'
```
```{note}
@ -74,7 +74,7 @@ To use the OpenAI and Azure OpenAI models, you need to install the following
extensions:
```bash
pip install 'autogen-ext[openai]==0.4.0.dev7'
pip install 'autogen-ext[openai]==0.4.0.dev8'
```
## Install Docker for Code Execution


@ -37,7 +37,7 @@
},
"outputs": [],
"source": [
"pip install 'autogen-agentchat==0.4.0.dev7' 'autogen-ext[openai]==0.4.0.dev7'"
"pip install 'autogen-agentchat==0.4.0.dev8' 'autogen-ext[openai]==0.4.0.dev8'"
]
},
{


@ -1,187 +1,187 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Models\n",
"\n",
"In many cases, agents need access to model services such as OpenAI, Azure OpenAI, and local models.\n",
"AgentChat utilizes model clients provided by the\n",
"[`autogen-ext`](../../core-user-guide/framework/model-clients.ipynb) package."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"To access OpenAI models, you need to install the `openai` extension to use the {py:class}`~autogen_ext.models.OpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai]==0.4.0.dev7'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will also need to obtain an [API key](https://platform.openai.com/account/api-keys) from OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"openai_model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY environment variable set.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test the model client, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
]
}
],
"source": [
"from autogen_core.components.models import UserMessage\n",
"\n",
"result = await openai_model_client.create([UserMessage(content=\"What is the capital of France?\", source=\"user\")])\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"You can use this client with models hosted on OpenAI-compatible endpoints; however, we have not tested this functionality.\n",
"See {py:class}`~autogen_ext.models.OpenAIChatCompletionClient` for more information.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure OpenAI\n",
"\n",
"Install the `azure` and `openai` extensions to use the {py:class}`~autogen_ext.models.AzureOpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai,azure]==0.4.0.dev7'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities.\n",
"For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.\n",
"\n",
"The following code snippet shows how to use AAD authentication.\n",
"The identity used must be assigned the [Cognitive Services OpenAI User](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user) role."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import AzureOpenAIChatCompletionClient\n",
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"# Create the token provider\n",
"token_provider = get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\")\n",
"\n",
"az_model_client = AzureOpenAIChatCompletionClient(\n",
" azure_deployment=\"{your-azure-deployment}\",\n",
" model=\"{model-name, such as gpt-4o}\",\n",
" api_version=\"2024-06-01\",\n",
" azure_endpoint=\"https://{your-custom-endpoint}.openai.azure.com/\",\n",
" azure_ad_token_provider=token_provider, # Optional if you choose key-based authentication.\n",
" # api_key=\"sk-...\", # For key-based authentication.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions) for how to use the Azure client directly or for more info."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Models\n",
"\n",
"We are working on it. Stay tuned!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Models\n",
"\n",
"In many cases, agents need access to model services such as OpenAI, Azure OpenAI, and local models.\n",
"AgentChat utilizes model clients provided by the\n",
"[`autogen-ext`](../../core-user-guide/framework/model-clients.ipynb) package."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## OpenAI\n",
"\n",
"To access OpenAI models, you need to install the `openai` extension to use the {py:class}`~autogen_ext.models.OpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai]==0.4.0.dev8'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You will also need to obtain an [API key](https://platform.openai.com/account/api-keys) from OpenAI."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import OpenAIChatCompletionClient\n",
"\n",
"openai_model_client = OpenAIChatCompletionClient(\n",
" model=\"gpt-4o-2024-08-06\",\n",
" # api_key=\"sk-...\", # Optional if you have an OPENAI_API_KEY environment variable set.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To test the model client, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CreateResult(finish_reason='stop', content='The capital of France is Paris.', usage=RequestUsage(prompt_tokens=15, completion_tokens=7), cached=False, logprobs=None)\n"
]
}
],
"source": [
"from autogen_core.components.models import UserMessage\n",
"\n",
"result = await openai_model_client.create([UserMessage(content=\"What is the capital of France?\", source=\"user\")])\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```{note}\n",
"You can use this client with models hosted on OpenAI-compatible endpoints; however, we have not tested this functionality.\n",
"See {py:class}`~autogen_ext.models.OpenAIChatCompletionClient` for more information.\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Azure OpenAI\n",
"\n",
"Install the `azure` and `openai` extensions to use the {py:class}`~autogen_ext.models.AzureOpenAIChatCompletionClient`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install 'autogen-ext[openai,azure]==0.4.0.dev8'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To use the client, you need to provide your deployment ID, Azure Cognitive Services endpoint, API version, and model capabilities.\n",
"For authentication, you can either provide an API key or an Azure Active Directory (AAD) token credential.\n",
"\n",
"The following code snippet shows how to use AAD authentication.\n",
"The identity used must be assigned the [Cognitive Services OpenAI User](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/role-based-access-control#cognitive-services-openai-user) role."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from autogen_ext.models import AzureOpenAIChatCompletionClient\n",
"from azure.identity import DefaultAzureCredential, get_bearer_token_provider\n",
"\n",
"# Create the token provider\n",
"token_provider = get_bearer_token_provider(DefaultAzureCredential(), \"https://cognitiveservices.azure.com/.default\")\n",
"\n",
"az_model_client = AzureOpenAIChatCompletionClient(\n",
" azure_deployment=\"{your-azure-deployment}\",\n",
" model=\"{model-name, such as gpt-4o}\",\n",
" api_version=\"2024-06-01\",\n",
" azure_endpoint=\"https://{your-custom-endpoint}.openai.azure.com/\",\n",
" azure_ad_token_provider=token_provider, # Optional if you choose key-based authentication.\n",
" # api_key=\"sk-...\", # For key-based authentication.\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"See [here](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/managed-identity#chat-completions) for how to use the Azure client directly or for more info."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Models\n",
"\n",
"We are working on it. Stay tuned!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@ -1,223 +1,223 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Distributed Agent Runtime\n",
"\n",
"```{attention}\n",
"The distributed agent runtime is an experimental feature. Expect breaking changes\n",
"to the API.\n",
"```\n",
"\n",
"A distributed agent runtime facilitates communication and agent lifecycle management\n",
"across process boundaries.\n",
"It consists of a host service and at least one worker runtime.\n",
"\n",
"The host service maintains connections to all active worker runtimes,\n",
"facilitates message delivery, and keeps sessions for all direct messages (i.e., RPCs).\n",
"A worker runtime processes application code (agents) and connects to the host service.\n",
"It also advertises the agents it supports to the host service,\n",
"so the host service can deliver messages to the correct worker.\n",
"\n",
"````{note}\n",
"The distributed agent runtime requires extra dependencies; install them using:\n",
"```bash\n",
"pip install autogen-core[grpc]==0.4.0.dev7\n",
"```\n",
"````\n",
"\n",
"We can start a host service using {py:class}`~autogen_core.application.WorkerAgentRuntimeHost`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from autogen_core.application import WorkerAgentRuntimeHost\n",
"\n",
"host = WorkerAgentRuntimeHost(address=\"localhost:50051\")\n",
"host.start() # Start a host service in the background."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above code starts the host service in the background and accepts\n",
"worker connections on port 50051.\n",
"\n",
"Before running worker runtimes, let's define our agent.\n",
"The agent will publish a new message on every message it receives.\n",
"It also keeps track of how many messages it has published, and \n",
"stops publishing new messages once it has published 5 messages."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"\n",
"from autogen_core.base import MessageContext\n",
"from autogen_core.components import DefaultTopicId, RoutedAgent, default_subscription, message_handler\n",
"\n",
"\n",
"@dataclass\n",
"class MyMessage:\n",
" content: str\n",
"\n",
"\n",
"@default_subscription\n",
"class MyAgent(RoutedAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(\"My agent\")\n",
" self._name = name\n",
" self._counter = 0\n",
"\n",
" @message_handler\n",
" async def my_message_handler(self, message: MyMessage, ctx: MessageContext) -> None:\n",
" self._counter += 1\n",
" if self._counter > 5:\n",
" return\n",
" content = f\"{self._name}: Hello x {self._counter}\"\n",
" print(content)\n",
" await self.publish_message(MyMessage(content=content), DefaultTopicId())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can set up the worker agent runtimes.\n",
"We use {py:class}`~autogen_core.application.WorkerAgentRuntime`.\n",
"We set up two worker runtimes. Each runtime hosts one agent.\n",
"All agents publish and subscribe to the default topic, so they can see all\n",
"messages being published.\n",
"\n",
"To run the agents, we publish a message from a worker."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"worker1: Hello x 1\n",
"worker2: Hello x 1\n",
"worker2: Hello x 2\n",
"worker1: Hello x 2\n",
"worker1: Hello x 3\n",
"worker2: Hello x 3\n",
"worker2: Hello x 4\n",
"worker1: Hello x 4\n",
"worker1: Hello x 5\n",
"worker2: Hello x 5\n"
]
}
],
"source": [
"import asyncio\n",
"\n",
"from autogen_core.application import WorkerAgentRuntime\n",
"\n",
"worker1 = WorkerAgentRuntime(host_address=\"localhost:50051\")\n",
"worker1.start()\n",
"await MyAgent.register(worker1, \"worker1\", lambda: MyAgent(\"worker1\"))\n",
"\n",
"worker2 = WorkerAgentRuntime(host_address=\"localhost:50051\")\n",
"worker2.start()\n",
"await MyAgent.register(worker2, \"worker2\", lambda: MyAgent(\"worker2\"))\n",
"\n",
"await worker2.publish_message(MyMessage(content=\"Hello!\"), DefaultTopicId())\n",
"\n",
"# Let the agents run for a while.\n",
"await asyncio.sleep(5)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see each agent published exactly 5 messages.\n",
"\n",
"To stop the worker runtimes, we can call {py:meth}`~autogen_core.application.WorkerAgentRuntime.stop`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"await worker1.stop()\n",
"await worker2.stop()\n",
"\n",
"# To keep the worker running until a termination signal is received (e.g., SIGTERM).\n",
"# await worker1.stop_when_signal()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can call {py:meth}`~autogen_core.application.WorkerAgentRuntimeHost.stop`\n",
"to stop the host service."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"await host.stop()\n",
"\n",
"# To keep the host service running until a termination signal (e.g., SIGTERM)\n",
"# await host.stop_when_signal()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Next Steps\n",
"To see complete examples of using the distributed runtime, please take a look at the following samples:\n",
"\n",
"- [Distributed Workers](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/samples/worker) \n",
"- [Distributed Semantic Router](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/samples/semantic_router) \n",
"- [Distributed Group Chat](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/samples/distributed-group-chat) \n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Distributed Agent Runtime\n",
"\n",
"```{attention}\n",
"The distributed agent runtime is an experimental feature. Expect breaking changes\n",
"to the API.\n",
"```\n",
"\n",
"A distributed agent runtime facilitates communication and agent lifecycle management\n",
"across process boundaries.\n",
"It consists of a host service and at least one worker runtime.\n",
"\n",
"The host service maintains connections to all active worker runtimes,\n",
"facilitates message delivery, and keeps sessions for all direct messages (i.e., RPCs).\n",
"A worker runtime processes application code (agents) and connects to the host service.\n",
"It also advertises the agents it supports to the host service,\n",
"so the host service can deliver messages to the correct worker.\n",
"\n",
"````{note}\n",
"The distributed agent runtime requires extra dependencies; install them using:\n",
"```bash\n",
"pip install autogen-core[grpc]==0.4.0.dev8\n",
"```\n",
"````\n",
"\n",
"We can start a host service using {py:class}`~autogen_core.application.WorkerAgentRuntimeHost`."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from autogen_core.application import WorkerAgentRuntimeHost\n",
"\n",
"host = WorkerAgentRuntimeHost(address=\"localhost:50051\")\n",
"host.start() # Start a host service in the background."
]
},
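{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check that a service is accepting TCP connections on a port, you can probe it with the standard library alone. The `port_is_open` helper below is a hypothetical utility, not part of AutoGen, and the listener stands in for the host service:\n",
"\n",
"```python\n",
"import socket\n",
"\n",
"def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:\n",
"    # Try to open a TCP connection; success means something is listening.\n",
"    try:\n",
"        with socket.create_connection((host, port), timeout=timeout):\n",
"            return True\n",
"    except OSError:\n",
"        return False\n",
"\n",
"# Simulate: bind a local listener, then probe it.\n",
"server = socket.socket()\n",
"server.bind((\"localhost\", 0))  # let the OS pick a free port\n",
"server.listen(1)\n",
"port = server.getsockname()[1]\n",
"print(port_is_open(\"localhost\", port))  # True\n",
"server.close()\n",
"```"
]
},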
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above code starts the host service in the background and accepts\n",
"worker connections on port 50051.\n",
"\n",
"Before running worker runtimes, let's define our agent.\n",
"The agent will publish a new message on every message it receives.\n",
"It also keeps track of how many messages it has published, and \n",
"stops publishing new messages once it has published 5 messages."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"\n",
"from autogen_core.base import MessageContext\n",
"from autogen_core.components import DefaultTopicId, RoutedAgent, default_subscription, message_handler\n",
"\n",
"\n",
"@dataclass\n",
"class MyMessage:\n",
" content: str\n",
"\n",
"\n",
"@default_subscription\n",
"class MyAgent(RoutedAgent):\n",
" def __init__(self, name: str) -> None:\n",
" super().__init__(\"My agent\")\n",
" self._name = name\n",
" self._counter = 0\n",
"\n",
" @message_handler\n",
" async def my_message_handler(self, message: MyMessage, ctx: MessageContext) -> None:\n",
" self._counter += 1\n",
" if self._counter > 5:\n",
" return\n",
" content = f\"{self._name}: Hello x {self._counter}\"\n",
" print(content)\n",
" await self.publish_message(MyMessage(content=content), DefaultTopicId())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can set up the worker agent runtimes.\n",
"We use {py:class}`~autogen_core.application.WorkerAgentRuntime`.\n",
"We set up two worker runtimes. Each runtime hosts one agent.\n",
"All agents publish and subscribe to the default topic, so they can see all\n",
"messages being published.\n",
"\n",
"To run the agents, we publish a message from a worker."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"worker1: Hello x 1\n",
"worker2: Hello x 1\n",
"worker2: Hello x 2\n",
"worker1: Hello x 2\n",
"worker1: Hello x 3\n",
"worker2: Hello x 3\n",
"worker2: Hello x 4\n",
"worker1: Hello x 4\n",
"worker1: Hello x 5\n",
"worker2: Hello x 5\n"
]
}
],
"source": [
"import asyncio\n",
"\n",
"from autogen_core.application import WorkerAgentRuntime\n",
"\n",
"worker1 = WorkerAgentRuntime(host_address=\"localhost:50051\")\n",
"worker1.start()\n",
"await MyAgent.register(worker1, \"worker1\", lambda: MyAgent(\"worker1\"))\n",
"\n",
"worker2 = WorkerAgentRuntime(host_address=\"localhost:50051\")\n",
"worker2.start()\n",
"await MyAgent.register(worker2, \"worker2\", lambda: MyAgent(\"worker2\"))\n",
"\n",
"await worker2.publish_message(MyMessage(content=\"Hello!\"), DefaultTopicId())\n",
"\n",
"# Let the agents run for a while.\n",
"await asyncio.sleep(5)"
]
},
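{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dynamics above can be reproduced with a small dependency-free simulation: every published message is delivered to each subscriber on the topic, and each subscriber republishes until its own counter passes 5. This is only a sketch of the message flow, not the AutoGen runtime:\n",
"\n",
"```python\n",
"from collections import deque\n",
"\n",
"def simulate(names: list[str]) -> dict[str, int]:\n",
"    counters = {n: 0 for n in names}   # messages received per agent\n",
"    published = {n: 0 for n in names}  # messages published per agent\n",
"    queue = deque([\"Hello!\"])          # the initial external message\n",
"    while queue:\n",
"        queue.popleft()\n",
"        # Every subscriber on the topic receives each message.\n",
"        for name in names:\n",
"            counters[name] += 1\n",
"            if counters[name] > 5:\n",
"                continue  # same guard as my_message_handler above\n",
"            published[name] += 1\n",
"            queue.append(f\"{name}: Hello x {counters[name]}\")\n",
"    return published\n",
"\n",
"print(simulate([\"worker1\", \"worker2\"]))  # {'worker1': 5, 'worker2': 5}\n",
"```\n",
"\n",
"Because the republish guard fires on received-message counts 1 through 5, each subscriber publishes exactly five messages before going quiet, which matches the output above."
]
},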
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can see each agent published exactly 5 messages.\n",
"\n",
"To stop the worker runtimes, we can call {py:meth}`~autogen_core.application.WorkerAgentRuntime.stop`."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"await worker1.stop()\n",
"await worker2.stop()\n",
"\n",
"# To keep the worker running until a termination signal is received (e.g., SIGTERM).\n",
"# await worker1.stop_when_signal()"
]
},
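{
"cell_type": "markdown",
"metadata": {},
"source": [
"A \"run until a termination signal\" helper can be sketched with plain `asyncio` and the standard library. The names below (`run_until_signal`, `main`) are illustrative, not the real API, and the sketch requires a Unix platform:\n",
"\n",
"```python\n",
"import asyncio\n",
"import os\n",
"import signal\n",
"\n",
"async def run_until_signal() -> str:\n",
"    loop = asyncio.get_running_loop()\n",
"    stopped = asyncio.Event()\n",
"    received: list[int] = []\n",
"\n",
"    def _handler(signum: int) -> None:\n",
"        received.append(signum)\n",
"        stopped.set()\n",
"\n",
"    for sig in (signal.SIGTERM, signal.SIGINT):\n",
"        loop.add_signal_handler(sig, _handler, sig)\n",
"\n",
"    await stopped.wait()  # block here until a signal arrives\n",
"    return signal.Signals(received[0]).name\n",
"\n",
"async def main() -> str:\n",
"    task = asyncio.create_task(run_until_signal())\n",
"    await asyncio.sleep(0.1)              # let the handlers install\n",
"    os.kill(os.getpid(), signal.SIGTERM)  # simulate an external SIGTERM\n",
"    return await task\n",
"\n",
"print(asyncio.run(main()))  # SIGTERM\n",
"```"
]
},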
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can call {py:meth}`~autogen_core.application.WorkerAgentRuntimeHost.stop`\n",
"to stop the host service."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"await host.stop()\n",
"\n",
"# To keep the host service running until a termination signal (e.g., SIGTERM)\n",
"# await host.stop_when_signal()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next Steps\n",
"To see complete examples of using the distributed runtime, please take a look at the following samples:\n",
"\n",
"- [Distributed Workers](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/samples/worker) \n",
"- [Distributed Semantic Router](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/samples/semantic_router) \n",
"- [Distributed Group Chat](https://github.com/microsoft/autogen/tree/main/python/packages/autogen-core/samples/distributed-group-chat) \n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "agnext",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
[project]
name = "autogen-core"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
license = {file = "LICENSE-CODE"}
description = "Foundational interfaces and agent runtime implementation for AutoGen"
readme = "README.md"

View File

@@ -4,7 +4,7 @@ build-backend = "hatchling.build"
[project]
name = "autogen-ext"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
license = {file = "LICENSE-CODE"}
description = "AutoGen extensions library"
readme = "README.md"
@@ -15,7 +15,7 @@ classifiers = [
"Operating System :: OS Independent",
]
dependencies = [
"autogen-core==0.4.0.dev7",
"autogen-core==0.4.0.dev8",
]

View File

@@ -908,7 +908,7 @@ class OpenAIChatCompletionClient(BaseOpenAIChatCompletionClient):
.. code-block:: bash
pip install 'autogen-ext[openai]==0.4.0.dev7'
pip install 'autogen-ext[openai]==0.4.0.dev8'
The following code snippet shows how to use the client with an OpenAI model:
@@ -988,7 +988,7 @@ class AzureOpenAIChatCompletionClient(BaseOpenAIChatCompletionClient):
.. code-block:: bash
pip install 'autogen-ext[openai,azure]==0.4.0.dev7'
pip install 'autogen-ext[openai,azure]==0.4.0.dev8'
To use the client, you need to provide your deployment id, Azure Cognitive Services endpoint,
api version, and model capabilities.

View File

@@ -33,9 +33,9 @@ dependencies = [
"alembic",
"loguru",
"pyyaml",
"autogen-core==0.4.0.dev7",
"autogen-agentchat==0.4.0.dev7",
"autogen-ext==0.4.0.dev7"
"autogen-core==0.4.0.dev8",
"autogen-agentchat==0.4.0.dev8",
"autogen-ext==0.4.0.dev8"
]
optional-dependencies = {web = ["fastapi", "uvicorn"], database = ["psycopg"]}
@@ -87,4 +87,4 @@ ignore = ["B008"]
fmt = "ruff format"
format.ref = "fmt"
lint = "ruff check"
test = "pytest -n 0"
test = "pytest -n 0"

View File

@@ -331,7 +331,7 @@ wheels = [
[[package]]
name = "autogen-agentchat"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
source = { editable = "packages/autogen-agentchat" }
dependencies = [
{ name = "autogen-core" },
@@ -342,7 +342,7 @@ requires-dist = [{ name = "autogen-core", editable = "packages/autogen-core" }]
[[package]]
name = "autogen-core"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
source = { editable = "packages/autogen-core" }
dependencies = [
{ name = "aiohttp" },
@@ -465,7 +465,7 @@ dev = [
[[package]]
name = "autogen-ext"
version = "0.4.0.dev7"
version = "0.4.0.dev8"
source = { editable = "packages/autogen-ext" }
dependencies = [
{ name = "autogen-core" },