Currently we have the SecretStr type for model clients to promote security
best practices.
- When we dump_component, keys are serialized as SecretStr.
- When we load_component, the SecretStr type is passed to the client in
the api_key field. This causes type problems because the clients expect
a string type.
This PR updates the from_config method for model clients to ensure we
get the value from SecretStr.
Closes #5944
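For illustration, a minimal sketch of the unwrapping pattern, assuming a pydantic `SecretStr` `api_key` field (the config and client classes here are hypothetical, not the actual model client code):
```python
# Minimal sketch of the fix, assuming a pydantic SecretStr api_key field;
# the config and client classes here are hypothetical.
from pydantic import BaseModel, SecretStr


class ClientConfig(BaseModel):
    model: str
    api_key: SecretStr


class ExampleChatCompletionClient:
    def __init__(self, model: str, api_key: str) -> None:
        self._model = model
        self._api_key = api_key  # the client expects a plain string

    @classmethod
    def _from_config(cls, config: ClientConfig) -> "ExampleChatCompletionClient":
        # Unwrap the SecretStr so the client receives a plain str, not a SecretStr.
        return cls(model=config.model, api_key=config.api_key.get_secret_value())
```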
Fixes issues like the following trace:
```
packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 39, in __init__
self.set_path(self._base_path)
File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 67, in set_path
self._open_path(path)
File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 210, in _open_path
io.StringIO(self._fetch_local_dir(path)), file_extension=".txt"
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hmozannar/webby/.venv/lib/python3.12/site-packages/autogen_ext/agents/file_surfer/_markdown_file_browser.py", line 248, in _fetch_local_dir
mtime = datetime.datetime.fromtimestamp(os.path.getmtime(full_path)).strftime("%Y-%m-%d %H:%M")
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen genericpath>", line 67, in getmtime
PermissionError: [Errno 13] Permission denied: '/home/hmozannar/webby/autogen-studio/frontend/readme.txt'
```
Use of `SKChatCompletionAdapter` reliably fails with "'MockValSer'
object cannot be converted to 'SchemaSerializer'"; this can be reproduced
with this example:
https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/components/model-clients.html#semantic-kernel-adapter
This appears to be related to
https://github.com/pydantic/pydantic/issues/7713; this commit uses the
workaround from
https://github.com/pydantic/pydantic/issues/7713#issuecomment-2604574418
## Why are these changes needed?
This unblocks use of the Semantic Kernel integration by addressing the
above-referenced error, enabling the integration to perform as expected.
## Related issue number
N/A, see https://github.com/pydantic/pydantic/issues/7713 for context,
though.
## Checks
- [X] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
  - None needed; this is an internal-only change.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
  - None added; this works on my machine, but I'm not clear on the root
cause of the issue and have no strong opinion on whether this is the
ideal way to fix it long term; simply leaning towards PR'ing a tentative
fix instead of raising an issue.
- [ ] I've made sure all auto checks have passed.
- I am not familiar with these, but assume they will be run during CI.
---------
Co-authored-by: Leonardo Pinheiro <leosantospinheiro@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
1. Add `on_pause` and `on_resume` API to `ChatAgent` to support pausing
behavior when running `on_message` concurrently.
2. Add `GroupChatPause` and `GroupChatResume` RPC events and handle them
in `ChatAgentContainer`.
3. Add `pause` and `resume` API to `BaseGroupChat` to make this
behavior accessible from the public API.
4. Improve the `SequentialRoutedAgent` class to allow customizing which
message types are handled sequentially, making it possible to handle some
messages concurrently (e.g., `GroupChatPause`).
5. Added unit tests.
See `test_group_chat_pause_resume.py` for how to use this feature.
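A minimal sketch of how the new public API can be used (the model, agents, and timing here are illustrative assumptions, not taken from the test):
```python
# Minimal sketch of pausing and resuming a running team; the model, agents,
# and sleep duration are illustrative assumptions.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent("assistant", model_client=model_client)
    team = RoundRobinGroupChat([agent], max_turns=4)

    # Run the team in the background so we can pause it while an agent is working.
    run_task = asyncio.create_task(team.run(task="Write a detailed report."))

    await asyncio.sleep(1)
    await team.pause()   # sends GroupChatPause to the participating agents
    # ... decide whether the long-running step is still worth continuing ...
    await team.resume()  # sends GroupChatResume to the participating agents

    print(await run_task)


asyncio.run(main())
```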
What is the difference between pause/resume and termination/restart?
- Pause and resume issue direct RPC calls to the participating agents
of a team while they are running, putting ongoing generation or actions
on hold. This is useful when an agent's turn takes a long time and
multiple steps to complete, and the user/application wants to
re-evaluate whether it is worth continuing the step or cancelling. It
also allows the user/application to pause individual agents and resume
them independently of the team API.
- Termination and restart require the whole team to come to a full
stop, and termination conditions are checked in between agents' turns.
So termination can only happen when no agent is working on its turn. It
is possible that a termination condition is reached well before the
team actually terminates, if an agent is taking a long time to generate
a response.
Resolves: #5881
These changes allow for two important use cases:
1. Add a span for tool calls, which enables tracing of all tool calls
in agent_chat.
2. Allow runtimes to pick up global `tracer_providers` if they are
available. This is very helpful because it allows nested teams/agents
to all use the same tracer; a brief sketch follows.
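This is the intended setup; the OpenTelemetry calls are standard, and the pick-up-the-global-provider behavior is what this PR enables:
```python
# Sketch: registering a global tracer provider that the runtime can pick up,
# so nested teams/agents share the same tracer. The OpenTelemetry setup is
# standard; the fallback-to-global behavior is what this PR enables.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from autogen_core import SingleThreadedAgentRuntime

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)  # global provider for the whole process

# No tracer_provider passed explicitly; the runtime can fall back to the global one.
runtime = SingleThreadedAgentRuntime()
```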
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
These changes are needed because there is currently no way to get
logging information about Streaming LLM requests/responses.
I decided to put the StreamStart event AFTER the first chunk so there
aren't false positives about connections/auth.
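To surface these events, enabling the standard event logger is enough (a sketch; the logger name is `autogen_core`'s `EVENT_LOGGER_NAME`, and the exact event payloads are whatever this PR emits):
```python
# Sketch: enabling the event logger so streaming LLM request/response events
# become visible in the logs.
import logging

from autogen_core import EVENT_LOGGER_NAME

logging.basicConfig(level=logging.INFO)
logging.getLogger(EVENT_LOGGER_NAME).setLevel(logging.INFO)
# With a streaming call (e.g., create_stream), the stream-start event is
# logged after the first chunk arrives, followed by the usual end/usage events.
```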
Closes #5730
---------
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
This pull request introduces the integration of the `llama-cpp` library
into the `autogen-ext` package, with significant changes to the project
dependencies and the implementation of a new chat completion client. The
most important changes include updating the project dependencies, adding
a new module for the `LlamaCppChatCompletionClient`, and implementing
the client with various functionalities.
### Project Dependencies:
*
[`python/packages/autogen-ext/pyproject.toml`](diffhunk://#diff-095119d4420ff09059557bd25681211d1772c2be0fbe0ff2d551a3726eff1b4bR34-R38):
Added `llama-cpp-python` as a new dependency under the `llama-cpp`
section.
### New Module:
*
[`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py`](diffhunk://#diff-42ae3ba17d51ca917634c4ea3c5969cf930297c288a783f8d9c126f2accef71dR1-R8):
Introduced the `LlamaCppChatCompletionClient` class and handled import
errors with a descriptive message for missing dependencies.
### Implementation of `LlamaCppChatCompletionClient`:
*
`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py`:
- Added the `LlamaCppChatCompletionClient` class with methods to
initialize the client, create chat completions, detect and execute
tools, and handle streaming responses.
- Included detailed logging for debugging purposes and implemented
methods to count tokens, track usage, and provide model information.
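A rough usage sketch of the new client; the constructor arguments and model path shown are assumptions, not the exact documented signature:
```python
# Rough sketch of using LlamaCppChatCompletionClient; the model path and
# keyword arguments shown are assumptions, not the exact documented signature.
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient


async def main() -> None:
    client = LlamaCppChatCompletionClient(model_path="/models/qwen2.5-0.5b-instruct-q5_k_m.gguf")
    result = await client.create([UserMessage(content="What is the capital of France?", source="user")])
    print(result.content)


asyncio.run(main())
```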
## Why are these changes needed?
## Related issue number
## Checks
- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
---------
Co-authored-by: aribornstein <x@x.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
This pull request introduces a new feature to the `FileSurfer` agent and
`MarkdownFileBrowser` by adding support for specifying a base path for
file browsing.
*
`python/packages/autogen-ext/src/autogen_ext/agents/file_surfer/_file_surfer.py`:
* Added `base_path` parameter to `FileSurfer` class and its
initialization method, with a default value of the current working
directory (`os.getcwd()`).
[[1]](diffhunk://#diff-084847b5e64c659c9aff0bd2d05bbcd0fff2c819a4b91bbe65fa0566054c0972R58)
[[2]](diffhunk://#diff-084847b5e64c659c9aff0bd2d05bbcd0fff2c819a4b91bbe65fa0566054c0972R80-R85)
* Updated `MarkdownFileBrowser` initialization within `FileSurfer` to
use the `base_path` parameter.
*
`python/packages/autogen-ext/src/autogen_ext/agents/file_surfer/_markdown_file_browser.py`:
* Added `base_path` parameter to `MarkdownFileBrowser` class and its
initialization method, with a default value of the current working
directory (`os.getcwd()`).
* Updated `MarkdownFileBrowser` to use the `base_path` for setting the
initial path and returning the current page path.
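A usage sketch of the new parameter (the model and directory values are illustrative):
```python
# Sketch of using the new base_path parameter; model and directory are illustrative.
import asyncio

from autogen_ext.agents.file_surfer import FileSurfer
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    surfer = FileSurfer(
        name="file_surfer",
        model_client=model_client,
        base_path="/data/reports",  # defaults to os.getcwd() when omitted
    )
    result = await surfer.run(task="List the files in this directory.")
    print(result.messages[-1].content)


asyncio.run(main())
```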
Modify `BaseGroupChat.save_state` to not require the team to be stopped
first. The `save_state` method is read-only. While it may retrieve an
inconsistent state when the team is running, we added a note about this
to its API doc.
Resolves: #5880
## Why are these changes needed?
This PR makes it clear which agent is speaking per message in the
Chainlit team sample. Previously, messages would be exchanged without
showing who is communicating.
## Related issue number
Closes #5609
## Checks
- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
## Why are these changes needed?
Add Anthropic docs:
- Add API docs
- Add sample code + usage in the AgentChat user guide
## Related issue number
Closes #5856
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Resolves #5745
Also made sure to log LLMCallEvent from all built-in model clients, and
added a unit test for coverage.
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Fixes #4821 by adding a `close()` method to all clients; a usage sketch
follows the list below.
Additionally:
* The m1 CLI is updated to close the client before exiting.
* The `PlaywrightController` is updated to suppress some other unrelated
chatty warnings (e.g., produced by markitdown when encountering
conversions that require external utilities)
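```python
# Sketch: closing a model client when it is no longer needed; the model name
# is illustrative.
import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    client = OpenAIChatCompletionClient(model="gpt-4o")
    try:
        ...  # use the client
    finally:
        await client.close()  # release the underlying HTTP resources


asyncio.run(main())
```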
## Why are these changes needed?
Fixes accessibility issue (34)
## Related issue number
#5634
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Factors out RunContext management into separate classes:
* Defines a LifecycleObject abstraction to manage initialization/deinitialization.
* Defines a RunContextStack to enable a stack of init/deinit layers.
* Defines a RunManager that owns enforcing "single execution at a time"
semantics while enabling a polymorphic base.
Implements Reset
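As an illustration of the init/deinit stacking idea only (simplified, not the actual classes added in this PR):
```python
# Illustrative sketch of the lifecycle-stack pattern described above; names and
# structure are simplified assumptions, not the actual LifecycleObject/RunContextStack.
from abc import ABC, abstractmethod


class LifecycleObject(ABC):
    @abstractmethod
    async def initialize(self) -> None: ...

    @abstractmethod
    async def deinitialize(self) -> None: ...


class LifecycleStack:
    """Initializes layers in order and deinitializes them in reverse order."""

    def __init__(self, layers: list[LifecycleObject]) -> None:
        self._layers = layers

    async def __aenter__(self) -> "LifecycleStack":
        for layer in self._layers:
            await layer.initialize()
        return self

    async def __aexit__(self, *exc_info: object) -> None:
        for layer in reversed(self._layers):
            await layer.deinitialize()
```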
Closes #5799
Unblocks #5800 (needs RunContext refactor too)
## Why are these changes needed?
(Partially?) fixes accessibility issue (19). A question is out to the
accessibility team on whether it's enough.
Migrating to 16.0 for accessibility fixes. Not moving to 16.1 yet
because of a weird change to the 'Show Source' link's appearance.
## Related issue number
#5630
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Fixes the issue raised in this discussion:
https://github.com/microsoft/autogen/discussions/4208#discussioncomment-12394408
## Why are these changes needed?
Fixes a bug in the AGS UI where the frontend crashes because the default
team config is null:
- Update the /teams endpoint to always return a default team if none is
found for the user.
- Update the UI to check for a team before rendering.
- Also update the run_id type to be an autoincrement int (similar to team
id) instead of a UUID. This helps sidestep the migration failure errors
related to the UUID type when using a SQLite backend.
## Related issue number
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
Resolves #4075
1. Introduce custom runtime parameter for all AgentChat teams
(RoundRobinGroupChat, SelectorGroupChat, etc.). This is done by making
sure each team's topics are isolated from other teams, and decoupling
state from agent identities. Also, I removed the closure agent from the
BaseGroupChat and use the group chat manager agent to relay messages to
the output message queue.
2. Added unit tests to test scenarios with custom runtimes using a pytest
fixture.
3. Refactored existing unit tests to use ReplayChatCompletionClient with
a few improvements to the client.
4. Fixed a one-line bug in AssistantAgent that caused the deserialized
agent to have handoffs.
How to use it?
```python
import asyncio

from autogen_core import SingleThreadedAgentRuntime
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.replay import ReplayChatCompletionClient


async def main() -> None:
    # Create a runtime
    runtime = SingleThreadedAgentRuntime()
    runtime.start()

    # Create a model client.
    model_client = ReplayChatCompletionClient(
        ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"],
    )

    # Create agents
    agent1 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
    agent2 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")

    # Create a termination condition
    termination_condition = TextMentionTermination("10", sources=["assistant1", "assistant2"])

    # Create a team
    team = RoundRobinGroupChat([agent1, agent2], runtime=runtime, termination_condition=termination_condition)

    # Run the team
    stream = team.run_stream(task="Count to 10.")
    async for message in stream:
        print(message)

    # Save the state.
    state = await team.save_state()

    # Load the state to an existing team.
    await team.load_state(state)

    # Run the team again
    model_client.reset()
    stream = team.run_stream(task="Count to 10.")
    async for message in stream:
        print(message)

    # Create a new team, with the same agent names.
    agent3 = AssistantAgent("assistant1", model_client=model_client, system_message="You are a helpful assistant.")
    agent4 = AssistantAgent("assistant2", model_client=model_client, system_message="You are a helpful assistant.")
    new_team = RoundRobinGroupChat([agent3, agent4], runtime=runtime, termination_condition=termination_condition)

    # Load the state to the new team.
    await new_team.load_state(state)

    # Run the new team
    model_client.reset()
    new_stream = new_team.run_stream(task="Count to 10.")
    async for message in new_stream:
        print(message)

    # Stop the runtime
    await runtime.stop()


asyncio.run(main())
```
TODOs for future PRs:
1. Documentation.
2. How to handle errors in a custom runtime when an agent raises an
exception?
---------
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
## Why are these changes needed?
Keyboard focus location was being lost after a copy event. The header
anchor was also not selectable while hidden.
Fixes (2), (4), (11), (35)
## Related issue number
#5630
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
_(EXPERIMENTAL, RESEARCH IN PROGRESS)_
In 2023 AutoGen introduced [Teachable
Agents](https://microsoft.github.io/autogen/0.2/blog/2023/10/26/TeachableAgent/)
that users could teach new facts, preferences and skills. But teachable
agents were limited in several ways: They could only be
`ConversableAgent` subclasses, they couldn't learn a new skill unless
the user stated (in a single turn) both the task and how to solve it,
and they couldn't learn on their own. **Task-Centric Memory** overcomes
these limitations, allowing users to teach arbitrary agents (or teams)
more flexibly and reliably, and enabling agents to learn from their own
trial-and-error experiences.
This PR is large and complex. All of the files are new, and most of the
added components depend on the others to run at all. But the review
process can be accelerated if approached in the following order.
1. Start with the [Task-Centric Memory
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/packages/autogen-ext/src/autogen_ext/task_centric_memory).
1. Install the memory extension locally, since it won't be on PyPI until
it's merged. In the `agentic_memory` branch, from the `python/packages`
directory:
- `pip install -e autogen-agentchat`
- `pip install -e autogen-ext[openai]`
- `pip install -e autogen-ext[task-centric-memory]`
2. Run the Quickstart sample code, then immediately open the
`./pagelogs/quick/0 Call Tree.html` file in a browser to view the work
in progress.
3. Click through the web page links to see the details.
2. Continue through the rest of the main README to get a high-level
overview of the architecture.
3. Read through the [code samples
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/samples/task_centric_memory),
running each of the 4 code samples while viewing their page logs.
4. Skim through the 4 code samples, along with their corresponding yaml
config files:
1. `chat_with_teachable_agent.py`
2. `eval_retrieval.py`
3. `eval_teachability.py`
4. `eval_learning_from_demonstration.py`
5. `eval_self_teaching.py`
6. Read `task_centric_memory_controller.py`, referring back to the
previously generated page logs as needed. This is the most important and
complex file in the PR.
7. Read the remaining core files.
1. `_task_centric_memory_bank.py`
2. `_string_similarity_map.py`
3. `_prompter.py`
8. Read the supporting files in the utils dir.
1. `teachability.py`
2. `apprentice.py`
3. `grader.py`
4. `page_logger.py`
5. `_functions.py`
## Why are these changes needed?
Adds sidebar and breadcrumb selection and focus indicators to high
contrast modes.
Fixes (46), (55), (56)
## Related issue number
#5633
This change has no effect on normal color modes, but adds selection and
focus indicators to high contrast modes. I'm not sure how to get rid of
the double bars on nested links, but that's a minor issue.
Before/after screenshots: (images omitted).
## Why are these changes needed?
The current webpage theme does not highlight code output boxes. Addresses
issues (5) and (29).
## Related issue number
#5630
## Checks
- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
Resolves #5786
Also updated the termination tutorial to include an example of a custom
termination condition.
Also added to the guide about FunctionTool and MCP tools.
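For reference, a sketch of what a custom termination condition can look like; this assumes the `TerminationCondition` interface in `autogen_agentchat.base` and is not the example added to the tutorial:
```python
# Sketch of a custom termination condition that stops once a token budget is
# exceeded; assumes the TerminationCondition interface in autogen_agentchat.base.
from typing import Sequence

from autogen_agentchat.base import TerminatedException, TerminationCondition
from autogen_agentchat.messages import StopMessage


class TokenBudgetTermination(TerminationCondition):
    def __init__(self, max_total_tokens: int) -> None:
        self._max_total_tokens = max_total_tokens
        self._used = 0
        self._terminated = False

    @property
    def terminated(self) -> bool:
        return self._terminated

    async def __call__(self, messages: Sequence) -> StopMessage | None:
        if self._terminated:
            raise TerminatedException("Termination condition has already been reached.")
        for message in messages:
            usage = getattr(message, "models_usage", None)
            if usage is not None:
                self._used += usage.prompt_tokens + usage.completion_tokens
        if self._used >= self._max_total_tokens:
            self._terminated = True
            return StopMessage(content="Token budget exceeded.", source="TokenBudgetTermination")
        return None

    async def reset(self) -> None:
        self._used = 0
        self._terminated = False
```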