Commit Graph

81 Commits

Author SHA1 Message Date
Sungjun.Kim 71a4eaedf9
Bump up json-schema-to-pydantic from v0.2.3 to v0.2.4 (#6300)
---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-14 21:50:49 -07:00
Eric Zhu 3500170be1
update version 0.5.2 (#6296)
Update version
2025-04-14 18:03:44 -07:00
Macon Pegram 196be34cb6
[Bugfix] Fix for Issue #6241 - ChromaDB removed IncludeEnum (#6260)

## Why are these changes needed?

`IncludeEnum` was removed in ChromaDB `1.0.0`. This caused issues when
using `ChromaDBVectorMemory`. This PR fixes those issues.
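
For context, a minimal sketch of a post-1.0 query (the collection and field names are illustrative, not taken from `ChromaDBVectorMemory`): in chromadb >= 1.0.0 the `include` argument accepts plain strings rather than `IncludeEnum` members.

```python
import chromadb

client = chromadb.PersistentClient(path="./chroma")
collection = client.get_or_create_collection("memories")

# chromadb >= 1.0.0: pass plain strings instead of IncludeEnum members.
results = collection.query(
    query_texts=["user preferences"],
    n_results=2,
    include=["documents", "metadatas", "distances"],
)
```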

## Related issue number

Closes #6241

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-04-10 09:41:41 -07:00
Eric Zhu f564781fef
Update json_schema_to_pydantic version and relax the requirement on array items. (#6209)
Resolves #6152
2025-04-07 18:44:18 +00:00
Eric Zhu 47602eac9e
Update version to 0.5.1 (#6195) 2025-04-03 15:10:41 -07:00
Eric Zhu 5508cc7a43
Update versions to 0.5.0 (#6184) 2025-04-02 18:15:50 -07:00
Jay Prakash Thakur 0d9b574d09
Add Azure AI Search tool implementation (#5844)
# Azure AI Search Tool Implementation

This PR adds a new tool for Azure AI Search integration to autogen-ext,
enabling agents to search and retrieve information from Azure AI Search
indexes.

## Why Are These Changes Needed?
AutoGen currently lacks native integration with Azure AI Search, which
is a powerful enterprise search service that supports semantic, vector,
and hybrid search capabilities. This integration enables agents to:
1. Retrieve relevant information from large document collections
2. Perform semantic search with AI-powered ranking
3. Execute vector similarity search using embeddings
4. Combine text and vector approaches for optimal results

This tool complements existing retrieval capabilities and provides a
seamless way to integrate with Azure's search infrastructure.

## Features
- **Multiple Search Types**: Support for text, semantic, vector, and
hybrid search
- **Flexible Configuration**: Customizable search parameters and fields
- **Robust Error Handling**: User-friendly error messages with
actionable guidance
- **Performance Optimizations**: Configurable caching and retry
mechanisms
- **Vector Search Support**: Built-in embedding generation with
extensibility

## Usage Example
```python
from autogen_ext.tools.azure import AzureAISearchTool
from azure.core.credentials import AzureKeyCredential
from autogen import AssistantAgent, UserProxyAgent
# Create the search tool
search_tool = AzureAISearchTool.load_component({
    "provider": "autogen_ext.tools.azure.AzureAISearchTool",
    "config": {
        "name": "DocumentSearch",
        "description": "Search for information in the knowledge base",
        "endpoint": "https://your-service.search.windows.net",
        "index_name": "your-index",
        "credential": {"api_key": "your-api-key"},
        "query_type": "semantic",
        "semantic_config_name": "default"
    }
})
# Create an agent with the search tool
assistant = AssistantAgent(
    "assistant",
    llm_config={"tools": [search_tool]}
)
# Create a user proxy agent
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "coding"}
)
# Start the conversation
user_proxy.initiate_chat(
    assistant,
    message="What information do we have about quantum computing in our knowledge base?"
)
```

## Testing
- Added unit tests for all search types (text, semantic, vector, hybrid)
- Added tests for error handling and cancellation
- All tests pass locally

## Documentation
- Added comprehensive docstrings with examples
- Included warnings about placeholder embedding implementation
- Added links to Azure AI Search documentation

## Related issue number

Closes #5419 

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-04-02 23:16:48 +00:00
Eric Zhu 68c1879675
Update mcp version to 1.6.0 to avoid bug in closing client. (#6162)
Upgrade the `mcp` package version to >=1.6.0 to avoid bugs that cause
hanging when using `mcp_server_fetch`.
2025-04-01 14:54:30 +00:00
Eric Zhu 29485ef85b
Fix MCP tool bug by dropping unset parameters from input (#6125)
Resolves #6096

Additionally: make sure MCP errors are formatted correctly, add unit
tests for MCP servers, and upgrade the mcp version.
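
The gist of the fix, as a standalone sketch (the helper name is illustrative, not the actual implementation): optional arguments that were never set are dropped from the input instead of being sent as `null`, which some MCP servers reject.

```python
from typing import Any, Dict


def drop_unset_arguments(arguments: Dict[str, Any]) -> Dict[str, Any]:
    # Keep only the arguments that were actually provided.
    return {key: value for key, value in arguments.items() if value is not None}


# {"query": "autogen", "max_results": None} -> {"query": "autogen"}
print(drop_unset_arguments({"query": "autogen", "max_results": None}))
```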
2025-03-27 13:22:06 -07:00
EeS bca4d7e82f
FIX: Anthropic multimodal (image) message handling for Anthropic >= 0.48 (#6054)
## Why are these changes needed?
This PR fixes a `TypeError: Cannot instantiate typing.Union` that occurs
when using the `MultimodalWebSurfer_agent` with Anthropic models. The
error was caused by the incorrect usage of `typing.Union` as a class
constructor instead of a type hint within the `_anthropic_client.py`
file. The code was attempting to instantiate `typing.Union`, which is
not allowed. The fix correctly uses `typing.Union` within type hints,
and uses the correct `Base64ImageSourceParam` type. It also updates the
`pyproject.toml` dependency.
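
To illustrate the class of error with stand-in types (not the exact code from `_anthropic_client.py`): `typing.Union` may only appear in annotations, so the fix is to construct the concrete param type and keep the union as a hint.

```python
from typing import Union


class URLImageSourceParam:
    def __init__(self, url: str) -> None:
        self.url = url


class Base64ImageSourceParam:
    def __init__(self, data: str, media_type: str) -> None:
        self.data = data
        self.media_type = media_type


# Wrong: Union is not a constructor; this raises
# "TypeError: Cannot instantiate typing.Union".
# source = Union[URLImageSourceParam, Base64ImageSourceParam](data="...", media_type="image/png")

# Right: construct the concrete type and use the union only as a type hint.
source: Union[URLImageSourceParam, Base64ImageSourceParam] = Base64ImageSourceParam(
    data="...", media_type="image/png"
)
```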

## Related issue number
Closes #6035 

## Checks

- [ ] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
2025-03-22 00:46:55 -07:00
Eric Zhu 69292e6ff4
Update minimum openai version to 1.66.5 as import path changed (#5996)
Resolves #5994

OpenAI moved `openai.types.beta.vector_store` to
`openai.types.vector_store`.
https://github.com/openai/openai-python/compare/v1.65.5...v1.66.0

Also fixed unit tests and used a parameterized fixture to run all
scenarios.
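
Outside of a hard version pin, a compatibility shim for the rename typically looks like this (a hedged sketch, not code from this PR):

```python
try:
    # openai >= 1.66.0
    from openai.types.vector_store import VectorStore
except ImportError:
    # openai < 1.66.0 kept the type under the beta namespace
    from openai.types.beta.vector_store import VectorStore
```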
2025-03-19 05:20:04 +00:00
Eric Zhu 5f9e37dc27
Upgrade llama cpp to 0.3.8 to fix windows related error (#5948)
Use the latest version of llama-cpp-python to ensure `uv sync
--all-extras` doesn't fail on Windows.

reference:
https://github.com/microsoft/autogen/pull/5942#issuecomment-2724478534
2025-03-14 12:20:42 -07:00
afourney aefa66a3ce
Update MarkItDown. (#5920)
Update FileSurfer and WebSurfer to use the latest MarkItDown package.
2025-03-12 21:17:25 -07:00
Eric Zhu bb8439c7bd
update version to v0.4.9 (#5903) 2025-03-11 19:35:22 -07:00
PythicCoder 6a3acc4548
Feature add Add LlamaCppChatCompletionClient and llama-cpp (#5326)
This pull request integrates the `llama-cpp` library into the
`autogen-ext` package. The most important changes include updating the
project dependencies, adding a new module for the
`LlamaCppChatCompletionClient`, and implementing the client with its
various functionalities.

### Project Dependencies:

*
[`python/packages/autogen-ext/pyproject.toml`](diffhunk://#diff-095119d4420ff09059557bd25681211d1772c2be0fbe0ff2d551a3726eff1b4bR34-R38):
Added `llama-cpp-python` as a new dependency under the `llama-cpp`
section.

### New Module:

*
[`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/__init__.py`](diffhunk://#diff-42ae3ba17d51ca917634c4ea3c5969cf930297c288a783f8d9c126f2accef71dR1-R8):
Introduced the `LlamaCppChatCompletionClient` class and handled import
errors with a descriptive message for missing dependencies.

### Implementation of `LlamaCppChatCompletionClient`:

*
`python/packages/autogen-ext/src/autogen_ext/models/llama_cpp/_llama_cpp_completion_client.py`:
- Added the `LlamaCppChatCompletionClient` class with methods to
initialize the client, create chat completions, detect and execute
tools, and handle streaming responses.
- Included detailed logging for debugging purposes and implemented
methods to count tokens, track usage, and provide model information.
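
A rough usage sketch under the new extra (the `model_path` keyword and the message/result API are assumptions based on llama-cpp-python and the autogen-core client interface, not verified against this PR):

```python
# pip install "autogen-ext[llama-cpp]"
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.llama_cpp import LlamaCppChatCompletionClient


async def main() -> None:
    # model_path is an assumed keyword, forwarded to llama_cpp.Llama
    client = LlamaCppChatCompletionClient(model_path="/path/to/model.gguf")
    result = await client.create([UserMessage(content="Hello!", source="user")])
    print(result.content)


asyncio.run(main())
```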


## Why are these changes needed?


## Related issue number


## Checks

- [x] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: aribornstein <x@x.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Ryan Sweet <rysweet@microsoft.com>
2025-03-10 16:53:53 -07:00
Leonardo Pinheiro a1858efac9
feat: update local code executor to support powershell (#5884)
Adds support for PowerShell to the local code executor.
Closes #5518
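
A short sketch of what this enables, assuming the v0.4 local executor API (module paths per the current autogen-ext layout):

```python
import asyncio

from autogen_core import CancellationToken
from autogen_core.code_executor import CodeBlock
from autogen_ext.code_executors.local import LocalCommandLineCodeExecutor


async def main() -> None:
    executor = LocalCommandLineCodeExecutor(work_dir="coding")
    # "powershell" code blocks are now dispatched to the PowerShell interpreter.
    result = await executor.execute_code_blocks(
        [CodeBlock(code="Write-Output 'Hello from PowerShell'", language="powershell")],
        CancellationToken(),
    )
    print(result.output)


asyncio.run(main())
```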
2025-03-10 14:00:14 -07:00
afourney 5685bd1888
Update markitdown requirements to >= 0.0.1, while still in the 0.0.x range (#5864) 2025-03-06 21:33:09 -08:00
Ricky Loynd 97536af7a3
Task-Centric Memory (#5227)
_(EXPERIMENTAL, RESEARCH IN PROGRESS)_

In 2023 AutoGen introduced [Teachable
Agents](https://microsoft.github.io/autogen/0.2/blog/2023/10/26/TeachableAgent/)
that users could teach new facts, preferences and skills. But teachable
agents were limited in several ways: They could only be
`ConversableAgent` subclasses, they couldn't learn a new skill unless
the user stated (in a single turn) both the task and how to solve it,
and they couldn't learn on their own. **Task-Centric Memory** overcomes
these limitations, allowing users to teach arbitrary agents (or teams)
more flexibly and reliably, and enabling agents to learn from their own
trial-and-error experiences.

This PR is large and complex. All of the files are new, and most of the
added components depend on the others to run at all. But the review
process can be accelerated if approached in the following order.
1. Start with the [Task-Centric Memory
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/packages/autogen-ext/src/autogen_ext/task_centric_memory).
2. Install the memory extension locally, since it won't be on PyPI until
it's merged. From the `agentic_memory` branch, in the `python/packages`
directory:
    - `pip install -e autogen-agentchat`
    - `pip install -e autogen-ext[openai]`
    - `pip install -e autogen-ext[task-centric-memory]`
3. Run the Quickstart sample code, then immediately open the
`./pagelogs/quick/0 Call Tree.html` file in a browser to view the work
in progress. Click through the web page links to see the details.
4. Continue through the rest of the main README to get a high-level
overview of the architecture.
5. Read through the [code samples
README](https://github.com/microsoft/autogen/tree/agentic_memory/python/samples/task_centric_memory),
running each of the code samples while viewing their page logs.
6. Skim through the code samples, along with their corresponding yaml
config files:
    1. `chat_with_teachable_agent.py`
    2. `eval_retrieval.py`
    3. `eval_teachability.py`
    4. `eval_learning_from_demonstration.py`
    5. `eval_self_teaching.py`
7. Read `task_centric_memory_controller.py`, referring back to the
previously generated page logs as needed. This is the most important and
complex file in the PR.
8. Read the remaining core files.
    1. `_task_centric_memory_bank.py`
    2. `_string_similarity_map.py`
    3. `_prompter.py`
9. Read the supporting files in the utils dir.
    1. `teachability.py`
    2. `apprentice.py`
    3. `grader.py`
    4. `page_logger.py`
    5. `_functions.py`
2025-03-04 09:56:49 -08:00
Victor Dibia 78ef148c88
Add ChromaDBVectorMemory in Extensions (#5308)

## Why are these changes needed?

Shows an example of how to use the `Memory` interface to implement a
just-in-time vector memory based on chromadb.

```python
import os
from pathlib import Path

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_core.memory import MemoryContent, MemoryMimeType
from autogen_ext.memory.chromadb import ChromaDBVectorMemory, PersistentChromaDBVectorMemoryConfig
from autogen_ext.models.openai import OpenAIChatCompletionClient


# Example weather tool used by the agent below (a stand-in assumed for this snippet)
async def get_weather(city: str, units: str = "metric") -> str:
    return f"The weather in {city} is 23 °C and Sunny."


# Initialize ChromaDB memory with custom config
chroma_user_memory = ChromaDBVectorMemory(
    config=PersistentChromaDBVectorMemoryConfig(
        collection_name="preferences",
        persistence_path=os.path.join(str(Path.home()), ".chromadb_autogen"),
        k=2,  # Return top  k results
        score_threshold=0.4,  # Minimum similarity score
    )
)
# a HttpChromaDBVectorMemoryConfig is also supported for connecting to a remote ChromaDB server

# Add user preferences to memory
await chroma_user_memory.add(
    MemoryContent(
        content="The weather should be in metric units",
        mime_type=MemoryMimeType.TEXT,
        metadata={"category": "preferences", "type": "units"},
    )
)

await chroma_user_memory.add(
    MemoryContent(
        content="Meal recipe must be vegan",
        mime_type=MemoryMimeType.TEXT,
        metadata={"category": "preferences", "type": "dietary"},
    )
)


# Create assistant agent with ChromaDB memory
assistant_agent = AssistantAgent(
    name="assistant_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o",
    ),
    tools=[get_weather],
    memory=[chroma_user_memory],
)

stream = assistant_agent.run_stream(task="What is the weather in New York?")
await Console(stream)

await chroma_user_memory.close()
```

```txt
 ---------- user ----------
What is the weather in New York?
---------- assistant_agent ----------
[MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'type': 'units', 'score': 0.4342913043162201, 'id': '8a8d683c-5866-41e1-ac17-08c4fda6da86'}), MemoryContent(content='The weather should be in metric units', mime_type='MemoryMimeType.TEXT', metadata={'category': 'preferences', 'mime_type': 'MemoryMimeType.TEXT', 'type': 'units', 'score': 0.4342913043162201, 'id': 'f27af42c-cb63-46f0-b26b-ffcc09955ca1'})]
---------- assistant_agent ----------
[FunctionCall(id='call_a8U3YEj2dxA065vyzdfXDtNf', arguments='{"city":"New York","units":"metric"}', name='get_weather')]
---------- assistant_agent ----------
[FunctionExecutionResult(content='The weather in New York is 23 °C and Sunny.', call_id='call_a8U3YEj2dxA065vyzdfXDtNf', is_error=False)]
---------- assistant_agent ----------
The weather in New York is 23 °C and Sunny.
```

Note that the MemoryContent objects in the MemoryQuery events carry
useful metadata, such as the score and id of the retrieved memories.

## Related issue number



## Checks

- [ ] I've included any doc changes needed for
https://microsoft.github.io/autogen/. See
https://microsoft.github.io/autogen/docs/Contribute#documentation to
build and test documentation locally.
- [ ] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [ ] I've made sure all auto checks have passed.
2025-03-01 07:41:01 -08:00
rylativity 5615f40a30
5663 ollama client host (#5674)
@ekzhu should likely be assigned as reviewer

## Why are these changes needed?

These changes address the bug reported in #5663. They prevent a
TypeError from being thrown at inference time by the ollama AsyncClient
when `host` (and other) kwargs are passed to the autogen
OllamaChatCompletionClient constructor.

It also adds ollama as a named optional extra so that the ollama
requirements can be installed alongside autogen-ext (e.g. `pip install
autogen-ext[ollama]`).
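
A minimal sketch of the intended usage (model name and host are placeholders):

```python
# pip install "autogen-ext[ollama]"
from autogen_ext.models.ollama import OllamaChatCompletionClient

# Extra kwargs such as `host` are now forwarded to ollama's AsyncClient
# instead of raising a TypeError at inference time.
client = OllamaChatCompletionClient(
    model="llama3.2",
    host="http://my-ollama-server:11434",
)
```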

@ekzhu, I will need some help or guidance to ensure that the associated
test (which requires ollama and tiktoken as dependencies of the
OllamaChatCompletionClient) can run successfully in autogen's test
execution environment.

I have also left the "I've made sure all auto checks have passed" check
below unchecked as this PR is coming from my fork. (UPDATE: auto checks
appear to have passed after opening the PR, so I have checked the box
below.)

## Related issue number

Intended to close #5663 

## Checks

- [x] I've included any doc changes needed for
<https://microsoft.github.io/autogen/>. See
<https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to
build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes
introduced in this PR.
- [x] I've made sure all auto checks have passed.

---------

Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Jack Gerrits <jackgerrits@users.noreply.github.com>
Co-authored-by: peterychang <49209570+peterychang@users.noreply.github.com>
2025-02-26 11:02:48 -05:00
Eric Zhu 6bc896f6e2
update versions to 0.4.8 (#5689) 2025-02-24 23:46:37 +00:00
Eric Zhu b04775f3dc
Update python version to v0.4.7 (#5558) 2025-02-14 20:08:38 -08:00
Victor Dibia cd085e6b89
Improve custom agentchat agent docs with model clients (gemini example) and serialization (#5468)
This PR improves documentation on custom agents.

- Shows an example of how to create a custom agent that directly uses a
model client; in this case, a GeminiAssistantAgent that directly uses
the Gemini SDK model client.
- Shows that the custom agent can be easily added to any agentchat team.
- Shows how the same custom agent can be made declarative by inheriting
from the Component interface and implementing the required methods.

Closes #5450
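
A bare-bones sketch of the pattern the new docs describe, assuming the agentchat `BaseChatAgent` interface (the Gemini-specific call is omitted and stubbed out):

```python
from typing import Sequence

from autogen_agentchat.agents import BaseChatAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import ChatMessage, TextMessage
from autogen_core import CancellationToken


class GeminiAssistantAgent(BaseChatAgent):
    """Custom agent that would call the Gemini SDK directly (call site stubbed)."""

    @property
    def produced_message_types(self) -> Sequence[type[ChatMessage]]:
        return (TextMessage,)

    async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:
        # A real implementation would forward `messages` to the Gemini SDK here.
        return Response(chat_message=TextMessage(content="(model reply)", source=self.name))

    async def on_reset(self, cancellation_token: CancellationToken) -> None:
        pass
```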
2025-02-10 16:29:43 -08:00
Eitan Yarmush 8a9f452136
Adding declarative HTTP tools to autogen ext (#5181)
## Why are these changes needed?
These changes are needed because currently there is no generic way to add
`tools` to AutoGen Studio workflows using the existing DSL and schema
other than inline Python.

This API is quite verbose and lacks a discovery mechanism, but it
unlocks a lot of programmatic use cases.
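
To make the shape concrete, a hedged illustration of such a declarative tool config (the provider path follows the autogen-ext naming convention; the config keys and example endpoint are assumptions, not the final schema):

```python
from autogen_ext.tools.http import HttpTool

# Config keys below are illustrative; consult the tool's config schema.
http_tool = HttpTool.load_component(
    {
        "provider": "autogen_ext.tools.http.HttpTool",
        "config": {
            "name": "get_fact",
            "description": "Fetch a random fact from a public HTTP API",
            "scheme": "https",
            "host": "uselessfacts.jsph.pl",
            "port": 443,
            "path": "/api/v2/facts/random",
            "method": "GET",
            "json_schema": {"type": "object", "properties": {}, "required": []},
        },
    }
)
```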

## Related issue number
https://github.com/microsoft/autogen/issues/5170

Co-authored-by: Victor Dibia <victordibia@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-02-10 20:27:27 +00:00
Eric Zhu 378b5ac09a
Update version to 0.4.6 (#5477) 2025-02-10 11:22:23 -08:00
Richárd Gyikó 5308b76d5f
Add MCP adapters to autogen-ext (#5251)
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-02-09 05:20:00 +00:00
Jack Gerrits f7f5507c70
Split out GRPC tests (#5431) 2025-02-07 16:57:30 +00:00
Eric Zhu d5007adba7
chore: add asyncio_atexit dependency to docker requirements (#5307)
Resolves #5281
2025-01-31 14:14:43 -08:00
Eric Zhu 71bf20b8a2
chore: update package versions to 0.4.5 and remove deprecated requirements (#5280) 2025-01-31 01:52:45 +00:00
Eric Zhu f656ff1e01
feat: Support R1 reasoning text in model create result; enhance API docs (#5262)
Resolves #5255 

---------

Co-authored-by: afourney <adamfo@microsoft.com>
2025-01-30 11:03:54 -08:00
Mohammad Mazraeh 2f1684b698
update dependencies to work with protobuf 5 (#5195)
Closes #5074

Signed-off-by: Mohammad Mazraeh <mazraeh.mohammad@gmail.com>
2025-01-28 22:11:54 -08:00
Eric Zhu b29d0bda2f
update versions to 0.4.4 and m1 cli to 0.2.3 (#5229) 2025-01-28 17:59:14 +00:00
Leonardo Pinheiro db2410c705
Feature/azure ai inference client (#5153)
* Rebase to latest main branch

* Moved _azure module to azure

* Validate extra_create_args in and json response

* Added Support for Github Models

* Added normalize_name and assert_valid name

* Added Tests for AzureAIChatCompletionClient

* WIP: Azure AI Client

* Added: object-level usage data
* Added: doc string
* Added: check existing response_format value
* Added: _validate_config and _create_client

* lint

* merge dependencies

* add tests for img and function calling

* support actual tests through env vars

* address mypy errors

* doc example fix

* fmt

* fix doc fmt

* Update python/packages/autogen-ext/src/autogen_ext/models/azure/_azure_ai_client.py

---------

Co-authored-by: Rohan Thacker <thackerrohan4@gmail.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
2025-01-25 08:26:48 +10:00
Gerardo Moreno 89631966cb
RichConsole: Prettify m1 CLI console using rich #4806 (#5123) 2025-01-24 09:50:38 -08:00
Leon De Andrade 34bc82e24f
Jupyter Code Executor in v0.4 (alternative implementation) (#4885) 2025-01-18 21:11:40 +00:00
Leonardo Pinheiro 918292f51e
Semantic kernel model adapter (#4851)
* initial sk model adapter implementation

* add sk tool module

* implement streaming and update tests

* update lock

* linting

* add semantic kernel extras

* add docstring and format

* update dependencies and format/lint

* add model info to sk constructor

* update uv.lock

* customize prompt settings

* update uv.lock

* add docs

* fix sk docstring linting

* update create docstrings

* fmt and improve tool docstring

* update sk tool docs

* coerce doc json serialization failure

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
2025-01-18 18:57:20 +10:00
Leonardo Pinheiro a1fdbd9692
Use caching to run tests and report coverage (#5086)
* use caching to run tests and report coverage

* fix test step dep name

* try to fix cov fname

* add working dir to mv step

* update artifact download

* fmt

* reduce concurrency on ext test

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
2025-01-17 14:32:18 +00:00
Sachin Joglekar 8bd65c672f
Add ChatCompletionCache along with AbstractStore for caching completions (#4924)
* Add ChatCompletionCache along with AbstractStore for caching completions

* Addressing comments

* Improve interface for cachestore

* Improve documentation & revert protocol

* Make cache store typed, and improve docs

* remove unnecessary casts
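
A hedged usage sketch (module paths and the store type follow the current autogen-ext layout, which may differ slightly from this PR's initial naming):

```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.cache_store.diskcache import DiskCacheStore
from autogen_ext.models.cache import CHAT_CACHE_VALUE_TYPE, ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient
from diskcache import Cache


async def main() -> None:
    # Wrap any ChatCompletionClient so identical requests are served from the store.
    base_client = OpenAIChatCompletionClient(model="gpt-4o")
    store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache("./model_cache"))
    cached_client = ChatCompletionCache(base_client, store)

    # The first call hits the model; a second identical call is answered from disk.
    result = await cached_client.create([UserMessage(content="Hello!", source="user")])
    print(result.content)


asyncio.run(main())
```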
2025-01-16 15:47:38 -08:00
Jack Gerrits 1a3ac626eb
Update version to 0.4.3 pre-emptively (#5066)
* Update version to 0.4.3

* lock

* update lock

* lock
2025-01-15 19:11:32 -05:00
Leonardo Pinheiro 95bd514a9a
Graphrag integration (#4612)
* add initial global search draft

* add graphrag dep

* fix local search embedding

* linting

* add from config constructor

* remove draft notebook

* update config factory and add docstrings

* add graphrag sample

* add sample prompts

* update readme

* update deps

* Add API docs

* Update python/samples/agentchat_graphrag/requirements.txt

* Update python/samples/agentchat_graphrag/requirements.txt

* update docstrings with snippet and doc ref

* lint

* improve set up instructions in docstring

* lint

* update lock

* Update python/packages/autogen-ext/src/autogen_ext/tools/graphrag/_global_search.py

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* Update python/packages/autogen-ext/src/autogen_ext/tools/graphrag/_local_search.py

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* add unit tests

* update lock

* update uv lock

* add docstring newlines

* stubs and typing on graphrag tests

* fix docstrings

* fix mypy error

* + linting and type fixes

* type fix graphrag sample

* Update python/packages/autogen-ext/src/autogen_ext/tools/graphrag/_global_search.py

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* Update python/packages/autogen-ext/src/autogen_ext/tools/graphrag/_local_search.py

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* Update python/samples/agentchat_graphrag/requirements.txt

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>

* update overrides

* fix docstring client imports

* additional docstring fix

* add docstring missing import

* use openai and fix db path

* use console for displaying messages

* add model config and gitignore

* update readme

* lint

* Update python/samples/agentchat_graphrag/README.md

* Update python/samples/agentchat_graphrag/README.md

* Comment remaining azure config

---------

Co-authored-by: Leonardo Pinheiro <lpinheiro@microsoft.com>
Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-01-15 21:04:17 +10:00
Jack Gerrits cf8446b37e
Fixup autogen-ext version (#5030)
* Update autogen-ext version

* lock
2025-01-13 21:31:41 +00:00
Jack Gerrits 91ec611338
Update version to 0.4.1 (#5029)
* Update version to 0.4.1

* lock

* dest dir

* remove website changes
2025-01-13 21:22:03 +00:00
Jack Gerrits c2721ff65b
Update all versions to 0.4.0 (#4941)
* Update all versions to 0.4.0

* update redirect

* install with upgrade for agentchat
2025-01-09 15:29:54 -05:00
afourney 7131dc945d
Added m1 cli package (#4949)
* Added m1 cli package
* update CI, install card, deprecations
* Update python/packages/magentic-one-cli/pyproject.toml
* fix mypy and pyright
* add package
* Suppress 'ResourceWarning: unclosed socket'

---------

Co-authored-by: Jack Gerrits
2025-01-08 14:05:08 -08:00
Jack Gerrits 310564908b
fix!: Move azure auth provider to separate module (#4912)
* Move azure auth provider to separate module

* Update lock

* fix component gen
2025-01-07 12:21:50 -05:00
Sachin Joglekar ecdade3d3e
Add coverage task & integrate with poe check (#4847)
* Add coverage task & integrate with poe check

* Adding workflow

---------

Co-authored-by: Eric Zhu <ekzhu@users.noreply.github.com>
2025-01-02 09:49:18 -05:00
Jack Gerrits 8a83262a90
add azure deps to openai extra (#4871)
* add azure deps to openai extra

* lock
2024-12-31 15:23:39 -05:00
Jack Gerrits fb1094d9c3
Update to dev13 (#4862) 2024-12-30 17:12:51 -05:00
Eric Zhu a20ec102d3
AgentChat tutorial update to include model context usage and langchain tool (#4843)
* Doc update to include model context usage

* add langchain tools

* update langchain tool wrapper api doc

* updat

* update

* format

* add langchain experimental dev dep

* type

* Fix type

* Fix some types in langchain adapter

* type ignores
2024-12-30 09:09:33 -08:00
Eric Zhu d933b9ab5f
dev12 (#4839)
* dev12
2024-12-27 11:49:12 -08:00