AgentChat Chess Game

This is a simple chess game that you can play with an AI agent.

Setup

Install the chess package with the following command:

pip install "chess"

To use OpenAI models or models hosted on OpenAI-compatible API endpoints, you need to install the autogen-ext[openai] package. You can install it with the following command:

pip install "autogen-ext[openai]"
# pip install "autogen-ext[openai,azure]" for Azure OpenAI models

Create a new file named model_config.yaml in the same directory as the script to configure the model you want to use.

For example, to use the gpt-4o model from OpenAI, you can use the following configuration:

provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: gpt-4o
  api_key: replace with your API key, or omit this line if the OPENAI_API_KEY environment variable is set

To use the o3-mini-2025-01-31 model from OpenAI, you can use the following configuration:

provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: o3-mini-2025-01-31
  api_key: replace with your API key, or omit this line if the OPENAI_API_KEY environment variable is set

To use a locally hosted DeepSeek-R1:8b model served by Ollama through its OpenAI-compatible endpoint, you can use the following configuration:

provider: autogen_ext.models.openai.OpenAIChatCompletionClient
config:
  model: deepseek-r1:8b
  base_url: http://localhost:11434/v1
  api_key: ollama
  model_info:
    function_calling: false
    json_output: false
    vision: false
    family: r1

For more information on how to configure the model and use other providers, please refer to the Models documentation.
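
For reference, scripts like main.py typically load this file with a YAML parser and hand the result to ChatCompletionClient.load_component. A rough sketch of that pattern (the file handling and variable names here are illustrative, not the sample's exact code):

import yaml
from autogen_core.models import ChatCompletionClient

# Read model_config.yaml (assumed to sit next to the script) and
# instantiate whichever model provider it declares.
with open("model_config.yaml", "r") as f:
    model_config = yaml.safe_load(f)
model_client = ChatCompletionClient.load_component(model_config)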

Run

Run the following command to start the game:

python main.py

By default, the game will use a random agent to play against the AI agent. You can enable human vs. AI mode by passing the --human flag:

python main.py --human