diff --git a/website/docs/Examples/AutoGen-OpenAI.md b/website/docs/Examples/AutoGen-Inference.md
similarity index 100%
rename from website/docs/Examples/AutoGen-OpenAI.md
rename to website/docs/Examples/AutoGen-Inference.md
diff --git a/website/docs/Getting-Started.md b/website/docs/Getting-Started.md
index 46a848c4e..d8ae61830 100644
--- a/website/docs/Getting-Started.md
+++ b/website/docs/Getting-Started.md
@@ -37,6 +37,9 @@ user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stoc
 The figure below shows an example conversation flow with AutoGen.
 ![Agent Chat Example](/img/chat_example.png)
+* [Code examples](/docs/Examples/AutoGen-AgentChat).
+* [Documentation](/docs/Use-Cases/agent_chat).
+
 
 #### Enhanced LLM Inferences
 Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalites like tuning, caching, error handling, templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
 ```python
@@ -54,11 +57,13 @@ config, analysis = autogen.Completion.tune(
 response = autogen.Completion.create(context=test_instance, **config)
 ```
 
+* [Code examples](/docs/Examples/AutoGen-Inference).
+* [Documentation](/docs/Use-Cases/enhanced_inference).
+
 ### Where to Go Next?
 
-* Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat).
-* Understand the use cases for [enhanced LLM inference](/docs/Use-Cases/enhanced_inference).
-* Find code examples from [Examples](/docs/Examples/AutoGen-AgentChat).
+* Understand the use cases for [multi-agent conversation](/docs/Use-Cases/agent_chat) and [enhanced LLM inference](/docs/Use-Cases/enhanced_inference).
+* Find [code examples](/docs/Examples/AutoGen-AgentChat).
 * Read [SDK](/docs/reference/agentchat/conversable_agent/).
 * Learn about [research](/docs/Research) around AutoGen.
 * Chat on [Discord](https://discord.gg/pAbnFJrkgZ).