Build a complete Agent application from scratch in just 10 minutes. You’ll create an Agent, start the service, and integrate it into a web application.

Prerequisites

  • Python 3.11+ installed
  • pip or uv package manager
  • Node.js 20+ installed (required for the frontend)
  • API key for your chosen LLM provider
  • Basic Python knowledge
Step 1: Create Project Structure

Create a new directory and initialize the project:
mkdir my-first-agent
cd my-first-agent
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Install AG-Kit dependencies:
pip install ag_kit_py
Step 2: Create Your Agent

Create src/agent.py with a simple OpenAI-based agent:
import os
from langchain_core.messages import AIMessage, SystemMessage, convert_to_openai_messages
from langchain_core.messages.ai import add_ai_message_chunks
from langchain_core.runnables import RunnableConfig
from langgraph.checkpoint.memory import MemorySaver
from langgraph.config import get_stream_writer
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph.graph.state import CompiledStateGraph

from ag_kit_py.agents.langgraph import LangGraphAgent
from ag_kit_py.providers import create_provider_from_env
from ag_kit_py.server import AgentCreatorResult


def chat_node(state: dict, config: RunnableConfig | None = None) -> dict:
    """Generate AI response with streaming support."""
    writer = get_stream_writer()
    try:
        provider = create_provider_from_env("openai")
        chat_model = provider.get_langchain_model()

        if config is None:
            config = RunnableConfig(recursion_limit=25)

        system_message = SystemMessage(content="You are a helpful assistant. Be friendly and concise.")
        messages = [system_message, *convert_to_openai_messages(state["messages"])]

        chunks = []
        for chunk in chat_model.stream(messages, config):
            writer({"messages": [chunk]})
            chunks.append(chunk)

        if chunks:
            from langchain_core.messages import AIMessageChunk
            ai_chunks = [
                chunk if isinstance(chunk, AIMessageChunk) else AIMessageChunk(content=str(chunk))
                for chunk in chunks
            ]
            merged_message = add_ai_message_chunks(*ai_chunks)
            return {"messages": [merged_message]}
        return {"messages": []}
    except Exception as e:
        return {"messages": [AIMessage(content=f"Error: {str(e)}")]}


def build_chat_workflow() -> CompiledStateGraph:
    """Build LangGraph workflow."""
    graph = StateGraph(MessagesState)
    memory = MemorySaver()

    graph.add_node("ai_response", chat_node)
    graph.add_edge(START, "ai_response")
    graph.add_edge("ai_response", END)

    return graph.compile(checkpointer=memory)


def create_my_agent() -> AgentCreatorResult:
    """Create agent instance."""
    workflow = build_chat_workflow()
    agent = LangGraphAgent(
        name="my-first-agent",
        description="A helpful AI assistant",
        graph=workflow,
    )
    return {"agent": agent}
For more advanced Agent configuration options, see the Core Agent reference.
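The streaming pattern in chat_node — push each chunk through the stream writer as it arrives while also accumulating it for the final graph state — can be sketched in plain Python (hypothetical stand-ins, no LangChain required):

```python
# Plain-Python sketch of the stream-and-accumulate pattern used in
# chat_node: each chunk goes to a writer for real-time display and is
# also collected so the merged message can be returned as final state.
def stream_and_accumulate(chunks, writer):
    collected = []
    for chunk in chunks:
        writer(chunk)            # emit immediately (UI sees partial text)
        collected.append(chunk)  # keep for the final merged message
    return "".join(collected)

emitted = []
final = stream_and_accumulate(["Hel", "lo", "!"], emitted.append)
print(final)  # → Hello!
```

This is why chat_node both calls `writer({"messages": [chunk]})` inside the loop and returns the merged message at the end: the writer drives live streaming, while the returned value is what the checkpointer persists.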
Step 3: Create Server

Create src/server.py to run your agent:
from ag_kit_py.server import AGKitAPIApp
from src.agent import create_my_agent

if __name__ == "__main__":
    # run() blocks, so announce the address before starting
    print("Server running at http://localhost:9000")
    AGKitAPIApp().run(
        create_my_agent,
        port=9000,
        enable_openai_endpoint=True,
    )
For more server configuration options, see Run Agent.
Step 4: Configure Environment

Create a .env file with your API key. AG-Kit supports multiple LLM providers through OpenAI-compatible APIs; choose your preferred model:
OPENAI_API_KEY=sk-your_openai_api_key
OPENAI_MODEL=gpt-4o-mini
# OPENAI_BASE_URL is optional for OpenAI
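For reference, a .env file is just KEY=VALUE lines. A minimal parser (libraries such as python-dotenv handle quoting and edge cases far more robustly) looks like:

```python
# Minimal sketch of .env parsing: blank lines and comments are skipped,
# everything else is split on the first "=".
def parse_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = "OPENAI_API_KEY=sk-your_openai_api_key\nOPENAI_MODEL=gpt-4o-mini\n# comment\n"
config = parse_env(sample)
print(config["OPENAI_MODEL"])  # → gpt-4o-mini
```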
The server can be run directly with Python:
python src/server.py
Or add a pyproject.toml for project configuration:
[project]
name = "my-first-agent"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["ag_kit_py"]
Step 5: Start the Server

Start your agent server:
python src/server.py
You should see:
✓ Server running at http://localhost:9000
Step 6: Test with cURL

Test your agent with a simple request:
curl --request POST \
  --url http://localhost:9000/send-message \
  --header 'Accept: text/event-stream' \
  --header 'Content-Type: application/json' \
  --data '{
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you?"
      }
    ],
    "conversationId": "test-conversation"
  }'
You should get a streaming response with events like:
data: {"type":"text","content":"Hello"}

data: {"type":"text","content":"!"}

data: {"type":"text","content":" I"}

data: {"type":"text","content":"'m"}

data: {"type":"text","content":" doing"}

data: {"type":"text","content":" well"}

...

data: {"type":"text","content":"?"}

data: [DONE]
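A client reassembles the reply by concatenating the content of each text event until the [DONE] sentinel. A minimal parser for this stream format (field names taken from the sample output above; a real client would read lines from the HTTP response) might look like:

```python
import json

# Sketch: collect the "content" of each text event from an SSE body
# until the [DONE] sentinel, yielding the full assistant reply.
def collect_text(sse_body: str) -> str:
    parts = []
    for line in sse_body.splitlines():
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        if event.get("type") == "text":
            parts.append(event["content"])
    return "".join(parts)

sample = (
    'data: {"type":"text","content":"Hello"}\n\n'
    'data: {"type":"text","content":"!"}\n\n'
    "data: [DONE]\n"
)
print(collect_text(sample))  # → Hello!
```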
Step 7: Create Frontend

Create a new directory for the frontend and set it up:
# Create frontend directory (in a separate terminal/location)
mkdir my-agent-frontend
cd my-agent-frontend

# Initialize npm project
npm init -y

# Install dependencies
npm install react react-dom @ag-kit/example-ui-web-shared @ag-kit/ui-react zod
npm install -D typescript @types/react @types/react-dom vite @vitejs/plugin-react
Create index.html:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>My Agent Chat</title>
</head>
<body>
  <div id="root"></div>
  <script type="module" src="/src/main.tsx"></script>
</body>
</html>
Create src/main.tsx:
import React from 'react';
import { createRoot } from 'react-dom/client';
import { AgKitChat } from '@ag-kit/example-ui-web-shared';
import { useChat, clientTool } from '@ag-kit/ui-react';
import z from 'zod';
import '@ag-kit/example-ui-web-shared/style.css';

const suggestions = [
  { id: "1", text: "Hello, how are you?", category: "Greeting" },
  { id: "2", text: "What can you help me with?", category: "Help" },
  { id: "3", text: "Tell me a joke", category: "Fun" },
];

function App() {
  const { sendMessage, uiMessages, streaming } = useChat({
    url: 'http://localhost:9000/send-message',
    clientTools: [
      clientTool({
        name: "alert",
        description: "Alert the user",
        parameters: z.object({ message: z.string() }),
        handler: async ({ message }) => {
          await new Promise((resolve) => setTimeout(resolve, 1000));
          alert(message);
          return "done";
        },
      }),
    ],
  });

  return (
    <AgKitChat
      title="My First Agent"
      suggestions={suggestions}
      welcomeMessage="Welcome! Try selecting a suggestion or type your own message."
      uiMessages={uiMessages}
      onMessageSend={sendMessage}
      status={streaming ? "streaming" : "idle"}
    />
  );
}

const root = createRoot(document.getElementById('root')!);
root.render(<App />);
Create vite.config.ts:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: { port: 5173 }
});
Add to package.json:
{
  "type": "module",
  "scripts": {
    "dev": "vite",
    "build": "vite build",
    "preview": "vite preview"
  }
}
Start the frontend:
npm run dev
Open http://localhost:5173 in your browser!

For more UI component options, see the UI Components reference.
Step 8: Try It Out

Test your agent with these messages:
  1. “Hello, how are you?” - Basic greeting
  2. “What can you help me with?” - Capability exploration
  3. “Tell me about AI” - Knowledge demonstration
  4. “Write a short poem” - Creative task
  5. “Alert me with a message” - Test the custom tool (TypeScript frontend only)

What You’ve Built

Congratulations! You’ve created:
  1. An Agent with custom tools and state management
  2. A Backend Service that handles Agent requests
  3. A Frontend Application with real-time chat

Next Steps

Now that you have a working Agent, explore more features in the rest of the documentation.

Troubleshooting

Server won't start
  • Check that port 9000 is not in use
  • Verify your .env file has the API key
  • Check console for error messages

Frontend can't connect
  • Ensure the Agent server is running
  • Check the server URL in your frontend configuration
  • Verify CORS settings in server config

No response from the model
  • Check your OpenAI API key is valid
  • Verify you have API credits
  • Check server logs for errors

Environment issues
  • Ensure Python 3.11+ is installed
  • Verify the virtual environment is activated
  • Check that dependencies are installed correctly
  • Review Python error messages in console
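If the server fails to bind, you can check whether port 9000 is already occupied with a short script (assumes the default localhost:9000 binding used in this guide):

```python
import socket

# Returns True if something is already listening on the given port.
def port_in_use(port: int, host: str = "localhost") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

if port_in_use(9000):
    print("Port 9000 is taken: stop the other process or pick a new port.")
else:
    print("Port 9000 is free.")
```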
