# Quick Start
This guide walks you through running your first AGENT-K mission, either through the HTTP API or programmatically from Python.
## Prerequisites
Make sure you have completed the installation steps:
- Python 3.11+ with uv installed
- `uv sync` completed in `backend/`
- Environment variables configured (Kaggle + model API key); see the sanity check below
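
If you want to confirm the environment before starting, a quick check like the sketch below can help. The Kaggle variable names (`KAGGLE_USERNAME`, `KAGGLE_KEY`) follow the standard Kaggle API convention; the model key name (`ANTHROPIC_API_KEY`) is an assumption, so substitute whichever provider key you configured.

```python
import os

# Hedged sanity check: KAGGLE_USERNAME / KAGGLE_KEY are the standard Kaggle API
# variables; ANTHROPIC_API_KEY is an assumed model key name -- adjust for your provider.
required = ["KAGGLE_USERNAME", "KAGGLE_KEY", "ANTHROPIC_API_KEY"]
missing = [name for name in required if not os.environ.get(name)]

if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("Environment looks ready.")
```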
## Start the API Server
```bash
cd backend
python -m agent_k.ui.agui
```
The API server runs on http://localhost:9000.
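
Before sending requests, you can confirm something is actually listening on port 9000. The sketch below is a plain TCP probe and does not assume any particular AGENT-K health-check route.

```python
import socket

# Plain TCP probe of the API server port; no AGENT-K endpoint is assumed here.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(2)
    status = sock.connect_ex(("localhost", 9000))

print("API server is listening" if status == 0 else f"No listener on port 9000 (errno {status})")
```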
## Start a Mission (Chat Endpoint)
The `/agentic_generative_ui/` endpoint accepts Vercel AI chat messages. When a mission intent is detected, it runs the mission and streams events.
```bash
curl -N -X POST http://localhost:9000/agentic_generative_ui/ \
  -H "Content-Type: application/json" \
  -d '{"id":"demo","messages":[{"role":"user","parts":[{"type":"text","text":"Find a Kaggle competition with a $10k prize"}]}]}'
```
## Programmatic Usage
```python
import asyncio

from agent_k import LycurgusOrchestrator
from agent_k.core.models import MissionCriteria


async def main():
    async with LycurgusOrchestrator() as orchestrator:
        result = await orchestrator.execute_mission(
            competition_id="titanic",
            criteria=MissionCriteria(
                target_leaderboard_percentile=0.10,
                max_evolution_rounds=50,
            ),
        )
        print(f"Final rank: {result.final_rank}")


asyncio.run(main())
```
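
If you want to bound how long a mission can run, the same call can be wrapped in a timeout. This is only a sketch that reuses the API shown above plus `asyncio.wait_for`; the one-hour limit is an arbitrary example value, and it assumes the orchestrator tolerates cancellation.

```python
import asyncio

from agent_k import LycurgusOrchestrator
from agent_k.core.models import MissionCriteria


async def main():
    async with LycurgusOrchestrator() as orchestrator:
        try:
            # Arbitrary example limit: give up if the mission runs longer than an hour.
            result = await asyncio.wait_for(
                orchestrator.execute_mission(
                    competition_id="titanic",
                    criteria=MissionCriteria(
                        target_leaderboard_percentile=0.10,
                        max_evolution_rounds=50,
                    ),
                ),
                timeout=3600,
            )
        except asyncio.TimeoutError:
            print("Mission did not finish within the time limit.")
            return
        print(f"Final rank: {result.final_rank}")


asyncio.run(main())
```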
## Tooling Notes
- `web_search` and `web_fetch` are built-in tools and only available for supported providers.
- `memory` is only available for Anthropic models and stores files under `.agent_k_memory` by default (see the listing sketch below).
- Kaggle operations use the Kaggle adapter when credentials are available; otherwise OpenEvolve is used.
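
Since the memory tool writes under `.agent_k_memory` in the working directory by default, you can inspect what it has stored with a simple listing. The directory location is the only assumption here.

```python
from pathlib import Path

# List whatever the memory tool has stored under the default .agent_k_memory directory.
memory_dir = Path(".agent_k_memory")

if memory_dir.exists():
    for path in sorted(memory_dir.rglob("*")):
        if path.is_file():
            print(path)
else:
    print("No memory directory found (nothing stored yet, or a custom path is in use).")
```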
## Using the Dashboard
For a visual interface, start both servers:
```bash
./run.sh
```
Then open http://localhost:3000 in your browser.
## Next Steps
- Concepts: Agents - Understand the multi-agent architecture
- Concepts: Toolsets - Learn about FunctionToolsets
- Examples: Multi-Agent Demo - Walkthrough using the core agents