
Bob — LLM-Powered Coding Agent


Bob is an LLM-powered coding agent built in Rust with a hexagonal (ports & adapters) architecture. It connects to language models via the genai crate and to external tools via MCP servers using rmcp.

Features

  • 🤖 Multi-Model Support: Works with OpenAI, Anthropic, Google, Groq, and more
  • 🔧 Tool Integration: Connect to MCP servers for file operations, shell commands, and custom tools
  • 🎯 Skill System: Load and apply predefined skills for specialized tasks
  • 💬 Interactive REPL: Chat with the AI agent through a terminal interface
  • 🔄 Streaming Responses: Real-time streaming of LLM responses
  • 📊 Observability: Built-in tracing and event logging
  • 🏗️ Clean Architecture: Hexagonal (ports & adapters) design for extensibility

Crates

This workspace contains the following crates:

Crate          Description                          Links
bob-core       Core domain types and port traits    docs.rs
bob-runtime    Runtime orchestration layer          docs.rs
bob-adapters   Adapter implementations              docs.rs
cli-agent      CLI application                      -

Architecture

bin/cli-agent        — CLI composition root (config, REPL)
crates/bob-core      — Domain types and port traits (LlmPort, ToolPort, SessionStore, EventSink)
crates/bob-runtime   — Scheduler FSM, prompt builder, action parser, CompositeToolPort
crates/bob-adapters  — Concrete adapter implementations (genai, rmcp, in-memory store, tracing)
┌─────────────────────────────────────────────────────────────┐
│                     CLI Agent (bin)                         │
│  ┌─────────────────────────────────────────────────────┐   │
│  │              DefaultAgentRuntime                     │   │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────────────┐  │   │
│  │  │Scheduler │→ │Prompt    │→ │Action Parser     │  │   │
│  │  │  FSM     │  │Builder   │  │                  │  │   │
│  │  └──────────┘  └──────────┘  └──────────────────┘  │   │
│  └─────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
            ↓ uses ports (traits) from bob-core
┌─────────────────────────────────────────────────────────────┐
│                  Adapters (bob-adapters)                    │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │GenAI LLM │  │MCP Tools │  │In-Memory │  │ Tracing  │   │
│  │          │  │          │  │  Store   │  │  Events  │   │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘   │
└─────────────────────────────────────────────────────────────┘

See docs/design.md for the full design document.
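
The ports-and-adapters split above can be illustrated with a simplified sketch. The trait name and signature here are stand-ins invented for illustration; the real ports in bob-core (LlmPort, ToolPort, SessionStore, EventSink) are async and richer.

```rust
/// A "port": runtime code depends on this trait, never on a concrete adapter.
trait LlmPort {
    fn complete(&self, prompt: &str) -> String;
}

/// A toy "adapter", standing in for the genai-backed one in bob-adapters.
struct EchoLlm;

impl LlmPort for EchoLlm {
    fn complete(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

/// Runtime logic stays adapter-agnostic: swap EchoLlm for any other impl
/// without touching this function.
fn run_turn(llm: &dyn LlmPort, user_input: &str) -> String {
    llm.complete(user_input)
}

fn main() {
    let reply = run_turn(&EchoLlm, "hello");
    println!("{reply}"); // prints "echo: hello"
}
```

This is the property the diagram encodes: the CLI and runtime only see traits from bob-core, so adapters can be replaced or mocked in tests.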

Quick Start

Prerequisites

# Install Rust (stable)
rustup install stable

# Install the just command runner, then the dev tools
cargo install just
just setup

Installation

From Source

# Clone the repository
git clone https://github.com/longcipher/bob.git
cd bob

# Build
cargo build --release

# Run
cargo run --release --bin cli-agent -- --config agent.toml

Using Cargo

# Install the CLI agent
cargo install --git https://github.com/longcipher/bob cli-agent

# Run
cli-agent --config agent.toml

Configuration

Create an agent.toml in the project root:

[runtime]
default_model = "openai:gpt-4o-mini"
max_steps = 12
turn_timeout_ms = 90000
model_context_tokens = 128000

# Optional: Configure MCP tool servers
[mcp]
[[mcp.servers]]
id = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
tool_timeout_ms = 15000

# Optional: Configure skills
[skills]
max_selected = 3
token_budget_ratio = 0.1

[[skills.sources]]
type = "directory"
path = "./skills"
recursive = false

# Optional: Configure policies
[policy]
deny_tools = ["local/shell_exec"]
allow_tools = ["local/read_file", "local/write_file"]
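
One plausible way the allow/deny lists above could be evaluated is sketched below. The precedence shown (an explicit deny always wins; an empty allow list permits everything) is an assumption for illustration, not bob's documented semantics.

```rust
// Hypothetical policy check over the [policy] allow/deny lists.
fn is_tool_allowed(tool: &str, allow: &[&str], deny: &[&str]) -> bool {
    if deny.contains(&tool) {
        return false; // an explicit deny always blocks the tool
    }
    // With no allow list configured, everything not denied is permitted.
    allow.is_empty() || allow.contains(&tool)
}

fn main() {
    let allow = ["local/read_file", "local/write_file"];
    let deny = ["local/shell_exec"];
    assert!(!is_tool_allowed("local/shell_exec", &allow, &deny));
    assert!(is_tool_allowed("local/read_file", &allow, &deny));
    println!("policy checks passed");
}
```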

Environment Variables

Set your LLM provider API key:

# For OpenAI
export OPENAI_API_KEY="sk-..."

# For Anthropic
export ANTHROPIC_API_KEY="sk-ant-..."

# For Google
export GEMINI_API_KEY="..."

Run

cargo run --bin cli-agent -- --config agent.toml

The REPL prints > when ready. Type a message and press Enter. Use /quit to exit.

Example Session

> Read the main.rs file and explain what it does

I'll read the main.rs file for you...

[uses filesystem tool to read the file]

The main.rs file implements the CLI agent composition root. It loads
configuration, wires up adapters (LLM, tools, storage, events), creates
the runtime, and runs the REPL loop.

> Now add error handling to that function

[agent modifies the code]

I've added error handling to the function. The changes include:
- Using `Result` return type
- Adding context with `.wrap_err()`
- Handling specific error cases

Supported LLM Providers

Bob supports all providers available through genai:

Provider    Model Examples                  Configuration
OpenAI      gpt-4o, gpt-4o-mini             Set OPENAI_API_KEY
Anthropic   claude-3-5-sonnet-20241022      Set ANTHROPIC_API_KEY
Google      gemini-2.0-flash-exp            Set GEMINI_API_KEY
Groq        llama-3.3-70b-versatile         Set GROQ_API_KEY
Cohere      command-r-plus                  Set COHERE_API_KEY

MCP Tools

Bob integrates with Model Context Protocol (MCP) servers:

Official MCP Servers

  • Filesystem: @modelcontextprotocol/server-filesystem
  • GitHub: @modelcontextprotocol/server-github
  • PostgreSQL: @modelcontextprotocol/server-postgres
  • Slack: @modelcontextprotocol/server-slack

Custom MCP Servers

You can build custom MCP servers in any language that supports the protocol.

Development

Development Commands

# Format code
just format

# Run lints (typos, clippy, machete, etc.)
just lint

# Run all tests
just test

# Full CI check (lint + test + build)
just ci

Project Structure

.
├── bin/
│   └── cli-agent/          # CLI application
├── crates/
│   ├── bob-core/           # Domain types and ports
│   ├── bob-runtime/        # Runtime orchestration
│   └── bob-adapters/       # Adapter implementations
├── docs/
│   └── design.md           # Architecture design
├── specs/                  # Task specifications
└── .opencode/              # AI development skills

Workspace Configuration

Linting Philosophy

The workspace uses strict clippy lints with the following principles:

  1. Pedantic by default: Enable all pedantic lints, then allow specific ones that are too noisy
  2. Panic safety: Deny unwrap, expect, and panic — use proper error handling
  3. No debug code: Deny dbg!, todo!, and unimplemented!
  4. No stdout in libraries: Use tracing instead of println!/eprintln!
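
A workspace lint table implementing these principles might look roughly like this. The lint names are real clippy lints, but the exact set and levels bob uses are an assumption; the workspace Cargo.toml is authoritative.

```toml
[workspace.lints.clippy]
# 1. Pedantic by default (priority -1 lets individual lints override the group)
pedantic = { level = "warn", priority = -1 }
# 2. Panic safety
unwrap_used = "deny"
expect_used = "deny"
panic = "deny"
# 3. No debug code
dbg_macro = "deny"
todo = "deny"
unimplemented = "deny"
# 4. No stdout in libraries
print_stdout = "deny"
print_stderr = "deny"
```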

Adding Dependencies

Always use cargo add:

# Add to workspace
cargo add <crate> --workspace

# Add to specific crate
cargo add <crate> -p <crate-name>

Publishing

Publishing to crates.io

Each library crate is published separately. Publish in dependency order (bob-core first, since the other crates depend on it) so that each crate's dependencies already exist on crates.io:

# Publish bob-core
cargo publish -p bob-core

# Publish bob-runtime
cargo publish -p bob-runtime

# Publish bob-adapters
cargo publish -p bob-adapters

Documentation

Documentation is generated automatically on docs.rs when the crates are published to crates.io.

Contributing

Contributions are welcome! Please read our contributing guidelines before submitting PRs.

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Run just ci to ensure all checks pass
  5. Submit a pull request

Roadmap

  • Persistent session storage (SQLite, PostgreSQL)
  • Web UI for agent interaction
  • Multi-agent collaboration
  • Custom skill marketplace
  • Agent memory and context management
  • Tool composition and chaining
  • More MCP server integrations

License

Licensed under the Apache License, Version 2.0. See LICENSE.md for details.


Note: This project is in active development. APIs may change between versions.