
PromptLayer

@theashbhat

npx machina-cli add skill @theashbhat/promptlayer --openclaw
Files (1): SKILL.md (1.5 KB)

PromptLayer

Interact with PromptLayer's REST API for prompt management, logging, evals, and observability.

Setup

Set the PROMPTLAYER_API_KEY environment variable. Run scripts/setup.sh to configure it, or add it to ~/.openclaw/.env.

CLI — scripts/pl.sh

# Prompt Templates
pl.sh templates list [--name <filter>] [--label <label>]
pl.sh templates get <name|id> [--label prod] [--version 3]
pl.sh templates publish              # JSON on stdin
pl.sh templates labels               # List release labels

# Log an LLM request (JSON on stdin)
echo '{"provider":"openai","model":"gpt-4o",...}' | pl.sh log

# Tracking
pl.sh track-prompt <req_id> <prompt_name> [--version 1] [--vars '{}']
pl.sh track-score <req_id> <score_0_100> [--name accuracy]
pl.sh track-metadata <req_id> --json '{"user_id":"abc"}'
pl.sh track-group <req_id> <group_id>

# Datasets & Evaluations
pl.sh datasets list [--name <filter>]
pl.sh evals list [--name <filter>]
pl.sh evals run <eval_id>
pl.sh evals get <eval_id>

# Agents
pl.sh agents list
pl.sh agents run <agent_id> --input '{"key":"val"}'
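The log subcommand above takes its payload on stdin. A minimal sketch of building that payload with jq, using only the field names shown in the example (the full schema lives in references/api.md):

```shell
# Build a minimal log payload for `pl.sh log`. Field names are taken from the
# CLI example above; extend per references/api.md.
payload=$(jq -n \
  --arg provider openai \
  --arg model gpt-4o \
  '{provider: $provider, model: $model,
    input: {messages: [{role: "user", content: "Hello"}]}}')
echo "$payload"
# With PROMPTLAYER_API_KEY set, pipe it into the CLI:
#   echo "$payload" | pl.sh log
```

Using jq -n with --arg keeps the payload properly quoted and valid JSON, which is safer than hand-assembling strings.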

API Path Groups

  • /prompt-templates — registry (list, get)
  • /rest/ — tracking, logging, publishing
  • /api/public/v2/ — datasets, evaluations
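For calling these path groups directly instead of through pl.sh, a sketch of composing full endpoint URLs. The base URL and header name here are assumptions; confirm both against references/api.md:

```shell
# Assumed base URL for the PromptLayer API; verify in references/api.md.
BASE="https://api.promptlayer.com"
# Registry group:
url="$BASE/prompt-templates"
echo "$url"
# With a key exported, a hypothetical direct call (auth header name assumed):
#   curl -s -H "X-API-KEY: $PROMPTLAYER_API_KEY" "$url"
```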

Full reference: references/api.md

Source

git clone https://clawhub.ai/theashbhat/promptlayer

Overview

PromptLayer provides a REST API and a CLI to manage prompt templates, log LLM requests, and run evaluations. This enables prompt versioning, A/B testing, observability, and reproducible evaluation pipelines across datasets, as well as PromptLayer agents and workflows.

How This Skill Works

Configure PROMPTLAYER_API_KEY in your environment. Use the pl.sh CLI to publish templates, log each LLM request, and attach performance scores or metadata. The API paths cover /prompt-templates, /rest/ for logging and tracking, and /api/public/v2/ for datasets and evaluations, enabling end-to-end prompt management and observability.
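Since every command depends on PROMPTLAYER_API_KEY being set, a small guard (a sketch, not part of the skill) can fail fast before any pl.sh call:

```shell
# Fail fast if the API key is missing, before invoking pl.sh.
require_key() {
  if [ -z "${PROMPTLAYER_API_KEY:-}" ]; then
    echo "PROMPTLAYER_API_KEY is not set" >&2
    return 1
  fi
}

# Example: gate a CLI call on the key being present.
require_key && echo "key present" || echo "set the key first"
```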

When to Use It

  • When versioning prompts and running A/B tests to identify the best-performing version.
  • When you need consistent LLM observability and logging across prompts and requests.
  • When building prompt evaluation pipelines that tie prompts to evaluation results.
  • When managing datasets and evaluations linked to prompts for tracked experiments.
  • When orchestrating PromptLayer agents/workflows for automated prompt routing or scoring.

Quick Start

  1. Set up authentication by exporting PROMPTLAYER_API_KEY or running scripts/setup.sh to configure your env file.
  2. Publish or update a prompt template with pl.sh templates publish and manage versions/labels.
  3. Log a request with pl.sh log and attach scoring/metadata using pl.sh track-score and pl.sh track-metadata.
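The track-score step expects a score between 0 and 100 (per the `<score_0_100>` argument in the CLI reference). A small validation helper, as a sketch, before calling pl.sh track-score:

```shell
# Return success only for an integer score in the 0-100 range expected by
# `pl.sh track-score <req_id> <score_0_100>`.
valid_score() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or contains a non-digit
  esac
  [ "$1" -ge 0 ] && [ "$1" -le 100 ]
}

valid_score 87 && echo "score ok"
# Hypothetical usage with an existing request id:
#   valid_score "$score" && pl.sh track-score "$req_id" "$score" --name accuracy
```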

Best Practices

  • Securely store PROMPTLAYER_API_KEY and rotate credentials regularly.
  • Publish prompts with explicit version numbers and labels to enable safe rollbacks.
  • Log every LLM request with pl.sh log using standardized JSON payloads.
  • Use pl.sh track-prompt, track-score, and track-metadata to connect requests to results.
  • Organize templates, datasets, and evals with clear naming conventions and labels.
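The "standardized JSON payloads" practice can be enforced with a pre-flight check: validate that a payload parses as JSON and carries the fields shown in the log example before piping it to pl.sh log. A sketch using jq:

```shell
# Reject malformed or incomplete payloads before they reach `pl.sh log`.
payload='{"provider":"openai","model":"gpt-4o","input":{"messages":[]}}'

# `jq -e` exits non-zero if the payload is invalid JSON or a field is missing.
if jq -e '.provider and .model' >/dev/null <<<"$payload"; then
  echo "payload ok"
fi
```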

Example Use Cases

  • Publish a new prompt template and run an A/B test with two versions using PromptLayer evals.
  • Log an OpenAI request via pl.sh log, capturing provider, model, and input payload.
  • After an evaluation, track the score with pl.sh track-score and review results with pl.sh evals get.
  • Associate a user query with a dataset and track-prompt for end-to-end observability.
  • Run an agent workflow that uses PromptLayer to route prompts and log outcomes.
