Writing your first OpenClaw skill
An OpenClaw skill is a capability module the agent can invoke (e.g., run a script, call an API, or manage files). Your first skill needs a clear trigger (intent), input validation, a call to a tool or function, and a short response. This guide walks US developers through the structure, a minimal example, and how to test and ship it.
If you're in the US and want OpenClaw to do something it doesn't do out of the box, you write a skill. A skill is a piece of logic the agent calls when the user's intent matches (e.g., "get the weather" or "run the daily report"). This post gives you the mental model, a minimal skill structure, and steps to implement and test your first one. Once skills are live, measuring which ones get used and how often they succeed helps you prioritize: US teams often use SingleAnalytics to track agent events and tie them to product and revenue in one place.
What is an OpenClaw skill?
A skill is a discrete capability that:
- Has a name and a description the agent uses to decide when to call it (e.g., "get_weather", "Returns current weather for a given city").
- Declares inputs (e.g., `city`, `units`) so the agent can extract them from the user's message or ask for missing ones.
- Executes: runs a function, script, or API call.
- Returns a result the agent can turn into a natural-language reply (e.g., "72°F and sunny in San Francisco").
Skills are the main way to extend OpenClaw. The core handles routing and conversation; skills handle the "do something" part. Exact APIs depend on the OpenClaw version (e.g., plugin format, YAML vs code-only); the ideas below apply across implementations.
What you need before you start
- OpenClaw installed and running (see our installing OpenClaw step-by-step guide).
- A clear use case: e.g., "When I say 'run standup report,' generate the report and post a summary." Pick something small for your first skill.
- The skill interface docs for your OpenClaw version (registration, signature, how the agent discovers and invokes skills).
- An environment where you can run and test (local or dev server in the US).
Step 1 – Define the intent and inputs
Decide the trigger (when the skill runs) and the inputs (what the skill needs).
Example: "Get the time in a city."
- Intent: User asks for the time in a place (e.g., "What time is it in Tokyo?").
- Inputs: `city` or `timezone` (and optionally a name for the skill, e.g., `get_time`).
Write a short description for the agent: "Returns the current time in a given city or timezone." The agent uses this to choose your skill when the user's message matches.
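It can help to jot this metadata down before writing any logic. A sketch of the intent and inputs as a plain Python dict (the field names here are illustrative, not a fixed OpenClaw schema):

```python
# Hypothetical skill descriptor; field names are illustrative,
# not a specific OpenClaw manifest format.
GET_TIME_SKILL = {
    "name": "get_time",
    "description": "Returns the current time in a given city or timezone.",
    "inputs": [
        {"name": "city", "type": "string", "required": True},
        {"name": "timezone", "type": "string", "required": False},
    ],
}
```

Whatever format your version uses, the same three pieces (name, description, inputs) are what the agent matches against.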
Step 2 – Create the skill module
Create a new file or folder for the skill (e.g., skills/get_time/skill.py or the structure your OpenClaw version uses). A minimal structure:
```python
# skills/get_time/skill.py (conceptual; adapt to your OpenClaw version)
from datetime import datetime

import pytz  # third-party; pip install pytz (or use stdlib zoneinfo on 3.9+)


def get_time(city: str) -> str:
    """Returns the current time in the given city."""
    # 1. Resolve city to timezone (e.g., use a library or API)
    # 2. Get the current time in that timezone
    # 3. Return a short string for the agent to relay
    tz = pytz.timezone("America/Los_Angeles")  # placeholder; use a real lookup
    now = datetime.now(tz)
    return f"It's {now.strftime('%I:%M %p')} in {city}."
```
Your runtime may expect a class, a decorator, or a YAML manifest: check the project’s skill docs. The important part: one clear entry point, typed inputs, and a string (or structured) return the agent can use.
Step 3 – Register the skill with OpenClaw
Register so the agent can discover and call it. That might mean:
- Adding the skill to a config file or skill directory, or
- Calling a registration API at startup.
Example (conceptual):
```yaml
# skills/get_time/manifest.yaml
name: get_time
description: Returns the current time in a given city or timezone.
inputs:
  - name: city
    type: string
    required: true
entry: get_time.skill.get_time
```
Again, follow your OpenClaw version’s format. Once registered, the agent can match user messages like "What time is it in Austin?" to get_time and pass city="Austin".
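If your version uses code-based registration instead of a manifest, startup registration might look roughly like this; the `Agent` class and `register_skill` method here are stand-ins for illustration, not a real OpenClaw API:

```python
class Agent:
    """Stand-in for the OpenClaw agent; the real registration API
    depends on your OpenClaw version."""

    def __init__(self):
        self.skills = {}

    def register_skill(self, name, description, inputs, handler):
        # The agent uses name/description for routing, inputs for
        # extraction, and handler for execution.
        self.skills[name] = {
            "description": description,
            "inputs": inputs,
            "handler": handler,
        }


def get_time(city: str) -> str:  # placeholder handler for the sketch
    return f"(time for {city})"


agent = Agent()
agent.register_skill(
    name="get_time",
    description="Returns the current time in a given city or timezone.",
    inputs={"city": {"type": "string", "required": True}},
    handler=get_time,
)
```

Either way, the outcome is the same: the agent knows the skill's name, what it does, what it needs, and what to call.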
Step 4 – Handle errors and edge cases
- Missing input: If `city` is missing, return a clear message: "I need a city name. Which city?"
- Invalid input: If the city is unknown, don’t crash: return "I couldn’t find that city. Try a well-known city name."
- External failures: If you call an API and it fails, catch the error and return a user-friendly line like "Time service is temporarily unavailable."
The agent will relay your return value; keeping it short and clear improves the experience. For US users running many skills, emitting a `skill_completed` or `skill_failed` event (with skill name and maybe duration) helps you monitor reliability: tools like SingleAnalytics can ingest those events so you see which skills are used and how often they fail.
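The three edge cases above can be sketched as a thin wrapper around the skill; the `get_time` stub here is minimal and illustrative, so the block runs standalone:

```python
KNOWN = {"tokyo": "Asia/Tokyo"}  # illustrative lookup table


def get_time(city: str) -> str:
    tz = KNOWN[city.strip().lower()]  # raises KeyError if city is unknown
    return f"(current time in {city}, tz {tz})"


def safe_get_time(city):
    """Applies the edge cases above: missing input, unknown city, and
    external failure each become a short user-facing line."""
    if not city or not city.strip():
        return "I need a city name. Which city?"
    try:
        return get_time(city)
    except KeyError:
        return "I couldn't find that city. Try a well-known city name."
    except Exception:
        # Catch-all for API/network errors so the agent never sees a crash
        return "Time service is temporarily unavailable."
```

Every branch returns a string the agent can relay as-is, which is the property you want.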
Step 5 – Test the skill
- Unit test: Call the skill function directly with valid and invalid inputs. Assert on the return value and that no uncaught exceptions escape.
- In-agent test: In chat, send messages that should trigger the skill ("What time is it in New York?") and confirm the reply is correct. Try edge cases: unknown city, empty input, weird formatting.
- Logs: Confirm the agent is routing to your skill (check OpenClaw logs). If it’s not, improve the skill description or the agent’s intent model so it reliably matches.
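A minimal unit-test sketch for the first bullet, using a stubbed `get_time` so it runs standalone (swap in your real skill function):

```python
def get_time(city: str) -> str:
    """Minimal stub standing in for the real skill."""
    if city.strip().lower() not in {"tokyo", "new york"}:
        return "I couldn't find that city. Try a well-known city name."
    return f"It's 09:00 AM in {city}."


def test_known_city():
    # Valid input: reply should mention the city
    assert "Tokyo" in get_time("Tokyo")


def test_unknown_city():
    # Invalid input: a clear message, not an exception
    assert get_time("Atlantis").startswith("I couldn")


test_known_city()
test_unknown_city()
print("all tests passed")
```

The in-agent test then checks the other half: that routing actually reaches this function for the right messages.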
Step 6 – Optional but useful
- Idempotency: If the skill does something side-effectful (e.g., create a task), design so duplicate triggers don’t create duplicates (e.g., idempotency key or "already exists" check).
- Rate limiting: For skills that call external APIs, respect rate limits and cache when appropriate.
- Observability: Emit events (e.g., `skill_invoked`, `skill_completed`, `skill_failed`) with skill name and duration. US teams that centralize agent and product analytics in SingleAnalytics use this to see adoption and success rate per skill and to tie usage to business outcomes.
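The observability bullet can be sketched as a wrapper around any skill call; `emit` here stands in for whatever sends events to your analytics pipeline (e.g., SingleAnalytics ingestion), and the event names follow the ones above:

```python
import time


def run_with_events(skill_name, handler, emit, **inputs):
    """Runs a skill handler and emits lifecycle events around it.

    `emit(event_name, **properties)` is an assumed callable that forwards
    events to your analytics pipeline.
    """
    emit("skill_invoked", skill=skill_name)
    start = time.monotonic()
    try:
        result = handler(**inputs)
    except Exception:
        emit("skill_failed", skill=skill_name,
             duration_ms=int((time.monotonic() - start) * 1000))
        raise
    emit("skill_completed", skill=skill_name,
         duration_ms=int((time.monotonic() - start) * 1000))
    return result
```

Wrapping every skill the same way gives you per-skill usage and failure rates for free.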
Summary
Writing your first OpenClaw skill means: define intent and inputs, implement one entry point that does the work and returns a short result, register the skill so the agent can find it, handle errors and edge cases, and test in isolation and in chat. Once you have one skill working, you can add more and reuse the same pattern. To see which skills drive the most value and where they fail, send agent and skill events to a unified analytics platform. SingleAnalytics gives US teams one place for that so you can improve what matters.