# TimeCamp Skill

by @kamil-rudnicki

```bash
npx machina-cli add skill @kamil-rudnicki/timecamp --openclaw
```
Two tools: CLI for quick personal actions (timer, entries CRUD) and Data Pipeline for analytics/reports.
## Bootstrap (clone if missing)
Before using either tool:
- Ask the user where repos should live (default: `~/utils`, but any location is valid).
- If repos are missing in the chosen location, ask for confirmation to clone.
Example flow and commands:

```bash
# Ask first:
# "I don't see TimeCamp repos locally. Clone to ~/utils, or use a different location?"
REPOS_DIR=~/utils   # replace if the user picked a different path
mkdir -p "$REPOS_DIR"
if [ ! -d "$REPOS_DIR/timecamp-cli/.git" ]; then
  git clone https://github.com/timecamp-org/timecamp-cli.git "$REPOS_DIR/timecamp-cli"
fi
if [ ! -d "$REPOS_DIR/good-enough-timecamp-data-pipeline/.git" ]; then
  git clone https://github.com/timecamp-org/good-enough-timecamp-data-pipeline.git "$REPOS_DIR/good-enough-timecamp-data-pipeline"
fi
```
## Tool 1: TimeCamp CLI (personal actions)

The CLI lives at `~/utils/timecamp-cli` and is installed globally via `npm link`.
| Intent | Command |
|---|---|
| Current timer status | timecamp status |
| Start timer | timecamp start --task "Project A" --note "description" |
| Stop timer | timecamp stop |
| Today's entries | timecamp entries |
| Entries by date | timecamp entries --date 2026-02-04 |
| Entries date range | timecamp entries --from 2026-02-01 --to 2026-02-04 |
| All users entries | timecamp entries --from 2026-02-01 --to 2026-02-04 --all-users |
| Add entry | timecamp add-entry --date 2026-02-04 --start 09:00 --end 10:30 --duration 5400 --task "Project A" --note "description" |
| Update entry | timecamp update-entry --id 101234 --note "Updated" --duration 3600 |
| Remove entry | timecamp remove-entry --id 101234 |
| List tasks | timecamp tasks |
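The `add-entry` example above passes both a start/end pair and `--duration` in seconds. The two must agree, so it can help to compute the duration from the times rather than by hand. A minimal sketch (the helper name is mine, not part of the CLI):

```python
from datetime import datetime

def duration_seconds(start: str, end: str) -> int:
    """Seconds between two HH:MM times on the same day, for --duration."""
    fmt = "%H:%M"
    return int((datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds())

print(duration_seconds("09:00", "10:30"))  # 5400, matching the add-entry row above
```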
## Tool 2: Data Pipeline (analytics & reports)

The Python pipeline lives at `~/utils/good-enough-timecamp-data-pipeline`. Use it for all analytics, reports, and bulk data fetching.
### Run command

```bash
cd ~/utils/good-enough-timecamp-data-pipeline && \
uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from YYYY-MM-DD --to YYYY-MM-DD \
  --datasets DATASETS \
  --format jsonl \
  --output ~/data/timecamp-data-pipeline
```
### Available datasets

| Dataset | Description |
|---|---|
| entries | Time entries with project/task details |
| tasks | Projects & tasks hierarchy with breadcrumb paths |
| computer_activities | Desktop app tracking data |
| users | User details with group info and enabled status |
| application_names | Application lookup table (ID → name, category) |
Formats: `jsonl`
### Output structure

Files land in `~/data/timecamp-data-pipeline/timecamp/*.jsonl`.
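The JSONL files can also be read directly when DuckDB is not available. A hedged sketch of the "hours per person" aggregation in plain Python, assuming the `user_name` and `duration` fields used in the DuckDB queries later in this document (`duration` is seconds, possibly stored as a string):

```python
import json
from collections import defaultdict
from pathlib import Path

def hours_per_user(data_dir: str) -> dict[str, float]:
    """Sum entry durations (seconds) per user across entries*.jsonl files."""
    totals: dict[str, float] = defaultdict(float)
    for path in Path(data_dir).glob("entries*.jsonl"):
        with open(path) as f:
            for line in f:
                row = json.loads(line)
                totals[row["user_name"]] += float(row["duration"])
    # Convert seconds to hours, rounded to one decimal
    return {user: round(secs / 3600.0, 1) for user, secs in totals.items()}
```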
### Examples

```bash
# Entries, users, and tasks for a short range
cd ~/utils/good-enough-timecamp-data-pipeline && \
uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-02-11 --to 2026-02-14 \
  --datasets entries,users,tasks \
  --format jsonl --output ~/data/timecamp-data-pipeline

# Computer activity data plus the application lookup table
cd ~/utils/good-enough-timecamp-data-pipeline && \
uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-01-01 --to 2026-02-14 \
  --datasets computer_activities,users,application_names \
  --format jsonl --output ~/data/timecamp-data-pipeline

# All datasets in one fetch
cd ~/utils/good-enough-timecamp-data-pipeline && \
uv run --with-requirements requirements.txt dlt_fetch_timecamp.py \
  --from 2026-01-01 --to 2026-02-14 \
  --datasets computer_activities,users,application_names,entries,tasks \
  --format jsonl --output ~/data/timecamp-data-pipeline
```
## Analytics with DuckDB
Query the persistent data store directly.
```bash
DUCKDB=~/.duckdb/cli/latest/duckdb
DATA=~/data/timecamp-data-pipeline/timecamp

# Hours per person
$DUCKDB -c "
SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
FROM read_json_auto('$DATA/entries*.jsonl')
GROUP BY user_name ORDER BY hours DESC
"

# Hours per person per day
$DUCKDB -c "
SELECT user_name, date, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
FROM read_json_auto('$DATA/entries*.jsonl')
GROUP BY user_name, date ORDER BY user_name, date
"

# Top applications by time (join activities with app names)
$DUCKDB -c "
SELECT COALESCE(an.full_name, an.application_name, an.app_name, 'Unknown') AS app,
       round(sum(ca.time_span)/3600.0, 2) AS hours
FROM read_json_auto('$DATA/computer_activities*.jsonl') ca
LEFT JOIN read_json_auto('$DATA/application_names*.jsonl') an
  ON ca.application_id = an.application_id
GROUP BY 1 ORDER BY hours DESC LIMIT 20
"

# People who logged < 30h in a given week
$DUCKDB -c "
SELECT user_name, round(sum(TRY_CAST(duration AS DOUBLE))/3600.0, 1) AS hours
FROM read_json_auto('$DATA/entries*.jsonl')
WHERE date BETWEEN '2026-02-03' AND '2026-02-07'
GROUP BY user_name
HAVING sum(TRY_CAST(duration AS DOUBLE))/3600.0 < 30
ORDER BY hours
"
```
## Pattern

- Check the existing data range with DuckDB. If data is missing, fetch it with the pipeline; if it is already there, use it.
- Query with DuckDB:

```bash
$DUCKDB -c "SELECT ... FROM read_json_auto('$DATA/entries*.jsonl') ..."
```
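The check-then-fetch decision can be sketched in Python as well. This is an illustrative sketch, not pipeline code: it assumes the `date` field seen in the entries queries, and the function names are mine.

```python
import json
from pathlib import Path

def covered_dates(data_dir: str) -> set[str]:
    """Dates already present in the fetched entries*.jsonl files."""
    dates: set[str] = set()
    for path in Path(data_dir).glob("entries*.jsonl"):
        with open(path) as f:
            for line in f:
                dates.add(json.loads(line)["date"])
    return dates

def missing_dates(data_dir: str, wanted: list[str]) -> list[str]:
    """Requested dates not yet on disk — what the pipeline still needs to fetch."""
    return sorted(set(wanted) - covered_dates(data_dir))
```

If `missing_dates` returns an empty list, skip the pipeline run and query the existing files directly.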
## Important Notes

- `duration` (entries) is in seconds (3600 = 1h)
- `time_span` (activities) is also in seconds
- `applications_cache.json` in the pipeline dir caches app name lookups
- For JSONL output, the DuckDB glob `*.jsonl` catches all files for all datasets
## Safety
- Confirm before adding, updating, or removing entries
- Show the command before executing modifications
- When stopping a timer, show what was running first
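The rules above (show the command, then confirm before modifying anything) can be sketched as a thin wrapper. The function name and prompt wording are mine; only the pattern is from this document.

```python
import subprocess

def confirm_and_run(cmd: list[str], ask=input, runner=subprocess.run) -> bool:
    """Show a destructive command and only run it after explicit confirmation."""
    print("About to run:", " ".join(cmd))
    if ask("Proceed? [y/N] ").strip().lower() != "y":
        print("Aborted.")
        return False
    runner(cmd)
    return True
```

For example, `confirm_and_run(["timecamp", "remove-entry", "--id", "101234"])` prints the exact command and deletes the entry only on an explicit "y".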
## Overview

The TimeCamp skill provides a CLI for personal time tracking and a Python data pipeline for analytics. It lets you start and stop timers, create, update, and delete time entries, and list tasks or entries by date range, while the data pipeline fetches entries, tasks, and activity data for reports.
## How This Skill Works

Two tools drive TimeCamp: a CLI at `~/utils/timecamp-cli`, installed via `npm link` for quick personal actions (timer, entries CRUD), and a Data Pipeline at `~/utils/good-enough-timecamp-data-pipeline` for analytics. Use the CLI for `status`, `start`, `stop`, and entry management; use the data pipeline to pull datasets with `dlt_fetch_timecamp.py` for a given date range, writing JSONL files to `~/data/timecamp-data-pipeline`.
## When to Use It
- When you need to start a timer for a specific project with a descriptive note.
- When you want to view today’s entries or a particular date range to review work.
- When you need to add, update, or remove a time entry.
- When you want to generate analytics or bulk reports using the data pipeline.
- When you need to inspect tasks, breadcrumbs, or computer activities for dashboards.
## Quick Start
- Step 1: Bootstrap TimeCamp repos in your chosen location (default ~/utils) and clone timecamp-cli and good-enough-timecamp-data-pipeline if missing.
- Step 2: Start a timer with timecamp start --task "Project A" --note "Working on feature X".
- Step 3: Run analytics with the data pipeline, e.g., navigate to ~/utils/good-enough-timecamp-data-pipeline and run the dlt_fetch_timecamp.py script with --from, --to, and --datasets, outputting to ~/data/timecamp-data-pipeline.
## Best Practices
- Ensure `TIMECAMP_API_KEY` is set in your environment for all operations.
- Use descriptive task names and notes to keep entries meaningful.
- Use consistent date formats with --date or --from/--to for accuracy.
- Regularly fetch data with the data pipeline to keep analytics up to date.
- Check timer status before starting a new timer to avoid duplicates.
## Example Use Cases
- Start a timer for Project A with a short note: `timecamp start --task "Project A" --note "Working on feature X"`
- Stop the current timer: timecamp stop
- Add a 2026-02-04 entry from 09:00 to 10:30 for Project A with a note
- List entries for a specific date: timecamp entries --date 2026-02-04
- Fetch analytics data for a range: run the data pipeline with --from 2026-02-01 --to 2026-02-14 and datasets entries,tasks