
AI Image Editing

npx machina-cli add skill omer-metin/skills-for-antigravity/ai-image-editing --openclaw


Identity

Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

  • For Creation: Always consult references/patterns.md. This file dictates how things should be built. Ignore generic approaches if a specific pattern exists here.
  • For Diagnosis: Always consult references/sharp_edges.md. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
  • For Review: Always consult references/validations.md. This contains the strict rules and constraints. Use it to validate user inputs objectively.

Note: If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Source

git clone https://github.com/omer-metin/skills-for-antigravity
# skill file: skills/ai-image-editing/SKILL.md

Overview

AI Image Editing enables expert AI-powered edits such as inpainting, outpainting, and ControlNet-guided corrections. It also covers image-to-image workflows and API integrations with Replicate, Stability AI, and Fal to automate and scale edits.

How This Skill Works

Start with a clear edit goal; pick a method such as inpainting, outpainting, ControlNet conditioning, or image-to-image; then configure prompts, masks, and model settings. Run the workflow via APIs or local tooling such as ComfyUI to generate the edit, and iterate until the result matches the objective.
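Most inpainting tools follow the same mask convention: a grayscale image where white (255) marks the region to repaint and black (0) marks pixels to preserve. As a minimal illustration (pure Python, no imaging library; the function name and rectangle-only shape are simplifications for this sketch), a rectangular mask can be built like this:

```python
def make_rect_mask(width, height, box):
    """Build a grayscale inpainting mask as a 2D list of pixel values:
    255 inside `box` (the region the model may repaint), 0 outside
    (pixels to preserve). box = (left, top, right, bottom), exclusive
    on the right/bottom edges, in image coordinates (x right, y down)."""
    left, top, right, bottom = box
    return [
        [255 if (left <= x < right and top <= y < bottom) else 0
         for x in range(width)]
        for y in range(height)
    ]

# Mark a 4x3 region of an 8x6 image as editable.
mask = make_rect_mask(8, 6, (2, 1, 6, 4))
```

In practice you would rasterize this into a PNG (e.g. with Pillow) before sending it alongside the source image; some APIs invert the convention, so always check whether white or black means "edit here".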

When to Use It

  • Remove an object from a photo when clean removal and surrounding consistency matter
  • Extend a scene beyond its current edges for banners or posters
  • Apply precise edits using ControlNet conditioning to preserve geometry and structure
  • Automate image edits in a pipeline by calling Replicate or Stability AI APIs
  • Perform SDXL-based edits or Flux inpainting to handle high-resolution portraits and complex textures

Quick Start

  1. Define the edit goal and choose a method (inpainting, outpainting, ControlNet, image-to-image)
  2. Prepare the source image, create a mask if needed, and craft conditioning prompts
  3. Run the edit via Replicate, Stability AI, or ComfyUI and iterate to reach the target
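The steps above reduce to assembling one request per edit. As a hedged sketch (the field names below are illustrative, not any specific model's schema; hosted inpainting models on Replicate or Stability AI each define their own input fields, so consult the target model's documentation), the payload for step 3 might be built like this:

```python
def build_inpaint_request(image_url, mask_url, prompt, seed=None, strength=0.8):
    """Assemble an input payload for a hosted inpainting model.
    NOTE: field names ("image", "mask", "prompt", "strength", "seed")
    are assumptions for illustration; real models define their own schema."""
    payload = {
        "image": image_url,       # source image to edit
        "mask": mask_url,         # white = repaint, black = preserve (check per model)
        "prompt": prompt,         # what should appear in the masked region
        "strength": strength,     # how far the model may deviate from the source
    }
    if seed is not None:
        payload["seed"] = seed    # fix the seed to make iterations reproducible
    return payload

payload = build_inpaint_request(
    "https://example.com/photo.png",
    "https://example.com/mask.png",
    "empty wooden table, matching ambient light",
    seed=42,
)
```

The payload would then be POSTed to the provider's prediction endpoint; pinning the seed lets you change one variable (prompt, mask, strength) at a time while iterating.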

Best Practices

  • Define the exact edit objective, target area, and acceptable artifacts before editing
  • Use masks or regions of interest to keep edits non-destructive
  • Leverage ControlNet conditioning for geometry and consistency; test multiple prompts and seeds
  • Check lighting, color, and shadows for realism and continuity across the edit
  • Version control prompts, seeds, and model settings; document decisions for reproducibility
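The last practice, versioning prompts, seeds, and settings, can be as simple as logging one JSON line per edit. A minimal sketch (the helper name and entry fields are illustrative, not part of the skill):

```python
import json

def edit_manifest_entry(prompt, seed, model, settings):
    """Serialize one edit's parameters as a canonical JSON line (sorted keys)
    so runs can be diffed, committed to version control, and reproduced.
    Field names here are illustrative choices, not a fixed schema."""
    return json.dumps(
        {"model": model, "prompt": prompt, "seed": seed, "settings": settings},
        sort_keys=True,
    )

entry = edit_manifest_entry(
    prompt="remove the coffee cup, keep tabletop texture",
    seed=42,
    model="sdxl-inpaint",          # hypothetical model identifier
    settings={"strength": 0.8},
)
```

Appending each entry to a `.jsonl` file alongside the output images gives you a reproducible record of exactly which parameters produced which result.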

Example Use Cases

  • Remove an unwanted object from a product photo while preserving original lighting
  • Extend a landscape image to fit a wide social post or banner
  • Fill in missing areas in a damaged image using inpainting with a controlled mask
  • Edit a portrait using SDXL-based editing to adjust features while maintaining natural skin tones
  • Automate batch edits for a catalog using Replicate or Stability AI APIs
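For the catalog use case, batch automation is mostly bookkeeping: one job per image, each with a deterministic seed so any single result can be regenerated. A sketch under those assumptions (the function and job fields are hypothetical, not a provider API):

```python
def plan_batch(image_urls, prompt_template, base_seed=1234):
    """Plan deterministic batch edits: one job dict per source image,
    each with a derived seed so the whole catalog run is reproducible.
    Job fields are illustrative; adapt them to your provider's schema."""
    return [
        {
            "image": url,
            "prompt": prompt_template.format(index=i),
            "seed": base_seed + i,   # stable per-image seed for re-runs
        }
        for i, url in enumerate(image_urls)
    ]

jobs = plan_batch(["sku-001.png", "sku-002.png"], "clean studio product shot {index}")
```

Each planned job would then be submitted to Replicate or Stability AI; because seeds are derived from the index, re-running job *i* alone reproduces its original output.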

