legnext-midjourney
Generate professional AI images using Midjourney's capabilities through the Legnext API.
Setup
CRITICAL: This skill requires a Legnext API key. Before running, check if the user has configured their API key:
Check for existing configuration:
- Look for a `.env` file in the project directory or parent directories
- Check for `LEGNEXT_API_KEY=<key>` in the `.env` file
- Or check the environment variable: `echo $LEGNEXT_API_KEY`
If not found, inform the user they need to:
Option 1: Create a .env file (recommended)
# Create .env file in the project root
echo "LEGNEXT_API_KEY=your-api-key-here" > .env
Option 2: Set environment variable
export LEGNEXT_API_KEY=your-api-key-here
Get an API key from:
https://legnext.ai/app/api-keys
Verify API key:
python scripts/verify_api_key.py
The scripts will automatically detect the .env file and provide clear error messages if the API key is missing or invalid.
Quick Start
For simple image generation requests:
python scripts/generate_and_wait.py "a beautiful sunset over mountains --v 7 --ar 16:9"
This handles the complete workflow: submit task → poll status → return results.
Complete Workflow
1. Understand the User's Request
Identify what type of image the user wants:
- Subject matter (people, landscapes, objects, abstract)
- Style (photographic, illustrated, artistic)
- Mood and atmosphere
- Technical requirements (aspect ratio, quality)
2. Craft the Prompt
Transform the user's natural language request into an effective Midjourney prompt.
Use the 7-Element Framework for systematic prompts:
- Subject, Medium, Environment, Lighting, Color, Mood, Composition
Quick structure:
[Subject] [Description] [Environment] [Lighting] [Style] [Parameters]
Example transformations:
User request: "I need a professional headshot"
→ Prompt: professional headshot, studio lighting, neutral background, sharp focus, 85mm lens --ar 2:3 --style raw
User request: "Create a cyberpunk city scene"
→ Prompt: cyberpunk city at night, neon lights, flying cars, rain-soaked streets, cinematic, blade runner aesthetic --ar 16:9
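The transformations above follow a fixed order: front-loaded elements joined by commas, parameters last. A minimal helper to assemble prompts this way might look like the following (the function name and keyword arguments are illustrative, not part of the skill):

```python
def build_prompt(subject, medium="", environment="", lighting="",
                 color="", mood="", composition="", params=""):
    """Assemble a Midjourney prompt from the 7-Element Framework.

    Elements are front-loaded in framework order; empty elements are
    skipped, and parameters (e.g. "--ar 16:9 --v 7") go last.
    """
    elements = [subject, medium, environment, lighting, color, mood, composition]
    prompt = ", ".join(e for e in elements if e)
    return f"{prompt} {params}".strip() if params else prompt


# e.g. build_prompt("cyberpunk city at night", environment="rain-soaked streets",
#                   lighting="neon lights", mood="cinematic", params="--ar 16:9 --v 7")
```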
Key principles:
- Be specific about what you want to SEE (not abstract concepts)
- Front-load important elements
- Trust V7's default quality — avoid junk words (4K, HDR, award-winning)
- Use `--style raw` for photorealism
For the complete 7-Element Framework and advanced techniques, see references/prompt_engineering.md.
For photography terminology (camera, lighting, film stocks), see references/photography.md.
3. Add Midjourney Parameters
Append parameters to optimize the generation:
Common parameters:
- `--v 7` - Use Midjourney v7 (recommended)
- `--ar 16:9` - Aspect ratio (1:1, 16:9, 9:16, 3:2, etc.)
- `--s 500` - Stylization level (0-1000)
- `--q 1` - Quality (0.25, 0.5, 1, 2)
- `--chaos 20` - Variation amount (0-100)
Example:
a majestic lion in savanna --v 7 --ar 16:9 --s 500 --q 1
For complete parameter reference, consult references/midjourney_parameters.md.
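Because out-of-range parameters are a common cause of failed tasks, it can help to sanity-check them before submitting. A sketch using the ranges listed above (the function itself is illustrative, not part of the skill's scripts):

```python
# Valid ranges from the parameter list above.
PARAM_RANGES = {
    "--s": (0, 1000),      # stylization
    "--chaos": (0, 100),   # variation amount
}
VALID_QUALITY = {0.25, 0.5, 1, 2}


def check_params(prompt: str) -> list[str]:
    """Return a list of problems found in the prompt's parameter flags."""
    problems = []
    tokens = prompt.split()
    for i, tok in enumerate(tokens):
        if tok in PARAM_RANGES and i + 1 < len(tokens):
            lo, hi = PARAM_RANGES[tok]
            try:
                value = float(tokens[i + 1])
            except ValueError:
                problems.append(f"{tok} expects a number")
                continue
            if not lo <= value <= hi:
                problems.append(f"{tok} must be between {lo} and {hi}")
        elif tok == "--q" and i + 1 < len(tokens):
            if float(tokens[i + 1]) not in VALID_QUALITY:
                problems.append("--q must be one of 0.25, 0.5, 1, 2")
    return problems
```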
4. Submit and Monitor
Use the appropriate script based on your needs:
Option A: Complete workflow (recommended)
python scripts/generate_and_wait.py "your prompt here"
This automatically:
- Submits the task to Legnext API
- Polls every 5 seconds for status updates
- Returns final results when complete
- Times out after 5 minutes if not completed
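The poll-until-done loop described above could be implemented roughly as follows. Note that the base URL, endpoint path, and auth header here are assumptions for illustration; the real Legnext API details live in the skill's scripts and references/api_reference.md:

```python
import json
import time
import urllib.request

API_BASE = "https://legnext.ai/api"  # hypothetical base URL, for illustration only


def is_terminal(status: str) -> bool:
    """A task stops being polled once it reaches a terminal status."""
    return status in ("completed", "failed")


def poll_task(job_id: str, api_key: str,
              interval: float = 5.0, timeout: float = 300.0) -> dict:
    """Poll a task every `interval` seconds until it reaches a terminal
    status, raising TimeoutError after `timeout` seconds (5 min default)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{API_BASE}/task/{job_id}",  # hypothetical endpoint path
            headers={"Authorization": f"Bearer {api_key}"},
        )
        with urllib.request.urlopen(req) as resp:
            task = json.load(resp)
        if is_terminal(task.get("status", "")):
            return task
        time.sleep(interval)
    raise TimeoutError(f"Task {job_id} did not complete within {timeout:.0f} seconds")
```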
Option B: Manual control
Submit task:
python scripts/imagine.py "your prompt here"
# Returns: {"job_id": "uuid", "status": "pending"}
Check status:
python scripts/get_task.py <job_id>
# Returns: {"status": "processing|completed|failed", "output": {...}}
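Since both scripts print JSON, the manual submit/check cycle can be driven programmatically. A minimal sketch (the `run_script` helper is illustrative; it assumes the scripts emit a single JSON object on stdout, as shown above):

```python
import json
import subprocess
import sys


def run_script(*args: str) -> dict:
    """Run a script with the current interpreter and parse its JSON stdout."""
    result = subprocess.run(
        [sys.executable, *args], capture_output=True, text=True, check=True
    )
    return json.loads(result.stdout)


# task = run_script("scripts/imagine.py", "a majestic lion --v 7")
# status = run_script("scripts/get_task.py", task["job_id"])
```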
5. Handle Results
When the task completes successfully:
Output structure:
{
"job_id": "uuid",
"status": "completed",
"output": {
"images": ["url1", "url2", "url3", "url4"],
"seed": 123456789
}
}
Typically 4 image variations are generated.
Present the image URLs to the user. Images are:
- Accessible via HTTPS URLs
- Stored temporarily (download if needed for permanent storage)
- Usually high resolution
If the user wants variations of a specific result, note the seed value and include --seed <value> in future prompts for consistency.
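Because the image URLs are temporary, a completed task's output is worth persisting right away, along with the seed for later `--seed` reuse. A sketch that walks the output structure shown above (helper name and file naming are illustrative):

```python
import urllib.request
from pathlib import Path


def save_results(task: dict, out_dir: str = "images") -> list[Path]:
    """Download each image URL from a completed task and note the seed
    so later prompts can reuse it with --seed for consistency."""
    output = task["output"]
    dest = Path(out_dir)
    dest.mkdir(exist_ok=True)
    saved = []
    for i, url in enumerate(output["images"], start=1):
        path = dest / f"{task['job_id']}_{i}.png"
        urllib.request.urlretrieve(url, path)  # URLs are temporary; keep a copy
        saved.append(path)
    print(f"seed for --seed reuse: {output.get('seed')}")
    return saved
```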
Common Usage Patterns
Pattern 1: Single Image Request
User: "Generate a photo of a coffee shop interior"
Response workflow:
- Craft prompt: `cozy coffee shop interior, wooden furniture, plants, warm lighting, customers, rustic decor --ar 16:9 --v 7`
- Run: `python scripts/generate_and_wait.py "..."`
- Present the 4 generated image URLs
- Ask if they'd like variations or adjustments
Pattern 2: Specific Style Request
User: "Create a logo design for a tech startup"
Response workflow:
- Craft prompt: `minimalist logo design for tech startup, modern, clean lines, geometric, professional, simple icon --v 7 --ar 1:1 --s 250`
- Generate and present results
- If needed, iterate with `--seed` for consistency
Pattern 3: Batch Generation
User: "I need several variations of a mountain landscape"
Response workflow:
- Generate first set: `majestic mountain landscape, snow peaks, alpine lake, dramatic sky --ar 16:9 --v 7`
- Use different `--chaos` or `--seed` values for variations
- Or adjust the prompt slightly for different moods/times of day
Pattern 4: Iterative Refinement
User: "The image is too dark, can you make it brighter?"
Response workflow:
- Add lighting keywords: "bright, well-lit, sunny, vibrant colors"
- Keep successful elements from the original prompt
- Adjust `--s` (stylization) if needed
- Use the original `--seed` with modifications for consistency
Troubleshooting
API Key Issues
Error: LEGNEXT_API_KEY environment variable not set
→ User needs to set: export LEGNEXT_API_KEY=their_key
Task Timeout
Error: Task did not complete within 300 seconds
→ Complex prompts may take longer. Manually check with get_task.py <job_id>
Failed Tasks
Status: failed
→ Common causes:
- Invalid prompt (too short/long, forbidden content)
- Insufficient credits in the Legnext account
- Invalid parameter combinations
Check the error details in the response.
Generation Quality Issues
If results don't match expectations:
- Add more descriptive keywords
- Adjust the `--s` (stylization) parameter
- Try different versions (`--v 6` vs `--v 7`)
- Consult references/prompt_engineering.md for techniques
Advanced Features
Using Reference Images
Include image URLs in prompts:
https://example.com/reference.jpg a painting in this style --v 7
Negative Prompting
Exclude unwanted elements:
a bedroom interior --no clutter --no windows --v 7
Multi-Prompting
Weight different concepts:
cat::2 dog::1 playing together --v 7
This emphasizes "cat" twice as much as "dog".
Consistent Seeds
For variations of the same concept:
- Note the seed from a successful generation
- Use `--seed <value>` in subsequent prompts
- Modify other aspects while maintaining consistency
Reference Documentation
- Midjourney Parameters: See references/midjourney_parameters.md for the complete parameter list and usage
- Prompt Engineering: See references/prompt_engineering.md for advanced techniques and patterns
- API Reference: See references/api_reference.md for detailed API documentation
Scripts
This skill provides four Python scripts:
- verify_api_key.py - Verify API key
  - Checks API key validity and account balance
  - Usage: `python scripts/verify_api_key.py`
- generate_and_wait.py - Complete workflow (recommended)
  - Submits a task and waits for completion
  - Usage: `python scripts/generate_and_wait.py "prompt"`
- imagine.py - Submit generation task
  - Returns a job_id immediately
  - Usage: `python scripts/imagine.py "prompt"`
- get_task.py - Check task status
  - Queries any task by job_id
  - Usage: `python scripts/get_task.py <job_id>`
All scripts require LEGNEXT_API_KEY environment variable.
Best Practices
- Start with clear descriptions - More detail usually produces better results
- Use appropriate aspect ratios - Match the intended use case
- Iterate based on results - Refine prompts based on what works
- Save successful prompts - Build a library of effective patterns
- Mind the credits - Each generation consumes Legnext API points
- Download important images - Temporary storage may expire
Notes
- Generation typically takes 30-80 seconds
- Initial wait: 10s before first status check
- Polling: Every 5s, timeout after 5 minutes
- Each request usually generates 4 image variations
- Images are temporarily stored; download for permanent use
- API usage is tracked via points system in Legnext dashboard
Source
https://github.com/leggiemint/legnext-skills/blob/main/legnext-midjourney/SKILL.md
Overview
Legnext Midjourney lets you generate professional AI images by driving Midjourney through the Legnext API. It guides prompt engineering with a 7-Element Framework, handles API calls, polls for task status, and retrieves final results from Midjourney.
How This Skill Works
User requests are converted into structured Midjourney prompts using the 7-Element Framework (Subject, Medium, Environment, Lighting, Color, Mood, Composition). The prompt is augmented with Midjourney parameters (for example --v 7, --ar, --s) and submitted via the generate_and_wait script, which polls every 5 seconds and returns results when complete, timing out after 5 minutes.
When to Use It
- You want to generate a Midjourney-style image from a natural-language description.
- You’re creating marketing, product visuals, or social content using AI-generated imagery.
- You need guided prompt engineering to craft effective Midjourney prompts.
- You want an end-to-end workflow that submits, polls, and retrieves results automatically.
- You need precise control over output with specific Midjourney parameters (v, ar, s, q).
Quick Start
- Step 1: Ensure you have a Legnext API key configured in your environment (via .env or export LEGNEXT_API_KEY).
- Step 2: Try a sample prompt like: python scripts/generate_and_wait.py "a beautiful sunset over mountains --v 7 --ar 16:9"
- Step 3: The script automatically submits the task, polls every 5 seconds, and returns the final results (times out after 5 minutes).
Best Practices
- Ensure the Legnext API key is configured before running (ENV or .env).
- Apply the 7-Element Framework to clearly specify Subject, Medium, Environment, Lighting, Color, Mood, and Composition.
- Front-load important details and avoid vague terms to improve image fidelity.
- Append Midjourney parameters (--v 7, --ar, --s, --q) as needed for your use case.
- Iterate with variations by adjusting prompts and parameters, then review results before finalizing.
Example Use Cases
- Professional headshot: 'professional headshot, studio lighting, neutral background, sharp focus, 85mm lens --ar 2:3 --style raw --v 7'
- Cyberpunk city scene: 'cyberpunk city at night, neon lights, flying cars, rain-soaked streets, cinematic --ar 16:9 --v 7'
- Product concept art: 'sleek wearable tech concept, clean white background, soft lighting, photorealistic --ar 1:1 --v 7'
- Nature landscape: 'majestic waterfall in a lush forest, mist, golden hour lighting, painterly style --ar 16:9 --v 7'
- Abstract logo concept: 'minimalist logo, bold shapes, black and white, vector style --ar 1:1 --v 7'