
latchbio-integration

npx machina-cli add skill Microck/ordinary-claude-skills/latchbio-integration --openclaw

LatchBio Integration

Overview

Latch is a Python framework for building and deploying bioinformatics workflows as serverless pipelines. Built on Flyte, it lets you define workflows with @workflow/@task decorators, manage cloud data with LatchFile/LatchDir, configure compute resources, and integrate existing Nextflow and Snakemake pipelines.

Core Capabilities

The Latch platform provides four main areas of functionality:

1. Workflow Creation and Deployment

  • Define serverless workflows using Python decorators
  • Support for native Python, Nextflow, and Snakemake pipelines
  • Automatic containerization with Docker
  • Auto-generated no-code user interfaces
  • Version control and reproducibility

2. Data Management

  • Cloud storage abstractions (LatchFile, LatchDir)
  • Structured data organization with Registry (Projects → Tables → Records)
  • Type-safe data operations with links and enums
  • Automatic file transfer between local and cloud
  • Glob pattern matching for file selection
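A sketch of how these abstractions combine inside a task (assuming the latch SDK is installed; `file_glob` availability in `latch.types` and the remote paths shown are illustrative, not prescriptive):

```python
from typing import List

from latch import small_task
from latch.types import LatchDir, LatchFile, file_glob

@small_task
def gather_reads(run_dir: LatchDir) -> List[LatchFile]:
    """Collect FASTQ outputs and map them to a cloud directory.

    file_glob matches files in the task's working directory against a
    pattern and assigns each a remote latch:/// destination.
    """
    local = run_dir.local_path  # downloads the directory contents locally
    ...  # processing that writes *.fastq.gz into the working directory
    return file_glob("*.fastq.gz", "latch:///outputs/run1/")
```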

3. Resource Configuration

  • Pre-configured task decorators (@small_task, @large_task, @small_gpu_task, @large_gpu_task)
  • Custom resource specifications (CPU, memory, GPU, storage)
  • GPU support (K80, V100, A100)
  • Timeout and storage configuration
  • Cost optimization strategies

4. Verified Workflows

  • Production-ready pre-built pipelines
  • Bulk RNA-seq, DESeq2, pathway analysis
  • AlphaFold and ColabFold for protein structure prediction
  • Single-cell tools (ArchR, scVelo, emptyDropsR)
  • CRISPR analysis, phylogenetics, and more

Quick Start

Installation and Setup

# Install Latch SDK
python3 -m pip install latch

# Login to Latch
latch login

# Initialize a new workflow
latch init my-workflow

# Register workflow to platform
latch register my-workflow

Prerequisites:

  • Docker installed and running
  • Latch account credentials
  • Python 3.8+

Basic Workflow Example

from latch import workflow, small_task
from latch.types import LatchFile

@small_task
def process_file(input_file: LatchFile) -> LatchFile:
    """Process a single file"""
    local_path = input_file.local_path  # downloads the file to local disk
    output_path = "/root/output.txt"
    # ... processing logic that writes output_path ...
    return LatchFile(output_path, "latch:///output.txt")

@workflow
def my_workflow(input_file: LatchFile) -> LatchFile:
    """
    My bioinformatics workflow

    Args:
        input_file: Input data file
    """
    return process_file(input_file=input_file)

When to Use This Skill

This skill should be used when encountering any of the following scenarios:

Workflow Development:

  • "Create a Latch workflow for RNA-seq analysis"
  • "Deploy my pipeline to Latch"
  • "Convert my Nextflow pipeline to Latch"
  • "Add GPU support to my workflow"
  • Working with @workflow, @task decorators

Data Management:

  • "Organize my sequencing data in Latch Registry"
  • "How do I use LatchFile and LatchDir?"
  • "Set up sample tracking in Latch"
  • Working with latch:/// paths

Resource Configuration:

  • "Configure GPU for AlphaFold on Latch"
  • "My task is running out of memory"
  • "How do I optimize workflow costs?"
  • Working with task decorators

Verified Workflows:

  • "Run AlphaFold on Latch"
  • "Use DESeq2 for differential expression"
  • "Available pre-built workflows"
  • Using latch.verified module

Detailed Documentation

This skill includes comprehensive reference documentation organized by capability:

references/workflow-creation.md

Read this for:

  • Creating and registering workflows
  • Task definition and decorators
  • Supporting Python, Nextflow, Snakemake
  • Launch plans and conditional sections
  • Workflow execution (CLI and programmatic)
  • Multi-step and parallel pipelines
  • Troubleshooting registration issues

Key topics:

  • latch init and latch register commands
  • @workflow and @task decorators
  • LatchFile and LatchDir basics
  • Type annotations and docstrings
  • Launch plans with preset parameters
  • Conditional UI sections

references/data-management.md

Read this for:

  • Cloud storage with LatchFile and LatchDir
  • Registry system (Projects, Tables, Records)
  • Linked records and relationships
  • Enum and typed columns
  • Bulk operations and transactions
  • Integration with workflows
  • Account and workspace management

Key topics:

  • latch:/// path format
  • File transfer and glob patterns
  • Creating and querying Registry tables
  • Column types (string, number, file, link, enum)
  • Record CRUD operations
  • Workflow-Registry integration
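The latch:/// format is an ordinary URI, which helps when constructing paths programmatically; a quick standard-library check of its parts (no SDK required):

```python
from urllib.parse import urlparse

uri = urlparse("latch:///samples/run1/reads_R1.fastq.gz")
print(uri.scheme)  # latch
print(uri.path)    # /samples/run1/reads_R1.fastq.gz
```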

references/resource-configuration.md

Read this for:

  • Task resource decorators
  • Custom CPU, memory, GPU configuration
  • GPU types (K80, V100, A100)
  • Timeout and storage settings
  • Resource optimization strategies
  • Cost-effective workflow design
  • Monitoring and debugging

Key topics:

  • @small_task, @large_task, @small_gpu_task, @large_gpu_task
  • @custom_task with precise specifications
  • Multi-GPU configuration
  • Resource selection by workload type
  • Platform limits and quotas
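When the pre-configured decorators don't fit, @custom_task sizes a task precisely. A minimal sketch (the figures below are examples, not recommendations; memory and storage_gib are in GiB):

```python
from latch import custom_task
from latch.types import LatchFile

@custom_task(cpu=8, memory=32, storage_gib=200)
def assemble_genome(reads: LatchFile) -> LatchFile:
    """Memory-heavy assembly: 8 vCPUs, 32 GiB RAM, 200 GiB scratch disk."""
    ...  # run the assembler and return the assembly as a LatchFile
```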

references/verified-workflows.md

Read this for:

  • Pre-built production workflows
  • Bulk RNA-seq and DESeq2
  • AlphaFold and ColabFold
  • Single-cell analysis (ArchR, scVelo)
  • CRISPR editing analysis
  • Pathway enrichment
  • Integration with custom workflows

Key topics:

  • latch.verified module imports
  • Available verified workflows
  • Workflow parameters and options
  • Combining verified and custom steps
  • Version management

Common Workflow Patterns

Complete RNA-seq Pipeline

from latch import workflow, small_task, large_task
from latch.types import LatchFile, LatchDir

@small_task
def quality_control(fastq: LatchFile) -> LatchFile:
    """Run FastQC"""
    ...  # run FastQC on fastq and return the report as a LatchFile

@large_task
def alignment(fastq: LatchFile, genome: str) -> LatchFile:
    """STAR alignment"""
    ...  # align reads with STAR and return the BAM as a LatchFile

@small_task
def quantification(bam: LatchFile) -> LatchFile:
    """featureCounts"""
    ...  # count reads per gene and return the counts table as a LatchFile

@workflow
def rnaseq_pipeline(
    input_fastq: LatchFile,
    genome: str,
    output_dir: LatchDir
) -> LatchFile:
    """RNA-seq analysis pipeline"""
    qc = quality_control(fastq=input_fastq)
    aligned = alignment(fastq=qc, genome=genome)
    return quantification(bam=aligned)

GPU-Accelerated Workflow

from latch import workflow, small_task, large_gpu_task
from latch.types import LatchFile

@small_task
def preprocess(input_file: LatchFile) -> LatchFile:
    """Prepare data"""
    ...  # clean/format the input and return it as a LatchFile

@large_gpu_task
def gpu_computation(data: LatchFile) -> LatchFile:
    """GPU-accelerated analysis"""
    ...  # run the GPU workload and return the results as a LatchFile

@workflow
def gpu_pipeline(input_file: LatchFile) -> LatchFile:
    """Pipeline with GPU tasks"""
    preprocessed = preprocess(input_file=input_file)
    return gpu_computation(data=preprocessed)

Registry-Integrated Workflow

from latch import workflow, small_task
from latch.registry.table import Table
from latch.registry.record import Record
from latch.types import LatchFile

@small_task
def process_and_track(sample_id: str, table_id: str) -> str:
    """Process sample and update Registry"""
    # Get sample from registry
    table = Table.get(table_id=table_id)
    records = Record.list(table_id=table_id, filter={"sample_id": sample_id})
    sample = records[0]

    # Process the sample (process() is a placeholder for your analysis step)
    input_file = sample.values["fastq_file"]
    output = process(input_file)

    # Update registry
    sample.update(values={"status": "completed", "result": output})
    return "Success"

@workflow
def registry_workflow(sample_id: str, table_id: str):
    """Workflow integrated with Registry"""
    return process_and_track(sample_id=sample_id, table_id=table_id)

Best Practices

Workflow Design

  1. Use type annotations for all parameters
  2. Write clear docstrings (appear in UI)
  3. Start with standard task decorators, scale up if needed
  4. Break complex workflows into modular tasks
  5. Implement proper error handling

Data Management

  1. Use consistent folder structures
  2. Define Registry schemas before bulk entry
  3. Use linked records for relationships
  4. Store metadata in Registry for traceability

Resource Configuration

  1. Right-size resources (don't over-allocate)
  2. Use GPU only when algorithms support it
  3. Monitor execution metrics and optimize
  4. Design for parallel execution when possible

Development Workflow

  1. Test locally with Docker before registration
  2. Use version control for workflow code
  3. Document resource requirements
  4. Profile workflows to determine actual needs

Troubleshooting

Common Issues

Registration Failures:

  • Ensure Docker is running
  • Check authentication with latch login
  • Verify all dependencies in Dockerfile
  • Use --verbose flag for detailed logs

Resource Problems:

  • Out of memory: Increase memory in task decorator
  • Timeouts: Increase timeout parameter
  • Storage issues: Increase ephemeral storage_gib

Data Access:

  • Use correct latch:/// path format
  • Verify file exists in workspace
  • Check permissions for shared workspaces

Type Errors:

  • Add type annotations to all parameters
  • Use LatchFile/LatchDir for file/directory parameters
  • Ensure workflow return type matches actual return

Additional Resources

Support

For issues or questions:

  1. Check the reference documentation listed above
  2. Search GitHub issues
  3. Ask in Slack community
  4. Contact support@latch.bio

Source

https://github.com/Microck/ordinary-claude-skills/blob/main/skills_all/claude-scientific-skills/scientific-skills/latchbio-integration/SKILL.md
