
fabric-data-factory-perf-remediate

npx machina-cli add skill PatrickGallucci/fabric-skills/fabric-data-factory-perf-remediate --openclaw

Microsoft Fabric Data Factory Performance Remediation

Systematic approach to diagnosing and resolving performance issues in Microsoft Fabric Data Factory pipelines, copy activities, and dataflows.

When to Use This Skill

  • Pipeline execution takes longer than expected
  • Copy activities are slow or appear stuck
  • Activities show "Not Started" status for extended periods
  • Capacity throttling errors (HTTP 430, TooManyRequestsForCapacity)
  • Throughput is lower than expected for copy operations
  • Dataflow Gen2 refresh is slow or timing out
  • Pipeline monitoring shows performance degradation over time
  • Need to optimize parallelism, DIU, or partitioning settings

Prerequisites

  • Access to Microsoft Fabric workspace with Contributor or higher role
  • Familiarity with the Fabric Monitoring Hub
  • Understanding of Fabric capacity SKUs and their limits
  • PowerShell 7+ for running diagnostic scripts

Diagnostic Workflow

Step 1: Identify the Bottleneck Category

Determine which category your issue falls into:

| Category | Symptoms | Start Here |
| --- | --- | --- |
| Copy Activity Slow | Low throughput, long transfer duration | copy-activity-tuning.md |
| Pipeline Stuck | Activity shows In Progress with no movement | pipeline-stuck-resolution.md |
| Capacity Throttling | HTTP 430 errors, jobs queued | capacity-throttling-guide.md |
| Dataflow Slow | Dataflow Gen2 refresh takes too long | dataflow-optimization.md |
| Spark Job Queue | Jobs stuck in "Not Started" status | capacity-throttling-guide.md |

Step 2: Collect Diagnostics

Run the diagnostic script to gather baseline metrics:

./scripts/Get-FabricPipelineDiagnostics.ps1 -WorkspaceId "<guid>" -PipelineName "MyPipeline"

Or manually collect from the Monitoring Hub:

  1. Open Fabric portal and navigate to Monitoring Hub
  2. Filter by pipeline name and time range
  3. Select the run details (glasses icon) for the slow run
  4. Capture the Duration Breakdown for copy activities
  5. Note the queue time, transfer time, and pre/post-copy script duration

Step 3: Apply Targeted Fixes

Based on the bottleneck category, apply the appropriate optimization from the reference guides.

Quick Fixes for Common Issues

Copy Activity Running Slowly

  1. Set Intelligent Throughput Optimization to Maximum (or custom 4-256)
  2. Configure Degree of Copy Parallelism based on source type
  3. Enable Partition Option for SQL sources (Dynamic Range or Physical)
  4. Pre-calculate partition upper/lower bounds to avoid overhead
  5. Enable Staging when sink is Fabric Warehouse
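For step 4, the upper and lower bounds typically come from a `SELECT MIN(col), MAX(col)` against the partition column; splitting that range into contiguous slices can then be sketched with a small hypothetical helper (not part of the skill's scripts):

```python
def partition_ranges(lower: int, upper: int, n: int) -> list[tuple[int, int]]:
    """Split [lower, upper] into up to n contiguous integer ranges
    suitable for Dynamic Range partition bounds."""
    span = upper - lower + 1
    step = -(-span // n)  # ceiling division so all rows are covered
    ranges = []
    start = lower
    while start <= upper:
        end = min(start + step - 1, upper)
        ranges.append((start, end))
        start = end + 1
    return ranges

# Four even slices over a 1M-row key range:
print(partition_ranges(1, 1_000_000, 4))
# → [(1, 250000), (250001, 500000), (500001, 750000), (750001, 1000000)]
```

Feeding pre-computed bounds into the copy activity avoids the min/max discovery query on every run.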

Pipeline Activity Stuck

  1. Cancel the stuck activity and retry
  2. Check source/sink connectivity and credentials
  3. Verify Fabric capacity is not in throttled state
  4. Review if payload exceeds 896 KB limit
  5. Check for connection timeout or network interruption

Capacity Throttling (HTTP 430)

  1. Check current Spark concurrency against SKU limits
  2. Cancel unnecessary active Spark jobs via Monitoring Hub
  3. Consider upgrading to a larger capacity SKU
  4. Distribute pipeline trigger times to avoid burst load
  5. Use job queueing for non-interactive Spark workloads
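For step 2, stuck or unnecessary jobs can also be cancelled programmatically. The sketch below uses the Fabric Job Scheduler REST API from PowerShell; the endpoint shape and token acquisition are assumptions to verify against the current Fabric REST API documentation, and the GUIDs are placeholders:

```powershell
# Hedged sketch: cancel a queued or stuck job instance via the Fabric REST API.
# All three GUIDs are placeholders; the job instance id is visible in the
# Monitoring Hub run details.
$workspaceId   = "<workspace-guid>"
$itemId        = "<pipeline-item-guid>"
$jobInstanceId = "<job-instance-guid>"

# Acquire a token for the Fabric API (assumes Azure CLI login).
$token = az account get-access-token --resource "https://api.fabric.microsoft.com" `
    --query accessToken -o tsv

Invoke-RestMethod -Method Post `
    -Uri "https://api.fabric.microsoft.com/v1/workspaces/$workspaceId/items/$itemId/jobs/instances/$jobInstanceId/cancel" `
    -Headers @{ Authorization = "Bearer $token" }
```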

Dataflow Gen2 Performance

  1. Reduce data volume with query folding and filters
  2. Avoid unnecessary data type conversions
  3. Minimize the number of transformation steps
  4. Use staging for large datasets
  5. Check for connector-specific throttling
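Steps 1 and 2 hinge on query folding: filters and column selections placed immediately after the source step can fold back to the SQL endpoint instead of pulling full tables into the mashup engine. A minimal Power Query sketch (server, database, and column names are illustrative):

```powerquery
let
    Source = Sql.Database("myserver.database.windows.net", "SalesDb"),
    Orders = Source{[Schema = "dbo", Item = "Orders"]}[Data],
    // Filtering directly after the source keeps the step foldable to SQL
    Recent = Table.SelectRows(Orders, each [OrderDate] >= #date(2024, 1, 1)),
    // Selecting only needed columns also folds, shrinking the transferred data
    Slim = Table.SelectColumns(Recent, {"OrderId", "OrderDate", "Amount"})
in
    Slim
```

Right-click a step and check "View data source query" (where available) to confirm the step still folds.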

Capacity SKU Quick Reference

| SKU | Max Spark Cores | Queue Limit | Equivalent Power BI |
| --- | --- | --- | --- |
| F2 | Limited | 4 | - |
| F4 | Limited | 4 | - |
| F8 | Limited | 8 | - |
| F16 | Limited | 16 | - |
| F32 | Limited | 32 | - |
| F64 | Standard | 64 | P1 |
| F128 | Standard | 128 | P2 |
| F256 | Standard | 256 | P3 |
| F512 | Standard | 512 | P4 |
| F1024 | Large | 1024 | - |
| F2048 | Large | 2048 | - |
| Trial | P1 equiv | N/A (no queue) | P1 |

Copy Activity Performance Settings Reference

| Setting | Property | Range | Recommendation |
| --- | --- | --- | --- |
| Intelligent Throughput Optimization | dataIntegrationUnits | Auto, Standard (64), Balanced (128), Maximum (256), Custom (4-256) | Start with Auto, increase for large datasets |
| Degree of Copy Parallelism | parallelCopies | 1-256 | Auto for most; limit to 32 for Fabric Warehouse sink |
| Partition Option | Source settings | None, Physical, Dynamic Range | Use Dynamic Range for large SQL tables |
| Enable Staging | enableStaging | true/false | Required for Fabric Warehouse sink |
| Source Retry Count | sourceRetryCount | Integer | Set 2-3 for transient failures |
| Fault Tolerance | enableSkipIncompatibleRow | true/false | Enable for non-critical loads |
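As a sketch, these properties map onto a copy activity's JSON definition roughly as follows; the exact schema varies by source and sink type, so check it against the pipeline's JSON view rather than treating this as authoritative (the source type and partition column are illustrative):

```json
{
  "type": "Copy",
  "typeProperties": {
    "dataIntegrationUnits": 256,
    "parallelCopies": 32,
    "enableStaging": true,
    "sourceRetryCount": 3,
    "source": {
      "type": "SqlServerSource",
      "partitionOption": "DynamicRange",
      "partitionSettings": {
        "partitionColumnName": "OrderId",
        "partitionLowerBound": "1",
        "partitionUpperBound": "1000000"
      }
    }
  }
}
```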

Error Code Quick Reference

| Error | Meaning | Action |
| --- | --- | --- |
| HTTP 430 | Capacity compute limit reached | Reduce concurrent jobs or upgrade SKU |
| Payload too large | Activity config exceeds 896 KB | Reduce parameter sizes |
| TooManyRequestsForCapacity | Spark compute or API rate limit | Cancel active jobs or wait |
| Connection timeout | Source/sink unreachable | Check network, credentials, firewall |
| Deflate64 unsupported | Compression format not supported | Re-compress with deflate algorithm |

Monitoring Setup

Enable workspace monitoring for ongoing performance analysis:

  1. Go to Workspace Settings > Monitoring
  2. Add a Monitoring Eventhouse and enable Log workspace activity
  3. Query the ItemJobEventLogs table with KQL for pipeline-level insights

Example KQL query for failure trends:

ItemJobEventLogs
| where ItemKind == "Pipeline"
| summarize count() by JobStatus
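The same table can surface duration trends for a single pipeline. Column names such as ItemName, Timestamp, and DurationMs are assumptions here; confirm them against the actual ItemJobEventLogs schema in your Eventhouse before relying on this query:

```kusto
ItemJobEventLogs
| where ItemKind == "Pipeline" and ItemName == "MyPipeline"
| summarize avg_duration_ms = avg(DurationMs) by bin(Timestamp, 1d)
| order by Timestamp asc
```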

See workspace-monitoring-setup.md for detailed configuration.

Source

git clone https://github.com/PatrickGallucci/fabric-skills.git

The skill file lives at skills/fabric-data-factory-perf-remediate/SKILL.md in the cloned repository.

Overview

Systematic approach to diagnosing and resolving performance issues in Microsoft Fabric Data Factory pipelines, copy activities, and dataflows. It covers bottleneck classification, tuning knobs such as parallelCopies, DIU, ITO, and partitioning, plus monitoring and dataflow optimization to prevent timeouts, stalls, and throttling.

How This Skill Works

Identify the bottleneck category (Copy Activity Slow, Pipeline Stuck, Capacity Throttling, Dataflow Slow, Spark Job Queue). Collect diagnostics with the Get-FabricPipelineDiagnostics.ps1 script or by inspecting the Monitoring Hub. Apply targeted fixes from the reference guides (copy activity tuning, capacity management, dataflow optimization) and validate improvements with fresh runs.

When to Use It

  • Pipeline execution is slower than expected
  • Copy activities are slow or appear stuck
  • Activities show In Progress or Not Started for extended periods
  • HTTP 430 / TooManyRequestsForCapacity throttling occurs
  • Dataflow Gen2 refresh is slow or timing out

Quick Start

  1. Identify the bottleneck category from symptoms using the diagnostic workflow (Copy Activity Slow, Pipeline Stuck, Capacity Throttling, Dataflow Slow, Spark Job Queue)
  2. Collect diagnostics with Get-FabricPipelineDiagnostics.ps1 or via Monitoring Hub (note queue time, transfer time, and duration breakdowns)
  3. Apply targeted fixes from the reference guides and validate by re-running the pipeline and monitoring performance

Best Practices

  • Use Monitoring Hub to establish a performance baseline and capture duration breakdowns
  • Run the diagnostic script Get-FabricPipelineDiagnostics.ps1 to collect baseline metrics
  • Set Intelligent Throughput Optimization to Maximum (or a custom 4-256) and tune parallelism
  • Configure Degree of Copy Parallelism and enable Partition Option for SQL sources
  • Review capacity SKUs and quotas; adjust workspace capacity to match workload and reduce throttling

Example Use Cases

  • A slow pipeline with Not Started tasks is resolved by adjusting capacity throttling settings and increasing the capacity SKU
  • Copy throughput improves after enabling Intelligent Throughput Optimization and calibrating parallelism to source type
  • Spark job queueing is alleviated by aligning SKUs and monitoring queue times via the Monitoring Hub
  • Dataflow Gen2 refresh speeds up after enforcing query folding, trimming transformation steps, and staging large datasets
  • Long queue times disappear after applying targeted fixes from the capacity-throttling guide and re-running the diagnostic
