dataverse-python-advanced-patterns
npx machina-cli add skill github/awesome-copilot/dataverse-python-advanced-patterns --openclaw

You are a Dataverse SDK for Python expert. Generate production-ready Python code that demonstrates:
- Error handling & retry logic — Catch DataverseError, check is_transient, implement exponential backoff.
- Batch operations — Bulk create/update/delete with proper error recovery.
- OData query optimization — Filter, select, orderby, expand, and paging with correct logical names.
- Table metadata — Create/inspect/delete custom tables with proper column type definitions (IntEnum for option sets).
- Configuration & timeouts — Use DataverseConfig for http_retries, http_backoff, http_timeout, language_code.
- Cache management — Flush picklist cache when metadata changes.
- File operations — Upload large files in chunks; handle chunked vs. simple upload.
- Pandas integration — Use PandasODataClient for DataFrame workflows when appropriate.
Include docstrings, type hints, and link to official API reference for each class/method used.
Source
https://github.com/github/awesome-copilot/blob/main/plugins/dataverse-sdk-for-python/skills/dataverse-python-advanced-patterns/SKILL.md
Overview
This skill provides production-ready Python code exemplars for the Dataverse SDK, showcasing advanced patterns such as robust error handling with retries, batch operations, optimized OData queries, and metadata management. It equips developers to build resilient, scalable integrations with Dataverse while following best practices and including thorough documentation links to official references.
How This Skill Works
The examples demonstrate try/except patterns around DataverseError, using is_transient to decide on exponential backoff for retries. They include batch create/update/delete with proper error recovery, OData query optimization (filter, select, orderby, expand, paging) with correct logical names, and table metadata actions (create/inspect/delete) using IntEnum for option sets. Configurable timeouts and retries live in DataverseConfig (http_retries, http_backoff, http_timeout, language_code). Cache invalidation is shown by flushing the picklist cache after metadata changes, and large-file uploads use chunked transfer with a fallback to simple upload when appropriate. Pandas integration is illustrated via PandasODataClient for DataFrame workflows where it makes sense. Each class/method includes docstrings and references to official API docs.
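The retry pattern described above can be sketched without depending on the real SDK. This is a minimal, self-contained illustration: the `DataverseError` stub below only mimics the one attribute the pattern needs (`is_transient`), and the `retries`/`backoff` defaults are placeholders, not the SDK's actual defaults.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")


class DataverseError(Exception):
    """Stand-in for the SDK's DataverseError; the real class carries
    richer details, but is_transient is all the retry loop consults."""

    def __init__(self, message: str, is_transient: bool = False) -> None:
        super().__init__(message)
        self.is_transient = is_transient


def with_retries(
    operation: Callable[[], T],
    retries: int = 3,
    backoff: float = 0.5,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Run operation, retrying transient failures with exponential backoff.

    Waits backoff * 2**attempt seconds between attempts; non-transient
    errors and exhausted retries are re-raised to the caller.
    """
    for attempt in range(retries + 1):
        try:
            return operation()
        except DataverseError as err:
            if not err.is_transient or attempt == retries:
                raise
            sleep(backoff * (2 ** attempt))
    raise AssertionError("unreachable")
```

Injecting `sleep` as a parameter keeps the function testable without real delays; in production the `time.sleep` default applies.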
When to Use It
- You need reliable, idempotent data import/export with large datasets, leveraging batch create/update/delete.
- Your integration requires robust error handling with exponential backoff for transient Dataverse errors.
- You manage custom tables and option sets and want type-safe column definitions using IntEnum.
- You must perform efficient data queries with OData options (filter, select, orderby, expand) and proper paging.
- You want to analyze or transform Dataverse data using Pandas and PandasODataClient in workflow pipelines.
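The type-safe option-set idea from the list above looks like this in practice. The column and its values are hypothetical examples, not a real table's metadata; the point is that Dataverse stores option-set choices as integers, and an `IntEnum` preserves the numeric wire value while giving code readable names.

```python
from enum import IntEnum


class AccountCategory(IntEnum):
    """Hypothetical option-set values for an 'account category' column.

    The integer values must match the choices defined in the table's
    metadata; IntEnum makes them comparable and serializable as ints.
    """
    PREFERRED = 1
    STANDARD = 2


def to_record(name: str, category: AccountCategory) -> dict:
    """Build a create payload; int(category) yields the stored value."""
    return {"name": name, "category": int(category)}
```

Because `IntEnum` subclasses `int`, a raw value read back from Dataverse can be converted with `AccountCategory(value)`, which raises `ValueError` on unknown choices instead of letting a magic number slip through.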
Quick Start
- Step 1: Install the Dataverse Python SDK and import necessary modules (including DataverseConfig, DataverseError, PandasODataClient).
- Step 2: Create a DataverseConfig with http_retries, http_backoff, http_timeout, and language_code; instantiate your client.
- Step 3: Demonstrate a small workflow: batch create records and perform a simple OData query with select/expand, then print results.
Best Practices
- Configure http_retries, http_backoff, and http_timeout via DataverseConfig to control resilience and timeouts.
- Prefer batch operations for bulk data changes to reduce round-trips and improve error handling granularity.
- Use IntEnum for option set columns to enforce strong typing and reduce magic values in code.
- Flush the picklist cache after any metadata changes to ensure downstream consumers see updated options.
- For large files, implement chunked uploads with a fallback to simple upload when file size is small or network is stable; document the choice clearly.
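The chunked-vs-simple decision in the last practice above can be sketched with the transport abstracted away. The 4 MiB threshold below is an assumed placeholder (the real cutoff is service-defined), and `send_simple`/`send_chunk` are caller-supplied stand-ins for whatever the SDK's upload calls are.

```python
from typing import Callable, Iterator

CHUNK_SIZE = 4 * 1024 * 1024  # assumed 4 MiB threshold; service-defined in reality


def iter_chunks(data: bytes, chunk_size: int) -> Iterator[tuple[int, bytes]]:
    """Yield (offset, chunk) pairs covering data in order."""
    for offset in range(0, len(data), chunk_size):
        yield offset, data[offset : offset + chunk_size]


def upload(
    data: bytes,
    send_simple: Callable[[bytes], None],
    send_chunk: Callable[[int, bytes], None],
    threshold: int = CHUNK_SIZE,
) -> int:
    """Send small payloads in one request, larger ones chunk by chunk.

    Returns the number of requests issued, which makes the decision
    observable in tests; per-chunk retry would wrap send_chunk.
    """
    if len(data) <= threshold:
        send_simple(data)
        return 1
    count = 0
    for offset, chunk in iter_chunks(data, threshold):
        send_chunk(offset, chunk)
        count += 1
    return count
```

Passing offsets explicitly lets a resumable upload restart from the last acknowledged chunk instead of byte zero.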
Example Use Cases
- Create a new custom table, add multiple records with a single batch operation, and verify all-or-nothing semantics with partial failure recovery.
- Upload a 100+ MB document in chunks, with retry on transient failures and a simple upload fallback for small files.
- Query accounts using $filter, $select, $orderby, and $expand, then page through results and map to a dataframe.
- Update a set of records via batch operation, handling per-record errors and performing a rollback if needed.
- Load a CSV into a Pandas DataFrame with PandasODataClient, perform transformations, and write back to Dataverse.
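The paging use case above follows the standard OData continuation pattern: each response page may carry an `@odata.nextLink` pointing at the next page. This sketch abstracts the HTTP call behind a `fetch` callable so it stays independent of any particular client; the URLs in the usage are made up for illustration.

```python
from typing import Callable, Iterator


def iter_pages(first_url: str, fetch: Callable[[str], dict]) -> Iterator[dict]:
    """Yield every record across pages by following @odata.nextLink.

    fetch maps a URL to a decoded OData response body; iteration stops
    when a page carries no @odata.nextLink.
    """
    url: str | None = first_url
    while url:
        page = fetch(url)
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")
```

Because it is a generator, callers can stream rows into a DataFrame or stop early without fetching the remaining pages.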