storage
npx machina-cli add skill beriberikix/zephyr-agent-skills/storage --openclaw
Zephyr Storage
Implement reliable persistent data handling using Zephyr's storage subsystem and flash management utilities.
Core Workflows
1. NVS Storage
Utilize Non-Volatile Storage (NVS) for efficient, wear-leveled data persistence.
- Reference: nvs_storage.md
- Key Tools: nvs_mount(), nvs_read(), nvs_write()
2. Flash Management
Configure and manage flash partitions and hardware page layouts.
- Reference: flash_management.md
- Key Tools: fixed-partitions, FLASH_MAP, flash_get_page_info_by_offs()
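As an illustrative sketch (label, offset, and size are hypothetical and depend on your flash part), a fixed-partitions node in a board overlay might look like:

```dts
/* Illustrative overlay: adjust offset/size to your hardware */
&flash0 {
	partitions {
		compatible = "fixed-partitions";
		#address-cells = <1>;
		#size-cells = <1>;

		storage_partition: partition@f8000 {
			label = "storage";
			reg = <0x000f8000 0x00008000>; /* 32 KiB at end of flash */
		};
	};
};
```

Code can then reference the partition through the flash map API, e.g. FIXED_PARTITION_OFFSET(storage_partition) and FIXED_PARTITION_DEVICE(storage_partition) from zephyr/storage/flash_map.h.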
Quick Start (NVS Write)
#include <zephyr/fs/nvs.h>

int save_data(struct nvs_fs *fs, uint16_t id, const void *data, size_t len) {
    ssize_t rc = nvs_write(fs, id, data, len);
    return (rc < 0) ? (int)rc : 0; /* negative return indicates a write error */
}
Professional Patterns (Reliability)
- Settings Integration: Use NVS as the backend for the settings subsystem for a standard key-value configuration experience.
- Collision Prevention: Define NVS entry IDs in a centralized header file to prevent accidental overwrites across modules.
- Runtime Layout Checks: Always query the flash controller for page sizes (flash_get_page_layout) rather than assuming hardcoded sector sizes.
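A centralized ID header might look like the following sketch (the names are hypothetical, not from the source):

```c
/* nvs_ids.h -- illustrative central registry of NVS entry IDs.
 * Every module includes this header instead of defining its own IDs,
 * so two modules can never silently claim the same entry.
 */
#ifndef APP_NVS_IDS_H
#define APP_NVS_IDS_H

#define NVS_ID_BOOT_COUNT   1
#define NVS_ID_DEVICE_NAME  2
#define NVS_ID_WIFI_CONFIG  3
/* Append new IDs here; never reuse a retired ID. */

#endif /* APP_NVS_IDS_H */
```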
Resources
- References:
  - nvs_storage.md: Using NVS for data blobs and integers.
  - flash_management.md: Devicetree partitions and page information.
Source
git clone https://github.com/beriberikix/zephyr-agent-skills
Skill file: skills/storage/SKILL.md
Overview
This skill guides implementing reliable persistent data handling with Zephyr storage: NVS for wear-leveled persistence, Devicetree-based flash partition management, and runtime flash layout access. It's essential for durable configuration, partition reliability, and flexible storage schemes.
How This Skill Works
NVS stores data with nvs_mount(), then nvs_read() and nvs_write() manage persistent values. Flash management uses fixed partitions defined in Devicetree and referenced via FLASH_MAP, while tools like flash_get_page_info_by_offs() help determine hardware page sizes. For runtime layout, always query the flash controller instead of assuming static sector sizes to avoid layout mismatches.
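This flow can be sketched in an init routine modeled on the Zephyr NVS sample; the partition label and sector count below are illustrative assumptions:

```c
#include <zephyr/device.h>
#include <zephyr/drivers/flash.h>
#include <zephyr/fs/nvs.h>
#include <zephyr/storage/flash_map.h>

#define NVS_PARTITION        storage_partition /* assumed Devicetree label */
#define NVS_PARTITION_DEVICE FIXED_PARTITION_DEVICE(NVS_PARTITION)
#define NVS_PARTITION_OFFSET FIXED_PARTITION_OFFSET(NVS_PARTITION)

static struct nvs_fs fs;

int storage_init(void)
{
	struct flash_pages_info info;
	int rc;

	fs.flash_device = NVS_PARTITION_DEVICE;
	if (!device_is_ready(fs.flash_device)) {
		return -ENODEV;
	}
	fs.offset = NVS_PARTITION_OFFSET;

	/* Query the real page size at runtime instead of hardcoding it */
	rc = flash_get_page_info_by_offs(fs.flash_device, fs.offset, &info);
	if (rc) {
		return rc;
	}
	fs.sector_size = info.size;
	fs.sector_count = 3U; /* illustrative; NVS needs at least 2 sectors */

	return nvs_mount(&fs);
}
```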
When to Use It
- Implementing persistent device settings that survive reboots.
- Ensuring wear leveling and data integrity for stored configuration.
- Configuring and validating flash partitions in Devicetree for OTA, logs, or data stores.
- Querying runtime flash layout to safely size and allocate storage.
- Integrating storage with the Zephyr settings subsystem for a standard key-value store.
Quick Start
- Step 1: Include the NVS header and mount the NVS filesystem (nvs_mount).
- Step 2: Persist data with nvs_write(fs, id, data, len).
- Step 3: Verify persistence by reading back with nvs_read and handling errors.
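Steps 2 and 3 can be combined into a write-then-verify helper; this is a sketch that assumes the filesystem is already mounted, and the buffer size and error codes are illustrative:

```c
#include <errno.h>
#include <string.h>
#include <zephyr/fs/nvs.h>

/* Write an entry, then read it back to confirm persistence. */
int save_and_verify(struct nvs_fs *fs, uint16_t id, const void *data, size_t len)
{
	char readback[64];
	ssize_t rc;

	if (len > sizeof(readback)) {
		return -EINVAL;
	}

	rc = nvs_write(fs, id, data, len);
	if (rc < 0) {
		return (int)rc; /* write failed */
	}

	rc = nvs_read(fs, id, readback, len);
	if (rc < 0) {
		return (int)rc; /* read-back failed */
	}

	return (memcmp(data, readback, len) == 0) ? 0 : -EIO;
}
```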
Best Practices
- Settings Integration: use NVS as the backend for the settings subsystem for a standard key-value experience.
- Collision Prevention: define NVS Entry IDs in a centralized header to prevent accidental overwrites across modules.
- Runtime Layout Checks: always query the flash controller for page sizes (flash_get_page_layout) rather than hardcoding sector sizes.
- Mount Early: mount NVS early during init and verify the mount succeeded before use.
- Partition Alignment: keep Devicetree partitions and FLASH_MAP entries synchronized with hardware layout and tests.
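The settings-integration practice can be sketched as follows; the key name is hypothetical, and this assumes CONFIG_SETTINGS=y with CONFIG_SETTINGS_NVS=y selecting NVS as the backend in prj.conf:

```c
#include <zephyr/settings/settings.h>

/* Persist a value under a standard key-value path via the settings
 * subsystem; the NVS backend handles the underlying flash writes.
 */
int save_brightness(uint8_t level)
{
	int rc = settings_subsys_init();
	if (rc) {
		return rc;
	}
	return settings_save_one("app/brightness", &level, sizeof(level));
}
```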
Example Use Cases
- Persisting user preferences with NVS using nvs_write as shown in the Quick Start.
- Defining fixed partitions in Devicetree and validating them via FLASH_MAP and page info queries.
- Integrating NVS with Zephyr Settings to provide a standard configuration API.
- Performing a storage layout migration by validating runtime page sizes with flash_get_page_info_by_offs().
- Verifying data integrity by reading back data with nvs_read after a write to confirm persistence.