
AWS SQS

Amazon Simple Queue Service (SQS) is a fully managed message queuing service for decoupling and scaling microservices, distributed systems, and serverless applications.

Core Concepts

Queue Types

| Type | Description | Use Case |
| --- | --- | --- |
| Standard | At-least-once delivery, best-effort ordering | High throughput |
| FIFO | Exactly-once processing, strict ordering | Order-sensitive processing |

Key Settings

| Setting | Description | Default |
| --- | --- | --- |
| Visibility Timeout | Time a message stays hidden after being received | 30 seconds |
| Message Retention | How long messages are kept | 4 days (max 14) |
| Delay Seconds | Delay before a message becomes available | 0 |
| Max Message Size | Maximum message size | 256 KB |

Dead-Letter Queue (DLQ)

A separate queue that receives messages a consumer failed to process after maxReceiveCount receive attempts on the source queue.

Common Patterns

Create a Standard Queue

AWS CLI:

aws sqs create-queue \
  --queue-name my-queue \
  --attributes '{
    "VisibilityTimeout": "60",
    "MessageRetentionPeriod": "604800",
    "ReceiveMessageWaitTimeSeconds": "20"
  }'

boto3:

import boto3

sqs = boto3.client('sqs')

response = sqs.create_queue(
    QueueName='my-queue',
    Attributes={
        'VisibilityTimeout': '60',
        'MessageRetentionPeriod': '604800',
        'ReceiveMessageWaitTimeSeconds': '20'  # Long polling
    }
)
queue_url = response['QueueUrl']

Create a FIFO Queue

aws sqs create-queue \
  --queue-name my-queue.fifo \
  --attributes '{
    "FifoQueue": "true",
    "ContentBasedDeduplication": "true"
  }'

Configure Dead-Letter Queue

# Create DLQ
aws sqs create-queue --queue-name my-queue-dlq

# Get DLQ ARN
DLQ_ARN=$(aws sqs get-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue-dlq \
  --attribute-names QueueArn \
  --query 'Attributes.QueueArn' --output text)

# Set redrive policy on main queue
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
  --attributes "{
    \"RedrivePolicy\": \"{\\\"deadLetterTargetArn\\\":\\\"${DLQ_ARN}\\\",\\\"maxReceiveCount\\\":\\\"3\\\"}\"
  }"

Send Messages

import boto3
import json

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

# Send single message
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({'order_id': '12345', 'action': 'process'}),
    MessageAttributes={
        'MessageType': {
            'DataType': 'String',
            'StringValue': 'Order'
        }
    }
)

# Send to FIFO queue
sqs.send_message(
    QueueUrl='https://sqs.us-east-1.amazonaws.com/123456789012/my-queue.fifo',
    MessageBody=json.dumps({'order_id': '12345'}),
    MessageGroupId='order-12345',
    MessageDeduplicationId='unique-id-12345'
)

# Batch send (up to 10 messages)
sqs.send_message_batch(
    QueueUrl=queue_url,
    Entries=[
        {'Id': '1', 'MessageBody': json.dumps({'id': 1})},
        {'Id': '2', 'MessageBody': json.dumps({'id': 2})},
        {'Id': '3', 'MessageBody': json.dumps({'id': 3})}
    ]
)
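
Note that send_message_batch does not raise when individual entries fail; it reports them in the response's Failed list. A minimal retry sketch (the helper name and retry count are illustrative, and the client is passed in so it works with any boto3 SQS client):

```python
def send_batch_with_retry(sqs, queue_url, entries, max_attempts=3):
    """Send a batch; re-send entries that come back in the 'Failed' list."""
    pending = entries
    for _ in range(max_attempts):
        if not pending:
            break
        response = sqs.send_message_batch(QueueUrl=queue_url, Entries=pending)
        failed_ids = {f['Id'] for f in response.get('Failed', [])}
        pending = [e for e in pending if e['Id'] in failed_ids]
    return pending  # entries still failing after all attempts
```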

Receive and Process Messages

import boto3
import json

sqs = boto3.client('sqs')
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'

while True:
    # Long polling (wait up to 20 seconds)
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
        MessageAttributeNames=['All'],
        AttributeNames=['All']
    )

    messages = response.get('Messages', [])

    for message in messages:
        try:
            body = json.loads(message['Body'])
            print(f"Processing: {body}")

            # Process message...

            # Delete on success
            sqs.delete_message(
                QueueUrl=queue_url,
                ReceiptHandle=message['ReceiptHandle']
            )
        except Exception as e:
            print(f"Error processing message: {e}")
            # Message will become visible again after visibility timeout
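
When a poll returns several messages, deleting them one call at a time wastes API requests; delete_message_batch removes up to 10 in one call. A sketch (the helper name is illustrative; the client is injected):

```python
def delete_processed(sqs, queue_url, messages):
    """Delete a batch of successfully processed messages in one API call."""
    if not messages:
        return None
    return sqs.delete_message_batch(
        QueueUrl=queue_url,
        Entries=[{'Id': str(i), 'ReceiptHandle': m['ReceiptHandle']}
                 for i, m in enumerate(messages)],
    )
```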

Lambda Integration

# Create event source mapping
# (ReportBatchItemFailures enables partial batch responses)
aws lambda create-event-source-mapping \
  --function-name my-function \
  --event-source-arn arn:aws:sqs:us-east-1:123456789012:my-queue \
  --batch-size 10 \
  --maximum-batching-window-in-seconds 5 \
  --function-response-types ReportBatchItemFailures

Lambda handler (reports failed messages individually, so only they return to the queue; requires ReportBatchItemFailures on the event source mapping):

import json

def handler(event, context):
    batch_item_failures = []

    for record in event['Records']:
        try:
            body = json.loads(record['body'])
            process_message(body)
        except Exception:
            # Report only this message as failed; the rest of the batch is deleted
            batch_item_failures.append({'itemIdentifier': record['messageId']})

    return {'batchItemFailures': batch_item_failures}

CLI Reference

Queue Management

| Command | Description |
| --- | --- |
| aws sqs create-queue | Create queue |
| aws sqs delete-queue | Delete queue |
| aws sqs list-queues | List queues |
| aws sqs get-queue-url | Get queue URL by name |
| aws sqs get-queue-attributes | Get queue settings |
| aws sqs set-queue-attributes | Update queue settings |

Messaging

| Command | Description |
| --- | --- |
| aws sqs send-message | Send single message |
| aws sqs send-message-batch | Send up to 10 messages |
| aws sqs receive-message | Receive messages |
| aws sqs delete-message | Delete message |
| aws sqs delete-message-batch | Delete up to 10 messages |
| aws sqs purge-queue | Delete all messages |
aws sqs purge-queueDelete all messages

Visibility

| Command | Description |
| --- | --- |
| aws sqs change-message-visibility | Change a message's visibility timeout |
| aws sqs change-message-visibility-batch | Change visibility for up to 10 messages |
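
For jobs that may outlast the visibility timeout, a consumer can extend the timeout mid-processing as a heartbeat. A sketch (the helper name is illustrative; the client is injected):

```python
def extend_visibility(sqs, queue_url, receipt_handle, seconds):
    """Reset the message's visibility timeout, counted from now (12-hour cap per message)."""
    sqs.change_message_visibility(
        QueueUrl=queue_url,
        ReceiptHandle=receipt_handle,
        VisibilityTimeout=seconds,
    )
```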

Best Practices

Message Processing

  • Use long polling (WaitTimeSeconds=20) to reduce API calls
  • Delete messages promptly after successful processing
  • Configure appropriate visibility timeout (> processing time)
  • Implement idempotent consumers for at-least-once delivery

Dead-Letter Queues

  • Always configure DLQ for production queues
  • Set appropriate maxReceiveCount (usually 3-5)
  • Monitor DLQ depth with CloudWatch alarms
  • Process DLQ messages manually or with automation
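
Monitoring DLQ depth can be wired up via the AWS/SQS ApproximateNumberOfMessagesVisible metric. A hedged sketch (the function name, threshold, and SNS topic are assumptions; the CloudWatch client is injected):

```python
def alarm_on_dlq_depth(cloudwatch, queue_name, sns_topic_arn, threshold=1):
    """Alarm as soon as the DLQ holds at least `threshold` visible messages."""
    cloudwatch.put_metric_alarm(
        AlarmName=f'{queue_name}-not-empty',
        Namespace='AWS/SQS',
        MetricName='ApproximateNumberOfMessagesVisible',
        Dimensions=[{'Name': 'QueueName', 'Value': queue_name}],
        Statistic='Maximum',
        Period=300,
        EvaluationPeriods=1,
        Threshold=threshold,
        ComparisonOperator='GreaterThanOrEqualToThreshold',
        AlarmActions=[sns_topic_arn],
    )
```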

FIFO Queues

  • Use message group IDs to partition ordering
  • Enable content-based deduplication or provide dedup IDs
  • Throughput: 300 messages/sec per API action without batching, 3,000/sec with batching (higher with high-throughput mode)
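
Partitioning by message group ID can be sketched as a small builder that keeps a per-order ordering while letting different orders process in parallel (the function name and key format are illustrative):

```python
import json

def fifo_message(order_id, seq, payload):
    """Build send_message kwargs for a FIFO queue: one group per order."""
    return {
        'MessageBody': json.dumps(payload),
        'MessageGroupId': f'order-{order_id}',          # ordering scope
        'MessageDeduplicationId': f'order-{order_id}-{seq}',  # dedup within 5 min
    }
```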

Security

  • Use queue policies to control access
  • Enable encryption with SSE-SQS or SSE-KMS
  • Use VPC endpoints for private access
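
A minimal queue policy, applied as the Policy attribute via set-queue-attributes, might look like this (the account ID, queue ARN, and SNS topic ARN are placeholders); it allows only a specific SNS topic to send to the queue:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowSNSTopicSend",
    "Effect": "Allow",
    "Principal": {"Service": "sns.amazonaws.com"},
    "Action": "sqs:SendMessage",
    "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue",
    "Condition": {
      "ArnEquals": {"aws:SourceArn": "arn:aws:sns:us-east-1:123456789012:my-topic"}
    }
  }]
}
```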

Troubleshooting

Messages Not Being Received

Causes:

  • Short polling returning empty
  • All messages in flight (visibility timeout)
  • Messages delayed (DelaySeconds)

Debug:

# Check queue attributes
aws sqs get-queue-attributes \
  --queue-url $QUEUE_URL \
  --attribute-names All

# Check approximate message counts
aws sqs get-queue-attributes \
  --queue-url $QUEUE_URL \
  --attribute-names \
    ApproximateNumberOfMessages,\
    ApproximateNumberOfMessagesNotVisible,\
    ApproximateNumberOfMessagesDelayed

Messages Going to DLQ

Causes:

  • Processing errors
  • Visibility timeout too short
  • Consumer not deleting messages

Redrive from DLQ:

# Enable redrive allow policy on source queue
aws sqs set-queue-attributes \
  --queue-url $MAIN_QUEUE_URL \
  --attributes '{"RedriveAllowPolicy": "{\"redrivePermission\":\"allowAll\"}"}'

# Start redrive
aws sqs start-message-move-task \
  --source-arn arn:aws:sqs:us-east-1:123456789012:my-queue-dlq \
  --destination-arn arn:aws:sqs:us-east-1:123456789012:my-queue

Duplicate Processing

Solutions:

  • Use FIFO queues for exactly-once processing
  • Implement idempotency in consumer
  • Track processed message IDs in database
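
At-least-once delivery means the same message can arrive twice, so the consumer must make processing idempotent. A minimal sketch using an in-memory set (a real system would use a database table or cache keyed by message ID; the helper name is illustrative):

```python
processed_ids = set()  # in production: DynamoDB, Redis, or a DB unique key

def handle_once(message_id, body, process):
    """Run `process` only the first time a given message ID is seen."""
    if message_id in processed_ids:
        return False  # duplicate delivery: skip work, still delete the message
    process(body)
    processed_ids.add(message_id)
    return True
```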

Lambda Not Processing

# Check event source mapping
aws lambda list-event-source-mappings \
  --function-name my-function

# Check for errors
aws lambda get-event-source-mapping \
  --uuid <mapping-uuid>

Source

git clone https://github.com/itsmostafa/aws-agent-skills
# skill file: skills/sqs/SKILL.md

Overview

Amazon SQS supports Standard and FIFO queues, dead-letter queues, configurable visibility timeouts, and Lambda integration for automated processing.

How This Skill Works

Producers send messages to queues; consumers poll (ideally with long polling) for messages. When a message is received it becomes invisible for the duration of the visibility timeout, and the consumer deletes it after successful processing. DLQs collect messages that fail repeatedly, while FIFO queues provide exactly-once processing and strict ordering when needed.

When to Use It

  • You need asynchronous communication between microservices or serverless components.
  • You require high throughput with at-least-once delivery (Standard) for tasks that tolerate occasional duplicates.
  • You need strict ordering and exactly-once processing for order-sensitive work (FIFO).
  • You want to trigger Lambda or other consumers from queue events.
  • You need reliable retry and failure handling via a Dead-Letter Queue (DLQ).

Quick Start

  1. Choose a queue type (Standard or FIFO) and create the queue with appropriate attributes.
  2. (Optional) Create and attach a Dead-Letter Queue and configure a RedrivePolicy.
  3. Send and receive messages, using long polling and batch operations where possible.

Best Practices

  • Attach a DLQ and set a sensible maxReceiveCount to isolate and analyze failures.
  • Enable long polling by configuring ReceiveMessageWaitTimeSeconds to reduce costs.
  • Choose Standard vs FIFO based on throughput needs vs. strict ordering.
  • Tune VisibilityTimeout and MessageRetention to balance retries, timing, and storage.
  • Use batch operations (send_message_batch / delete_message_batch, and receive_message with MaxNumberOfMessages) to improve throughput and reduce costs.

Example Use Cases

  • Create a Standard Queue via AWS CLI with a 60s VisibilityTimeout and 20s long polling.
  • Create a FIFO Queue with ContentBasedDeduplication to ensure unique processing of ordered messages.
  • Configure a Dead-Letter Queue by creating a DLQ and applying a RedrivePolicy to the main queue.
  • Send a JSON payload message with a MessageBody and optional MessageAttributes.
  • Batch send three messages using send_message_batch for higher throughput.
