# AWS Cloud Services

Install the skill with:

```shell
npx machina-cli add skill muhammederem/chief/aws --openclaw
```
## Overview

Amazon Web Services (AWS) provides a comprehensive cloud platform including compute, storage, database, analytics, networking, deployment, and machine learning services.

## Core Services

### EC2 (Elastic Compute Cloud)

#### Launch Instance
```python
import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')

# Launch instance
response = ec2.run_instances(
    ImageId='ami-0c55b159cbfafe1f0',  # Amazon Linux 2
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
    KeyName='my-key-pair',
    SecurityGroupIds=['sg-1234567890abcdef0'],
    SubnetId='subnet-12345678',
    UserData='''
#!/bin/bash
yum update -y
yum install -y docker
service docker start
''',
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {'Key': 'Name', 'Value': 'MyInstance'},
                {'Key': 'Environment', 'Value': 'Dev'}
            ]
        }
    ]
)

instance_id = response['Instances'][0]['InstanceId']
print(f"Launched instance: {instance_id}")
```
#### Manage Instances

```python
# Describe instances
response = ec2.describe_instances(InstanceIds=[instance_id])

# Stop instance
ec2.stop_instances(InstanceIds=[instance_id])

# Terminate instance
ec2.terminate_instances(InstanceIds=[instance_id])

# Create AMI from instance
ec2.create_image(
    InstanceId=instance_id,
    Name='my-custom-ami',
    Description='My custom AMI'
)
```
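The stop and terminate calls above return as soon as the request is accepted, not when the state change completes. boto3 ships waiters that poll until a target state is reached; a minimal sketch (instance ID and region are placeholders):

```python
def wait_until_running(instance_id, region='us-west-2'):
    """Block until an EC2 instance reaches the 'running' state.

    Sketch only: needs AWS credentials and a real instance ID.
    """
    import boto3  # imported here so the sketch stays importable without AWS
    ec2 = boto3.client('ec2', region_name=region)
    # Polls describe_instances until the state is 'running' (or times out)
    ec2.get_waiter('instance_running').wait(InstanceIds=[instance_id])
```

Waiters also exist for `instance_stopped` and `instance_terminated`, which pair naturally with the stop/terminate calls above.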
### S3 (Simple Storage Service)

#### Upload/Download

```python
s3 = boto3.client('s3')

# Upload file
s3.upload_file(
    'local_file.txt',
    'my-bucket',
    'remote_file.txt',
    ExtraArgs={'ContentType': 'text/plain'}
)

# Download file
s3.download_file('my-bucket', 'remote_file.txt', 'local_file.txt')

# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])
```
#### Presigned URLs

```python
# Generate presigned URL (valid for 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'file.txt'},
    ExpiresIn=3600
)
```
### Lambda (Serverless Functions)

#### Create Function

```python
import io
import json
import zipfile

lambda_client = boto3.client('lambda')

# Code.ZipFile expects zipped bytes, not raw source text,
# so package the handler into an in-memory zip first.
source = '''\
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda!'
    }
'''
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.writestr('lambda_function.py', source)

# Create function
response = lambda_client.create_function(
    FunctionName='my-function',
    Runtime='python3.11',
    Role='arn:aws:iam::123456789012:role/lambda-role',
    Handler='lambda_function.lambda_handler',
    Code={'ZipFile': buf.getvalue()},
    Timeout=30,
    MemorySize=256,
)

# Invoke function
response = lambda_client.invoke(
    FunctionName='my-function',
    InvocationType='RequestResponse',
    Payload=json.dumps({'key': 'value'})
)
result = json.load(response['Payload'])
print(result)
```
#### Deploy from S3

```python
lambda_client.update_function_code(
    FunctionName='my-function',
    S3Bucket='my-bucket',
    S3Key='lambda-deployment.zip'
)
```
### IAM (Identity and Access Management)

#### Create Role

```python
import json

iam = boto3.client('iam')

# Create role
iam.create_role(
    RoleName='lambda-role',
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": "sts:AssumeRole"
            }
        ]
    })
)

# Attach policy
iam.attach_role_policy(
    RoleName='lambda-role',
    PolicyArn='arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
)
```
### SageMaker (ML Model Training & Deployment)

#### Training Job

```python
sagemaker = boto3.client('sagemaker')

# Create training job
sagemaker.create_training_job(
    TrainingJobName='my-training-job',
    AlgorithmSpecification={
        'TrainingImage': '763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training:2.1.0-cpu-py310',
        'TrainingInputMode': 'File'
    },
    InputDataConfig=[
        {
            'ChannelName': 'training',
            'DataSource': {
                'S3DataSource': {
                    'S3DataType': 'S3Prefix',
                    'S3Uri': 's3://my-bucket/training-data/',
                    'S3DataDistributionType': 'FullyReplicated'
                }
            }
        }
    ],
    OutputDataConfig={
        'S3OutputPath': 's3://my-bucket/output/'
    },
    ResourceConfig={
        'InstanceType': 'ml.m5.xlarge',
        'InstanceCount': 1,
        'VolumeSizeInGB': 10
    },
    StoppingCondition={
        # MaxWaitTimeInSeconds is only valid with managed spot training
        # (EnableManagedSpotTraining=True), so it is omitted here
        'MaxRuntimeInSeconds': 86400
    },
    RoleArn='arn:aws:iam::123456789012:role/SageMakerRole'
)
```
#### Deploy Model

```python
# Create model
sagemaker.create_model(
    ModelName='my-model',
    PrimaryContainer={
        'Image': '763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-inference:2.1.0-cpu',
        'ModelDataUrl': 's3://my-bucket/output/model.tar.gz'
    },
    ExecutionRoleArn='arn:aws:iam::123456789012:role/SageMakerRole'
)

# Create endpoint config
sagemaker.create_endpoint_config(
    EndpointConfigName='my-endpoint-config',
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': 'my-model',
        'InitialInstanceCount': 1,
        'InstanceType': 'ml.t2.medium'
    }]
)

# Create endpoint
sagemaker.create_endpoint(
    EndpointName='my-endpoint',
    EndpointConfigName='my-endpoint-config'
)
```
### RDS (Relational Database Service)

#### Create Database

```python
rds = boto3.client('rds')

# Create DB instance
rds.create_db_instance(
    DBInstanceIdentifier='my-database',
    DBInstanceClass='db.t3.micro',
    Engine='postgres',
    MasterUsername='dbadmin',  # 'admin' is reserved by some engines
    MasterUserPassword='password123',  # use Secrets Manager in production
    AllocatedStorage=20,
    VpcSecurityGroupIds=['sg-1234567890abcdef0'],
    DBSubnetGroupName='my-db-subnet-group'
)
```
### ECS (Elastic Container Service)

#### Task Definition

```python
ecs = boto3.client('ecs')

# Register task definition. Fargate (used in Run Task below) requires
# awsvpc networking and task-level cpu/memory, plus an execution role
# so the awslogs driver can write to CloudWatch.
ecs.register_task_definition(
    family='my-task',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    executionRoleArn='arn:aws:iam::123456789012:role/ecsTaskExecutionRole',
    containerDefinitions=[
        {
            'name': 'my-app',
            'image': 'my-app:latest',
            'essential': True,
            'portMappings': [
                {'containerPort': 8000, 'protocol': 'tcp'}
            ],
            'logConfiguration': {
                'logDriver': 'awslogs',
                'options': {
                    'awslogs-group': '/ecs/my-task',
                    'awslogs-region': 'us-west-2',
                    'awslogs-stream-prefix': 'ecs'
                }
            }
        }
    ]
)
```
#### Run Task

```python
ecs.run_task(
    cluster='my-cluster',
    taskDefinition='my-task',
    launchType='FARGATE',
    networkConfiguration={
        'awsvpcConfiguration': {
            'subnets': ['subnet-12345678'],
            'securityGroups': ['sg-1234567890abcdef0'],
            'assignPublicIp': 'ENABLED'
        }
    }
)
```
## Infrastructure as Code

### CloudFormation Template

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Sample CloudFormation template'

Parameters:
  Environment:
    Type: String
    Default: dev
    AllowedValues:
      - dev
      - prod

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${Environment}-my-bucket'

  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: !Sub '${Environment}-my-function'
      Runtime: python3.11
      Handler: index.handler
      Code:
        ZipFile: |
          def handler(event, context):
              return {'statusCode': 200}
      Role: !GetAtt MyFunctionRole.Arn

  MyFunctionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```
### Terraform Configuration

```hcl
# S3 Bucket
resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-unique-bucket-name"

  tags = {
    Environment = "dev"
  }
}

# Lambda Function
resource "aws_lambda_function" "my_function" {
  function_name    = "my-function"
  runtime          = "python3.11"
  handler          = "index.handler"
  role             = aws_iam_role.lambda_role.arn
  filename         = "lambda_function.zip"
  source_code_hash = filebase64sha256("lambda_function.zip")
}

# IAM Role
resource "aws_iam_role" "lambda_role" {
  name = "lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# Attach policy
resource "aws_iam_role_policy_attachment" "lambda_basic" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
```
## Monitoring & Logging

### CloudWatch Logs

```python
import time

logs = boto3.client('logs')

# Create log group and stream (the stream must exist before writing to it)
logs.create_log_group(logGroupName='/aws/lambda/my-function')
logs.create_log_stream(
    logGroupName='/aws/lambda/my-function',
    logStreamName='stream-name'
)

# Put log event
logs.put_log_events(
    logGroupName='/aws/lambda/my-function',
    logStreamName='stream-name',
    logEvents=[
        {'timestamp': int(time.time() * 1000), 'message': 'Log message'}
    ]
)
```
### CloudWatch Metrics

```python
cloudwatch = boto3.client('cloudwatch')

# Put metric data
cloudwatch.put_metric_data(
    Namespace='MyApp',
    MetricData=[
        {
            'MetricName': 'RequestCount',
            'Value': 1,
            'Unit': 'Count',
            'Dimensions': [
                {'Name': 'Environment', 'Value': 'dev'}
            ]
        }
    ]
)
```
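Custom metrics become useful when something watches them. A hedged sketch of an alarm on the `RequestCount` metric above; the alarm name, threshold, and SNS topic ARN are illustrative placeholders:

```python
def create_request_alarm(topic_arn):
    """Fire an SNS notification when RequestCount exceeds 1000
    in a 5-minute window (sketch; requires AWS credentials)."""
    import boto3
    boto3.client('cloudwatch').put_metric_alarm(
        AlarmName='high-request-count',
        Namespace='MyApp',
        MetricName='RequestCount',
        Dimensions=[{'Name': 'Environment', 'Value': 'dev'}],
        Statistic='Sum',
        Period=300,              # 5-minute aggregation window
        EvaluationPeriods=1,
        Threshold=1000,
        ComparisonOperator='GreaterThanThreshold',
        AlarmActions=[topic_arn],
    )
```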
## Security & Cost Best Practices

### 1. IAM Security

- Follow the principle of least privilege
- Use IAM roles instead of long-lived access keys
- Rotate credentials regularly
- Enable MFA for the root account
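The rotation bullet can be automated: list a user's access keys and flag any older than a cutoff. The helper below separates the pure age calculation from the AWS call; the user name and 90-day cutoff are illustrative:

```python
from datetime import datetime, timezone

def key_age_days(create_date, now=None):
    """Age of an access key in whole days; create_date is the
    CreateDate field returned by iam.list_access_keys()."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date).days

def stale_keys(user_name, max_age_days=90):
    """Access-key IDs older than max_age_days for a user
    (sketch; requires credentials with iam:ListAccessKeys)."""
    import boto3
    iam = boto3.client('iam')
    keys = iam.list_access_keys(UserName=user_name)['AccessKeyMetadata']
    return [k['AccessKeyId'] for k in keys
            if key_age_days(k['CreateDate']) > max_age_days]
```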
### 2. Network Security

- Use security groups and NACLs
- Enable VPC Flow Logs
- Use private subnets for databases
- Implement bastion hosts
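Enabling VPC Flow Logs is a one-call operation in boto3; a minimal sketch, where the VPC ID, log group, and delivery role ARN are placeholders you would substitute:

```python
def enable_flow_logs(vpc_id, log_group, role_arn, region='us-west-2'):
    """Send ALL traffic records for a VPC to a CloudWatch log group
    (sketch; the role must allow logs:PutLogEvents delivery)."""
    import boto3
    ec2 = boto3.client('ec2', region_name=region)
    ec2.create_flow_logs(
        ResourceIds=[vpc_id],
        ResourceType='VPC',
        TrafficType='ALL',
        LogDestinationType='cloud-watch-logs',
        LogGroupName=log_group,
        DeliverLogsPermissionArn=role_arn,
    )
```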
### 3. Data Security

- Enable S3 bucket encryption
- Use KMS for encryption key management
- Apply restrictive S3 bucket policies
- Enable CloudTrail for auditing
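The first two bullets combine into a single API call: set a default server-side encryption rule on the bucket, with SSE-KMS when a key is supplied and SSE-S3 otherwise. Bucket name and key ID are placeholders:

```python
def enable_bucket_encryption(bucket, kms_key_id=None):
    """Turn on default server-side encryption for a bucket
    (sketch; requires s3:PutEncryptionConfiguration)."""
    import boto3
    if kms_key_id:
        default = {'SSEAlgorithm': 'aws:kms', 'KMSMasterKeyID': kms_key_id}
    else:
        default = {'SSEAlgorithm': 'AES256'}  # SSE-S3
    boto3.client('s3').put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            'Rules': [{'ApplyServerSideEncryptionByDefault': default}]
        },
    )
```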
### 4. Cost Optimization

- Use reserved instances for steady workloads
- Use spot instances for fault-tolerant workloads
- Enable S3 lifecycle policies
- Monitor costs with Cost Explorer
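A lifecycle policy is just a dictionary handed to `put_bucket_lifecycle_configuration`. A sketch that archives objects under a prefix to Glacier and later expires them; the prefix and day counts are illustrative defaults:

```python
def lifecycle_config(prefix='logs/', glacier_after=30, expire_after=365):
    """Build a lifecycle policy: transition to Glacier, then expire."""
    return {
        'Rules': [{
            'ID': 'archive-then-expire',
            'Status': 'Enabled',
            'Filter': {'Prefix': prefix},
            'Transitions': [{'Days': glacier_after,
                             'StorageClass': 'GLACIER'}],
            'Expiration': {'Days': expire_after},
        }]
    }

def apply_lifecycle(bucket):
    """Apply the policy to a bucket (sketch; requires credentials)."""
    import boto3
    boto3.client('s3').put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config())
```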
## Common Patterns

### Serverless API

- API Gateway → Lambda → DynamoDB
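A minimal sketch of the Lambda leg of this pattern, assuming an API Gateway proxy event and a hypothetical `my-items` DynamoDB table; the `table` argument is injectable so the handler can be exercised without AWS:

```python
import json

def lambda_handler(event, context, table=None):
    """Store the request's JSON body as a DynamoDB item and echo it back."""
    if table is None:
        # Default to the real table (name is a placeholder)
        import boto3
        table = boto3.resource('dynamodb').Table('my-items')
    item = json.loads(event['body'])   # API Gateway proxy integration body
    table.put_item(Item=item)
    return {'statusCode': 201, 'body': json.dumps({'stored': item})}
```

Dependency-injecting the table keeps the handler unit-testable with a fake object standing in for DynamoDB.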
### ML Pipeline

- SageMaker for training
- S3 for model storage
- Lambda for inference
- API Gateway for endpoints
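The Lambda-for-inference step can be sketched as a thin wrapper around the SageMaker runtime API; the endpoint name below assumes the `my-endpoint` created in the SageMaker section:

```python
import json

def invoke_endpoint(payload, endpoint_name='my-endpoint'):
    """Send a JSON payload to a deployed SageMaker endpoint and
    return the decoded prediction (sketch; requires credentials)."""
    import boto3
    runtime = boto3.client('sagemaker-runtime')
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType='application/json',
        Body=json.dumps(payload),
    )
    return json.loads(resp['Body'].read())
```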
### Web Application

- EC2/ECS for compute
- RDS for database
- S3 + CloudFront for static assets
- Route 53 for DNS
## Integration

- Docker: Containerize applications
- Kubernetes: EKS for orchestration
- CI/CD: CodePipeline, CodeBuild
- Monitoring: CloudWatch, X-Ray
## Source

https://github.com/muhammederem/chief/blob/main/.claude/skills/devops/aws/SKILL.md

## Overview
Amazon Web Services offers a comprehensive cloud platform with compute, storage, database, analytics, networking, deployment, and machine learning services. The skill demonstrates programmatic usage of EC2, S3, Lambda, IAM, and SageMaker through boto3, enabling automated provisioning and management. This helps streamline infrastructure and ML workflows across AWS.
## How This Skill Works
The skill uses boto3 clients (ec2, s3, lambda, iam, sagemaker) to provision and control AWS resources. It shows launching EC2 instances with specific AMIs, security groups, and user data, uploading and listing S3 objects, generating presigned URLs, creating and invoking Lambda functions, and defining IAM roles, plus initiating SageMaker training jobs for ML workloads.
## When to Use It
- Setting up a development or test EC2 instance in a specific region with precise networking and tagging
- Uploading, listing, or downloading files in S3 and sharing them via time-limited presigned URLs
- Building serverless workflows by creating and invoking Lambda functions
- Defining IAM roles and attaching policies to grant secure, least-privilege access
- Training and deploying ML models with SageMaker
## Quick Start
- Step 1: Initialize AWS clients (ec2, s3, lambda, iam, sagemaker) in region us-west-2
- Step 2: Launch an EC2 instance with run_instances using the provided ImageId, InstanceType, KeyName, SecurityGroupIds, SubnetId, and UserData to install Docker
- Step 3: Upload a file to S3, generate a presigned URL, and create/invoke a Lambda function as shown in the examples
## Best Practices
- Tag EC2 instances (e.g., Name, Environment) to improve organization and cost tracking
- Use presigned URLs with expiration to grant time-limited S3 access
- Apply least-privilege IAM roles and policies for Lambda, EC2, and other services
- Isolate resources with proper Security Groups, Subnets, and region choices
- Automate tasks with boto3 scripts and regularly clean up test resources (terminate instances, delete test assets)
## Example Use Cases
- Launch an EC2 instance in us-west-2 with a specific AMI, key pair, and Docker installation
- Upload a local file to S3, list bucket contents, and download objects as needed
- Generate a presigned URL to securely share a file for 1 hour
- Create and invoke a Python-based AWS Lambda function
- Create an IAM role and attach AWSLambdaBasicExecutionRole for Lambda execution