Amazon S3 is the backbone of file storage for millions of applications. This guide covers the S3 SDK, presigned URLs, bucket policies, and CloudFront integration.
S3 Basics: Buckets and Objects
S3 stores data as objects inside buckets.
S3 Core Concepts:
Bucket — Top-level container (globally unique name, tied to a region)
Object — A file stored in a bucket (up to 5TB per object)
Key — The object's path/name: "uploads/2026/user-123/photo.jpg"
Prefix — Virtual folder: "uploads/2026/" (S3 has no real folders)
Region — Where the bucket lives: us-east-1, eu-west-1, ap-southeast-1
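Since prefixes are just naming conventions, a key and its "folder" can be derived with plain string operations. A minimal sketch (buildKey and parentPrefix are illustrative helpers, not SDK functions):

```typescript
// Keys are plain strings; "folders" are only shared prefixes.
// buildKey and parentPrefix are hypothetical helpers for illustration.
function buildKey(userId: string, year: number, fileName: string): string {
  return `uploads/${year}/${userId}/${fileName}`;
}

function parentPrefix(key: string): string {
  // Everything up to (and including) the last "/" acts like a folder
  // when passed as the Prefix of a list request.
  return key.slice(0, key.lastIndexOf('/') + 1);
}

console.log(buildKey('user-123', 2026, 'photo.jpg'));      // uploads/2026/user-123/photo.jpg
console.log(parentPrefix('uploads/2026/user-123/photo.jpg')); // uploads/2026/user-123/
```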
Storage Classes:
Standard — Frequently accessed data, 99.99% availability
Standard-IA — Infrequent access, lower cost, retrieval fee
One Zone-IA — Lower cost, single AZ (no cross-AZ replication)
Intelligent-Tiering — Auto-move between tiers based on access patterns
Glacier Instant — Archive with millisecond retrieval
Glacier Flexible — Archive with minutes-to-hours retrieval
Glacier Deep Archive — Cheapest storage, 12-48h retrieval
Uploading Files with AWS SDK v3
AWS SDK v3 uses a modular design, so you import only the commands you need.
// AWS SDK v3 — File Upload to S3
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
  ListObjectsV2Command,
} from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage'; // For multipart upload
import { fromEnv } from '@aws-sdk/credential-providers';

const s3 = new S3Client({
  region: process.env.AWS_REGION || 'us-east-1',
  credentials: fromEnv(), // Reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
});

const BUCKET = process.env.S3_BUCKET_NAME!;

// 1. Simple upload (files < 5MB)
async function uploadFile(key: string, file: Buffer, contentType: string) {
  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key, // e.g., 'uploads/2026/user-123/avatar.jpg'
    Body: file,
    ContentType: contentType,
    // Optional: set cache headers
    CacheControl: 'max-age=31536000',
    // Optional: make publicly readable
    // ACL: 'public-read',
    // Optional: custom metadata
    Metadata: {
      'uploaded-by': 'server',
      'original-name': 'avatar.jpg',
    },
  });

  const result = await s3.send(command);

  return {
    // Regional endpoint (recommended over the legacy global endpoint)
    url: `https://${BUCKET}.s3.${process.env.AWS_REGION}.amazonaws.com/${key}`,
    etag: result.ETag,
    versionId: result.VersionId,
  };
}
// 2. Multipart upload for large files (recommended for > 100MB)
async function uploadLargeFile(key: string, stream: NodeJS.ReadableStream, contentType: string) {
  const upload = new Upload({
    client: s3,
    params: {
      Bucket: BUCKET,
      Key: key,
      Body: stream,
      ContentType: contentType,
    },
    partSize: 10 * 1024 * 1024, // 10 MB parts
    queueSize: 4, // 4 concurrent part uploads
  });

  upload.on('httpUploadProgress', (progress) => {
    console.log(`Uploaded: ${progress.loaded}/${progress.total} bytes`);
  });

  return upload.done();
}
Presigned URLs for Secure Direct Uploads
Presigned URLs let clients upload directly to S3 without routing file bytes through your server.
// Presigned URLs — Direct Browser-to-S3 Upload
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { randomUUID } from 'crypto';
// Generate presigned upload URL (browser uploads directly to S3)
async function generateUploadUrl(
  fileName: string,
  fileType: string,
  userId: string
) {
  const key = `uploads/${userId}/${randomUUID()}-${fileName}`;

  const command = new PutObjectCommand({
    Bucket: BUCKET,
    Key: key,
    ContentType: fileType,
    // Note: a presigned PUT URL cannot enforce a size limit by itself.
    // Validate fileSize server-side before signing, or use a presigned POST
    // with a content-length-range policy condition.
  });

  const signedUrl = await getSignedUrl(s3, command, {
    expiresIn: 15 * 60, // 15 minutes
  });

  return {
    uploadUrl: signedUrl,
    key, // Return key so client can reference the uploaded file
    expiresIn: 900, // seconds
  };
}
// Express.js endpoint (assumes `app` and the `authenticate` middleware are defined elsewhere)
app.post('/api/upload-url', authenticate, async (req, res) => {
  const { fileName, fileType, fileSize } = req.body;

  // Validate file type and size on the server
  const allowedTypes = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
  if (!allowedTypes.includes(fileType)) {
    return res.status(400).json({ error: 'File type not allowed' });
  }
  if (fileSize > 10 * 1024 * 1024) { // 10 MB limit
    return res.status(400).json({ error: 'File too large' });
  }

  const { uploadUrl, key } = await generateUploadUrl(fileName, fileType, req.user.id);
  res.json({ uploadUrl, key });
});
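One detail the endpoint above glosses over: fileName comes straight from the client and ends up embedded in the S3 key. A hedged sketch of a sanitizer (sanitizeFileName is a hypothetical helper, and the character whitelist is deliberately conservative):

```typescript
// Hypothetical helper: strip path separators and unsafe characters from a
// client-supplied file name before it is embedded in an S3 key.
function sanitizeFileName(name: string): string {
  return name
    .replace(/[/\\]/g, '-')           // no path separators
    .replace(/[^a-zA-Z0-9._-]/g, '_') // conservative character whitelist
    .slice(0, 200);                   // keep keys well under S3's 1024-byte limit
}

console.log(sanitizeFileName('../../etc/passwd')); // ..-..-etc-passwd
console.log(sanitizeFileName('my photo (1).jpg')); // my_photo__1_.jpg
```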
// Client-side: use the presigned URL to upload
async function uploadToS3(file, presignedUrl) {
  const response = await fetch(presignedUrl, {
    method: 'PUT',
    body: file,
    headers: {
      'Content-Type': file.type,
    },
  });

  if (!response.ok) throw new Error('Upload failed');
  return response;
}
CloudFront CDN Integration
CloudFront is AWS's CDN; it caches S3 content at edge locations worldwide.
// CloudFront + S3 Setup
// 1. Bucket policy to allow CloudFront (Origin Access Control)
const bucketPolicy = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'AllowCloudFrontServicePrincipal',
      Effect: 'Allow',
      Principal: {
        Service: 'cloudfront.amazonaws.com',
      },
      Action: 's3:GetObject',
      Resource: 'arn:aws:s3:::my-bucket/*',
      Condition: {
        StringEquals: {
          'AWS:SourceArn': 'arn:aws:cloudfront::123456789:distribution/ABCDEF123456',
        },
      },
    },
  ],
};
// 2. Generate signed URLs for private CloudFront content
import { getSignedUrl } from '@aws-sdk/cloudfront-signer';
function generateCloudFrontSignedUrl(key: string, expirySeconds = 3600) {
  const url = `https://${process.env.CLOUDFRONT_DOMAIN}/${key}`;
  const expiryDate = new Date();
  expiryDate.setSeconds(expiryDate.getSeconds() + expirySeconds);

  return getSignedUrl({
    url,
    keyPairId: process.env.CLOUDFRONT_KEY_PAIR_ID!,
    privateKey: process.env.CLOUDFRONT_PRIVATE_KEY!,
    dateLessThan: expiryDate.toISOString(),
  });
}
// 3. Invalidate CloudFront cache when S3 objects change
import { CloudFrontClient, CreateInvalidationCommand } from '@aws-sdk/client-cloudfront';
const cloudfront = new CloudFrontClient({ region: 'us-east-1' });

async function invalidateCache(paths: string[]) {
  const command = new CreateInvalidationCommand({
    DistributionId: process.env.CLOUDFRONT_DISTRIBUTION_ID!,
    InvalidationBatch: {
      CallerReference: Date.now().toString(),
      Paths: {
        Quantity: paths.length,
        Items: paths.map(p => `/${p}`),
      },
    },
  });

  return cloudfront.send(command);
}

// Usage: invalidate a specific file after update
await invalidateCache(['uploads/profile-pictures/user-123.jpg']);

// Wildcard: invalidate all files in a folder
await invalidateCache(['uploads/*']);
Frequently Asked Questions
Should I allow public access to my S3 bucket?
Only for static website hosting or truly public assets. Keep buckets private and serve files through presigned URLs.
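As a sketch, the bucket-level Block Public Access settings can be represented like this. The field names follow S3's PublicAccessBlockConfiguration; actually applying them (e.g., via PutPublicAccessBlockCommand) is left out:

```typescript
// S3 Block Public Access settings — enabling all four keeps a bucket private
// even if a policy or ACL accidentally grants public access.
const publicAccessBlock = {
  BlockPublicAcls: true,       // reject new public ACLs
  IgnorePublicAcls: true,      // ignore existing public ACLs
  BlockPublicPolicy: true,     // reject bucket policies granting public access
  RestrictPublicBuckets: true, // restrict access under any public policy
};

console.log(Object.values(publicAccessBlock).every(Boolean)); // true
```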
How long should a presigned URL stay valid?
15-30 minutes is typical for upload URLs. The maximum validity is 7 days.
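A small guard that clamps a requested expiry to the allowed range might look like this (clampExpiry is a hypothetical helper; 604,800 seconds is the 7-day SigV4 presigning maximum):

```typescript
const MAX_PRESIGN_SECONDS = 7 * 24 * 60 * 60; // 604800 — the 7-day maximum

// Hypothetical helper: keep expiresIn within [1 second, 7 days]
function clampExpiry(requestedSeconds: number): number {
  return Math.min(Math.max(Math.floor(requestedSeconds), 1), MAX_PRESIGN_SECONDS);
}

console.log(clampExpiry(15 * 60));    // 900 — typical upload URL
console.log(clampExpiry(30 * 86400)); // 604800 — clamped to 7 days
```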
How can I reduce S3 storage costs?
Use S3 Intelligent-Tiering and lifecycle rules to move or expire aging data automatically.
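A lifecycle configuration that moves old uploads to cheaper tiers might be sketched like this. The day thresholds and prefix are illustrative, not a recommendation; the shape follows the Rules structure used by S3 lifecycle configuration:

```typescript
// Lifecycle rules: transition aging objects to cheaper storage, then expire them.
// Prefix and day thresholds here are illustrative assumptions.
const lifecycleRules = [
  {
    ID: 'archive-old-uploads',
    Status: 'Enabled',
    Filter: { Prefix: 'uploads/' },
    Transitions: [
      { Days: 30, StorageClass: 'STANDARD_IA' }, // infrequent access after 30 days
      { Days: 90, StorageClass: 'GLACIER_IR' },  // archive after 90 days
    ],
    Expiration: { Days: 365 }, // delete after a year
  },
];

console.log(lifecycleRules[0].Transitions.length); // 2
```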
What is a multipart upload?
A multipart upload splits a large file into parts that upload in parallel. It is recommended for files over 100MB.
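The arithmetic behind the split is simple. A sketch (partCount is a hypothetical helper; note S3 allows at most 10,000 parts per upload, which is why libraries like @aws-sdk/lib-storage grow the part size for very large files):

```typescript
// Hypothetical helper: how many parts a multipart upload produces
// for a given file size and part size.
function partCount(fileSizeBytes: number, partSizeBytes: number): number {
  return Math.ceil(fileSizeBytes / partSizeBytes);
}

const MB = 1024 * 1024;
const GB = 1024 * MB;

console.log(partCount(1 * GB, 10 * MB));  // 103 — a 1 GB file in 10 MB parts
console.log(partCount(25 * MB, 10 * MB)); // 3 — the last part is only 5 MB
```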