
Cron Job Scheduler: Schedule Tasks with Cron Expressions — Complete Guide

14 min read · by DevToolBox

TL;DR

A cron expression has five fields — minute, hour, day-of-month, month, day-of-week — and controls when a scheduled task runs. In Node.js, use node-cron with a timezone option. In Python, use schedule for simple jobs or APScheduler for production workloads. On Linux, edit crontab -e or drop scripts into /etc/cron.d/. For serverless environments use Vercel Cron or GitHub Actions schedule. For distributed systems, prefer a queue-backed scheduler like pg-boss or BullMQ to guarantee exactly-once execution. Always monitor jobs with a dead man's switch pattern. Use our online cron expression tester to validate any schedule instantly.

Cron Expression Syntax — The Five (and Six) Field Format

A standard cron expression contains five whitespace-separated fields. Each field represents a time unit, and the scheduler fires the job when all fields match the current time simultaneously. One classic quirk: in Vixie cron, when both day-of-month and day-of-week are restricted (neither is *), the job runs when either one matches, not both.

Field          Position   Allowed Values                        Special Characters
Minute         1st        0 – 59                                * , - /
Hour           2nd        0 – 23                                * , - /
Day of Month   3rd        1 – 31                                * , - / ? L W
Month          4th        1 – 12 or JAN–DEC                     * , - /
Day of Week    5th        0 – 7 (0 and 7 = Sunday) or SUN–SAT   * , - / ? L #

Special Characters

  • * — matches every value in the field (wildcard)
  • , — list separator: 1,3,5 in the day-of-week field means Mon, Wed, Fri
  • - — range: 9-17 in the hour field means 9 AM through 5 PM
  • / — step: */15 in minutes means every 15 minutes; 0/5 means 0, 5, 10, …
  • ? — no specific value (used in day-of-month or day-of-week to avoid conflicts in Quartz-style cron)
  • L — last: L in day-of-month means the last day of the month; 5L in day-of-week means the last Friday of the month
  • W — nearest weekday: 15W means the weekday nearest to the 15th
  • # — nth weekday: 2#3 means the third Monday of the month (Quartz-style, which numbers day-of-week 1–7 starting with Sunday)
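
To make the matching rules concrete, here is a minimal sketch of a matcher for a single cron field. It handles only *, lists, ranges, and steps, and defaults to the minute field's 0–59 bounds; a real parser would also handle names, L, W, and # (the function name is illustrative).

```python
def field_matches(field: str, value: int, lo_max: int = 0, hi_max: int = 59) -> bool:
    """Return True if `value` satisfies one cron field (supports * , - /).
    Bounds default to the minute field's 0-59 range."""
    for part in field.split(','):
        expr, _, step_s = part.partition('/')
        step = int(step_s) if step_s else 1
        if expr == '*':
            lo, hi = lo_max, hi_max
        elif '-' in expr:
            lo, hi = map(int, expr.split('-'))
        else:
            lo = int(expr)
            # "N/step" means N, N+step, ... up to the field maximum
            hi = hi_max if step_s else lo
        if lo <= value <= hi and (value - lo) % step == 0:
            return True
    return False

# "*/15" in the minute field matches 0, 15, 30, 45
print([m for m in range(60) if field_matches('*/15', m)])  # [0, 15, 30, 45]
```

The same logic applied once per field, ANDed across fields, is essentially what every scheduler in this guide does on each tick.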

Common Cron Expressions Reference Table

Expression           Meaning
* * * * *            Every minute
*/5 * * * *          Every 5 minutes
0 * * * *            Every hour (at :00)
0 0 * * *            Every day at midnight (server timezone)
0 9 * * 1-5          Weekdays at 9:00 AM
0 2 * * *            Every day at 2:00 AM
30 8 * * 1           Every Monday at 8:30 AM
0 0 1 * *            First day of each month at midnight
0 0 1 1 *            January 1st at midnight (yearly)
0 */6 * * *          Every 6 hours
0 9-17 * * 1-5       Every hour from 9 AM to 5 PM, weekdays
*/15 9-17 * * 1-5    Every 15 min during business hours

Special Strings

Many schedulers accept human-readable aliases that expand to standard expressions:

@reboot      # Run once, at startup
@yearly      # 0 0 1 1 *    — once a year
@annually    # 0 0 1 1 *    — same as @yearly
@monthly     # 0 0 1 * *    — first of every month
@weekly      # 0 0 * * 0    — every Sunday at midnight
@daily       # 0 0 * * *    — every day at midnight
@midnight    # 0 0 * * *    — same as @daily
@hourly      # 0 * * * *    — every hour

Validate any expression instantly with the DevToolBox cron expression tester, which shows the next 10 run times and describes the schedule in plain English.

Node.js Cron Scheduling with node-cron

node-cron is the most popular Node.js scheduling library with over 3 million weekly npm downloads. It runs cron jobs within the Node.js process and supports a six-field format (including seconds) as an extension to the standard five fields.

Installation and Basic Usage

# Install node-cron
npm install node-cron
# TypeScript types (included in package since v3)
npm install --save-dev @types/node-cron   # only if using older v2
// scheduler.ts
import cron from 'node-cron';

// Run every minute
const task = cron.schedule('* * * * *', () => {
  console.log('Running every minute:', new Date().toISOString());
});

// Run at 2:30 AM every day
cron.schedule('30 2 * * *', async () => {
  await cleanupExpiredSessions();
  console.log('Daily cleanup complete');
});

// Run every weekday at 9 AM New York time
cron.schedule('0 9 * * 1-5', sendDailyDigestEmail, {
  timezone: 'America/New_York',
});

// Six-field format: add seconds as the first field
// Run every 30 seconds
cron.schedule('*/30 * * * * *', () => {
  checkHealthEndpoints();
});

Managing Tasks — Start, Stop, and Destroy

import cron from 'node-cron';

// Create a task but do NOT start it immediately
const task = cron.schedule(
  '0 * * * *',
  () => { processHourlyReports(); },
  { scheduled: false, timezone: 'UTC' }
);

// Start the task manually
task.start();

// Temporarily stop without destroying
task.stop();

// Re-start after stopping
task.start();

// Permanently destroy — removes from event loop
task.destroy();
console.log('Task destroyed. Status:', task.getStatus());
// → 'destroyed'

// Validate expression before scheduling
const isValid = cron.validate('0 */2 * * *');
console.log('Valid:', isValid); // → true

Error Handling and Overlap Prevention

import cron from 'node-cron';

let isRunning = false;

cron.schedule('*/5 * * * *', async () => {
  // Prevent overlap: skip if previous run is still active
  if (isRunning) {
    console.warn('Previous job still running — skipping this tick');
    return;
  }
  isRunning = true;

  try {
    await syncDataFromExternalAPI();
    console.log('Sync successful:', new Date().toISOString());
  } catch (error) {
    // Send to error tracking (Sentry, Datadog, etc.)
    console.error('Cron job failed:', error);
    await notifyOncall(error);
  } finally {
    isRunning = false;
  }
}, { timezone: 'UTC' });

Modern Runtimes: Cron in Bun and Deno

Deno provides a first-class built-in cron API (Deno.cron). Bun does not ship a native scheduler, but because it implements the Node.js API, scheduler packages such as node-cron or croner run on it unchanged.

Scheduling in Bun with croner

// Bun has no native cron API, so install a scheduler package:
// bun add croner
import { Cron } from 'croner';

// Basic usage: run every hour
const hourly = new Cron('0 * * * *', async () => {
  await generateHourlyReport();
});
console.log('Next run:', hourly.nextRun());

// Run every weekday morning at 8 AM UTC
new Cron('0 8 * * 1-5', { timezone: 'UTC' }, async () => {
  const digest = await buildMorningDigest();
  await sendEmailBatch(digest.recipients, digest.content);
});

// Stop a job after 10 minutes
const temp = new Cron('* * * * *', () => {
  processQueue();
});
setTimeout(() => temp.stop(), 10 * 60 * 1000);

Deno.cron() — Scheduled Tasks in Deno Deploy

// Deno 1.38+ with --unstable-cron flag
// deno run --unstable-cron scheduler.ts

// Run every day at midnight UTC
Deno.cron('daily-cleanup', '0 0 * * *', async () => {
  console.log('Running daily cleanup');
  await deleteExpiredTokens();
  await archiveOldLogs();
});

// Run every 15 minutes during business hours
Deno.cron('health-check', '*/15 9-17 * * 1-5', async () => {
  const status = await fetch('https://api.example.com/health');
  if (!status.ok) {
    await alertOncallTeam(status.status);
  }
});

// On Deno Deploy, Deno.cron calls are detected automatically at deploy
// time; no separate cron configuration is needed.
// For local development, a deno.json task keeps the command short:
// {
//   "tasks": {
//     "cron": "deno run --unstable-cron --allow-net scheduler.ts"
//   }
// }

// Note: the health-check fetch above also needs --allow-net locally
// deno run --unstable-cron --allow-net scheduler.ts

Python Cron Scheduling with the schedule Library

The schedule library provides a human-friendly API for periodic task execution in Python. It is lightweight, dependency-free, and ideal for simple automation scripts and data pipelines.

Installation and Basic Scheduling

pip install schedule
# scheduler.py
import schedule
import time
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')

def sync_database():
    logging.info("Syncing database...")
    # ... your logic here

def send_daily_report():
    logging.info("Sending daily report...")
    # ... your logic here

def cleanup_temp_files():
    logging.info("Cleaning up temp files...")
    # ... your logic here

def send_weekly_digest():
    logging.info("Sending weekly digest...")
    # ... your logic here

def poll_api():
    logging.info("Polling external API...")
    # ... your logic here

# Run every 10 minutes
schedule.every(10).minutes.do(sync_database)

# Run every hour
schedule.every().hour.do(sync_database)

# Run every day at 10:30 AM
schedule.every().day.at("10:30").do(send_daily_report)

# Run every Monday at 9 AM
schedule.every().monday.at("09:00").do(send_weekly_digest)

# Run every week
schedule.every().week.do(cleanup_temp_files)

# Run every 5-10 minutes randomly (jitter)
schedule.every(5).to(10).minutes.do(poll_api)

# Main loop
if __name__ == "__main__":
    logging.info("Scheduler started")
    while True:
        schedule.run_pending()
        time.sleep(1)  # check every second

Running in a Background Thread

# background_scheduler.py
import schedule
import time
import threading
import logging

def run_scheduler():
    """Run in a background thread — non-blocking for web apps."""
    while True:
        schedule.run_pending()
        time.sleep(1)

def risky_operation():
    pass  # replace with real work

def nightly_backup():
    pass  # replace with your backup logic

def job_with_error_handling():
    try:
        risky_operation()
    except Exception as e:
        logging.error(f"Job failed: {e}")
        # Optionally: ping failure endpoint
        # requests.get("https://hc-ping.com/YOUR-UUID/fail")

# Register jobs
schedule.every(30).minutes.do(job_with_error_handling)
schedule.every().day.at("02:00").do(nightly_backup)

# Start background thread — daemon=True exits with main thread
scheduler_thread = threading.Thread(target=run_scheduler, daemon=True)
scheduler_thread.start()

# Your main application continues here
# e.g., Flask/FastAPI app, CLI tool, etc.
print("Application running with background scheduler")

Production Python Scheduling with APScheduler

APScheduler (Advanced Python Scheduler) is a production-grade scheduling library that supports multiple job stores (memory, SQLAlchemy, Redis, MongoDB), multiple executors (thread pool, process pool, AsyncIO), and three types of schedules: interval, cron, and date.

BackgroundScheduler with IntervalTrigger and CronTrigger

pip install apscheduler
# apscheduler_example.py
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
from apscheduler.triggers.cron import CronTrigger
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.executors.pool import ThreadPoolExecutor
import logging
import atexit

logging.basicConfig(level=logging.INFO)

# Configure persistent job store with SQLite
jobstores = {
    'default': SQLAlchemyJobStore(url='sqlite:///jobs.db')
}

executors = {
    'default': ThreadPoolExecutor(max_workers=10),
}

job_defaults = {
    'coalesce': True,           # Combine missed runs into one
    'max_instances': 1,         # Prevent overlap
    'misfire_grace_time': 60,   # Run late jobs up to 60s after due time
}

scheduler = BackgroundScheduler(
    jobstores=jobstores,
    executors=executors,
    job_defaults=job_defaults,
    timezone='UTC'
)

def sync_inventory():
    logging.info("Syncing inventory from warehouse API...")
    # fetch_and_update_inventory()

def generate_daily_report():
    logging.info("Generating daily sales report...")
    # build_and_email_report()

# IntervalTrigger: run every 15 minutes
scheduler.add_job(
    sync_inventory,
    trigger=IntervalTrigger(minutes=15),
    id='inventory_sync',
    name='Inventory Sync',
    replace_existing=True,
)

# CronTrigger: run at 7:00 AM on weekdays in New York time
scheduler.add_job(
    generate_daily_report,
    trigger=CronTrigger(
        hour=7, minute=0,
        day_of_week='mon-fri',
        timezone='America/New_York'
    ),
    id='daily_report',
    name='Daily Sales Report',
    replace_existing=True,
)

# Start the scheduler
scheduler.start()

# Ensure scheduler shuts down cleanly when app exits
atexit.register(lambda: scheduler.shutdown())

APScheduler with FastAPI (AsyncScheduler)

# APScheduler 4.x async integration with FastAPI
from apscheduler import AsyncScheduler
from apscheduler.triggers.cron import CronTrigger
from contextlib import asynccontextmanager
from fastapi import FastAPI

async def nightly_cleanup():
    print("Running nightly cleanup...")

@asynccontextmanager
async def lifespan(app: FastAPI):
    async with AsyncScheduler() as scheduler:
        await scheduler.add_schedule(
            nightly_cleanup,
            CronTrigger(hour=2, minute=0),
            id='nightly_cleanup'
        )
        await scheduler.start_in_background()
        yield  # FastAPI serves requests here

app = FastAPI(lifespan=lifespan)

@app.get("/jobs")
async def list_jobs():
    return {"status": "scheduler running"}

Linux Crontab — System-Level Scheduled Tasks

The Linux cron daemon (crond) is the original and most battle-tested cron implementation, shipping with Unix systems since the late 1970s. On modern Linux distributions the installed implementation is typically Vixie cron (Ubuntu, Debian) or cronie (RHEL, CentOS, Fedora).

Editing the User Crontab

# Open your user's crontab in the default editor (usually vi/nano)
crontab -e

# List current user's crontab
crontab -l

# Remove current user's crontab (use with caution!)
crontab -r

# Edit another user's crontab (requires root)
crontab -u username -e
# Crontab format:
# MIN HOUR DOM MON DOW COMMAND

# Set environment variables at the top
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=admin@example.com   # Send output to this email (empty string = no email)

# Run backup script every day at 2 AM
0 2 * * * /home/deploy/scripts/backup.sh >> /var/log/backup.log 2>&1

# Run weekly report every Sunday at midnight (redirect output)
0 0 * * 0 /opt/scripts/weekly-report.sh > /var/log/weekly-report.log 2>&1

# Ping monitoring service after successful run
0 3 * * * /opt/scripts/sync.sh && curl -fsS --retry 3 https://hc-ping.com/UUID

# Run with specific user environment
0 6 * * * cd /var/www/myapp && /usr/bin/node scripts/cleanup.js

# Run every 5 minutes, append to log
*/5 * * * * /opt/scripts/check-queue.sh >> /var/log/queue-check.log 2>&1

System-Wide Cron: /etc/cron.d/ and Drop-in Directories

# System cron locations:
# /etc/crontab          — system-wide crontab (includes USERNAME field)
# /etc/cron.d/          — drop-in crontab files (same format as /etc/crontab)
# /etc/cron.daily/      — scripts run once daily by run-parts
# /etc/cron.weekly/     — scripts run once weekly
# /etc/cron.monthly/    — scripts run once monthly
# /etc/cron.hourly/     — scripts run once hourly

# /etc/cron.d/myapp-jobs (note: includes USERNAME field)
# MIN HOUR DOM MON DOW USER COMMAND
0 2 * * * deploy /var/www/myapp/scripts/nightly-backup.sh
*/15 * * * * www-data /var/www/myapp/scripts/cache-warm.sh

# Drop a script into /etc/cron.daily/ — must be executable, no extension
cat > /etc/cron.daily/myapp-cleanup << 'EOF'
#!/bin/bash
set -euo pipefail
/var/www/myapp/scripts/cleanup.sh >> /var/log/myapp-cleanup.log 2>&1
EOF
chmod +x /etc/cron.daily/myapp-cleanup

# Verify cron service is running
systemctl status cron    # Ubuntu/Debian
systemctl status crond   # RHEL/CentOS

# Check cron logs
journalctl -u cron -n 50
grep CRON /var/log/syslog | tail -20

Next.js Cron Jobs with Vercel Cron

Vercel Cron Jobs allow you to schedule serverless function invocations using cron expressions, defined in vercel.json. Each cron job calls a configured GET endpoint on your Next.js app at the specified schedule.

vercel.json Configuration

// vercel.json
{
  "crons": [
    {
      "path": "/api/cron/daily-cleanup",
      "schedule": "0 2 * * *"
    },
    {
      "path": "/api/cron/send-digests",
      "schedule": "0 8 * * 1-5"
    },
    {
      "path": "/api/cron/refresh-cache",
      "schedule": "*/15 * * * *"
    }
  ]
}

Next.js App Router Cron Handler with CRON_SECRET

// src/app/api/cron/daily-cleanup/route.ts
import { NextRequest, NextResponse } from 'next/server';

// Set CRON_SECRET in your Vercel project's environment variables;
// Vercel includes it as a Bearer token on every cron invocation
const CRON_SECRET = process.env.CRON_SECRET;

export const runtime = 'nodejs';  // or 'edge' for Edge runtime
export const dynamic = 'force-dynamic';

export async function GET(req: NextRequest) {
  // Verify the request is from Vercel Cron
  const authHeader = req.headers.get('authorization');
  if (authHeader !== `Bearer ${CRON_SECRET}`) {
    return NextResponse.json(
      { error: 'Unauthorized' },
      { status: 401 }
    );
  }

  try {
    const startTime = Date.now();

    // Your cron job logic here
    await deleteExpiredSessions();
    await archiveOldPosts();
    await purgeOrphanedFiles();

    const duration = Date.now() - startTime;
    console.log(`Daily cleanup completed in ${duration}ms`);

    return NextResponse.json({
      success: true,
      duration,
      timestamp: new Date().toISOString(),
    });
  } catch (error) {
    console.error('Cron job failed:', error);
    return NextResponse.json(
      { error: 'Internal Server Error' },
      { status: 500 }
    );
  }
}

// Declare functions for TypeScript (replace with real implementations)
async function deleteExpiredSessions() { /* ... */ }
async function archiveOldPosts() { /* ... */ }
async function purgeOrphanedFiles() { /* ... */ }
# .env.local — generate a strong random secret
CRON_SECRET=your-random-secret-here

# Generate a secure secret:
# openssl rand -hex 32

GitHub Actions Scheduled Workflows

GitHub Actions supports cron-triggered workflows via the schedule event. The scheduler uses UTC timezone exclusively. The minimum supported interval is 5 minutes, and scheduled jobs on public repositories may be delayed during peak load.
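
Because the scheduler runs in UTC only, local wall-clock times must be converted, and a schedule pinned to a DST-observing zone drifts by an hour twice a year. A quick stdlib check (the dates below are arbitrary examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# 9:00 AM in New York maps to a different UTC hour in winter (EST, UTC-5)
# than in summer (EDT, UTC-4), so one fixed UTC cron cannot track it exactly.
for local_day in (datetime(2024, 1, 15, 9, 0), datetime(2024, 7, 15, 9, 0)):
    local = local_day.replace(tzinfo=ZoneInfo("America/New_York"))
    print(local.astimezone(ZoneInfo("UTC")).strftime("%H:%M UTC"))
# 14:00 UTC (winter), 13:00 UTC (summer)
```

If an exact local time matters, either schedule two cron entries and guard with a timezone check inside the job, or accept the one-hour seasonal shift.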

Basic Scheduled Workflow with workflow_dispatch Fallback

# .github/workflows/nightly-build.yml
name: Nightly Build & Test

on:
  schedule:
    # Run at 2:00 AM UTC every day (Mon-Fri)
    - cron: '0 2 * * 1-5'
  # Also allow manual trigger (useful for testing)
  workflow_dispatch:
    inputs:
      debug_mode:
        description: 'Enable debug mode'
        required: false
        default: 'false'
        type: boolean

jobs:
  nightly-test:
    runs-on: ubuntu-latest
    # Note: GitHub auto-disables scheduled workflows on repositories with
    # no activity for 60 days; re-enable them from the Actions tab.
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run full test suite
        run: npm test -- --reporter=github

      - name: Build production bundle
        run: npm run build

      - name: Notify on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Nightly build failed on ${{ github.ref }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

Scheduled Data Sync with Output Caching

# .github/workflows/data-sync.yml
name: Weekly Data Sync

on:
  schedule:
    - cron: '0 6 * * 1'  # Monday 6:00 AM UTC
  workflow_dispatch:

jobs:
  sync:
    runs-on: ubuntu-latest
    permissions:
      contents: write  # Allow pushing updated data files

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'

      - name: Install dependencies
        run: pip install requests pandas

      - name: Run data sync
        env:
          API_KEY: ${{ secrets.DATA_API_KEY }}
        run: python scripts/sync_data.py

      - name: Commit and push if data changed
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add data/
          git diff --staged --quiet || git commit -m "chore: weekly data sync [skip ci]"
          git push

Distributed Cron — pg-boss and BullMQ for Exactly-Once Execution

In a distributed system with multiple application instances (e.g., a Kubernetes deployment with 3 replicas), a naive cron job running in every instance would execute 3 times per tick. You need a distributed lock or a queue-backed scheduler to guarantee exactly-once execution.
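
The "only one instance wins" behavior can be illustrated with an in-process sketch; the atomic test-and-set below stands in for a real distributed primitive such as Redis SET ... NX EX or a Postgres advisory lock (all names here are illustrative):

```python
import threading

lock_held: set[str] = set()
lock_guard = threading.Lock()
executions: list[int] = []

def try_acquire(key: str) -> bool:
    """Atomic test-and-set, standing in for Redis `SET key id NX EX ttl`."""
    with lock_guard:
        if key in lock_held:
            return False
        lock_held.add(key)
        return True

def cron_tick(instance_id: int) -> None:
    # Every replica fires on the same schedule tick...
    if try_acquire('daily-report:2024-01-01'):
        executions.append(instance_id)  # ...but only one runs the job

# Simulate three replicas firing simultaneously
threads = [threading.Thread(target=cron_tick, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(executions))  # 1: exactly-once despite three replicas
```

Queue-backed schedulers like pg-boss and BullMQ implement exactly this race (plus TTLs, retries, and persistence) inside the database or Redis, which is why they are the safer choice.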

pg-boss — PostgreSQL-Backed Job Queue with Cron

npm install pg-boss
// pg-boss-scheduler.ts
import PgBoss from 'pg-boss';

const boss = new PgBoss({
  connectionString: process.env.DATABASE_URL,
  // Retention: keep completed jobs for 7 days
  deleteAfterDays: 7,
  // Retry failed jobs up to 3 times
  retryLimit: 3,
  retryDelay: 60,  // 60 seconds between retries
});

boss.on('error', (err) => console.error('pg-boss error:', err));

await boss.start();

// Schedule a recurring job — pg-boss uses node-cron expressions
// Only ONE instance across all workers will execute each tick
await boss.schedule(
  'daily-report',           // job name
  '0 7 * * 1-5',           // cron expression (UTC)
  {},                       // data payload (can be any JSON)
  {
    tz: 'America/Chicago',  // timezone
    singletonKey: 'daily-report',  // prevent duplicate scheduling
  }
);

// Worker: process the job (only one instance wins the lock)
await boss.work('daily-report', async (jobs) => {
  for (const job of jobs) {
    console.log('Processing daily-report job:', job.id);

    try {
      await generateAndSendDailyReport();
      // pg-boss auto-completes on successful return
    } catch (error) {
      console.error('Job failed:', error);
      throw error; // pg-boss will retry based on retryLimit
    }
  }
});

// One-time job with idempotency key — prevents duplicate processing
await boss.send('send-welcome-email', {
  userId: '123',
  email: 'user@example.com'
}, {
  singletonKey: 'welcome-123',  // won't create duplicate if key exists
  expireInHours: 24,
});

async function generateAndSendDailyReport() {
  // ... implementation
}

BullMQ — Redis-Backed Repeatable Jobs

npm install bullmq ioredis
// bullmq-scheduler.ts
import { Queue, Worker, QueueEvents } from 'bullmq';
import { Redis } from 'ioredis';

const redis = new Redis({
  host: process.env.REDIS_HOST || 'localhost',
  port: Number(process.env.REDIS_PORT) || 6379,
  maxRetriesPerRequest: null,  // Required by BullMQ
});

// Create a queue
const reportQueue = new Queue('report-jobs', {
  connection: redis,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 5000 },
    removeOnComplete: { count: 100 },
    removeOnFail: { count: 500 },
  },
});

// Add a repeatable cron job (stored in Redis, survives restarts)
await reportQueue.upsertJobScheduler(
  'weekly-sales-report',     // scheduler ID
  {
    pattern: '0 8 * * 1',    // cron expression — every Monday 8 AM
    tz: 'America/New_York',  // timezone
  },
  {
    name: 'generate-report',
    data: { type: 'sales', format: 'pdf' },
  }
);

// Worker: processes jobs one at a time (concurrency: 1 prevents overlap)
const worker = new Worker(
  'report-jobs',
  async (job) => {
    console.log(`Processing job ${job.id}: ${job.name}`);
    const { type, format } = job.data;

    // Update progress (visible in Bull Dashboard)
    await job.updateProgress(10);
    const data = await fetchReportData(type);

    await job.updateProgress(50);
    const pdf = await renderReport(data, format);

    await job.updateProgress(90);
    await emailReport(pdf);

    await job.updateProgress(100);
    return { success: true, generatedAt: new Date().toISOString() };
  },
  {
    connection: redis,
    concurrency: 1,  // Process one job at a time
  }
);

worker.on('completed', (job) => {
  console.log(`Job ${job.id} completed`);
});

worker.on('failed', (job, err) => {
  console.error(`Job ${job?.id} failed:`, err.message);
});

async function fetchReportData(_type: string) { return {}; }
async function renderReport(_data: unknown, _format: string) { return Buffer.from(''); }
async function emailReport(_pdf: Buffer) { /* ... */ }

Monitoring Cron Jobs — Dead Man's Switch and Alerting

The most common cron failure mode is silent failure: the job runs but produces no output, throws an error that is swallowed, or the server reboots and the job simply stops. Traditional monitoring that only watches for errors misses these cases entirely.

The dead man's switch (or heartbeat) pattern inverts the alert logic: instead of alerting when something goes wrong, you alert when the expected ping is not received within the expected window. This catches silent failures, server crashes, misconfigured schedules, and timeout-induced hangs.
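
The alerting side of the pattern reduces to one comparison: has the expected ping arrived within period plus grace? A minimal sketch (the period and grace values are illustrative):

```python
from datetime import datetime, timedelta

def should_alert(last_ping: datetime, now: datetime,
                 period: timedelta, grace: timedelta) -> bool:
    """Dead man's switch: alert when the expected ping is overdue."""
    return now - last_ping > period + grace

# Daily job last pinged yesterday at 2 AM; by 3:30 AM today it is overdue
last = datetime(2024, 1, 1, 2, 0)
now = datetime(2024, 1, 2, 3, 30)
print(should_alert(last, now, timedelta(days=1), timedelta(minutes=30)))  # True
```

Services like Healthchecks.io run this check for you; your job only has to send the ping.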

Healthchecks.io Ping Pattern

#!/bin/bash
# bash-cron-with-monitoring.sh
# Wrap your cron job with start/success/fail pings

PING_URL="https://hc-ping.com/YOUR-UUID-HERE"

# Signal job start (optional but useful for duration tracking)
curl -fsS --retry 3 "$PING_URL/start" > /dev/null 2>&1

# Run your actual job — capture exit code
/opt/scripts/nightly-backup.sh
EXIT_CODE=$?

if [ $EXIT_CODE -eq 0 ]; then
  # Ping success endpoint
  curl -fsS --retry 3 "$PING_URL" > /dev/null 2>&1
  echo "$(date): Job completed successfully"
else
  # Ping failure endpoint — triggers immediate alert
  curl -fsS --retry 3 "$PING_URL/fail" > /dev/null 2>&1
  echo "$(date): Job FAILED with exit code $EXIT_CODE" >&2
  exit $EXIT_CODE
fi

Node.js: Ping Monitoring from Your Cron Handler

// cron-with-monitoring.ts
import cron from 'node-cron';

const HC_PING_URL = process.env.HC_PING_URL!;  // e.g. https://hc-ping.com/UUID

async function safePing(endpoint: string = '') {
  try {
    await fetch(`${HC_PING_URL}${endpoint}`, {
      method: 'HEAD',
      signal: AbortSignal.timeout(10_000),
    });
  } catch {
    // Monitoring ping failure should never crash the job
    console.warn('Failed to send monitoring ping');
  }
}

async function nightlyDataSync() {
  await safePing('/start');   // Signal job started

  const timeoutId = setTimeout(async () => {
    await safePing('/fail');  // Timeout exceeded — alert!
    process.exit(1);
  }, 25 * 60 * 1000);        // 25 minute timeout guard

  try {
    await fetchAndSyncAllRecords();
    await rebuildSearchIndex();
    await invalidateCDNCache();

    clearTimeout(timeoutId);
    await safePing();         // Signal success
    console.log('Nightly sync completed:', new Date().toISOString());
  } catch (error) {
    clearTimeout(timeoutId);
    await safePing('/fail');  // Signal failure
    throw error;
  }
}

// Schedule with timezone
cron.schedule('0 2 * * *', nightlyDataSync, {
  timezone: 'UTC',
});

async function fetchAndSyncAllRecords() { /* ... */ }
async function rebuildSearchIndex() { /* ... */ }
async function invalidateCDNCache() { /* ... */ }

Python: Monitoring Integration with Requests

# python-cron-monitoring.py
import schedule
import time
import requests
import logging
from contextlib import contextmanager

HC_PING_URL = "https://hc-ping.com/YOUR-UUID-HERE"

@contextmanager
def monitored_job(ping_url: str, timeout_seconds: int = 300):
    """Context manager that pings healthchecks.io on success/failure.
    (timeout_seconds is advisory: enforce it inside the job itself;
    this context manager does not enforce it.)"""
    requests.get(f"{ping_url}/start", timeout=5)
    try:
        yield
        requests.get(ping_url, timeout=5)  # Success ping
    except Exception as e:
        logging.error(f"Job failed: {e}")
        requests.get(f"{ping_url}/fail", timeout=5)  # Failure ping
        raise

def nightly_etl_job():
    """ETL job with integrated monitoring."""
    with monitored_job(HC_PING_URL, timeout_seconds=1800):
        extract_from_source_database()
        transform_data()
        load_to_data_warehouse()
        send_completion_notification()
        logging.info("ETL pipeline completed successfully")

def extract_from_source_database(): pass
def transform_data(): pass
def load_to_data_warehouse(): pass
def send_completion_notification(): pass

# Schedule the monitored job
schedule.every().day.at("01:00").do(nightly_etl_job)

if __name__ == "__main__":
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s [%(levelname)s] %(message)s',
        handlers=[
            logging.FileHandler('/var/log/etl-cron.log'),
            logging.StreamHandler()
        ]
    )
    logging.info("ETL scheduler started")
    while True:
        schedule.run_pending()
        time.sleep(30)

Monitoring Tools Comparison

Tool               Free Tier         Alert Channels                  Best For
Healthchecks.io    20 checks free    Email, Slack, PagerDuty, SMS    Most devs — simple & reliable
Dead Man's Snitch  1 snitch free     Email, Slack, PagerDuty         Simple heartbeat monitoring
Cronitor           5 monitors free   Email, SMS, Slack, webhook      Cron + API + CI monitoring
OpsGenie           Free trial        Email, SMS, push, call          Enterprise on-call management
Grafana Alerting   Open source      All (webhook, email, etc.)      Self-hosted observability

Test Your Cron Expression Online

Paste any cron expression to instantly see the next 10 scheduled run times, validate the syntax, and get a plain-English description of the schedule — no sign-up required.

Open Cron Expression Tester

Key Takeaways

  • Cron expressions have 5 fields (minute, hour, day-of-month, month, day-of-week); many schedulers support a 6th field for seconds. Use */N for step values and @daily/@hourly shorthands when supported.
  • In Node.js, use node-cron with an explicit timezone option. Always add error handling and overlap prevention with a boolean flag or distributed lock.
  • Deno has Deno.cron() built in behind the --unstable-cron flag. Bun has no native scheduler; use Node-compatible packages such as node-cron or croner, which run on Bun unchanged.
  • In Python, use schedule for simple scripts and APScheduler with BackgroundScheduler for production apps. Set max_instances=1 and misfire_grace_time on all jobs.
  • On Linux, use crontab -e for user jobs and drop files into /etc/cron.d/ for system services. Always log output with >> /var/log/... 2>&1.
  • Vercel Cron calls a GET endpoint; protect it with a CRON_SECRET bearer token checked in the handler. GitHub Actions uses UTC; minimum interval is 5 minutes.
  • For distributed systems, use pg-boss (Postgres-backed) or BullMQ (Redis-backed) to get exactly-once execution with retries, persistence, and visibility — not raw cron in each process.
  • Always monitor with the dead man's switch pattern: ping a service like Healthchecks.io on success. If the ping is missed, you get an alert. This catches silent failures, server crashes, and schedule drift that error-only monitoring misses.

Frequently Asked Questions

What does the cron expression * * * * * mean?

The expression * * * * * represents five fields separated by spaces: minute (0-59), hour (0-23), day of month (1-31), month (1-12), and day of week (0-7, where both 0 and 7 are Sunday). A * in any field means "every valid value." So * * * * * runs a job every single minute of every hour of every day. A more practical example: 0 2 * * * runs at 2:00 AM every day; 30 8 * * 1 runs at 8:30 AM every Monday.

What are the special cron strings like @daily and @hourly?

Most cron implementations support shorthand strings: @reboot (run once at startup), @yearly or @annually (0 0 1 1 *), @monthly (0 0 1 * *), @weekly (0 0 * * 0), @daily or @midnight (0 0 * * *), and @hourly (0 * * * *). These are more readable than raw cron expressions for common schedules. Not all cron schedulers support them — node-cron and APScheduler do, but raw Linux crontab only supports @reboot, @yearly, @annually, @monthly, @weekly, @daily, @midnight, and @hourly.

How do I run a cron job every 5 minutes?

Use the step value syntax with a slash: */5 * * * * runs every 5 minutes. Similarly, */15 * * * * runs every 15 minutes, and */30 * * * * runs every 30 minutes. For every 2 hours, use 0 */2 * * *. The step syntax can also be combined with ranges: 0-30/5 * * * * runs every 5 minutes within the first 30 minutes of each hour. In node-cron and most modern schedulers, you can also use seconds as a sixth field: */5 * * * * * runs every 5 seconds.

How do I handle timezone-aware cron jobs in Node.js?

In node-cron, pass the timezone option: cron.schedule("0 9 * * *", handler, { timezone: "America/New_York" }). This uses IANA timezone names (e.g., "Europe/London", "Asia/Tokyo"). Without a timezone, node-cron uses the system timezone of the server, which can cause unexpected behavior in cloud environments. Always explicitly set the timezone for production jobs. The node-cron package relies on the system's IANA timezone database — ensure your server has tzdata installed.

What is the difference between node-cron and node-schedule?

node-cron (github.com/node-cron/node-cron) uses standard cron expression syntax and runs tasks within the Node.js process. node-schedule (github.com/node-schedule/node-schedule) supports cron expressions plus Date objects and recurrence rules, making it more flexible for one-time future tasks. For simple recurring schedules, node-cron is lighter and more straightforward. For complex scheduling logic (e.g., "every third Tuesday of odd months"), node-schedule or Agenda.js are better choices. Neither persists jobs across restarts — for persistence, use a database-backed scheduler like pg-boss or BullMQ.

How do I prevent cron job overlap — what if the previous run is still executing?

Job overlap occurs when a cron job is triggered again before the previous execution finishes. Solutions: (1) Use a mutex or distributed lock (Redis SETNX or Redlock) to skip execution if another instance is running. (2) In node-cron, use scheduled: false and call start() manually, tracking state with a boolean. (3) For APScheduler in Python, set max_instances=1 on the job. (4) In BullMQ, jobs are queued and processed one at a time per worker. (5) In pg-boss, jobs can use singleton keys to prevent duplicates. Always add timeout detection to release stuck locks.
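
Solution (2)'s skip-if-running logic can be sketched in-process with a non-blocking lock acquire (the same shape as the boolean-flag example earlier; names are illustrative):

```python
import threading

job_lock = threading.Lock()
runs: list[str] = []

def tick() -> None:
    # acquire(blocking=False) is the in-process analogue of Redis SETNX:
    # it returns False immediately if the lock is already held.
    if not job_lock.acquire(blocking=False):
        print('previous run still active; skipping this tick')
        return
    try:
        runs.append('ran')
    finally:
        job_lock.release()

job_lock.acquire()  # simulate a previous run that is still in progress
tick()              # skipped: runs stays empty
job_lock.release()
tick()              # previous run finished, so this tick executes
print(runs)  # ['ran']
```

The try/finally release is the important part: without it, one crashed run would block every future tick, which is why production setups add lock TTLs.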

How do GitHub Actions scheduled workflows work with cron?

GitHub Actions supports scheduled triggers via the schedule event with cron syntax: on: { schedule: [{ cron: "0 8 * * 1" }] }. Note: GitHub uses UTC timezone — always convert local times to UTC. Cron jobs on public repositories may be delayed during high load periods. GitHub may skip scheduled workflows on repos with no commits in 60 days. Important: the minimum interval is 5 minutes (* * * * * is not allowed). For reliability-critical jobs, add workflow_dispatch so you can trigger manually, and use actions/checkout with a specific ref to pin to known-good code.

What is the best way to monitor cron jobs and get alerted on failures?

The most reliable pattern is the "dead man's switch" or heartbeat check: your cron job pings a monitoring service (like Healthchecks.io or Dead Man's Snitch) on successful completion. If the ping is not received within the expected window, the service sends an alert. Implementation: add a curl or HTTP request at the end of your script: curl -fsS --retry 3 "https://hc-ping.com/YOUR-UUID". For failures, ping the /fail endpoint. This approach catches silent failures, timeouts, and server crashes — not just script errors. Tools: Healthchecks.io (free tier available), Dead Man's Snitch, Cronitor, OpsGenie, and Grafana alerting.


