TL;DR
A cron expression has five fields — minute, hour, day-of-month, month, day-of-week — and controls when a scheduled task runs. In Node.js, use node-cron with a timezone option. In Python, use schedule for simple jobs or APScheduler for production workloads. On Linux, run crontab -e for user jobs or drop crontab files into /etc/cron.d/. For serverless environments use Vercel Cron or GitHub Actions schedule. For distributed systems, prefer a queue-backed scheduler like pg-boss or BullMQ to guarantee exactly-once execution. Always monitor jobs with a dead man's switch pattern. Use our online cron expression tester to validate any schedule instantly.
Cron Expression Syntax — The Five (and Six) Field Format
A standard cron expression contains five whitespace-separated fields. Each field represents a time unit, and the scheduler fires the job when all fields match the current time simultaneously.
| Field | Position | Allowed Values | Special Characters |
|---|---|---|---|
| Minute | 1st | 0 – 59 | * , - / |
| Hour | 2nd | 0 – 23 | * , - / |
| Day of Month | 3rd | 1 – 31 | * , - / ? L W |
| Month | 4th | 1 – 12 or JAN–DEC | * , - / |
| Day of Week | 5th | 0 – 7 (0 and 7 = Sunday) or SUN–SAT | * , - / ? L # |
Special Characters
- `*` — matches every value in the field (wildcard)
- `,` — list separator: `1,3,5` in the day-of-week field means Mon, Wed, Fri
- `-` — range: `9-17` in the hour field means 9 AM through 5 PM
- `/` — step: `*/15` in minutes means every 15 minutes; `0/5` means 0, 5, 10, …
- `?` — no specific value (used in day-of-month or day-of-week to avoid conflicts in Quartz-style cron)
- `L` — last: `L` in day-of-month means last day of month; `5L` means last Friday
- `W` — nearest weekday: `15W` means the weekday nearest to the 15th
- `#` — nth weekday: `2#3` means the third Monday of the month
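The matching rule above ("fire when all fields match") can be sketched in a few lines of Python. This is a simplified matcher covering `*`, lists, ranges, and steps; it ignores the day-of-month/day-of-week OR rule and the Quartz-only characters, and the function names are illustrative, not a library API:

```python
def parse_field(field: str, lo: int, hi: int) -> set[int]:
    """Expand one cron field into the set of matching values."""
    values: set[int] = set()
    for part in field.split(','):
        part, _, step_s = part.partition('/')
        step = int(step_s) if step_s else 1
        if part == '*':
            start, end = lo, hi
        elif '-' in part:
            a, b = part.split('-')
            start, end = int(a), int(b)
        else:
            start = int(part)
            end = hi if step_s else start  # '0/5' means start at 0, step by 5
        values.update(range(start, end + 1, step))
    return values

def matches(expr: str, minute: int, hour: int, dom: int, month: int, dow: int) -> bool:
    """True when every field of a five-field expression matches the given time."""
    fields = expr.split()
    bounds = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]
    now = [minute, hour, dom, month, dow]
    return all(t in parse_field(f, lo, hi)
               for f, (lo, hi), t in zip(fields, bounds, now))

# '0 9 * * 1-5' means 9:00 AM on weekdays (1=Mon .. 5=Fri)
print(matches('0 9 * * 1-5', 0, 9, 15, 6, 3))   # → True  (a Wednesday)
print(matches('0 9 * * 1-5', 0, 9, 15, 6, 0))   # → False (a Sunday)
print(matches('*/15 * * * *', 45, 3, 1, 1, 1))  # → True  (45 is a multiple of 15)
```

Real schedulers precompute these value sets once and then compare them against the clock every minute, which is exactly what the "all fields match simultaneously" rule describes.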
Common Cron Expressions Reference Table
| Expression | Meaning |
|---|---|
| * * * * * | Every minute |
| */5 * * * * | Every 5 minutes |
| 0 * * * * | Every hour (at :00) |
| 0 0 * * * | Every day at midnight |
| 0 9 * * 1-5 | Weekdays at 9:00 AM |
| 0 2 * * * | Every day at 2:00 AM |
| 30 8 * * 1 | Every Monday at 8:30 AM |
| 0 0 1 * * | First day of each month at midnight |
| 0 0 1 1 * | January 1st at midnight (yearly) |
| 0 */6 * * * | Every 6 hours |
| 0 9-17 * * 1-5 | Every hour from 9 AM to 5 PM, weekdays |
| */15 9-17 * * 1-5 | Every 15 min during business hours |
Special Strings
Many schedulers accept human-readable aliases that expand to standard expressions:
@reboot # Run once, at startup
@yearly # 0 0 1 1 * — once a year
@annually # 0 0 1 1 * — same as @yearly
@monthly # 0 0 1 * * — first of every month
@weekly # 0 0 * * 0 — every Sunday at midnight
@daily # 0 0 * * * — every day at midnight
@midnight # 0 0 * * * — same as @daily
@hourly # 0 * * * * — every hour
Validate any expression instantly with the DevToolBox cron expression tester, which shows the next 10 run times and describes the schedule in plain English.
Node.js Cron Scheduling with node-cron
node-cron is one of the most popular Node.js scheduling libraries, with millions of weekly npm downloads. It runs cron jobs within the Node.js process and supports a six-field format (including seconds) as an extension to the standard five fields.
Installation and Basic Usage
# Install node-cron
npm install node-cron
# TypeScript types (included in package since v3)
npm install --save-dev @types/node-cron # only if using older v2
// scheduler.ts
import cron from 'node-cron';
// Run every minute
const task = cron.schedule('* * * * *', () => {
console.log('Running every minute:', new Date().toISOString());
});
// Run at 2:30 AM every day
cron.schedule('30 2 * * *', async () => {
await cleanupExpiredSessions();
console.log('Daily cleanup complete');
});
// Run every weekday at 9 AM New York time
cron.schedule('0 9 * * 1-5', sendDailyDigestEmail, {
timezone: 'America/New_York',
});
// Six-field format: add seconds as the first field
// Run every 30 seconds
cron.schedule('*/30 * * * * *', () => {
checkHealthEndpoints();
});
Managing Tasks — Start, Stop, and Destroy
import cron from 'node-cron';
// Create a task but do NOT start it immediately
const task = cron.schedule(
'0 * * * *',
() => { processHourlyReports(); },
{ scheduled: false, timezone: 'UTC' }
);
// Start the task manually
task.start();
// Temporarily stop without destroying
task.stop();
// Re-start after stopping
task.start();
// Permanently destroy — removes from event loop
task.destroy();
console.log('Task destroyed. Status:', task.getStatus());
// → 'destroyed'
// Validate expression before scheduling
const isValid = cron.validate('0 */2 * * *');
console.log('Valid:', isValid); // → true
Error Handling and Overlap Prevention
import cron from 'node-cron';
let isRunning = false;
cron.schedule('*/5 * * * *', async () => {
// Prevent overlap: skip if previous run is still active
if (isRunning) {
console.warn('Previous job still running — skipping this tick');
return;
}
isRunning = true;
try {
await syncDataFromExternalAPI();
console.log('Sync successful:', new Date().toISOString());
} catch (error) {
// Send to error tracking (Sentry, Datadog, etc.)
console.error('Cron job failed:', error);
await notifyOncall(error);
} finally {
isRunning = false;
}
}, { timezone: 'UTC' });
Modern Runtimes: Bun.js and Deno Cron Support
Deno ships a first-class built-in cron API. Bun does not: as of Bun 1.x there is no native cron API, but Bun's Node compatibility means npm schedulers such as node-cron (or the cross-runtime croner package) run unchanged.
Cron Scheduling in Bun — node-cron Works Unchanged
// Bun has no built-in cron API; install a scheduler from npm:
// bun add node-cron
import cron from 'node-cron';
// Run every hour
cron.schedule('0 * * * *', async () => {
  await generateHourlyReport();
});
// Run every weekday morning at 8 AM UTC
cron.schedule('0 8 * * 1-5', async () => {
  const digest = await buildMorningDigest();
  await sendEmailBatch(digest.recipients, digest.content);
}, { timezone: 'UTC' });
// Stop a task after 10 minutes
const task = cron.schedule('* * * * *', () => {
  processQueue();
});
setTimeout(() => {
  task.stop();
}, 10 * 60 * 1000);
Deno.cron() — Scheduled Tasks in Deno Deploy
// Deno 1.38+ with --unstable-cron flag
// deno run --unstable-cron scheduler.ts
// Run every day at midnight UTC
Deno.cron('daily-cleanup', '0 0 * * *', async () => {
console.log('Running daily cleanup');
await deleteExpiredTokens();
await archiveOldLogs();
});
// Run every 15 minutes during business hours
Deno.cron('health-check', '*/15 9-17 * * 1-5', async () => {
const status = await fetch('https://api.example.com/health');
if (!status.ok) {
await alertOncallTeam(status.status);
}
});
// Deno Deploy cron: define in deno.json
// {
// "tasks": {
// "cron": "deno run --unstable-cron scheduler.ts"
// }
// }
// Note: Deno.cron requires --allow-net for fetch calls
// deno run --unstable-cron --allow-net scheduler.ts
Python Cron Scheduling with the schedule Library
The schedule library provides a human-friendly API for periodic task execution in Python. It is lightweight, dependency-free, and ideal for simple automation scripts and data pipelines.
Installation and Basic Scheduling
pip install schedule
# scheduler.py
import schedule
import time
import logging

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')

def sync_database():
    logging.info("Syncing database...")
    # ... your logic here

def send_daily_report():
    logging.info("Sending daily report...")
    # ... your logic here

def cleanup_temp_files():
    logging.info("Cleaning up temp files...")
    # ... your logic here

def send_weekly_digest():
    logging.info("Sending weekly digest...")
    # ... your logic here

def poll_api():
    logging.info("Polling API...")
    # ... your logic here

# Run every 10 minutes
schedule.every(10).minutes.do(sync_database)
# Run every hour
schedule.every().hour.do(sync_database)
# Run every day at 10:30 AM
schedule.every().day.at("10:30").do(send_daily_report)
# Run every Monday at 9 AM
schedule.every().monday.at("09:00").do(send_weekly_digest)
# Run every week
schedule.every().week.do(cleanup_temp_files)
# Run every 5-10 minutes randomly (jitter)
schedule.every(5).to(10).minutes.do(poll_api)

# Main loop
if __name__ == "__main__":
    logging.info("Scheduler started")
    while True:
        schedule.run_pending()
        time.sleep(1)  # check every second
Running in a Background Thread
# background_scheduler.py
import schedule
import time
import threading
import logging

def run_scheduler():
    """Run in a background thread — non-blocking for web apps."""
    while True:
        schedule.run_pending()
        time.sleep(1)

def job_with_error_handling():
    try:
        risky_operation()
    except Exception as e:
        logging.error(f"Job failed: {e}")
        # Optionally: ping failure endpoint
        # requests.get("https://hc-ping.com/YOUR-UUID/fail")

# Register jobs
schedule.every(30).minutes.do(job_with_error_handling)
schedule.every().day.at("02:00").do(nightly_backup)

# Start background thread — daemon=True exits with main thread
scheduler_thread = threading.Thread(target=run_scheduler, daemon=True)
scheduler_thread.start()

# Your main application continues here
# e.g., Flask/FastAPI app, CLI tool, etc.
print("Application running with background scheduler")
Production Python Scheduling with APScheduler
APScheduler (Advanced Python Scheduler) is a production-grade scheduling library that supports multiple job stores (memory, SQLAlchemy, Redis, MongoDB), multiple executors (thread pool, process pool, AsyncIO), and three types of schedules: interval, cron, and date.
BackgroundScheduler with IntervalTrigger and CronTrigger
pip install apscheduler
# apscheduler_example.py
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
from apscheduler.triggers.cron import CronTrigger
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.executors.pool import ThreadPoolExecutor
import logging
import atexit

logging.basicConfig(level=logging.INFO)

# Configure persistent job store with SQLite
jobstores = {
    'default': SQLAlchemyJobStore(url='sqlite:///jobs.db')
}
executors = {
    'default': ThreadPoolExecutor(max_workers=10),
}
job_defaults = {
    'coalesce': True,            # Combine missed runs into one
    'max_instances': 1,          # Prevent overlap
    'misfire_grace_time': 60,    # Run late jobs up to 60s after due time
}

scheduler = BackgroundScheduler(
    jobstores=jobstores,
    executors=executors,
    job_defaults=job_defaults,
    timezone='UTC'
)

def sync_inventory():
    logging.info("Syncing inventory from warehouse API...")
    # fetch_and_update_inventory()

def generate_daily_report():
    logging.info("Generating daily sales report...")
    # build_and_email_report()

# IntervalTrigger: run every 15 minutes
scheduler.add_job(
    sync_inventory,
    trigger=IntervalTrigger(minutes=15),
    id='inventory_sync',
    name='Inventory Sync',
    replace_existing=True,
)

# CronTrigger: run at 7:00 AM on weekdays in New York time
scheduler.add_job(
    generate_daily_report,
    trigger=CronTrigger(
        hour=7, minute=0,
        day_of_week='mon-fri',
        timezone='America/New_York'
    ),
    id='daily_report',
    name='Daily Sales Report',
    replace_existing=True,
)

# Start the scheduler
scheduler.start()
# Ensure scheduler shuts down cleanly when app exits
atexit.register(lambda: scheduler.shutdown())
APScheduler with FastAPI (AsyncScheduler)
# APScheduler 4.x async integration with FastAPI
from apscheduler import AsyncScheduler
from apscheduler.triggers.cron import CronTrigger
from contextlib import asynccontextmanager
from fastapi import FastAPI

async def nightly_cleanup():
    print("Running nightly cleanup...")

@asynccontextmanager
async def lifespan(app: FastAPI):
    async with AsyncScheduler() as scheduler:
        await scheduler.add_schedule(
            nightly_cleanup,
            CronTrigger(hour=2, minute=0),
            id='nightly_cleanup'
        )
        await scheduler.start_in_background()
        yield  # FastAPI serves requests here

app = FastAPI(lifespan=lifespan)

@app.get("/jobs")
async def list_jobs():
    return {"status": "scheduler running"}
Linux Crontab — System-Level Scheduled Tasks
The Linux cron daemon (crond) is the original and most battle-tested cron implementation, first appearing in Version 7 Unix in the late 1970s. On modern Linux systems (Ubuntu, Debian, RHEL), the daemon is typically Vixie cron or its fork cronie.
Editing the User Crontab
# Open your user's crontab in the default editor (usually vi/nano)
crontab -e
# List current user's crontab
crontab -l
# Remove current user's crontab (use with caution!)
crontab -r
# Edit another user's crontab (requires root)
crontab -u username -e
# Crontab format:
# MIN HOUR DOM MON DOW COMMAND
# Set environment variables at the top
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Send output to this email (set MAILTO="" to disable mail);
# note: crontab does not support inline comments after variable assignments
MAILTO=admin@example.com
# Run backup script every day at 2 AM
0 2 * * * /home/deploy/scripts/backup.sh >> /var/log/backup.log 2>&1
# Run weekly report every Sunday at midnight (redirect output)
0 0 * * 0 /opt/scripts/weekly-report.sh > /var/log/weekly-report.log 2>&1
# Ping monitoring service after successful run
0 3 * * * /opt/scripts/sync.sh && curl -fsS --retry 3 https://hc-ping.com/UUID
# Run with specific user environment
0 6 * * * cd /var/www/myapp && /usr/bin/node scripts/cleanup.js
# Run every 5 minutes, append to log
*/5 * * * * /opt/scripts/check-queue.sh >> /var/log/queue-check.log 2>&1
System-Wide Cron: /etc/cron.d/ and Drop-in Directories
# System cron locations:
# /etc/crontab — system-wide crontab (includes USERNAME field)
# /etc/cron.d/ — drop-in crontab files (same format as /etc/crontab)
# /etc/cron.daily/ — scripts run once daily by run-parts
# /etc/cron.weekly/ — scripts run once weekly
# /etc/cron.monthly/ — scripts run once monthly
# /etc/cron.hourly/ — scripts run once hourly
# /etc/cron.d/myapp-jobs (note: includes USERNAME field)
# MIN HOUR DOM MON DOW USER COMMAND
0 2 * * * deploy /var/www/myapp/scripts/nightly-backup.sh
*/15 * * * * www-data /var/www/myapp/scripts/cache-warm.sh
# Drop a script into /etc/cron.daily/ — must be executable, no extension
cat > /etc/cron.daily/myapp-cleanup << 'EOF'
#!/bin/bash
set -euo pipefail
/var/www/myapp/scripts/cleanup.sh >> /var/log/myapp-cleanup.log 2>&1
EOF
chmod +x /etc/cron.daily/myapp-cleanup
# Verify cron service is running
systemctl status cron # Ubuntu/Debian
systemctl status crond # RHEL/CentOS
# Check cron logs
journalctl -u cron -n 50
grep CRON /var/log/syslog | tail -20
Next.js Cron Jobs with Vercel Cron
Vercel Cron Jobs allow you to schedule serverless function invocations using cron expressions, defined in vercel.json. Each cron job calls a configured GET endpoint on your Next.js app at the specified schedule.
vercel.json Configuration
// vercel.json
{
"crons": [
{
"path": "/api/cron/daily-cleanup",
"schedule": "0 2 * * *"
},
{
"path": "/api/cron/send-digests",
"schedule": "0 8 * * 1-5"
},
{
"path": "/api/cron/refresh-cache",
"schedule": "*/15 * * * *"
}
]
}
Next.js App Router Cron Handler with CRON_SECRET
// src/app/api/cron/daily-cleanup/route.ts
import { NextRequest, NextResponse } from 'next/server';
// Set CRON_SECRET in your Vercel project's environment variables;
// Vercel then sends it as a Bearer token on cron invocations
const CRON_SECRET = process.env.CRON_SECRET;
export const runtime = 'nodejs'; // or 'edge' for Edge runtime
export const dynamic = 'force-dynamic';
export async function GET(req: NextRequest) {
// Verify the request is from Vercel Cron
const authHeader = req.headers.get('authorization');
if (authHeader !== `Bearer ${CRON_SECRET}`) {
return NextResponse.json(
{ error: 'Unauthorized' },
{ status: 401 }
);
}
try {
const startTime = Date.now();
// Your cron job logic here
await deleteExpiredSessions();
await archiveOldPosts();
await purgeOrphanedFiles();
const duration = Date.now() - startTime;
console.log(`Daily cleanup completed in ${duration}ms`);
return NextResponse.json({
success: true,
duration,
timestamp: new Date().toISOString(),
});
} catch (error) {
console.error('Cron job failed:', error);
return NextResponse.json(
{ error: 'Internal Server Error' },
{ status: 500 }
);
}
}
// Declare functions for TypeScript (replace with real implementations)
async function deleteExpiredSessions() { /* ... */ }
async function archiveOldPosts() { /* ... */ }
async function purgeOrphanedFiles() { /* ... */ }
# .env.local — generate a strong random secret
CRON_SECRET=your-random-secret-here
# Generate a secure secret:
# openssl rand -hex 32
GitHub Actions Scheduled Workflows
GitHub Actions supports cron-triggered workflows via the schedule event. The scheduler uses UTC timezone exclusively. The minimum supported interval is 5 minutes, and scheduled jobs on public repositories may be delayed during peak load.
Basic Scheduled Workflow with workflow_dispatch Fallback
# .github/workflows/nightly-build.yml
name: Nightly Build & Test

on:
  schedule:
    # Run at 2:00 AM UTC every weekday (Mon-Fri)
    - cron: '0 2 * * 1-5'
  # Also allow manual trigger (useful for testing)
  workflow_dispatch:
    inputs:
      debug_mode:
        description: 'Enable debug mode'
        required: false
        default: false
        type: boolean

jobs:
  nightly-test:
    runs-on: ubuntu-latest
    # GitHub auto-disables schedules on repos with no activity for 60 days;
    # the workflow_dispatch trigger above is the manual fallback
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - name: Install dependencies
        run: npm ci
      - name: Run full test suite
        run: npm test -- --reporter=github
      - name: Build production bundle
        run: npm run build
      - name: Notify on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Nightly build failed on ${{ github.ref }}"
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Scheduled Data Sync with Output Caching
# .github/workflows/data-sync.yml
name: Weekly Data Sync

on:
  schedule:
    - cron: '0 6 * * 1'  # Monday 6:00 AM UTC
  workflow_dispatch:

jobs:
  sync:
    runs-on: ubuntu-latest
    permissions:
      contents: write  # Allow pushing updated data files
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.12'
      - name: Install dependencies
        run: pip install requests pandas
      - name: Run data sync
        env:
          API_KEY: ${{ secrets.DATA_API_KEY }}
        run: python scripts/sync_data.py
      - name: Commit and push if data changed
        run: |
          git config user.name 'github-actions[bot]'
          git config user.email 'github-actions[bot]@users.noreply.github.com'
          git add data/
          git diff --staged --quiet || git commit -m "chore: weekly data sync [skip ci]"
          git push
Distributed Cron — pg-boss and BullMQ for Exactly-Once Execution
In a distributed system with multiple application instances (e.g., a Kubernetes deployment with 3 replicas), a naive cron job running in every instance would execute 3 times per tick. You need a distributed lock or a queue-backed scheduler to guarantee exactly-once execution.
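To make the race concrete, here is a minimal simulation of the claim-the-tick pattern: three replicas fire on the same schedule tick, each tries to claim the tick's key, and only the winner runs the job. The in-memory set stands in for shared state such as Redis `SET key NX` or a Postgres advisory lock, which the libraries below use internally; all names are illustrative:

```python
import threading

claimed_ticks: set[str] = set()   # stands in for shared Redis/Postgres state
claim_lock = threading.Lock()     # Redis SET NX is atomic; emulate that here
executions: list[str] = []

def try_claim(tick_key: str) -> bool:
    """Atomically claim a tick (analogous to SET tick_key NX)."""
    with claim_lock:
        if tick_key in claimed_ticks:
            return False
        claimed_ticks.add(tick_key)
        return True

def instance(name: str, tick_key: str) -> None:
    if try_claim(tick_key):
        executions.append(name)   # only the winner runs the job
    # losers skip this tick silently

# Three replicas all wake up for the same schedule tick
tick = "daily-report:2024-01-01T07:00"
threads = [threading.Thread(target=instance, args=(f"replica-{i}", tick))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(executions))  # → 1 (exactly one replica executed the tick)
```

The key design point is that the claim must be atomic; a plain "check then insert" without the lock would let two replicas both see the key as unclaimed.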
pg-boss — PostgreSQL-Backed Job Queue with Cron
npm install pg-boss
// pg-boss-scheduler.ts
import PgBoss from 'pg-boss';
const boss = new PgBoss({
connectionString: process.env.DATABASE_URL,
// Retention: keep completed jobs for 7 days
deleteAfterDays: 7,
// Retry failed jobs up to 3 times
retryLimit: 3,
retryDelay: 60, // 60 seconds between retries
});
boss.on('error', (err) => console.error('pg-boss error:', err));
await boss.start();
// Schedule a recurring job — pg-boss uses node-cron expressions
// Only ONE instance across all workers will execute each tick
await boss.schedule(
'daily-report', // job name
'0 7 * * 1-5', // cron expression (UTC)
{}, // data payload (can be any JSON)
{
tz: 'America/Chicago', // timezone
singletonKey: 'daily-report', // prevent duplicate scheduling
}
);
// Worker: process the job (only one instance wins the lock)
await boss.work('daily-report', async (jobs) => {
for (const job of jobs) {
console.log('Processing daily-report job:', job.id);
try {
await generateAndSendDailyReport();
// pg-boss auto-completes on successful return
} catch (error) {
console.error('Job failed:', error);
throw error; // pg-boss will retry based on retryLimit
}
}
});
// One-time job with idempotency key — prevents duplicate processing
await boss.send('send-welcome-email', {
userId: '123',
email: 'user@example.com'
}, {
singletonKey: 'welcome-123', // won't create duplicate if key exists
expireInHours: 24,
});
async function generateAndSendDailyReport() {
// ... implementation
}
BullMQ — Redis-Backed Repeatable Jobs
npm install bullmq ioredis
// bullmq-scheduler.ts
import { Queue, Worker, QueueEvents } from 'bullmq';
import { Redis } from 'ioredis';
const redis = new Redis({
host: process.env.REDIS_HOST || 'localhost',
port: Number(process.env.REDIS_PORT) || 6379,
maxRetriesPerRequest: null, // Required by BullMQ
});
// Create a queue
const reportQueue = new Queue('report-jobs', {
connection: redis,
defaultJobOptions: {
attempts: 3,
backoff: { type: 'exponential', delay: 5000 },
removeOnComplete: { count: 100 },
removeOnFail: { count: 500 },
},
});
// Add a repeatable cron job (stored in Redis, survives restarts)
await reportQueue.upsertJobScheduler(
'weekly-sales-report', // scheduler ID
{
pattern: '0 8 * * 1', // cron expression — every Monday 8 AM
tz: 'America/New_York', // timezone
},
{
name: 'generate-report',
data: { type: 'sales', format: 'pdf' },
}
);
// Worker: processes jobs one at a time (concurrency: 1 prevents overlap)
const worker = new Worker(
'report-jobs',
async (job) => {
console.log(`Processing job ${job.id}: ${job.name}`);
const { type, format } = job.data;
// Update progress (visible in Bull Dashboard)
await job.updateProgress(10);
const data = await fetchReportData(type);
await job.updateProgress(50);
const pdf = await renderReport(data, format);
await job.updateProgress(90);
await emailReport(pdf);
await job.updateProgress(100);
return { success: true, generatedAt: new Date().toISOString() };
},
{
connection: redis,
concurrency: 1, // Process one job at a time
}
);
worker.on('completed', (job) => {
console.log(`Job ${job.id} completed`);
});
worker.on('failed', (job, err) => {
console.error(`Job ${job?.id} failed:`, err.message);
});
async function fetchReportData(_type: string) { return {}; }
async function renderReport(_data: unknown, _format: string) { return Buffer.from(''); }
async function emailReport(_pdf: Buffer) { /* ... */ }
Monitoring Cron Jobs — Dead Man's Switch and Alerting
The most common cron failure mode is silent failure: the job runs but produces no output, throws an error that is swallowed, or the server reboots and the job simply stops. Traditional monitoring that only watches for errors misses these cases entirely.
The dead man's switch (or heartbeat) pattern inverts the alert logic: instead of alerting when something goes wrong, you alert when the expected ping is not received within the expected window. This catches silent failures, server crashes, misconfigured schedules, and timeout-induced hangs.
Healthchecks.io Ping Pattern
#!/bin/bash
# bash-cron-with-monitoring.sh
# Wrap your cron job with start/success/fail pings
PING_URL="https://hc-ping.com/YOUR-UUID-HERE"
# Signal job start (optional but useful for duration tracking)
curl -fsS --retry 3 "$PING_URL/start" > /dev/null 2>&1
# Run your actual job — capture exit code
/opt/scripts/nightly-backup.sh
EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
# Ping success endpoint
curl -fsS --retry 3 "$PING_URL" > /dev/null 2>&1
echo "$(date): Job completed successfully"
else
# Ping failure endpoint — triggers immediate alert
curl -fsS --retry 3 "$PING_URL/fail" > /dev/null 2>&1
echo "$(date): Job FAILED with exit code $EXIT_CODE" >&2
exit $EXIT_CODE
fi
Node.js: Ping Monitoring from Your Cron Handler
// cron-with-monitoring.ts
import cron from 'node-cron';
const HC_PING_URL = process.env.HC_PING_URL!; // e.g. https://hc-ping.com/UUID
async function safePing(endpoint: string = '') {
try {
await fetch(`${HC_PING_URL}${endpoint}`, {
method: 'HEAD',
signal: AbortSignal.timeout(10_000),
});
} catch {
// Monitoring ping failure should never crash the job
console.warn('Failed to send monitoring ping');
}
}
async function nightlyDataSync() {
await safePing('/start'); // Signal job started
const timeoutId = setTimeout(async () => {
await safePing('/fail'); // Timeout exceeded — alert!
process.exit(1);
}, 25 * 60 * 1000); // 25 minute timeout guard
try {
await fetchAndSyncAllRecords();
await rebuildSearchIndex();
await invalidateCDNCache();
clearTimeout(timeoutId);
await safePing(); // Signal success
console.log('Nightly sync completed:', new Date().toISOString());
} catch (error) {
clearTimeout(timeoutId);
await safePing('/fail'); // Signal failure
throw error;
}
}
// Schedule with timezone
cron.schedule('0 2 * * *', nightlyDataSync, {
timezone: 'UTC',
});
async function fetchAndSyncAllRecords() { /* ... */ }
async function rebuildSearchIndex() { /* ... */ }
async function invalidateCDNCache() { /* ... */ }
Python: Monitoring Integration with Requests
# python-cron-monitoring.py
import schedule
import time
import requests
import logging
from contextlib import contextmanager

HC_PING_URL = "https://hc-ping.com/YOUR-UUID-HERE"

@contextmanager
def monitored_job(ping_url: str):
    """Context manager that pings healthchecks.io on success/failure.

    Configure the check's expected period and grace time on the
    healthchecks.io side; a missed success ping triggers the alert.
    """
    requests.get(f"{ping_url}/start", timeout=5)
    try:
        yield
        requests.get(ping_url, timeout=5)  # Success ping
    except Exception as e:
        logging.error(f"Job failed: {e}")
        requests.get(f"{ping_url}/fail", timeout=5)  # Failure ping
        raise

def nightly_etl_job():
    """ETL job with integrated monitoring."""
    with monitored_job(HC_PING_URL):
        extract_from_source_database()
        transform_data()
        load_to_data_warehouse()
        send_completion_notification()
        logging.info("ETL pipeline completed successfully")

def extract_from_source_database(): pass
def transform_data(): pass
def load_to_data_warehouse(): pass
def send_completion_notification(): pass

# Schedule the monitored job
schedule.every().day.at("01:00").do(nightly_etl_job)

if __name__ == "__main__":
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s [%(levelname)s] %(message)s',
        handlers=[
            logging.FileHandler('/var/log/etl-cron.log'),
            logging.StreamHandler()
        ]
    )
    logging.info("ETL scheduler started")
    while True:
        schedule.run_pending()
        time.sleep(30)
Monitoring Tools Comparison
| Tool | Free Tier | Alert Channels | Best For |
|---|---|---|---|
| Healthchecks.io | 20 checks free | Email, Slack, PagerDuty, SMS | Most devs — simple & reliable |
| Dead Man's Snitch | 1 snitch free | Email, Slack, PagerDuty | Simple heartbeat monitoring |
| Cronitor | 5 monitors free | Email, SMS, Slack, webhook | Cron + API + CI monitoring |
| OpsGenie | Free trial | Email, SMS, push, call | Enterprise on-call management |
| Grafana Alerting | Open source | All (webhook, email, etc.) | Self-hosted observability |
Test Your Cron Expression Online
Paste any cron expression to instantly see the next 10 scheduled run times, validate the syntax, and get a plain-English description of the schedule — no sign-up required.
Open Cron Expression Tester
Key Takeaways
- Cron expressions have 5 fields (minute, hour, day-of-month, month, day-of-week); many schedulers support a 6th field for seconds. Use `*/N` for step values and `@daily`/`@hourly` shorthands when supported.
- In Node.js, use `node-cron` with an explicit `timezone` option. Always add error handling and overlap prevention with a boolean flag or distributed lock.
- Bun has no built-in cron API; `node-cron` runs unchanged on Bun. Deno has `Deno.cron()` behind `--unstable-cron`.
- In Python, use `schedule` for simple scripts and APScheduler with `BackgroundScheduler` for production apps. Set `max_instances=1` and `misfire_grace_time` on all jobs.
- On Linux, use `crontab -e` for user jobs and drop crontab files into `/etc/cron.d/` for system services. Always log output with `>> /var/log/... 2>&1`.
- Vercel Cron calls a GET endpoint; protect it with a `CRON_SECRET` bearer token checked in the handler. GitHub Actions uses UTC; the minimum interval is 5 minutes.
- For distributed systems, use pg-boss (Postgres-backed) or BullMQ (Redis-backed) to get exactly-once execution with retries, persistence, and visibility, not raw cron in each process.
- Always monitor with the dead man's switch pattern: ping a service like Healthchecks.io on success. If the ping is missed, you get an alert. This catches silent failures, server crashes, and schedule drift that error-only monitoring misses.
Frequently Asked Questions
What does the cron expression * * * * * mean?
The expression * * * * * represents five fields separated by spaces: minute (0-59), hour (0-23), day of month (1-31), month (1-12), and day of week (0-7, where both 0 and 7 are Sunday). A * in any field means "every valid value." So * * * * * runs a job every single minute of every hour of every day. A more practical example: 0 2 * * * runs at 2:00 AM every day; 30 8 * * 1 runs at 8:30 AM every Monday.
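For intuition, the 30 8 * * 1 example can be checked by brute force: scan forward one minute at a time and collect the times where every field matches. A stdlib-only sketch (a real tester would use a cron library rather than scanning, and the function name is illustrative):

```python
from datetime import datetime, timedelta

def next_runs(after: datetime, count: int = 3) -> list[datetime]:
    """Next fire times of '30 8 * * 1' (8:30 AM every Monday),
    found by scanning forward one minute at a time."""
    t = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    hits = []
    while len(hits) < count:
        if t.minute == 30 and t.hour == 8 and t.weekday() == 0:  # 0 = Monday
            hits.append(t)
        t += timedelta(minutes=1)
    return hits

# Jan 1, 2024 is a Monday, but 8:30 has already passed at 9:00,
# so the first hit is the following Monday, Jan 8
for run in next_runs(datetime(2024, 1, 1, 9, 0)):
    print(run)
```

Scanning minute by minute is slow but obviously correct, which makes it a useful mental model for what a cron tester's "next 10 run times" output is computing.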
What are the special cron strings like @daily and @hourly?
Most cron implementations support shorthand strings: @reboot (run once at startup), @yearly or @annually (0 0 1 1 *), @monthly (0 0 1 * *), @weekly (0 0 * * 0), @daily or @midnight (0 0 * * *), and @hourly (0 * * * *). These are more readable than raw cron expressions for common schedules. Not all cron schedulers support them — node-cron and APScheduler do, but raw Linux crontab only supports @reboot, @yearly, @annually, @monthly, @weekly, @daily, @midnight, and @hourly.
How do I run a cron job every 5 minutes?
Use the step value syntax with a slash: */5 * * * * runs every 5 minutes. Similarly, */15 * * * * runs every 15 minutes, and */30 * * * * runs every 30 minutes. For every 2 hours, use 0 */2 * * *. The step syntax can also be combined with ranges: 0-30/5 * * * * runs every 5 minutes within the first 30 minutes of each hour. In node-cron and most modern schedulers, you can also use seconds as a sixth field: */5 * * * * * runs every 5 seconds.
How do I handle timezone-aware cron jobs in Node.js?
In node-cron, pass the timezone option: cron.schedule("0 9 * * *", handler, { timezone: "America/New_York" }). This uses IANA timezone names (e.g., "Europe/London", "Asia/Tokyo"). Without a timezone, node-cron uses the system timezone of the server, which can cause unexpected behavior in cloud environments. Always explicitly set the timezone for production jobs. The node-cron package relies on the system's IANA timezone database — ensure your server has tzdata installed.
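The DST consequence is easy to demonstrate with the stdlib zoneinfo module (Python 3.9+, shown here instead of Node.js because it makes the arithmetic explicit): the same 9:00 AM New York schedule maps to different UTC hours in winter and summer, which is exactly what a UTC-only or system-timezone scheduler gets wrong:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9; needs system tzdata

ny = ZoneInfo("America/New_York")

# The same local 9:00 AM schedule on two dates across the DST boundary
winter = datetime(2024, 1, 15, 9, 0, tzinfo=ny)  # EST, UTC-5
summer = datetime(2024, 7, 15, 9, 0, tzinfo=ny)  # EDT, UTC-4

print(winter.astimezone(timezone.utc).hour)  # → 14
print(summer.astimezone(timezone.utc).hour)  # → 13
```

A scheduler given an explicit IANA timezone handles this shift for you; one pinned to UTC fires an hour early or late for half the year.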
What is the difference between node-cron and node-schedule?
node-cron (github.com/node-cron/node-cron) uses standard cron expression syntax and runs tasks within the Node.js process. node-schedule (github.com/node-schedule/node-schedule) supports cron expressions plus Date objects and recurrence rules, making it more flexible for one-time future tasks. For simple recurring schedules, node-cron is lighter and more straightforward. For complex scheduling logic (e.g., "every third Tuesday of odd months"), node-schedule or Agenda.js are better choices. Neither persists jobs across restarts — for persistence, use a database-backed scheduler like pg-boss or BullMQ.
How do I prevent cron job overlap — what if the previous run is still executing?
Job overlap occurs when a cron job is triggered again before the previous execution finishes. Solutions: (1) Use a mutex or distributed lock (Redis SETNX or Redlock) to skip execution if another instance is running. (2) In node-cron, use scheduled: false and call start() manually, tracking state with a boolean. (3) For APScheduler in Python, set max_instances=1 on the job. (4) In BullMQ, jobs are queued and processed one at a time per worker. (5) In pg-boss, jobs can use singleton keys to prevent duplicates. Always add timeout detection to release stuck locks.
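The boolean-flag idea in solution (2) generalizes to a non-blocking lock. A minimal single-process sketch in Python using `threading.Lock` (illustrative names; a multi-instance deployment needs the distributed lock from solution (1) instead):

```python
import threading
import time

job_lock = threading.Lock()
results: list[str] = []

def guarded_job(name: str) -> None:
    """Skip this tick entirely if a previous run still holds the lock."""
    if not job_lock.acquire(blocking=False):
        results.append(f"{name}: skipped (previous run still active)")
        return
    try:
        results.append(f"{name}: running")
        time.sleep(0.2)  # simulate slow work that spans the next tick
    finally:
        job_lock.release()  # always release, even if the job raises

# Two ticks fire while the first is still working
t1 = threading.Thread(target=guarded_job, args=("tick-1",))
t1.start()
time.sleep(0.05)        # second tick arrives mid-run
guarded_job("tick-2")   # lock is held, so this tick is skipped
t1.join()

print(results)
```

The `finally` block is the important part: without it, one crashed run would leave the lock held and silently disable the job forever, which is why the answer above also recommends timeout detection for distributed locks.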
How do GitHub Actions scheduled workflows work with cron?
GitHub Actions supports scheduled triggers via the schedule event with cron syntax: on: { schedule: [{ cron: "0 8 * * 1" }] }. Note: GitHub uses UTC timezone — always convert local times to UTC. Cron jobs on public repositories may be delayed during high load periods. GitHub may skip scheduled workflows on repos with no commits in 60 days. Important: the minimum interval is 5 minutes (* * * * * is not allowed). For reliability-critical jobs, add workflow_dispatch so you can trigger manually, and use actions/checkout with a specific ref to pin to known-good code.
What is the best way to monitor cron jobs and get alerted on failures?
The most reliable pattern is the "dead man's switch" or heartbeat check: your cron job pings a monitoring service (like Healthchecks.io or Dead Man's Snitch) on successful completion. If the ping is not received within the expected window, the service sends an alert. Implementation: add a curl or HTTP request at the end of your script: curl -fsS --retry 3 "https://hc-ping.com/YOUR-UUID". For failures, ping the /fail endpoint. This approach catches silent failures, timeouts, and server crashes — not just script errors. Tools: Healthchecks.io (free tier available), Dead Man's Snitch, Cronitor, OpsGenie, and Grafana alerting.
Cron Job Scheduler: The Complete Guide to Scheduling Tasks with Cron Expressions. Covers cron syntax, Node.js node-cron, Bun/Deno, Python schedule and APScheduler, Linux crontab, Next.js Vercel Cron, GitHub Actions, distributed queues, and monitoring.