API Endpoints (all authenticated):

- `GET /api/subscriptions` - current subscription status
- `POST /api/subscriptions/checkout` - create Stripe subscription
- `POST /api/subscriptions/cancel` - schedule cancellation at period end
- `POST /api/subscriptions/reactivate` - cancel pending cancellation
- `PUT /api/subscriptions/payment-method` - update payment method
- `GET /api/subscriptions/invoices` - billing history

Grace Period Job:

- Daily cron at 2:30 AM to check expired grace periods
- Downgrades to free tier when 30-day grace period expires
- Syncs tier to `user_profiles.subscription_tier`

Email Templates:

- `payment_failed_immediate` (first failure)
- `payment_failed_7day` (7 days before grace ends)
- `payment_failed_1day` (1 day before grace ends)
# Scheduler Module
Centralized cron job scheduler using node-cron for background tasks.
## Overview
The scheduler runs periodic background jobs. In blue-green deployments, multiple backend containers may run simultaneously, so all jobs MUST use distributed locking to prevent duplicate execution.
## Registered Jobs
| Job | Schedule | Description |
|---|---|---|
| Notification processing | 8 AM daily | Process scheduled notifications |
| Account purge | 2 AM daily | GDPR compliance - purge deleted accounts |
| Backup check | Every minute | Check for due scheduled backups |
| Retention cleanup | 4 AM daily | Clean up old backups (also runs after each backup) |
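The jobs above might be wired up in `core/scheduler/index.ts` along these lines. This is a hedged sketch only: the handler names and import paths are assumptions for illustration, not the actual module contents.

```typescript
import cron from 'node-cron';

// Hypothetical job handlers -- real names and paths may differ.
import { processNotifications } from '../../features/notifications/jobs/processNotifications';
import { purgeDeletedAccounts } from '../../features/accounts/jobs/purgeDeletedAccounts';
import { checkDueBackups } from '../../features/backups/jobs/checkDueBackups';
import { cleanupRetention } from '../../features/backups/jobs/cleanupRetention';

export function startScheduler(): void {
  // Each handler acquires its own distributed lock (see the pattern below),
  // so it is safe for every container to register these schedules.
  cron.schedule('0 8 * * *', processNotifications); // 8 AM daily
  cron.schedule('0 2 * * *', purgeDeletedAccounts); // 2 AM daily
  cron.schedule('* * * * *', checkDueBackups);      // every minute
  cron.schedule('0 4 * * *', cleanupRetention);     // 4 AM daily
}
```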
## Distributed Locking Requirement

All scheduled jobs MUST use the `lockService` from `core/config/redis.ts` to prevent duplicate execution when multiple containers are running.
## Pattern for New Jobs

```typescript
import { randomUUID } from 'crypto';
import { lockService } from '../../core/config/redis';
import { logger } from '../../core/logging/logger';

export async function processMyJob(): Promise<void> {
  const lockKey = 'job:my-job-name';
  const lockValue = randomUUID();
  const lockTtlSeconds = 300; // 5 minutes - adjust based on expected job duration

  // Try to acquire lock
  const acquired = await lockService.acquireLock(lockKey, lockTtlSeconds, lockValue);
  if (!acquired) {
    logger.debug('Job already running in another container, skipping');
    return;
  }

  try {
    logger.info('Starting my job');
    // Do work...
    logger.info('My job completed');
  } catch (error) {
    logger.error('My job failed', { error });
    throw error;
  } finally {
    // Always release the lock
    await lockService.releaseLock(lockKey, lockValue);
  }
}
```
## Lock Key Conventions

Use descriptive, namespaced lock keys:

| Pattern | Example | Use Case |
|---|---|---|
| `job:{name}` | `job:notification-processor` | Global jobs (run once) |
| `job:{name}:{id}` | `backup:schedule:uuid-here` | Per-entity jobs |
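A small helper can keep key construction consistent across jobs. This is a hypothetical sketch — the codebase may simply build keys inline:

```typescript
// Build a namespaced lock key; pass an id for per-entity jobs.
function lockKey(name: string, id?: string): string {
  return id === undefined ? `job:${name}` : `job:${name}:${id}`;
}

// lockKey('notification-processor')        -> 'job:notification-processor'
// lockKey('backup-schedule', 'uuid-here')  -> 'job:backup-schedule:uuid-here'
```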
## Lock TTL Guidelines
Set TTL longer than the expected job duration, but short enough to recover from crashes:
| Job Duration | Recommended TTL |
|---|---|
| < 10 seconds | 60 seconds |
| < 1 minute | 5 minutes |
| < 5 minutes | 15 minutes |
| Long-running | 30 minutes + heartbeat |
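The table could be encoded as a helper for use when adding new jobs. A sketch only — the thresholds mirror the table above, and the heartbeat for long-running jobs (periodically extending the lock) is left to the caller:

```typescript
// Map an expected job duration (seconds) to a recommended lock TTL (seconds).
function recommendedTtlSeconds(expectedDurationSeconds: number): number {
  if (expectedDurationSeconds < 10) return 60;   // < 10 s   -> 60 s
  if (expectedDurationSeconds < 60) return 300;  // < 1 min  -> 5 min
  if (expectedDurationSeconds < 300) return 900; // < 5 min  -> 15 min
  return 1800;                                   // long-running -> 30 min + heartbeat
}
```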
## Adding New Jobs

- Create job file in the feature's `jobs/` directory
- Implement distributed locking (see pattern above)
- Register in `core/scheduler/index.ts`
- Update this README with the new job
## Blue-Green Deployment Behavior
When both blue and green containers are running:
- Both schedulers trigger at the same time
- Both attempt to acquire the lock
- Only one succeeds (atomic Redis operation)
- The other skips the job execution
- Lock is released when job completes
This ensures exactly-once execution regardless of how many containers are running.
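The race can be illustrated with an in-memory stand-in for the lock service. This models only the semantics — first acquirer wins, and only the holder's token can release — while the real `lockService` relies on Redis to make these steps atomic across containers:

```typescript
// In-memory model of the lock semantics (illustration only; not the real
// lockService, which lives in core/config/redis.ts and uses Redis).
class InMemoryLockService {
  private locks = new Map<string, string>();

  acquireLock(key: string, _ttlSeconds: number, token: string): boolean {
    if (this.locks.has(key)) return false; // another container holds it
    this.locks.set(key, token);
    return true;
  }

  releaseLock(key: string, token: string): boolean {
    if (this.locks.get(key) !== token) return false; // not the holder
    this.locks.delete(key);
    return true;
  }
}

// Blue and green containers both fire at the same time:
const lock = new InMemoryLockService();
const blueWins = lock.acquireLock('job:my-job-name', 300, 'blue-token');
const greenWins = lock.acquireLock('job:my-job-name', 300, 'green-token');
// Exactly one acquisition succeeds; the loser skips the run.
// Only the winner's token can release the lock afterwards.
```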