I have ten active projects. Each one has scheduled tasks. The X engagement engine runs every 30 minutes. The skill audit runs weekly. Blog content gets queued on a schedule. SEO indexing pings Google on a cadence. Heartbeat checks make sure every service is still alive. For months, these cron jobs were scattered across different platforms, different configs, different monitoring tools. I had no single view of what was running, when it last ran, or whether it succeeded.
So I built Mission Control. It's the brain of the MGT ecosystem: a cron registry, executor, heartbeat monitor, and execution history dashboard. All in one place, all controllable from an admin panel. This is how it works and why every solo dev managing multiple projects needs something like it.
The Problem with Scattered Crons
Let me paint the picture of what I was dealing with before Mission Control.
The X engagement cron was defined in a Next.js API route, triggered by an external cron service. The SEO agent ran on its own schedule through a different trigger. The skill audit was a manual process I kept forgetting to run. Blog content generation was "whenever I remembered." Heartbeat monitoring was an UptimeRobot dashboard I checked once a week, maybe.
When something broke, I'd find out hours later. Or worse, I wouldn't find out at all until a user reported that the bot was down or the engagement loop had stalled. There was no execution history, so debugging meant digging through logs across multiple services trying to piece together what happened and when.
The real pain point wasn't any single cron failing. It was the cognitive load of tracking all of them. Every morning I'd think: did the engagement cron run? Is the SEO agent still pinging Google? Did the heartbeat catch that Railway went down last night? I was the monitoring system, and I'm not reliable at 7am.
What Mission Control Does
Mission Control is four things in one.
Cron Registry: Every scheduled task across the entire MGT ecosystem is registered in one database. The registry stores the cron's name, description, schedule (cron expression), the API endpoint it triggers, whether it's enabled or disabled, and metadata like category and priority.
Executor: When a cron is due, Mission Control hits the registered endpoint. It doesn't run the task itself. It triggers the service that owns the task. This is important. Mission Control is an orchestrator, not a monolith. Each service stays independent. Mission Control just makes sure it gets called on time.
Heartbeat Monitor: Nine services are registered in the heartbeat dashboard. Each one has a health check URL. Mission Control pings them on a schedule and tracks status (healthy, degraded, down), response time, and uptime percentage. If a service goes down, it shows up immediately in the dashboard.
Execution History: Every cron execution is logged. Start time, end time, duration, status (success, failure, timeout), and any output or error messages. I can see at a glance when each cron last ran, how long it took, and whether it succeeded. No more digging through logs.
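The trigger-and-log step at the heart of the executor can be sketched in a few lines. Everything here is illustrative, not the actual implementation: the `CronJob` shape, the helper names, and the 30-second timeout are all assumptions.

```typescript
// Illustrative sketch of the orchestrator's trigger step. Mission Control
// doesn't run the task; it calls the service that owns it and records the
// outcome. Names and the timeout are assumptions.

interface CronJob {
  id: number;
  name: string;
  endpointUrl: string;
  httpMethod: "GET" | "POST";
  headers: Record<string, string>;
  body: unknown;
}

interface ExecutionResult {
  status: "success" | "failure" | "timeout";
  responseCode?: number;
  durationMs: number;
  errorMessage?: string;
}

// Build the fetch options from the registry row (pure, so it's easy to test).
function buildRequestInit(job: CronJob, signal: AbortSignal) {
  return {
    method: job.httpMethod,
    headers: job.headers,
    body: job.httpMethod === "POST" ? JSON.stringify(job.body) : undefined,
    signal,
  };
}

async function triggerCron(job: CronJob, timeoutMs = 30_000): Promise<ExecutionResult> {
  const startedAt = Date.now();
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(job.endpointUrl, buildRequestInit(job, controller.signal));
    return {
      status: res.ok ? "success" : "failure",
      responseCode: res.status,
      durationMs: Date.now() - startedAt,
    };
  } catch (err) {
    const timedOut = (err as { name?: string })?.name === "AbortError";
    return {
      status: timedOut ? "timeout" : "failure",
      durationMs: Date.now() - startedAt,
      errorMessage: String(err),
    };
  } finally {
    clearTimeout(timer);
  }
}
```

The returned `ExecutionResult` maps directly onto a row in the execution history, which is the whole contract: send the trigger, record what came back.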
The Database Schema
Mission Control runs on 4 database tables, 3 enums, and 9 indexes. Here's the structure.
The cron_jobs table is the registry. Each row is a scheduled task with fields for name, description, cron_expression, endpoint_url, http_method, headers (JSON), body (JSON), enabled (boolean), category, priority, and timestamps. The cron expression uses standard 5-field syntax (minute, hour, day-of-month, month, day-of-week).
The cron_executions table logs every run. Foreign key to cron_jobs. Fields for started_at, completed_at, duration_ms, status (enum: pending, running, success, failure, timeout), response_code, response_body, and error_message. This table grows fast, so I have a retention policy that prunes entries older than 30 days.
The heartbeat_services table tracks monitored services. Name, health_check_url, expected_status_code, check_interval_seconds, current_status (enum: healthy, degraded, down, unknown), last_checked_at, last_healthy_at, uptime_percentage, and response_time_ms.
The heartbeat_history table logs every health check, similar to cron_executions but for heartbeats. Status, response_time, checked_at, and any error details.
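As a sketch, the tables above map onto row types like these. Field names follow the article; the exact column types and the retention helper are my assumptions.

```typescript
// Row shapes for the registry and history tables (types are assumptions).

type ExecutionStatus = "pending" | "running" | "success" | "failure" | "timeout";
type ServiceStatus = "healthy" | "degraded" | "down" | "unknown";

interface CronJobRow {
  id: number;
  name: string;
  description: string;
  cron_expression: string;      // standard 5-field syntax
  endpoint_url: string;
  http_method: string;
  headers: Record<string, string>;
  body: unknown;
  enabled: boolean;
  category: string;
  priority: number;
}

interface CronExecutionRow {
  id: number;
  cron_job_id: number;          // foreign key to cron_jobs
  started_at: Date;
  completed_at: Date | null;
  duration_ms: number | null;
  status: ExecutionStatus;
  response_code: number | null;
  response_body: string | null;
  error_message: string | null;
}

interface HeartbeatServiceRow {
  id: number;
  name: string;
  health_check_url: string;
  expected_status_code: number;
  check_interval_seconds: number;
  current_status: ServiceStatus;
  last_checked_at: Date | null;
  last_healthy_at: Date | null;
  uptime_percentage: number;
  response_time_ms: number | null;
}

// heartbeat_history mirrors cron_executions: status, response_time,
// checked_at, and error details per check.

// The 30-day retention policy on cron_executions needs a cutoff timestamp:
function retentionCutoff(now: Date, days = 30): Date {
  return new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
}
```

Anything in `cron_executions` older than `retentionCutoff(new Date())` gets pruned.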
The 9 indexes cover the queries I actually run: looking up crons by category, finding the most recent execution for each cron, filtering heartbeats by status, and querying execution history by date range. Without these indexes, the dashboard would crawl as the execution history grows.
The 8 Seeded Crons
Mission Control launched with 8 cron jobs pre-configured.
- X Engagement (every 30 min): The big one. 8 phases: health check, DM check, metric snapshot, reply chain sweep, prospect engagement, like sweep, queue replenish, and run summary. This drives the X growth engine that took the account from 0 to 500+ followers.
- Skill Audit (weekly, Sunday 09:00 UTC): Scores every skill in the system against KPIs, generates improvement proposals, and flags stale feedback rules. Proposal-only, never auto-applies changes.
- Blog Content Queue: Checks the content queue for scheduled posts, builds them if source material is ready, and stages them for review.
- SEO Indexing: Pings Google Search Console with updated URLs, checks indexing status, and tracks which pages are discovered vs indexed.
- Heartbeat Sweep: Runs the health check against all 9 registered services and updates the dashboard.
- Content Factory: Triggers the MGT Factory pipeline for scheduled content automation across social platforms.
- Analytics Snapshot: Pulls key metrics from Plausible and stores daily snapshots for trend analysis.
- Deployment Monitor: Checks Vercel and Railway deployment statuses and logs any failures or warnings.
Each cron can be enabled or disabled independently from the admin panel. No code changes needed. Just flip the toggle.
The 9 Heartbeat Services
The heartbeat dashboard monitors the entire MGT ecosystem.
- MGT Website (moderngrindtech.com)
- 2K Hub (production app)
- VIBE CRM (multi-tenant SaaS)
- Service Plug (serviceplug.net)
- Holy Services Bot (Railway)
- MGT Factory (content automation)
- X Engine (engagement automation)
- SEO Agent (indexing service)
- Mission Control itself (yes, it monitors itself)
Each service has a health check endpoint that returns a standard response: status, version, uptime, and any degradation warnings. The heartbeat monitor hits these endpoints, records the response time, and updates the dashboard. If a service doesn't respond within 10 seconds, it's marked as down.
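A single heartbeat check is small enough to sketch. The 10-second budget comes from the description above; the classification policy (wrong status code means degraded rather than down) and all the names are assumptions.

```typescript
// One heartbeat check, sketched. The 10s timeout is from the article; the
// degraded-vs-down policy is an assumption.

type HeartbeatStatus = "healthy" | "degraded" | "down";

// Pure classification, so the policy is testable without a network.
function classifyHeartbeat(responseCode: number | null, expected: number): HeartbeatStatus {
  if (responseCode === null) return "down"; // timeout or connection failure
  return responseCode === expected ? "healthy" : "degraded";
}

async function checkService(url: string, expected = 200, timeoutMs = 10_000) {
  const started = Date.now();
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  let code: number | null = null;
  try {
    const res = await fetch(url, { signal: controller.signal });
    code = res.status;
  } catch {
    code = null; // treated as down
  } finally {
    clearTimeout(timer);
  }
  return {
    status: classifyHeartbeat(code, expected),
    responseTimeMs: Date.now() - started,
    checkedAt: new Date(),
  };
}
```

The result updates the service's card in the dashboard and appends a row to the check history.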
The self-monitoring is worth explaining. Mission Control checks its own health endpoint through an external route. If the check fails, it means Mission Control itself is down, which means the heartbeat data stops updating, which is a visible signal in the dashboard (timestamps stop advancing). It's a dead man's switch pattern.
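The dead man's switch boils down to one observation: if every service's last-checked timestamp goes stale at once, it isn't nine simultaneous outages, it's the checker that died. A sketch, with an assumed staleness threshold:

```typescript
// Dead man's switch in miniature: if *all* last-checked timestamps are stale,
// the monitor itself has stopped. The 5-minute threshold is an assumption.

function monitorLooksDown(
  lastCheckedAts: Date[],
  now: Date,
  staleThresholdMs = 5 * 60_000
): boolean {
  return lastCheckedAts.every(
    (t) => now.getTime() - t.getTime() > staleThresholdMs
  );
}
```

Any individual stale timestamp just means one service missed a check; all of them stale together is the signal that Mission Control needs attention.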
The Admin Panel
Everything in Mission Control is controllable from a web-based admin panel. No SSH, no config files, no redeployments.
The cron registry page shows all registered crons in a table: name, schedule, last run, last status, next scheduled run, and an enable/disable toggle. Clicking a cron opens its detail view with the full execution history, a manual trigger button, and configuration editing.
The manual trigger button is one of the most useful features. When I'm debugging a cron or testing a new one, I can fire it immediately without waiting for the schedule. The execution gets logged the same way a scheduled run would, so I can see the results in the history.
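A hypothetical shape for that manual-trigger route: the path, the in-memory registry standing in for the cron_jobs table, and the response codes are all my assumptions. In a Next.js app this `POST` would be exported from a route file; the point is that it funnels into the same execution path as a scheduled run.

```typescript
// Hypothetical manual-trigger handler (names and status codes are
// assumptions). The Map stands in for the cron_jobs table.

const registry = new Map<number, { name: string; enabled: boolean }>([
  [1, { name: "x-engagement", enabled: true }],
  [2, { name: "skill-audit", enabled: false }],
]);

async function POST(req: Request): Promise<Response> {
  const id = Number(new URL(req.url).searchParams.get("id"));
  const job = registry.get(id);
  if (!job) return Response.json({ error: "unknown cron" }, { status: 404 });
  if (!job.enabled) return Response.json({ error: "cron is disabled" }, { status: 409 });
  // ...trigger the registered endpoint and log to cron_executions here,
  // exactly as a scheduled run would...
  return Response.json({ triggered: job.name }, { status: 202 });
}
```

Returning 202 (accepted) rather than waiting for the cron to finish keeps the button snappy; the history page shows the outcome once the execution completes.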
The heartbeat page is a grid of service cards. Green for healthy, yellow for degraded, red for down, gray for unknown. Each card shows the service name, current status, response time, uptime percentage, and last checked timestamp. Clicking a card shows the check history with a timeline chart.
Why This Matters for Solo Devs
If you're running one project, you don't need Mission Control. A simple cron job and an uptime monitor will do.
But when you're running ten projects, the complexity compounds. Each project adds 2-3 scheduled tasks. Each task can fail independently. Each failure has different symptoms and different fixes. Without a central registry, you spend your mornings playing detective instead of building.
Mission Control turned my morning routine from "check seven dashboards and three log files" to "open one page and scan for red." If everything is green, I move on to building. If something is red, I know exactly what failed, when it failed, and what the error was.
The execution history is the other underrated piece. When a cron starts failing intermittently, the history lets me see patterns. Does it fail at the same time every day? Does it fail after a specific service deploys? Is the failure rate increasing? Without history, every failure feels random. With history, patterns emerge.
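The "does it fail at the same time every day" question is one query over the history. A sketch, with assumed field names, that buckets non-success runs by UTC hour:

```typescript
// Bucket failed/timed-out executions by UTC hour to surface time-of-day
// patterns. Field names are assumptions.

interface Execution {
  startedAt: Date;
  status: "success" | "failure" | "timeout";
}

function failuresByHour(executions: Execution[]): Map<number, number> {
  const buckets = new Map<number, number>();
  for (const e of executions) {
    if (e.status === "success") continue;
    const hour = e.startedAt.getUTCHours();
    buckets.set(hour, (buckets.get(hour) ?? 0) + 1);
  }
  return buckets;
}
```

A spike in one bucket usually points at something external on a schedule of its own: a deploy window, a rate-limit reset, a nightly batch job on the other end.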
The Architecture Philosophy
Mission Control is deliberately simple. Four tables, a handful of API routes, and a dashboard. No message queues, no worker pools, no distributed locking. Those things are important for systems that run thousands of crons across multiple servers. I run eight crons across one ecosystem.
The key architectural decision was making Mission Control an orchestrator, not an executor. It doesn't run your tasks. It tells your services to run their own tasks. This means each service owns its own logic, its own error handling, its own retry behavior. Mission Control just tracks whether the trigger was sent and what came back.
This separation matters because it means I can add a new cron without touching Mission Control's code. I just add a row to the cron_jobs table with the endpoint URL and schedule. The service that owns the endpoint handles the rest. Mission Control is a registry and a trigger. That's it. And that simplicity is what makes it reliable.
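Concretely, "just add a row" looks something like this. Every value is illustrative, including the URL; the real registry rows obviously point at real endpoints.

```typescript
// Registering a new cron is a data change, not a code change: one new row
// in cron_jobs. All values here are illustrative.

const newCron = {
  name: "analytics-snapshot",
  description: "Daily metrics snapshot for trend analysis",
  cron_expression: "0 6 * * *",                               // 06:00 UTC daily
  endpoint_url: "https://example.com/api/analytics/snapshot", // assumed URL
  http_method: "POST",
  headers: { Authorization: "Bearer <token>" },
  body: {},
  enabled: true,
  category: "analytics",
  priority: 2,
};

// Insert via whatever data layer you use, e.g.:
// await db.insert("cron_jobs", newCron);
```

From that point on, the executor picks it up on its next pass, the toggle works, and the history accumulates, with zero changes to Mission Control itself.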
What's Next
Mission Control is running internally and handling real ops jobs. Still in development, but already useful. Things I want to add next:
Alerting is the obvious one. Right now, I see failures when I check the dashboard. I want Slack or Discord notifications when a cron fails or a service goes down. The execution history has the data. I just need to pipe it to a notification channel.
Dependency graphs are on the roadmap too. Some crons depend on others. The blog content queue should only run after the content factory has fresh material. Right now, that ordering is handled by schedule timing (content factory runs 30 minutes before the queue check). Explicit dependencies would be cleaner.
And I want to open source the core. The registry, executor, and heartbeat patterns are generic enough that any solo dev managing multiple projects could use them. The MGT-specific parts (X engagement, skill audit, etc.) would stay private, but the framework itself? That should be shared.
If you're a solo dev drowning in scattered cron jobs and monitoring dashboards, build your own Mission Control. It doesn't have to be fancy. Four tables, a trigger loop, and a status page. That's the minimum viable brain. Everything else is polish. Check out the full project portfolio to see what Mission Control keeps running, or get in touch if you need a similar system built for your operation.