You design the happy path. You draw the boxes on the whiteboard. User → API → Database. Clean arrows. Simple flow. Ship it.
Six months later, the system is held together by 47 try/catch blocks, 3 different logging formats, auth checks scattered across every controller, and retry logic that retries forever because nobody set a limit.
Welcome to cross-cutting concerns — the things that touch every part of your system but belong to none of them.
They're the responsibilities that cut across your entire application rather than living inside a single module:

- Authentication and authorization
- Logging and tracing
- Retries, rate limiting, and circuit breaking
- Input validation
- Error handling
These aren't features. Nobody puts "add logging" on the roadmap. But they determine whether your system survives contact with production.
15 lines of cross-cutting logic. 2 lines of actual business logic. And this pattern is copy-pasted across every endpoint, each copy slightly different.
Pull the concerns out of business logic. Stack them as middleware layers.
Each concern is defined once, tested once, and applied consistently.
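Take the rate limiter as a concrete example: it becomes one middleware, written once and dropped into any route's pipeline. A minimal in-memory token-bucket sketch (the `TokenBucket` class, the `rateLimiter` factory, and the Express-style `(req, res, next)` signature are illustrative, not a specific library's API):

```typescript
// In-memory token bucket: each client gets `capacity` tokens that
// refill at `refillPerSec` tokens per second.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
    this.last = Date.now();
  }

  tryTake(now: number = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Middleware factory: defined once, tested once, applied to any route.
function rateLimiter(opts: { capacity: number; refillPerSec: number }) {
  const buckets = new Map<string, TokenBucket>();
  return (req: any, res: any, next: () => void) => {
    const key = req.ip ?? "anonymous";
    let bucket = buckets.get(key);
    if (!bucket) {
      bucket = new TokenBucket(opts.capacity, opts.refillPerSec);
      buckets.set(key, bucket);
    }
    if (!bucket.tryTake()) {
      return res.status(429).json({ error: "Too many requests" });
    }
    next();
  };
}
```

Because the bucket logic is a plain class, it can be unit-tested without spinning up a server or a Redis instance.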
For services, use decorators or aspects:
The function body is pure business logic. Everything else is composed around it.
In a microservices world, push cross-cutting concerns to the edge:
The API gateway becomes the single place where auth, rate limiting, logging, and tracing are configured. Individual services don't even know these exist.
For advanced setups, a service mesh like Istio or Linkerd handles cross-cutting concerns at the network level:
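A sketch of what that can look like with Istio's traffic-management resources — a `VirtualService` for retries and a `DestinationRule` for outlier detection. The field names follow Istio's API; the service name and values are illustrative:

```yaml
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: user-service
spec:
  hosts:
    - user-service
  http:
    - route:
        - destination:
            host: user-service
      retries:                      # retry policy, enforced by the sidecar proxy
        attempts: 3
        perTryTimeout: 2s
        retryOn: 5xx,connect-failure
---
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: user-service
spec:
  host: user-service
  trafficPolicy:
    outlierDetection:               # circuit breaking: eject unhealthy hosts
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```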
The application code doesn't need to know about any of it.
Before going to production, every system needs answers to these:
| Concern | Question | Common Solution |
|---------|----------|-----------------|
| Auth | How do we verify identity? | JWT, OAuth2, API keys |
| Logging | What format? Where does it go? | Structured JSON → ELK/Datadog |
| Tracing | Can we follow a request across services? | OpenTelemetry, Jaeger |
| Retries | What's the retry policy? | Exponential backoff with jitter |
| Rate Limiting | How do we prevent abuse? | Token bucket at the gateway |
| Circuit Breaking | What happens when downstream is dead? | Fail fast after N errors |
| Validation | Where do we validate input? | Edge validation + schema enforcement |
| Error Handling | What does the client see when things fail? | Consistent error envelope |
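One row is worth making concrete. "Exponential backoff with jitter" means each retry waits roughly twice as long as the previous one, with a random factor so a fleet of clients doesn't retry in lockstep — and crucially, with a hard attempt limit. A minimal sketch (the `retryWithBackoff` helper is illustrative):

```typescript
// Retry an async operation with exponential backoff + "full jitter":
// the delay before attempt n is a random value in [0, base * 2^n), capped at maxDelayMs.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  opts: { maxAttempts: number; baseMs: number; maxDelayMs: number }
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < opts.maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // The attempt limit is what prevents "retries forever".
      if (attempt === opts.maxAttempts - 1) break;
      const ceiling = Math.min(opts.maxDelayMs, opts.baseMs * 2 ** attempt);
      const delay = Math.random() * ceiling;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

Wrapped as a decorator or middleware, this is the single retry policy every caller shares, instead of one hand-rolled loop per endpoint.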
Cross-cutting concerns should be:

- Defined once, not copy-pasted per endpoint
- Applied consistently across every entry point
- Testable in isolation
- Invisible to the business logic they wrap
The moment you see auth checks, logging, or retry logic inside business functions, you've scattered your concerns. The system will work today. It won't survive next year.
The boring infrastructure decisions are the ones that matter most at 3am.
— blanho
API Gateway handles the outside chaos. Service mesh handles the inside chaos.
```typescript
// Every controller looks like this
async function getUser(req: Request, res: Response) {
  // Auth check (copy-pasted from another controller)
  const token = req.headers.authorization;
  if (!token) return res.status(401).json({ error: "Unauthorized" });
  const user = verifyToken(token);
  if (!user) return res.status(403).json({ error: "Forbidden" });

  // Logging (different format than other controllers)
  console.log(`[${new Date().toISOString()}] getUser called by ${user.id}`);

  try {
    // Rate limit check (hand-rolled, probably wrong)
    const count = await redis.incr(`rate:${user.id}`);
    if (count > 100) return res.status(429).json({ error: "Too many requests" });

    const result = await db.findUser(req.params.id);

    // More logging (inconsistent with the one above)
    logger.info("User fetched", { userId: req.params.id });
    return res.json(result);
  } catch (err) {
    // Error handling (different from every other controller)
    console.error("Error:", err);
    return res.status(500).json({ error: "Something went wrong" });
  }
}
```

```typescript
// Each concern is a separate, testable middleware
const pipeline = [
  rateLimiter({ maxRequests: 100, window: "1m" }),
  authenticate(),
  authorize("users:read"),
  requestLogger(),
  errorHandler(),
];

// Business logic is just business logic
async function getUser(req: Request, res: Response) {
  const user = await db.findUser(req.params.id);
  return res.json(user);
}

app.get("/users/:id", ...pipeline, getUser);
```

```python
@authenticate
@rate_limit(max_calls=100, period=60)
@retry(max_attempts=3, backoff="exponential")
@log_execution
@trace
def get_user(user_id: str) -> User:
    return db.find_user(user_id)
```

```yaml
# API Gateway handles these for ALL services:
gateway:
  authentication: jwt-validation
  rate_limiting: 1000/min per client
  logging: structured-json
  tracing: opentelemetry
  circuit_breaker: 50% error threshold

# Services only handle business logic
user-service:
  GET /users/{id}: "Just fetch the user. That's it."
```