Over 27 years, I've migrated systems from VB6 to .NET, Classic ASP to Node.js, and monoliths to microservices. I've seen migrations succeed spectacularly and fail catastrophically. The difference isn't the tech—it's the strategy.
Here's everything I've learned about migrating legacy systems to modern architectures without destroying your business in the process.
The Big Rewrite is a Trap
The most tempting and most dangerous migration strategy: "Let's rewrite from scratch!"
Why Rewrites Fail
- Business keeps evolving: By the time you finish, the requirements have changed
- Hidden complexity: Legacy code contains years of bug fixes and edge cases
- No feedback loop: Can't release until complete
- Team burnout: Rewriting existing functionality is demoralizing
- Opportunity cost: 2 years building, 0 years shipping features
I've seen companies spend 3 years on rewrites, only to abandon them and go back to the legacy system.
The Strangler Fig Pattern
Instead of rewriting, gradually replace:
Legacy System (100% traffic)
↓
Legacy (90%) + New (10%)
↓
Legacy (50%) + New (50%)
↓
Legacy (10%) + New (90%)
↓
New System (100% traffic)
Named after strangler fig vines that eventually replace their host tree.
Step 1: Add a Routing Layer
Route requests between legacy and new systems:
// api-gateway/router.ts
import express from 'express';
import { createProxyMiddleware } from 'http-proxy-middleware';
const app = express();
// Feature flags determine routing
const featureFlags = {
'new-projects-api': {
enabled: true,
rolloutPercentage: 20, // 20% of traffic
},
};
// Create each proxy once at startup, not per-request
const newApiProxy = createProxyMiddleware({
target: 'http://new-api:3000',
changeOrigin: true,
});
const legacyApiProxy = createProxyMiddleware({
target: 'http://legacy-api:8080',
changeOrigin: true,
});
app.use('/api/projects', (req, res, next) => {
const flag = featureFlags['new-projects-api'];
const useNewSystem = flag.enabled && Math.random() < flag.rolloutPercentage / 100;
return useNewSystem
? newApiProxy(req, res, next) // Route to new system
: legacyApiProxy(req, res, next); // Route to legacy system
});
// Legacy system handles everything else
app.use('/', legacyApiProxy);

Start at 1% traffic and increase gradually. Roll back instantly if issues arise.
Step 2: Create Anti-Corruption Layer
Legacy systems have weird data models. Don't let that leak into new code.
// legacy-adapter/project-adapter.ts
export class LegacyProjectAdapter {
// Legacy system returns bizarre format
fromLegacy(legacyProject: any): Project {
return {
id: legacyProject.PROJECT_ID,
name: legacyProject.NAME,
status: this.mapStatus(legacyProject.STATUS_CODE),
clientId: legacyProject.CLIENT_FK,
budget: parseFloat(legacyProject.BUDGET_AMT || '0'),
createdAt: new Date(legacyProject.CREATED_DATE),
// Map 20+ weird fields to clean domain model
};
}
// Clean model to legacy format
toLegacy(project: Project): any {
return {
PROJECT_ID: project.id,
NAME: project.name,
STATUS_CODE: this.unmapStatus(project.status),
CLIENT_FK: project.clientId,
BUDGET_AMT: project.budget.toString(),
CREATED_DATE: project.createdAt.toISOString(),
};
}
private mapStatus(legacyStatus: string): ProjectStatus {
const mapping: Record<string, ProjectStatus> = {
'A': 'active',
'C': 'completed',
'H': 'on-hold',
'X': 'cancelled',
};
// Unknown codes (see the discovery section below) default to 'active'
return mapping[legacyStatus] || 'active';
}
// Reverse mapping for writes back to the legacy system
private unmapStatus(status: ProjectStatus): string {
const mapping: Record<ProjectStatus, string> = {
'active': 'A',
'completed': 'C',
'on-hold': 'H',
'cancelled': 'X',
};
return mapping[status];
}
}

New code only ever sees the clean Project domain model; the legacy weirdness stays isolated behind the adapter.
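For reference, here's a minimal sketch of the clean domain model the adapter targets, with field types inferred from the mappings above:

type ProjectStatus = 'active' | 'completed' | 'on-hold' | 'cancelled';

interface Project {
  id: string;
  name: string;
  status: ProjectStatus;
  clientId: string;
  budget: number;
  createdAt: Date;
}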
Step 3: Dual-Write Strategy
Write to both systems during migration:
export class ProjectService {
constructor(
private legacyAdapter: LegacyProjectAdapter,
private newRepository: ProjectRepository,
private featureFlags: FeatureFlagService
) {}
async create(dto: CreateProjectDTO): Promise<Project> {
// Create in new system
const project = await this.newRepository.create(dto);
if (this.featureFlags.isEnabled('dual-write-projects')) {
try {
// Also write to legacy system for consistency
await this.legacyAdapter.create(project);
} catch (error) {
// Log but don't fail - new system is source of truth
logger.error('Failed to sync to legacy', error);
}
}
return project;
}
}

Both systems stay in sync. When the migration is complete, remove the dual writes.
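Log-and-continue is the right call here, but silently dropped legacy writes become exactly the mismatches Step 5 hunts for. One option is to queue failed syncs for retry. This is a sketch with a hypothetical in-memory queue; a real setup would persist it (Redis, a jobs table) so retries survive restarts:

// Hypothetical retry queue for failed legacy syncs
const syncRetryQueue: Project[] = [];

async function syncToLegacy(project: Project): Promise<void> {
  try {
    await legacyAdapter.create(project);
  } catch (error) {
    logger.error('Failed to sync to legacy, queueing for retry', error);
    syncRetryQueue.push(project);
  }
}

// Drain the queue once a minute; failures re-queue themselves
setInterval(async () => {
  const pending = syncRetryQueue.splice(0, syncRetryQueue.length);
  for (const project of pending) {
    await syncToLegacy(project);
  }
}, 60 * 1000);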
Step 4: Migrate Data Incrementally
Don't migrate all data at once. Do it in phases:
// migration-scripts/migrate-projects.ts
async function migrateProjects() {
let offset = 0;
const batchSize = 1000;
while (true) {
// Fetch a batch from legacy; ORDER BY keeps the pagination deterministic
const legacyProjects = await legacyDB.query(
`SELECT * FROM PROJECTS ORDER BY PROJECT_ID LIMIT ${batchSize} OFFSET ${offset}`
);
if (legacyProjects.length === 0) break;
// Transform and insert into new system
const projects = legacyProjects.map(p => adapter.fromLegacy(p));
// projectsTable: the new system's table schema (e.g. a Drizzle table)
await newDB.insert(projectsTable).values(projects);
console.log(`Migrated ${offset + legacyProjects.length} projects`);
offset += batchSize;
// Don't overload the database
await sleep(1000);
}
}

Run it during off-hours. If it fails, restart from the last logged offset.
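OFFSET pagination also gets slower as the offset grows, and it can skip or duplicate rows if the table changes mid-run. A keyset variant, sketched below under the assumption that PROJECT_ID sorts consistently, resumes cleanly from the last migrated ID (parameterize the query in real code):

async function migrateProjectsKeyset() {
  const batchSize = 1000;
  let lastId = ''; // or load a checkpoint persisted by a previous run

  while (true) {
    // Each batch starts strictly after the last migrated ID
    const batch = await legacyDB.query(
      `SELECT * FROM PROJECTS WHERE PROJECT_ID > '${lastId}' ORDER BY PROJECT_ID LIMIT ${batchSize}`
    );
    if (batch.length === 0) break;

    const projects = batch.map(p => adapter.fromLegacy(p));
    await newDB.insert(projectsTable).values(projects);

    lastId = batch[batch.length - 1].PROJECT_ID;
    console.log(`Migrated through project ${lastId}`);
    await sleep(1000); // don't overload the database
  }
}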
Step 5: Validate Data Consistency
Ensure both systems return same data:
async function validateProjects() {
const projectIds = await getRandomProjectIds(100);
for (const id of projectIds) {
const legacyProject = await legacyAPI.getProject(id);
const newProject = await newAPI.getProject(id);
const legacyNormalized = adapter.fromLegacy(legacyProject);
// deepEqual and diff are assumed helpers, e.g. lodash's isEqual plus a diff library
if (!deepEqual(legacyNormalized, newProject)) {
logger.error('Data mismatch', {
projectId: id,
legacy: legacyNormalized,
new: newProject,
diff: diff(legacyNormalized, newProject),
});
}
}
}
// Run hourly in production
setInterval(validateProjects, 60 * 60 * 1000);

This catches data inconsistencies early.
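In practice, naive deep equality flags a lot of noise: timestamps that drift by milliseconds, audit fields the legacy system auto-updates, and so on. Normalizing both sides before comparing keeps the signal clean; the volatile fields below are illustrative:

// Strip or round fields that legitimately differ between systems
function normalizeForComparison(project: Project): Record<string, unknown> {
  const { createdAt, ...rest } = project;
  return {
    ...rest,
    // Compare dates at second granularity to ignore sub-second drift
    createdAt: Math.floor(createdAt.getTime() / 1000),
  };
}

Then compare normalizeForComparison(legacyNormalized) against normalizeForComparison(newProject) in the loop above.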
Step 6: Feature Parity Testing
Ensure new system can do everything legacy does:
describe('Project API Parity', () => {
it('should support all legacy query parameters', async () => {
const legacyQueries = [
'/projects?status=active',
'/projects?client_id=123',
'/projects?sort=name&order=desc',
'/projects?search=website',
];
for (const query of legacyQueries) {
const legacyResponse = await legacyAPI.get(query);
const newResponse = await newAPI.get(query);
expect(newResponse.length).toBeGreaterThan(0);
expect(newResponse).toMatchSchema(legacyResponse); // custom Jest matcher comparing response shapes
}
});
it('should maintain backward compatibility', async () => {
// New API accepts both old and new formats
const oldFormat = { project_id: '123', project_name: 'Test' };
const newFormat = { id: '123', name: 'Test' };
await expect(newAPI.create(oldFormat)).resolves.toBeDefined();
await expect(newAPI.create(newFormat)).resolves.toBeDefined();
});
});

Real-World Migration: OneTravel.com
When I worked on OneTravel.com, we migrated from a .NET monolith to Node.js microservices.
Our Approach
Phase 1 (Months 1-3): Infrastructure
- Set up new Node.js services
- Create API gateway for routing
- Build anti-corruption layer
Phase 2 (Months 4-9): Migrate Features
- Month 4: Search (10% traffic, then 100%)
- Month 6: Booking (5% traffic, then 100%)
- Month 8: User accounts (20% traffic, then 100%)
Phase 3 (Months 10-12): Data Migration
- Migrate historical data
- Validate consistency
- Remove dual writes
Phase 4 (Month 13): Decommission Legacy
- Turn off old system
- Celebrate!
What Worked
- Gradual rollout: Caught issues with 1% traffic, not 100%
- Feature flags: Instant rollback if problems
- Parallel running: Legacy was fallback for 6 months
- Team buy-in: Shipped features in new system early, kept team motivated
What Didn't
- Underestimated data: Data migration took 3x longer than expected
- Hidden dependencies: Discovered 15+ integrations we didn't know about
- Performance testing: New system was slower initially, needed optimization
Migration Anti-Patterns to Avoid
1. Big Bang Deployment
❌ Friday 5pm: Switch everyone to new system
❌ Monday 9am: Everything breaks
❌ Tuesday: Roll back, project cancelled
Always migrate gradually with rollback capability.
2. Ignoring Edge Cases
// ❌ "The legacy system checks for null, but we'll use TypeScript so it's fine"
// Reality: Legacy system has data like this:
{
project_name: null, // Should be impossible
status: 'Q', // Undocumented status
budget: 'TBD', // Number field with string?!
created_date: '0000-00-00' // Invalid date
}

Legacy systems contain years of edge cases. Handle them.
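Handling them means defensive parsing at the anti-corruption layer rather than trusting the schema. Here's a sketch using the fields from the example above; the fallback values are illustrative policy choices, not the only sane ones:

// Defensive parsers for the legacy edge cases shown above
function parseBudget(raw: unknown): number {
  const value = typeof raw === 'string' ? parseFloat(raw) : Number(raw);
  return Number.isFinite(value) ? value : 0; // 'TBD', null, etc. become 0
}

function parseCreatedDate(raw: unknown): Date | null {
  if (typeof raw !== 'string' || raw === '0000-00-00') return null;
  const date = new Date(raw);
  return Number.isNaN(date.getTime()) ? null : date;
}

function parseProjectName(raw: unknown): string {
  return typeof raw === 'string' && raw.trim() !== '' ? raw : 'Untitled';
}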
3. Assuming Perfect Documentation
Legacy systems are never documented. Plan for discovery:
// Document as you migrate
async function migrateProjects() {
const projects = await legacyDB.query('SELECT * FROM PROJECTS');
// Discover actual data
const statusValues = new Set(projects.map(p => p.STATUS_CODE));
console.log('Found status values:', statusValues);
// Output: Set { 'A', 'C', 'H', 'X', 'P', 'Q', '?' }
// ^ What's 'P', 'Q', '?'? Not documented!
}

4. No Rollback Plan
// ✅ Always have killswitch
const config = {
useNewSystem: process.env.USE_NEW_SYSTEM === 'true',
rolloutPercentage: parseInt(process.env.ROLLOUT_PERCENTAGE || '0', 10),
};
// Flip these via environment variables and a restart, no code deploy needed

Tools and Techniques
Database Migration Tools
- Flyway: Version-controlled SQL migrations
- Liquibase: Database-agnostic migrations
- Prisma Migrate: TypeScript-first migrations
Feature Flags
- LaunchDarkly: Enterprise feature flags
- Unleash: Open-source feature flags
- Custom solution: A simple flag service backed by a database table (sketched below)
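For the custom option, a minimal sketch: a flags table polled on an interval, so flips take effect without a restart. The table shape, db interface, and refresh cadence here are all assumptions:

// Hypothetical flags table: name (PK), enabled, rollout_percentage
interface FlagRow {
  name: string;
  enabled: boolean;
  rollout_percentage: number;
}

class FeatureFlagService {
  private cache = new Map<string, FlagRow>();

  constructor(private db: { query(sql: string): Promise<FlagRow[]> }) {
    // Refresh every 30s so flag changes apply without a restart
    setInterval(() => this.refresh(), 30 * 1000);
  }

  async refresh(): Promise<void> {
    const rows = await this.db.query('SELECT * FROM feature_flags');
    this.cache = new Map(rows.map(row => [row.name, row]));
  }

  isEnabled(name: string): boolean {
    const flag = this.cache.get(name);
    if (!flag || !flag.enabled) return false;
    return Math.random() < flag.rollout_percentage / 100;
  }
}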
Monitoring
Track both systems during migration:
import * as Sentry from '@sentry/node';
async function getProject(id: string) {
// startTransaction/startChild: Sentry v7 performance API
const transaction = Sentry.startTransaction({
op: 'project.get',
name: 'Get Project',
});
const useNew = shouldUseNewSystem();
const span = transaction.startChild({
op: 'db.query',
description: useNew ? 'New System' : 'Legacy System',
});
try {
return useNew
? await newAPI.getProject(id)
: await legacyAPI.getProject(id);
} finally {
// finish() in finally so spans are recorded even when the call throws
span.finish();
transaction.finish();
}
}

Compare performance and error rates between the two systems.
When to Give Up and Rewrite
Sometimes rewrites are necessary:
- Technology is dead: VB6, Flash, Silverlight
- No one understands it: Original developers gone, no documentation
- Can't be maintained: Every change breaks something
- Security holes: Unfixable vulnerabilities
- Performance is unfixable: Architecture is fundamentally flawed
But even then, consider strangler pattern first.
Lessons Learned
- Migrate gradually: Small batches, feature flags, rollback capability
- Legacy has hidden complexity: Plan for 3x more edge cases than you expect
- Both systems will run in parallel: For months or years
- Data migration is hardest: Allocate 40% of time to data
- Feature parity first: New system must do everything legacy does
- Monitor everything: Compare performance, errors, data consistency
- Team needs wins: Ship features in new system early to maintain morale
Conclusion
Migrating legacy systems to modern architectures is one of the hardest things in software engineering. The key is avoiding the big rewrite trap and using the strangler pattern: gradually replace the old system with the new, one piece at a time.
After 27 years and countless migrations, I've learned that successful migrations aren't about perfect code or cutting-edge tech—they're about incremental progress, careful validation, and always having a rollback plan.
The best migration is the one that's so gradual, users never notice it happened.

Jason Cochran
Software Engineer | Cloud Consultant | Founder at Strataga
27 years of experience building enterprise software for oil & gas operators and startups. Specializing in SCADA systems, field data solutions, and AI-powered rapid development. Based in Midland, TX serving the Permian Basin.