AWS Migration & Transfer Services
🔍 AWS Application Discovery Service
| Key Points | Detailed Notes |
|---|---|
| What is it? | Assessment tool for understanding on-premises infrastructure before migration |
| Main Purpose | • Automatically discovers servers, applications, and dependencies • Maps application relationships and performance metrics • Provides cost estimation for AWS migration |
| Deployment Types | • Agent-based: install agents on servers for detailed discovery • Agentless: network-based discovery with less detail |
| Key Benefits | • Automated discovery: eliminates manual inventory • Dependency mapping: visualizes application relationships • Performance insights: CPU, memory, and network utilization • Risk reduction: identifies migration challenges early |
| Limitations | • Surface-level discovery: may miss complex dependencies • 90-day data retention: limited historical data • Agent overhead: can impact system performance • Containerized workloads: struggles with dynamic environments |
| Best Use Cases | ✅ Large environments (50+ servers) ✅ Unknown dependencies ✅ Cost estimation needs ❌ Well-documented small environments |
Simple Real-World Example:
🏦 Trading Platform Migration
Problem: Bank with 500+ servers, unknown dependencies
Solution: Deployed ADS agents to discover infrastructure
Discovery: Trading system connected to 15 databases and 8 middleware components
Result: Proper migration sequence, 40% cost savings identified
Timeline: 1 month assessment vs 6 months manual work
Integration Pattern:
ADS → Migration Hub → Cost Explorer → MGN/DMS
🚀 AWS Application Migration Service (MGN)
| Key Points | Detailed Notes |
|---|---|
| What is it? | Automated lift-and-shift migration with minimal downtime |
| Core Concept | • Continuous replication from source to AWS • Test environment before cutover • Minutes of downtime during the final switch |
| Migration Process | 1. Install agents on source servers 2. Continuously sync to AWS replicas 3. Test everything in the AWS environment 4. Quick cutover during a maintenance window |
| Key Advantages | • Minimal downtime: minutes, not hours • Broad compatibility: physical, virtual, and cloud servers • Built-in testing: validate before production • Rollback capability: can reverse if needed |
| Limitations | • No modernization: pure lift-and-shift • Bandwidth requirements: significant for the initial sync • License costs: existing licenses may carry over • Agent dependency: must install on all servers |
| Perfect For | ✅ Large VM environments (100+ machines) ✅ 99.9%+ uptime requirements ✅ Need for testing before cutover ❌ Applications needing modernization |
Simple Real-World Example:
🛒 E-commerce Platform Migration
Challenge: 200 VMs, Black Friday traffic, 99.95% uptime SLA
Process: Install agents → Continuous replication → Test → Cutover
Results:
• 12 weeks total (vs 6 months traditional)
• 99.97% uptime achieved
• 40% cost reduction
• Zero data loss
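The uptime figures above translate into a concrete downtime budget. A quick sanity check (a simple sketch, not an AWS tool):

```python
def monthly_downtime_minutes(uptime_pct: float) -> float:
    """Allowed downtime per 30-day month for a given uptime percentage."""
    minutes_per_month = 30 * 24 * 60  # 43,200 minutes
    return minutes_per_month * (1 - uptime_pct / 100)

# A 99.95% SLA allows ~21.6 minutes of downtime per 30-day month;
# the achieved 99.97% corresponds to ~13 minutes.
print(round(monthly_downtime_minutes(99.95), 1))  # 21.6
print(round(monthly_downtime_minutes(99.97), 1))  # 13.0
```

A cutover measured in minutes fits inside that budget, which is exactly why MGN's short final switch matters for tight SLAs.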
Remember: MGN = Minimal downtime, Great testing, No modernization
💾 AWS Database Migration Service (DMS)
| Key Points | Detailed Notes |
|---|---|
| What is it? | Database migration with minimal downtime and cross-engine support |
| Migration Types | • Homogeneous: Oracle → Oracle (same database engine) • Heterogeneous: Oracle → PostgreSQL (different engines) • Continuous replication: ongoing sync for analytics |
| Core Process | 1. Source remains online during migration 2. Continuous data sync to the target 3. Data validation ensures integrity 4. Cutover when ready, with minimal downtime |
| Key Benefits | • Near-zero downtime: the source DB stays operational throughout • Cross-engine support: migrate between different databases • Built-in monitoring: real-time progress tracking • Data validation: automatic integrity checking |
| Challenges | • Limited transformations: not for complex data changes • Learning curve: requires replication knowledge • Large objects: challenges with binary data • Schema complexity: manual work for procedures/triggers |
| Data Engineering Use | • CDC streams: real-time change capture • Data lake population: operational data to S3 • Multi-source consolidation: combine databases |
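To make the CDC idea concrete, here is a toy change-applier. The record format is invented for illustration; DMS emits comparable insert/update/delete change events when streaming CDC output to targets such as S3 or Kinesis.

```python
# Toy CDC applier: replays insert/update/delete change records against an
# in-memory "table" keyed by primary key. Illustrative only.
def apply_cdc(table: dict, changes: list) -> dict:
    for change in changes:
        op, key = change["op"], change["key"]
        if op in ("insert", "update"):
            table[key] = change["row"]
        elif op == "delete":
            table.pop(key, None)
    return table

inventory = {1: {"sku": "A", "qty": 10}}
changes = [
    {"op": "update", "key": 1, "row": {"sku": "A", "qty": 7}},
    {"op": "insert", "key": 2, "row": {"sku": "B", "qty": 3}},
    {"op": "delete", "key": 1},
]
print(apply_cdc(inventory, changes))  # {2: {'sku': 'B', 'qty': 3}}
```

Replaying change events in order like this is the core of how continuous replication keeps a target in sync while the source stays online.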
Simple Real-World Example:
🏪 Retail Inventory Migration: Oracle → PostgreSQL
Challenge: 2TB database, 150 stores, zero downtime
Process: Schema conversion → DMS setup → Continuous replication → Cutover
Results:
• 45 minutes of downtime (vs 3 hours planned)
• 62% cost reduction
• 15% performance improvement
• All 150 stores operational
🔧 AWS Schema Conversion Tool (SCT)
| Key Points | Detailed Notes |
|---|---|
| What is it? | Desktop tool for converting database schemas between engines |
| Relationship to DMS | • SCT converts schemas (structure, procedures, functions) • DMS migrates the data (actual records) • Use them together for heterogeneous migrations |
| Conversion Process | 1. Analyze source database complexity 2. Auto-convert 80-90% of objects 3. Flag objects needing manual work 4. Generate reports with effort estimates |
| Key Features | • Assessment reports: complexity analysis • Automatic conversion: most database objects • Cost estimation: AWS target environment costs • Code conversion: stored procedures, triggers, functions |
| Limitations | • Manual review needed: complex objects require adjustment • Desktop dependency: must be installed locally • Version compatibility: keep updated with DB engines |
Simple Conversion Example:
🏭 ERP Migration: SQL Server → PostgreSQL
Analysis: 500 tables, 150 procedures, 25 years of logic
SCT Results:
• 85% automatic conversion
• 15% needed manual review
• 5 weeks vs 6 months of manual work
• $180K/year licensing savings
⚡ AWS DataSync
| Key Points | Detailed Notes |
|---|---|
| What is it? | Automated online data transfer between on-premises and AWS |
| Performance | • Up to 10x faster than traditional tools • Network optimization and compression • Parallel transfers for efficiency |
| Transfer Types | • One-time: initial data migration • Scheduled: regular backups/updates • Triggered: event-based transfers |
| Key Features | • Data validation: integrity verification • Bandwidth optimization: smart compression • Monitoring: detailed progress tracking • Scheduling: automated recurring jobs |
| Requirements | • Stable internet: a high-bandwidth connection is needed • DataSync agent: on-premises deployment • Network access: proper firewall configuration |
| Cost Considerations | • Pay per GB: can be expensive for large, frequent transfers • Alternative for huge datasets: consider the Snow Family |
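Per-GB pricing makes DataSync costs easy to estimate. A minimal sketch; the default rate below is an assumption based on published US-region pricing and should be checked against current AWS pricing for your region:

```python
def datasync_cost_usd(data_gb: float, per_gb_rate: float = 0.0125) -> float:
    """Estimated DataSync transfer cost.

    The default per-GB rate is an assumption (published US-region pricing
    at the time of writing); verify against current AWS pricing.
    """
    return data_gb * per_gb_rate

# One-time 50TB transfer (50,000 GB, decimal)
print(datasync_cost_usd(50_000))  # 625.0
```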
Simple Transfer Example:
🎬 Media Company: 50TB Video Library
Challenge: Transfer video content to AWS for global CDN
Process: Deploy agent → Configure transfer → Monitor progress
Results:
• 6 days vs 4 weeks with traditional tools
• 83% cost reduction ($5,200 vs $31,000)
• Automated daily uploads (500GB/day)
• Global content delivery enabled
Decision Matrix:
| Data Size | Internet Speed | Recommendation |
|---|---|---|
| < 1TB | Good | DataSync |
| 1-10TB | Good | DataSync |
| 10-80TB | Poor | Snow Family |
| > 80TB | Any | Snow Family |
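The matrix above can be encoded as a rule of thumb. The thresholds are this table's, not official AWS cutoffs:

```python
def transfer_recommendation(data_tb: float, good_internet: bool) -> str:
    """Rule of thumb from the decision matrix; thresholds are the
    table's own, not an official AWS cutoff."""
    if data_tb > 80:
        return "Snow Family"
    if data_tb >= 10 and not good_internet:
        return "Snow Family"
    return "DataSync"

print(transfer_recommendation(0.5, good_internet=True))   # DataSync
print(transfer_recommendation(50, good_internet=False))   # Snow Family
```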
📦 AWS Snow Family
| Key Points | Detailed Notes |
|---|---|
| What is it? | Physical devices for offline data transfer at petabyte scale |
| Device Types | • Snowcone (8TB): small remote locations • Snowball Edge (80TB): standard large transfers • Snowmobile (100PB): extremely large datasets |
| When to Use | • 100+ days to transfer over the internet • $10,000+ in bandwidth costs • Poor connectivity or security concerns • Petabyte datasets that break normal tools |
| Snowball Edge Special | • Edge computing: process data during transfer • Local analytics: work without internet • Disconnected operations: remote locations |
| Process Flow | 1. Order the device from AWS 2. Load data locally 3. Ship it back to AWS 4. AWS imports the data to S3 5. Device is securely wiped |
| Timeline | Total: 2-3 weeks (including shipping) |
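The "100+ days to transfer over the internet" trigger is easy to sanity-check. A rough estimate that assumes 100% sustained link utilization (real transfers are slower):

```python
def transfer_days(data_tb: float, link_mbps: float) -> float:
    """Days to move data_tb terabytes over a link_mbps connection,
    assuming 100% sustained utilization (real transfers are slower)."""
    bits = data_tb * 1e12 * 8           # decimal terabytes -> bits
    seconds = bits / (link_mbps * 1e6)  # megabits/s -> bits/s
    return seconds / 86400

# 200TB over 100 Mbps: ~185 days, far past the 100-day Snow threshold
print(round(transfer_days(200, 100)))  # 185
```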
Simple Transfer Example:
🧬 Genomics Lab: 200TB Research Data
Challenge: 15 years of DNA data, 2-month deadline, 100 Mbps connection
Solution: 3 × Snowball Edge devices
Process: Order → Load data → Ship → AWS import
Results:
• 8 weeks vs 8+ months of network transfer
• $14,200 vs $85,000 for the traditional approach
• 83% cost reduction
• Zero research downtime
🔄 AWS Transfer Family
| Key Points | Detailed Notes |
|---|---|
| What is it? | Managed file transfer supporting legacy protocols |
| Supported Protocols | • SFTP: secure FTP (most common) • FTPS: FTP with SSL/TLS security • FTP: basic, unencrypted (avoid if possible) • AS2: business-to-business standard |
| Target Storage | • Amazon S3: primary destination • Amazon EFS: file system storage |
| Key Benefits | • No infrastructure: fully managed • Legacy support: works with old systems • AWS integration: direct S3/EFS writes • Auto-scaling: handles volume spikes |
| Cost Structure | • Per endpoint: monthly endpoint charges • Per GB transferred: data transfer costs • Can be expensive for high-volume continuous use |
| Perfect For | ✅ Partner file exchanges ✅ Legacy system integration ✅ Compliance requirements ❌ High-volume internal transfers |
Simple B2B Example:
🏦 Bank Partner File Exchange
Challenge: 200+ partner banks, 50,000 daily files, legacy SFTP servers
Solution: Transfer Family managed SFTP endpoints
Process: Configure endpoints → Migrate partners → Automate processing
Results:
• 93% cost reduction ($7,410 vs $100,000/month)
• 3x faster file transfers
• Partner onboarding: 5 days vs 30 days
• 24/7 automated operations
🎯 Service Selection Guide
| Migration Need | Primary Service | Supporting Services |
|---|---|---|
| Assessment & Planning | Application Discovery Service | Migration Hub, Cost Explorer |
| Application Migration | Application Migration Service (MGN) | Systems Manager, CloudWatch |
| Database Migration | DMS + Schema Conversion Tool | S3, Redshift, Glue |
| Large File Transfers | DataSync (online) or Snow (offline) | S3, Lambda, CloudWatch |
| Ongoing File Exchange | Transfer Family | S3, EFS, Lambda |
| Real-time Data Streams | Kinesis Data Streams | Lambda, S3, Redshift |
📋 Summary Section
Key Migration Journey:
- 🔍 ASSESS: use Application Discovery Service to understand the current state
- 🔧 CONVERT: use Schema Conversion Tool for database schema changes
- 🚀 MIGRATE: use MGN for applications, DMS for databases
- 📦 TRANSFER: use DataSync for ongoing files, Snow Family for massive datasets
- 🔄 INTEGRATE: use Transfer Family for partner file exchanges
Critical Success Factors:
- Always assess before migrating - don't go in blind
- Test everything - use MGN's testing capabilities
- Plan for ongoing operations - Migration is just the beginning
- Choose the right tool for data size - Online vs offline transfer decisions
- Consider total cost - Not just migration, but ongoing operations
Common Pitfalls to Avoid:
- ❌ Not testing network bandwidth before choosing online transfer
- ❌ Ignoring application dependencies discovered by ADS
- ❌ Forgetting ongoing file transfer needs after migration
- ❌ Underestimating schema conversion complexity
- ❌ Not planning for post-migration optimization
Quick Reference Decision Tree:
What are you doing?
├── Don't know what you have → Application Discovery Service
├── Moving applications → Application Migration Service (MGN)
├── Moving databases → DMS + Schema Conversion Tool
├── Moving large files → DataSync (online) or Snow Family (offline)
└── Ongoing file transfers → Transfer Family
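The tree above can also be expressed as a simple lookup (the task labels are invented keys for illustration):

```python
def pick_service(task: str) -> str:
    """Decision tree as a lookup table; task keys are illustrative labels."""
    tree = {
        "assess": "Application Discovery Service",
        "applications": "Application Migration Service (MGN)",
        "databases": "DMS + Schema Conversion Tool",
        "large files": "DataSync (online) or Snow Family (offline)",
        "ongoing transfers": "Transfer Family",
    }
    return tree[task]

print(pick_service("databases"))  # DMS + Schema Conversion Tool
```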
Memory Aids:
- ADS = Assess Dependencies Systems
- MGN = Minimal downtime, Great testing, No modernization
- DMS = Database Migration Safely
- SCT = Schema Conversion Tool
- Snow = Send Now, Online Won't work
- Transfer Family = Traditional File Transfer, Fully managed
📚 Essential Resources for Further Study
Documentation Links:
- Application Discovery Service Guide
- Application Migration Service Guide
- Database Migration Service Guide
- DataSync User Guide
- Snow Family Guide
- Transfer Family Guide
Study Tip: Review the Summary Section regularly and use the memory aids to reinforce learning. Practice with the Decision Tree until service selection becomes automatic.