Complete step-by-step guide to migrating your Google Cloud Platform Compute Engine instances to RamNode Cloud VPS.
This guide provides comprehensive instructions for migrating your virtual machines from Google Cloud Platform (GCP) Compute Engine to RamNode Cloud. Whether you're optimizing costs, seeking simpler pricing, or expanding your infrastructure options, this guide covers everything needed for a successful migration.
GCP provides robust export tools through the gcloud CLI and Cloud Console. This guide presents two proven methods: exporting disk images to Cloud Storage and live file synchronization using rsync.
| Format | Description | Notes |
|---|---|---|
| RAW (.tar.gz) | GCP's native export format | Compressed raw disk image |
| QCOW2 | QEMU Copy-On-Write format | ✓ Recommended for RamNode |
| VMDK | VMware virtual disk format | GCP export option available |
| VHD/VHDX | Microsoft Hyper-V format | GCP export option available |
| VDI | VirtualBox disk image | Convert after export |
Recommendation: Export as RAW from GCP and convert to QCOW2 for upload to RamNode. This provides the best balance of compatibility and efficiency.
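The conversions the table calls for all go through the same qemu-img invocation; a minimal sketch wrapping it as a helper (file names below are illustrative placeholders, and qemu-img must be installed):

```shell
# Hedged sketch: convert any of the formats above to QCOW2 with qemu-img.
to_qcow2() {
    local src_format="$1" input="$2" output="$3"
    # -c enables compression; the QCOW2 output stays sparse and compact
    qemu-img convert -f "$src_format" -O qcow2 -c "$input" "$output"
}
# Example calls (run after exporting from GCP):
# to_qcow2 raw  disk.raw        gcp-server.qcow2
# to_qcow2 vmdk disk-image.vmdk gcp-server.qcow2
```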
This method exports your Compute Engine disk to Cloud Storage, where you can download and convert it for upload to RamNode. This is the recommended approach for complete system migrations.
If not already installed, set up the gcloud CLI:
# For Debian/Ubuntu
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
sudo apt-get update && sudo apt-get install google-cloud-cli
# Authenticate
gcloud auth login
gcloud config set project YOUR_PROJECT_ID
For data consistency, stop the instance before creating a snapshot:
# List your instances
gcloud compute instances list
# Stop the instance
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
Create a snapshot of the boot disk:
# List disks to find your boot disk
gcloud compute disks list
# Create a snapshot
gcloud compute snapshots create my-migration-snapshot \
--source-disk=DISK_NAME \
--source-disk-zone=ZONE \
--storage-location=us
Create a new disk from the snapshot for export:
gcloud compute disks create migration-export-disk \
--source-snapshot=my-migration-snapshot \
--zone=ZONE
Create a bucket for the exported image:
# Create a bucket (bucket names must be globally unique)
gsutil mb -l us gs://my-migration-bucket-12345
# Grant the Compute Engine service account access
gsutil iam ch serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:objectAdmin gs://my-migration-bucket-12345
Export the disk to Cloud Storage in a portable format:
# Create an image from the export disk
# (gcloud exports images, not disks, so this step is required)
gcloud compute images create migration-image \
--source-disk=migration-export-disk \
--source-disk-zone=ZONE
# Export as RAW format (the default; recommended for conversion to QCOW2)
gcloud compute images export \
--destination-uri=gs://my-migration-bucket-12345/disk-image.tar.gz \
--image=migration-image
# Alternative: Export as VMDK
gcloud compute images export \
--destination-uri=gs://my-migration-bucket-12345/disk-image.vmdk \
--image=migration-image \
--export-format=vmdk
Note: The export runs as a Cloud Build job and may take 15-60 minutes depending on disk size. You can monitor progress in the Cloud Console under Cloud Build → History.
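Rather than watching the console, you can poll Cloud Storage until the exported archive appears; a minimal sketch, where the URI is an assumption matching the export commands above:

```shell
# Sketch: poll Cloud Storage until the exported archive exists.
wait_for_export() {
    local uri="$1" max_tries="${2:-60}" i=0
    while [ "$i" -lt "$max_tries" ]; do
        # gsutil stat exits 0 once the object exists
        if gsutil -q stat "$uri"; then
            echo "export complete: $uri"
            return 0
        fi
        i=$((i + 1))
        sleep 60
    done
    echo "timed out waiting for $uri" >&2
    return 1
}
# Usage: wait_for_export gs://my-migration-bucket-12345/disk-image.tar.gz
```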
Download the exported image to your local machine or a transfer server:
# Download from Cloud Storage
gsutil cp gs://my-migration-bucket-12345/disk-image.tar.gz .
# Extract the raw image
tar -xzf disk-image.tar.gz
# This creates a file named 'disk.raw'
Convert the raw image to QCOW2 format for efficient upload:
# Install qemu-utils if needed
sudo apt-get install -y qemu-utils
# Convert to QCOW2 with compression
qemu-img convert -f raw -O qcow2 -c disk.raw gcp-server.qcow2
# Verify the image
qemu-img info gcp-server.qcow2
After successful migration, clean up temporary resources to avoid charges:
# Delete the export disk
gcloud compute disks delete migration-export-disk --zone=ZONE
# Delete the snapshot (optional, keep for rollback)
gcloud compute snapshots delete my-migration-snapshot
# Delete the Cloud Storage object (after uploading to RamNode)
gsutil rm gs://my-migration-bucket-12345/disk-image.tar.gz
For simpler setups or application-level migrations, you can use rsync to transfer files directly to a new RamNode instance. This is ideal when you want a fresh OS installation.
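For a purely application-level move you don't even need to copy the whole root filesystem; a hedged sketch that syncs only the common payload directories (the directory list and the destination IP are assumptions to adapt to your workload):

```shell
# Sketch: sync only application data instead of the full root filesystem.
app_sync() {
    local dest="$1"
    local dir
    for dir in /etc /home /var/www /opt; do
        rsync -avzP "$dir" "root@${dest}:/"
    done
}
# Usage: app_sync RAMNODE_IP
```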
Create a new VPS on RamNode with the same operating system as your GCP instance. Choose a plan with at least the same resources.
On your GCP instance, set up SSH key access to the RamNode server:
# Generate a key pair if you don't have one
ssh-keygen -t ed25519 -C "gcp-migration"
# Copy to RamNode server
ssh-copy-id root@RAMNODE_IP
Execute rsync from your GCP instance to transfer all files:
rsync -avzP --numeric-ids \
--exclude='/dev/*' \
--exclude='/proc/*' \
--exclude='/sys/*' \
--exclude='/tmp/*' \
--exclude='/run/*' \
--exclude='/mnt/*' \
--exclude='/media/*' \
--exclude='/lost+found' \
--exclude='/boot/efi/*' \
--exclude='/var/lib/google/*' \
--exclude='/etc/google_hostname' \
/ root@RAMNODE_IP:/
Tip: Run rsync multiple times for minimal downtime. The first run copies everything; subsequent runs only sync changes.
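The multi-pass approach can be sketched as a small script; a hedged outline, assuming the exclude list from the command above has been saved to a file and that your write-heavy services stop cleanly with systemctl (the file path and service names are placeholders):

```shell
# Sketch of the multi-pass rsync flow for minimal downtime.
# /root/rsync-excludes.txt and the service names are placeholder assumptions.
final_sync() {
    local dest="$1"
    # Pass 1: bulk copy while services are still running
    rsync -avzP --numeric-ids --exclude-from=/root/rsync-excludes.txt / "root@${dest}:/"
    # Stop writers so the final pass captures a consistent state
    systemctl stop nginx mysql
    # Pass 2: quick delta sync during the brief downtime window
    rsync -avzP --numeric-ids --exclude-from=/root/rsync-excludes.txt / "root@${dest}:/"
}
# Usage: final_sync RAMNODE_IP
```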
On the RamNode server, remove GCP-specific packages:
# Ubuntu/Debian
apt-get purge google-cloud-sdk google-compute-engine \
google-compute-engine-oslogin google-osconfig-agent
apt-get autoremove
# CentOS/Rocky/Alma
yum remove google-cloud-sdk google-compute-engine \
google-compute-engine-oslogin google-osconfig-agent
Update critical configuration files:
# Update hostname
hostnamectl set-hostname your-new-hostname
# Reset machine-id for cloud-init
rm /etc/machine-id
systemd-machine-id-setup
# Remove GCP-specific configs
rm -rf /etc/google_hostname
rm -rf /var/lib/google/*
# Update network configuration (Ubuntu with Netplan)
nano /etc/netplan/50-cloud-init.yaml
netplan apply
Ensure the bootloader is properly configured:
# Ubuntu/Debian
update-grub
grub-install /dev/vda
# CentOS/Rocky/Alma
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/vda
For images larger than 2GB, use the OpenStack CLI for reliable uploads:
# Download your authentication file from the Cloud Control Panel
# Navigate to: API Access → Download OpenStack RC File
# Install the OpenStack CLI
pip install python-openstackclient
# Source your authentication file
source your-openrc.sh
# Upload the image
openstack image create \
--disk-format qcow2 \
--container-format bare \
--file gcp-server.qcow2 \
"gcp-migration-image"
# Verify the upload
openstack image list
For more details on using the OpenStack CLI, see our OpenStack API Guide.
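Beyond listing images, you can script a readiness check before deploying; a minimal sketch using `openstack image show` (the image name matches the upload command above):

```shell
# Sketch: confirm the uploaded image has reached "active" status.
image_active() {
    [ "$(openstack image show -f value -c status "$1")" = "active" ]
}
# Usage: image_active gcp-migration-image && echo "image is ready to deploy"
```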
After the image upload completes, create a new Cloud VPS from the uploaded image, choosing a plan that matches or exceeds your GCP machine type:
| GCP Machine Type | Specs | RamNode Equivalent |
|---|---|---|
| e2-micro | 0.25-2 vCPU, 1 GB RAM | Cloud VPS 1GB+ |
| e2-small | 0.5-2 vCPU, 2 GB RAM | Cloud VPS 2GB+ |
| e2-medium | 1-2 vCPU, 4 GB RAM | Cloud VPS 4GB+ |
| e2-standard-2 | 2 vCPU, 8 GB RAM | Cloud VPS 8GB+ |
| e2-standard-4 | 4 vCPU, 16 GB RAM | Cloud VPS 16GB+ |
| e2-standard-8 | 8 vCPU, 32 GB RAM | Cloud VPS 32GB+ |
Access the VNC console to verify your system boots correctly. Check for any boot errors, especially related to disk or network drivers.
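Once the console comes up, a short checklist of commands helps confirm disks, networking, and services; a sketch using standard Linux tools (nothing RamNode-specific):

```shell
# Sketch: first-boot sanity checks to run from the VNC console.
post_boot_check() {
    lsblk                               # are the virtio disks detected?
    ip addr show                        # is the interface up with the expected IP?
    systemctl --failed                  # did any units fail to start?
    journalctl -p err -b | tail -n 20   # recent boot errors, if any
}
```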
If you used the disk export method, remove GCP-specific components:
# Ubuntu/Debian
apt-get purge google-cloud-sdk google-compute-engine \
google-compute-engine-oslogin google-osconfig-agent gce-disk-expand
apt-get autoremove
# Remove GCP metadata scripts
rm -rf /etc/google_hostname
rm -rf /var/lib/google/*
rm -rf /usr/share/google/*
Reconfigure networking for RamNode's infrastructure:
# Reset cloud-init to use RamNode's metadata
cloud-init clean --logs
rm -rf /var/lib/cloud/*
# Reboot to reinitialize
reboot
Update your DNS records to point to the new RamNode IP address. If you use Google Cloud DNS, remember to update records there as well or migrate to another DNS provider.
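A quick check confirms the DNS cutover took effect; a sketch assuming `dig` is available (the hostname and IP below are placeholders):

```shell
# Sketch: verify a hostname now resolves to the new RamNode IP.
check_dns() {
    local host="$1" expected="$2" resolved
    resolved=$(dig +short "$host" | head -n1)
    [ "$resolved" = "$expected" ]
}
# Usage: check_dns example.com 203.0.113.10 && echo "DNS cutover complete"
```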
Test all critical services (web server, databases, cron jobs, mail) to ensure they function correctly.
| Practice | Recommendation |
|---|---|
| Create Snapshots First | Always snapshot your GCP instance before migration |
| Document GCP Services | List all GCP-specific services your app uses (Cloud SQL, Pub/Sub, etc.) |
| Clean Before Export | Remove logs, temp files, and package caches to reduce image size |
| Use QCOW2 Format | Convert RAW exports to QCOW2 for efficient storage and upload |
| Test Thoroughly | Deploy and test on RamNode while keeping GCP running |
| Lower DNS TTL | Reduce TTL several days before migration for faster DNS propagation |
| Plan for GCP Cleanup | Delete GCP resources after successful migration to avoid ongoing charges |
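The "Clean Before Export" practice can be scripted; a hedged sketch to run on the GCP instance before snapshotting (Debian/Ubuntu tooling assumed, adjust log retention to taste):

```shell
# Sketch: shrink the image before export by clearing caches and old logs.
pre_export_cleanup() {
    apt-get clean                 # drop the APT package cache
    journalctl --vacuum-time=1d   # keep only the last day of journal logs
    rm -rf /tmp/* /var/tmp/*      # clear temporary files
}
```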
Need Professional Assistance?
Our Professional Services team can help with complex migrations from GCP, including database migrations and application reconfiguration.