VM Migration: Planning

Lab · 1 hour · 5 Credits · Intermediate

GSP616


Overview

Google Cloud’s structured, four-step Cloud Migration Path methodology provides a defined, repeatable migration path for users to follow:

The four-step migration path diagram

  1. Assess the current environment to gain a solid understanding of existing resources and define migration move groups.

  2. Plan how to move your apps and create the basic cloud infrastructure for your workloads to live in. This planning includes identity management, organization and project structure, networking, sorting your apps, and developing a prioritized migration strategy.

  3. Deploy your existing on-premises or cloud-based servers in Google Cloud, leveraging one of Google Cloud’s recommended migration tools such as Google’s Velostrata or CloudEndure’s Live Migration Tool.

  4. Optimize your newly migrated workloads to realize the true cost benefits and operational efficiencies that Google Cloud can bring to the enterprise.

This lab focuses on the Plan phase and how to deploy your basic infrastructure on Google Cloud.

What you'll learn

Terraform is a popular open-source tool for defining and provisioning infrastructure (Infrastructure as Code). In this lab, you'll leverage pre-built Infrastructure as Code templates to set up a cloud network, configure access, and deploy your first application—all in a secure and automated fashion.

More specifically, you will learn how to:

  • Create access credentials for automation in Google Cloud.
  • Create a functional environment for using Terraform.
  • Create a custom mode Virtual Private Cloud (VPC) network, with related firewall rules.
  • Bake an image on Compute Engine.
  • Deploy an instance onto Compute Engine using Terraform.
  • Reference resources across multiple Terraform deployments.
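
Before you start, here is a minimal, hypothetical Terraform configuration to illustrate what Infrastructure as Code looks like in practice. This file is not part of the lab repo, and the project ID and resource names are placeholders:

# Hypothetical example only; the lab's real configurations live in the repo you clone later.
provider "google" {
  project = "my-project-id" # placeholder project ID
}

resource "google_compute_network" "example" {
  name                    = "example-network"
  auto_create_subnetworks = false # a custom mode VPC, like the one you build in this lab
}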

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources will be made available to you.

This hands-on lab lets you do the lab activities yourself in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials that you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito or private browser window to run this lab. This prevents any conflicts between your personal account and the Student account, which may otherwise cause extra charges to be incurred on your personal account.
  • Time to complete the lab---remember, once you start, you cannot pause a lab.
Note: If you already have your own personal Google Cloud account or project, do not use it for this lab to avoid extra charges to your account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is the Lab Details panel with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details panel.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details panel.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
    Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To view a menu with a list of Google Cloud products and services, click the Navigation menu at the top-left.

Activate Cloud Shell

Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.

  1. Click Activate Cloud Shell at the top of the Google Cloud console.

When you are connected, you are already authenticated, and the project is set to your Project ID. The output contains a line that declares the Project ID for this session:

Your Cloud Platform project in this session is set to {{{project_0.project_id | "PROJECT_ID"}}}

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

  2. (Optional) You can list the active account name with this command:
gcloud auth list
  3. Click Authorize.

Output:

ACTIVE: *
ACCOUNT: {{{user_0.username | "ACCOUNT"}}}

To set the active account, run:
    $ gcloud config set account `ACCOUNT`
  4. (Optional) You can list the project ID with this command:
gcloud config list project

Output:

[core]
project = {{{project_0.project_id | "PROJECT_ID"}}}

Note: For full documentation of gcloud, in Google Cloud, refer to the gcloud CLI overview guide.

Task 1. Setting up your environment

  1. Verify that Terraform is installed in the Cloud Shell environment:
terraform -v

Command output:

Terraform v0.15.3

Note: Your version may be slightly different. Continue if it resembles the above output.
  2. Clone the git repo which contains the lab code by running the following command:
git clone https://github.com/terraform-google-modules/cloud-foundation-training.git
  3. Change the current directory to the networking directory:
cd cloud-foundation-training/other/terraform-codelab/lab-networking

Task 2. Configure variables

In this section, you will set variables for the Terraform configuration. Variables allow you to parameterize Terraform configurations for reuse.

If variables are not set, Terraform will prompt you to set them when it runs.

For ease of use, you can store variable values in a terraform.tfvars file which Terraform automatically loads when it runs.
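
The values in terraform.tfvars map onto variable declarations in the configuration files. A typical declaration looks like the sketch below; the exact declaration in the lab repo may differ slightly:

# Typical variable declaration (the lab repo's own variables file may differ)
variable "project_id" {
  description = "The ID of the project in which resources will be created"
  type        = string
}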

  1. Create a terraform.tfvars file in Cloud Shell:
touch terraform.tfvars
  2. Open the terraform.tfvars file in Code Editor:
edit terraform.tfvars
Note: You may also use a CLI code editor such as vi or nano if you are more comfortable with it.
  3. Paste this into your terraform.tfvars file:
project_id="<my project id>"
  4. Update the project ID variable to match your Qwiklabs Project ID. For example, <my project id> should be replaced with qwiklabs-gcp-xx-xxxxxxxxxxxx.

Task 3. Set up Google Cloud access credentials

In this section, you will create and download service account keys to use as access credentials for Google Cloud. You will also update your template files to use these access credentials.

Terraform requires access rights to your projects and/or environments in Google Cloud. Although the Terraform Google Cloud provider offers multiple ways to supply credentials, in this lab you create and download a credentials file associated with a Google Cloud service account. Using Google Cloud service-account authentication is a best practice.
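
For context, the Terraform Google provider can read a downloaded key file through its credentials argument. The block below is a minimal sketch only; the lab repo may wire the key file in differently, for example through a variable:

# Sketch only; the lab repo may reference the key file another way
provider "google" {
  credentials = file("credentials.json")
  project     = var.project_id
}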

Create and download default service account access credentials

  1. In your Cloud Shell session, run the following command to create a Service Account for running Terraform:
gcloud iam service-accounts create terraform --display-name terraform

Command output:

Created service account [terraform].
  2. List your Service Accounts to get the email for your new Terraform account:
gcloud iam service-accounts list

Command output:

DISPLAY NAME: Compute Engine default service account
EMAIL: 631585931190-compute@developer.gserviceaccount.com
DISABLED: False

DISPLAY NAME: Qwiklabs User Service Account
EMAIL: qwiklabs-gcp-02-8eb8debc8fd5@qwiklabs-gcp-02-8eb8debc8fd5.iam.gserviceaccount.com
DISABLED: False

DISPLAY NAME: terraform
EMAIL: terraform@qwiklabs-gcp-02-8eb8debc8fd5.iam.gserviceaccount.com
DISABLED: False
  3. Create and download a key for the Terraform service account, replacing <service account email> with the Terraform email output by the previous command:
gcloud iam service-accounts keys create ./credentials.json --iam-account <service account email>

Command output:

created key [...] of type [json] as [./credentials.json] for [terraform@.iam.gserviceaccount.com]

Test completed task

Click Check my progress to verify your performed task. If you have successfully created a service account and a key, you will see an assessment score.

Create a service account and a key (SA Name: terraform)
  4. Grant your Service Account the Owner role on your project by running the following command, replacing <my project id> with your Qwiklabs Project ID and <service account email> with your terraform service account email:
gcloud projects add-iam-policy-binding <my project id> --member=serviceAccount:<service account email> --role=roles/owner

Test completed task

Click Check my progress to verify your performed task. If you have successfully granted an owner role to your service account on the project, you will see an assessment score.

Grant your Service Account the Owner role on your project

Task 4. Set up remote state

Terraform stores a mapping between your configuration and created resources in Terraform state. By default, this state is stored in a local file but the best practice is to store it remotely on Cloud Storage.

In this section, you will create a Cloud Storage bucket to store Terraform state and update your Terraform configuration to point to this bucket.

Create and configure a Cloud Storage bucket

  1. Create a new bucket to store Terraform state. A Cloud Storage bucket needs to be globally unique, so be sure to prefix its name with your Qwiklabs Google Cloud project ID as shown in the command below:
gsutil mb gs://<my project id>-state-bucket

Command output:

Creating gs://-state-bucket/...

Test completed task

Click Check my progress to verify your performed task. If you have successfully created a Cloud Storage bucket, you will see an assessment score.

Create and configure a Cloud Storage bucket
  2. Open the backend config stored in backend.tf:
edit backend.tf
  3. Update the bucket name to match the bucket you just created, then save the file:
terraform {
  backend "gcs" {
    bucket = "my-state-bucket" # Change this to <my project id>-state-bucket
    prefix = "terraform/lab/network"
  }
}

Task 5. Run Terraform

Now that you have configured credentials and remote state, you are ready to run Terraform. When using Terraform, you will generally follow these steps to deploy and clean up an environment, as outlined in the following image.

The Terraform workflow: set up configuration files, prepare the deployment (plan), deploy the resources (apply), check the deployment, then clean up the resources (destroy).
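
In command form, that workflow maps onto the standard Terraform CLI sequence you will use throughout this lab (destroy comes only at the very end):

terraform init      # download providers and configure the backend
terraform plan      # preview the changes Terraform would make
terraform apply     # create or update the resources
terraform destroy   # tear the resources down when you are finished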

Run Terraform for the first time

  1. First, initialize Terraform to download the latest version of the Google and Random providers. Run the following command in Cloud Shell to do so:
terraform init
  • If you run this command and receive an error regarding your Cloud Storage bucket not existing, make sure you have the correct name in backend.tf. Then, run the commands below:
rm -rf .terraform/
terraform init

This will clean your local Terraform state and produce a successful initialization.

  2. Run a plan step to validate the configuration syntax and show a preview of what will be created:
terraform plan

The plan output shows Terraform is going to create 8 resources for your network.

  3. Now execute Terraform apply to apply those changes:
terraform apply

You will see output like this:

Plan: 8 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
  4. Enter yes to the prompt. After the apply has finished, you should see an output similar to the following:
Apply complete! Resources: 8 added, 0 changed, 0 destroyed.
  5. Once you have applied the changes, you can display the list of resources in the Terraform state with the show command:
terraform show

Test completed task

Click Check my progress to verify your performed task. If you have successfully run Terraform for the first time, you will see an assessment score.

Run Terraform for the first time

Task 6. Add a subnet

The repo you downloaded includes a module defining your network and subnets. You will add an additional subnet to host migrated VMs.

Create an additional network

  1. Open the network config stored in network.tf. The network config in this file is managed via the network module:
edit network.tf
  2. Add an additional subnet in the subnets block of the file (on line 40). You can choose your own name and CIDR range, such as 10.10.30.0/24.
module "vpc" {
  ...
  subnets = [
    ...
    {
      # Creates your first subnet in {{{project_0.default_region|Region}}} and defines a range for it
      subnet_name   = "my-first-subnet"
      subnet_ip     = "10.10.10.0/24"
      subnet_region = "{{{project_0.default_region|Region}}}"
    },
    # Add your subnet here
    {
      subnet_name   = "my-third-subnet"
      subnet_ip     = "10.10.30.0/24"
      subnet_region = "{{{project_0.default_region|Region}}}"
    },
  ]
}
  3. You also need to add a section defining the secondary ranges for your subnet (line 49), which can be an empty list:
secondary_ranges = {
  my-first-subnet = []
  my-gke-subnet = [
    {
      # Define a secondary range for Kubernetes pods to use
      range_name    = "my-gke-pods-range"
      ip_cidr_range = "192.168.64.0/24"
    },
  ]
  # Add your subnet's secondary range below this line
  my-third-subnet = []
}
  4. Now execute Terraform apply to add your new subnet:
terraform apply
Note: If you receive an error such as Error waiting to create Subnetwork, re-run the above command.

You will see output like this:

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
  5. Enter yes to the prompt. After the apply has finished, you should see an output similar to the following:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Test completed task

Click Check my progress to verify your performed task. If you have successfully applied changes to add a subnet using Terraform script, you will see an assessment score.

Add a subnet

Task 7. Allow https traffic

The lab includes code for managing firewall rules in Terraform. You can extend this to add additional firewall rules for inbound or outbound traffic.

  1. Open the firewall config stored in firewall.tf:
edit firewall.tf
  2. Edit the allow-http rule to also allow https traffic on port 443 (line 51):
resource "google_compute_firewall" "allow-http" {
  name    = "allow-http"
  network = module.vpc.network_name
  project = google_project_service.compute.project

  allow {
    protocol = "tcp"
    ports    = ["80", "443"] # Edit this line
  }

  # Allow traffic from everywhere to instances with an http-server tag
  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["allow-http"]
}
  3. Now execute Terraform apply to update the firewall rule:
terraform apply

You will see output like this:

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
  4. Enter yes to the prompt. After the apply has finished, you should see an output similar to the following:
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Test completed task

Click Check my progress to verify your performed task. If you have successfully applied changes to update a firewall rule using Terraform script, you will see an assessment score.

Allow https traffic

Task 8. Add a Terraform output

Terraform outputs can be used to capture the most important and useful information for your environment, both for human and machine consumption. This might include key IP addresses, instance names, or other information.

In this section, you will add an output to share the name of your new subnet.
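
An output is simply a named value exported from a configuration. The generic sketch below shows the shape of an output block; the optional description argument is included for illustration, and the lab's existing outputs.tf may omit it:

output "network_name" {
  description = "Name of the VPC created by the network module"
  value       = module.vpc.network_name
}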

  1. Open the outputs config stored in outputs.tf:
edit outputs.tf
  2. Add a section which outputs the name of your new subnet, based on the existing outputs. Note that the subnets are zero-indexed:
# Add your new output below this line
output "my_subnet_name" {
  value = module.vpc.subnets_names[2]
}
  3. Now execute Terraform apply to add the new output:
terraform apply
  4. Enter yes when prompted to enter a value.

  5. View the outputs using the terraform output command:

terraform output

Command output:

first_subnet_name = "my-first-subnet"
my_subnet_name = "my-third-subnet"
network_name = "my-custom-network"

Task 9. Create the initial VM

You will extend the work you completed earlier by creating a VM and deploying it onto your network. You will also learn how to create a base image and dynamically layer configuration information for VMs.

To build an image, you should start by launching a VM where you will install the software which you want to be included in your image.

Launch the initial VM

  1. Launch a VM using gcloud on your first subnetwork:
gcloud compute instances create build-instance --zone={{{project_0.default_zone|zone}}} --machine-type=e2-standard-2 --subnet=my-first-subnet --network-tier=PREMIUM --maintenance-policy=MIGRATE --image=debian-10-buster-v20221206 --image-project=debian-cloud --boot-disk-size=100GB --boot-disk-type=pd-standard --boot-disk-device-name=build-instance-1 --tags=allow-ssh
  2. SSH into the VM:
gcloud compute ssh build-instance --zone={{{project_0.default_zone|zone}}}

Command output:

WARNING: The public SSH key file for gcloud does not exist.
WARNING: The private SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
This tool needs to create the directory [/home/user/.ssh] before being able to generate SSH keys.
Do you want to continue (Y/n)?
  3. Enter Y, and then, when asked for a passphrase, press ENTER twice.

  4. Install Apache on the VM:

sudo apt-get update && sudo apt-get install apache2 -y
  5. Exit the SSH session:
exit

Task 10. Capture the base image

Now that you have a machine running the software you need, you can capture it as a base image to spin up additional identical virtual machines.

Create image

  1. Stop the VM. It is best practice to stop VMs before capturing images whenever possible:
gcloud compute instances stop build-instance --zone={{{project_0.default_zone|zone}}}
  2. Create an image from the boot disk:
gcloud compute images create apache-one \
    --source-disk build-instance \
    --source-disk-zone {{{project_0.default_zone|zone}}} \
    --family my-apache-webserver

By including the family parameter, we tied our image to a family. This allows us to easily get information or deploy the latest image from that family.
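
For example (as an illustration only, not a lab step), you could launch a VM directly from the newest image in the family; the instance name demo-from-family is hypothetical:

gcloud compute instances create demo-from-family \
    --zone={{{project_0.default_zone|zone}}} \
    --image-family=my-apache-webserver \
    --image-project=<my project id>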

  3. Use gcloud to get info on the latest my-apache-webserver image:
gcloud compute images describe-from-family my-apache-webserver

Test completed task

Click Check my progress to verify your performed task. If you have successfully created an image from the boot disk, you will see an assessment score.

Create an image from the boot disk

Task 11. Update Terraform config

Now that we have an image to work from, we would like to deploy it via Terraform. However, it is a best practice to separate Terraform configurations into logical units, treating VM instances as an application-layer concern.

So, we will switch to a new application-specific Terraform config and make updates there—leaving our networking Terraform alone.

Set up application Terraform config

  1. Switch to the application lab directory in your Cloud Shell:
cd ../lab-app
  2. Copy the credentials and variables files over from your networking configuration:
cp ../lab-networking/credentials.json .
cp ../lab-networking/terraform.tfvars .
  3. Update the backend configuration for your application Terraform code by editing backend.tf:
edit backend.tf
Note: You can reuse the same Cloud Storage bucket and simply have a different prefix for the application state. Make sure to update the bucket setting here to match the same bucket you used before.
terraform {
  backend "gcs" {
    bucket = "<my-project-id>-state-bucket" # Edit this line to match your lab-networking/backend.tf file
    prefix = "terraform/lab/vm"
  }
}
  4. You should also update the Terraform remote state data source. Terraform remote state is very useful for sharing information/outputs across multiple projects—for example, for a central networking team to share subnet information with application teams. Make sure to update the bucket setting here to match the same bucket you used before:
data "terraform_remote_state" "network" {
  backend = "gcs"
  config = {
    bucket = "<my-project-id>-state-bucket" # Update this too
    prefix = "terraform/lab/network"
  }
}

Task 12. Deploy VM via Terraform

Terraform can now be pointed at our baked image to launch an instance. Before launching the instance, let's investigate the details of this Terraform config.

Review VM config

  1. The Terraform configuration for your VM is stored in vm.tf:
edit vm.tf
  2. Because we tied our image to an image family, we can grab the latest image from that family via Terraform. If needed, update this data source to match the image family name you chose:
data "google_compute_image" "apache" {
  family  = "my-apache-webserver"
  project = var.project_id
}
  3. We're also able to use a reference to choose the subnet name to deploy on. The data.terraform_remote_state.network.outputs.my_subnet_name expression automatically grabs the my_subnet_name output from the networking config you created earlier:
resource "google_compute_instance" "app" {
  ...
  network_interface {
    subnetwork         = data.terraform_remote_state.network.outputs.my_subnet_name
    subnetwork_project = var.project_id

    access_config {
      # Include this section to give the VM an external ip address
    }
  }
  ...
}
Note: Remember to replace the zone with your zone in the vm.tf file.
  4. Assuming everything looks good in this configuration, you can apply the Terraform configuration to deploy the VM:
terraform init
terraform apply
  5. Enter yes when prompted to enter a value.

This will output the external IP of your VM.

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

ip = "35.185.255.154"
  6. Open the http://<your ip> address in your web browser to see a welcome message.

Test completed task

Click Check my progress to verify your performed task. If you have successfully deployed a VM instance using Terraform, you will see an assessment score.

Deploy the VM

Task 13. Update the application VM

Thanks to defining the VM configuration via Terraform, you can declaratively change the application's configuration merely by tweaking your Terraform config.

  1. Open your vm.tf file:
edit vm.tf
  2. Change the metadata_startup_script to have a new welcome message:
resource "google_compute_instance" "app" {
  ...
  metadata_startup_script = "echo '<!doctype html><html><body><h1>New message!</h1></body></html>' | sudo tee /var/www/html/index.html" # Edit this line

  tags = ["allow-ping", "allow-http", "allow-ssh"]
}
  3. Run Terraform to recreate your VM with the new config:
terraform apply
  4. Enter yes when prompted to enter a value.

You should receive a similar output:

Apply complete! Resources: 1 added, 0 changed, 1 destroyed.

Outputs:

ip = 34.83.209.192
  5. After the VM is fully recreated, open the http://<your ip> address in your web browser to see the new welcome message.
Note: If you get an Apply cancelled message, run terraform apply again.

Task 14. Destroy the infrastructure

Terraform configuration also allows you to easily tear down infrastructure after you're done using it. This can be particularly helpful for temporary development or testing infrastructure.

Destroy the VM

  1. Destroy the VM infrastructure via Terraform:
terraform destroy

This will prompt you to confirm you want to destroy the VM.

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - google_compute_instance.app

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
  2. Enter yes when prompted to enter a value.

Once you confirm, Terraform will begin to tear down your infrastructure.

google_compute_instance.app: Destroying... (ID: my-app-instance)
google_compute_instance.app: Still destroying... (ID: my-app-instance, 10s elapsed)
google_compute_instance.app: Still destroying... (ID: my-app-instance, 20s elapsed)
google_compute_instance.app: Still destroying... (ID: my-app-instance, 30s elapsed)
google_compute_instance.app: Destruction complete after 36s

Destroy complete! Resources: 1 destroyed.
  3. Feel free to destroy the networking infrastructure as well, as shown below; otherwise, it will be destroyed automatically when your lab expires.
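
If you do destroy it now, the same destroy workflow applies from the networking directory:

cd ../lab-networking
terraform destroy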

Congratulations!

This concludes the self-paced lab, VM Migration: Planning. In this lab, you completed the entire workflow for automating the deployment of your networking resources in Google Cloud. You set up access credentials, set up Terraform, created resources including a VPC network, subnet, and firewall rules, modified the existing resources, verified the capabilities of those resources, and exported key information as Terraform outputs.

You also learned how to create an initial VM, capture a base image, update Terraform configurations, and deploy the VM via Terraform.

Finish your quest

This self-paced lab is part of the VM Migration quest. A quest is a series of related labs that form a learning path. Completing this quest earns you a badge to recognize your achievement. You can make your badge or badges public and link to them in your online resume or social media account. Enroll in this quest and get immediate completion credit. Refer to the Google Cloud Skills Boost catalog for all available quests.

Take your next lab

Continue your quest with Migrate for Anthos: Qwik Start, or check out a different Google Cloud Skills Boost lab, for example VM Migration: Introduction to StratoZone Assessments.

Next steps / Learn more

If you would like to learn more, you can explore the many additional Google Cloud resources that can be managed via Terraform.

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated December 08, 2023

Lab Last Tested December 08, 2023

Copyright 2024 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.