Want to learn the core SQL and visualization skills of a Data Analyst? Interested in how to write queries that scale to petabyte-size datasets? Take the BigQuery for Analyst Quest and learn how to query, ingest, optimize, visualize, and even build machine learning models in SQL inside BigQuery.
In this fundamental-level quest, you will learn the ins and outs of Stackdriver: an important GCP service for generating insights into applications’ health. Stackdriver provides a wealth of tools for application monitoring, logging, and diagnostics. The labs in this quest will give you hands-on practice with Stackdriver, and will teach you how to monitor virtual machines, generate logs and alerts, and create custom metrics for application data.
Security is a foundational feature of Google Cloud Platform services, and GCP has developed specific tools for ensuring safety and identity across your projects. In this fundamental-level quest, you will get hands-on practice with GCP’s Identity and Access Management (IAM) service, which is the go-to for managing user and virtual machine accounts. You will get experience with network security by provisioning VPCs and VPNs, and learn what tools are available for protecting against security threats and data loss.
In this lab you will learn how to use Google Cloud Machine Learning and TensorFlow to develop and evaluate prediction models.
In this hands-on lab, you will explore the Vision, Speech-to-Text, Translation, and Natural Language APIs and use them to analyze audio recordings and map them to relevant images.
This one-day instructor-led course introduces participants to the big data capabilities of Google Cloud Platform.
In this lab you will learn how to implement logistic regression with Apache Spark’s machine learning library, running on a Google Cloud Dataproc cluster, to develop a model from a multivariable dataset.
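To see what logistic regression itself is doing before scaling it out on a Dataproc cluster, here is a minimal pure-Python sketch of the algorithm fit by batch gradient descent. This is an illustration of the technique only, not the Spark MLlib API used in the lab, and the toy dataset is invented for the example.

```python
import math

def sigmoid(z):
    """Logistic function mapping a score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by batch gradient descent on the log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    m = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            # Gradient of log loss reduces to (prediction - label) * feature.
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            gw = [gj + err * xj for gj, xj in zip(gw, xi)]
            gb += err
        w = [wj - lr * gj / m for wj, gj in zip(w, gw)]
        b -= lr * gb / m
    return w, b

def predict(w, b, xi):
    """Classify a single example with a 0.5 probability threshold."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy single-feature dataset: the label flips between x=1 and x=2.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
w, b = fit_logistic(X, y)
```

In the lab, Spark parallelizes exactly this kind of gradient computation across the Dataproc cluster’s workers rather than looping over rows in one process.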
In this lab you will use machine learning (ML) to analyze the public NCAA dataset and predict NCAA tournament brackets.
Learn how to partition a dataset into two separate parts: a training set to develop a model, and a test set to evaluate the model's accuracy. This lets you evaluate predictive models independently and in a repeatable manner.
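The partitioning step described above can be sketched in a few lines of Python. The fixed random seed is what makes the evaluation repeatable: the same seed always yields the same train/test split. The function name and 80/20 default are illustrative choices, not part of the course.

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle with a fixed seed (for repeatability), then split the rows.

    Returns (train, test), where test holds test_fraction of the data.
    """
    rng = random.Random(seed)       # seeded RNG -> deterministic shuffle
    shuffled = rows[:]              # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Example: 10 rows split 80/20.
rows = list(range(10))
train, test = train_test_split(rows)
```

Because the split is deterministic, rerunning the evaluation with the same seed scores every candidate model against the identical held-out test set.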
In this lab you will build a simple scikit-learn model, upload the model to Cloud Machine Learning Engine, and make predictions against the model.
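The local half of that workflow can be sketched as follows: train a simple scikit-learn model, serialize it, and check that the restored model still predicts. Cloud ML Engine serves scikit-learn models from a serialized model file uploaded to Cloud Storage; the upload and online-prediction steps are done with `gcloud` in the lab and are not shown here. The toy dataset is invented for the example.

```python
import pickle
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for the lab's dataset.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

# Build a simple scikit-learn classifier.
model = LogisticRegression()
model.fit(X, y)

# Serialize the fitted estimator (in the lab this byte stream would be
# written to a model file and uploaded to Cloud Storage), then restore it.
blob = pickle.dumps(model)
restored = pickle.loads(blob)

# Predictions from the restored model match the original.
preds = restored.predict([[0.0], [3.0]])
```

Round-tripping the model through serialization locally is a cheap sanity check before deploying, since the hosted service will load the model the same way.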