
Professional-Machine-Learning-Engineer Google Professional Machine Learning Engineer Questions and Answers

Questions 4

You work for a company that sells corporate electronic products to thousands of businesses worldwide. Your company stores historical customer data in BigQuery. You need to build a model that predicts customer lifetime value over the next three years. You want to use the simplest approach to build the model. What should you do?

Options:

A.

Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create an ARIMA model.

B.

Create a Vertex AI Workbench notebook. Use IPython magic to run the CREATE MODEL statement to create an ARIMA model.

C.

Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create an AutoML regression model.

D.

Create a Vertex AI Workbench notebook. Use IPython magic to run the CREATE MODEL statement to create an AutoML regression model.
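
For reference, a minimal sketch of what the CREATE MODEL approach in option C could look like when run through the BigQuery Python client; the dataset, table, and label column names are placeholders, not part of the original question:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Train an AutoML regression model directly in BigQuery ML.
    # Dataset, table, and column names below are illustrative placeholders.
    query = """
    CREATE OR REPLACE MODEL `my_dataset.customer_ltv_model`
    OPTIONS (
      model_type = 'AUTOML_REGRESSOR',
      input_label_cols = ['lifetime_value_3y'],
      budget_hours = 1.0
    ) AS
    SELECT * FROM `my_dataset.customer_history`
    """

    # The CREATE MODEL statement runs as a regular BigQuery job.
    client.query(query).result()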

Questions 5

You work at a bank. You need to develop a credit risk model to support loan application decisions. You decide to implement the model by using a neural network in TensorFlow. Due to regulatory requirements, you need to be able to explain the model's predictions based on its features. When the model is deployed, you also want to monitor the model's performance over time. You decided to use Vertex AI for both model development and deployment. What should you do?

Options:

A.

Use Vertex Explainable AI with the sampled Shapley method, and enable Vertex AI Model Monitoring to check for feature distribution drift.

B.

Use Vertex Explainable AI with the sampled Shapley method, and enable Vertex AI Model Monitoring to check for feature distribution skew.

C.

Use Vertex Explainable AI with the XRAI method, and enable Vertex AI Model Monitoring to check for feature distribution drift.

D.

Use Vertex Explainable AI with the XRAI method, and enable Vertex AI Model Monitoring to check for feature distribution skew.
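
As a rough illustration of the sampled Shapley options (A and B), the Vertex AI SDK lets you attach explanation settings when uploading the TensorFlow model; the tensor names, bucket, and container image below are placeholder assumptions, and drift monitoring would be configured separately with a model monitoring job:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Sampled Shapley attribution over the model's input features.
    parameters = aiplatform.explain.ExplanationParameters(
        {"sampled_shapley_attribution": {"path_count": 10}}
    )
    # Tensor names are illustrative and must match the deployed SavedModel.
    metadata = aiplatform.explain.ExplanationMetadata(
        inputs={"features": {"input_tensor_name": "dense_input"}},
        outputs={"risk_score": {"output_tensor_name": "dense_2"}},
    )

    model = aiplatform.Model.upload(
        display_name="credit-risk-model",
        artifact_uri="gs://my-bucket/credit-risk-model/",
        serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
        explanation_parameters=parameters,
        explanation_metadata=metadata,
    )
    endpoint = model.deploy(machine_type="n1-standard-4")
    # Feature distribution drift checks are then enabled with a
    # Vertex AI Model Monitoring job attached to this endpoint.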

Questions 6

You have a custom job that runs on Vertex AI on a weekly basis. The job is implemented using a proprietary ML workflow that produces datasets, models, and custom artifacts, and sends them to a Cloud Storage bucket. Many different versions of the datasets and models were created. Due to compliance requirements, your company needs to track which model was used for making a particular prediction, and needs access to the artifacts for each model. How should you configure your workflows to meet these requirements?

Options:

A.

Configure a TensorFlow Extended (TFX) ML Metadata database, and use the ML Metadata API.

B.

Create a Vertex AI experiment, and enable autologging inside the custom job.

C.

Use the Vertex AI Metadata API inside the custom job to create context, execution, and artifacts for each model, and use events to link them together.

D.

Register each model in Vertex AI Model Registry, and use model labels to store the related dataset and model information.
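
A minimal sketch of what the Vertex ML Metadata approach in option C could look like inside the weekly custom job, assuming illustrative Cloud Storage paths and display names:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    # Register the artifacts produced by this week's run (URIs are placeholders).
    dataset = aiplatform.Artifact.create(
        schema_title="system.Dataset",
        display_name="weekly-dataset",
        uri="gs://my-bucket/datasets/2024-05-01/",
    )
    model = aiplatform.Artifact.create(
        schema_title="system.Model",
        display_name="weekly-model",
        uri="gs://my-bucket/models/2024-05-01/",
    )

    # Record the training run as an execution, and link the artifacts to it
    # with input and output events so lineage can be traced later.
    with aiplatform.start_execution(
        schema_title="system.ContainerExecution",
        display_name="weekly-training-run",
    ) as execution:
        execution.assign_input_artifacts([dataset])
        execution.assign_output_artifacts([model])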

Questions 7

You work for a company that manages a ticketing platform for a large chain of cinemas. Customers use a mobile app to search for movies they’re interested in and purchase tickets in the app. Ticket purchase requests are sent to Pub/Sub and are processed with a Dataflow streaming pipeline configured to conduct the following steps:

1. Check for availability of the movie tickets at the selected cinema.

2. Assign the ticket price and accept payment.

3. Reserve the tickets at the selected cinema.

4. Send successful purchases to your database.

Each step in this process has low latency requirements (less than 50 milliseconds). You have developed a logistic regression model with BigQuery ML that predicts whether offering a promo code for free popcorn increases the chance of a ticket purchase, and this prediction should be added to the ticket purchase process. You want to identify the simplest way to deploy this model to production while adding minimal latency. What should you do?

Options:

A.

Run batch inference with BigQuery ML every five minutes on each new set of tickets issued.

B.

Export your model in TensorFlow format, and add a tfx_bsl.public.beam.RunInference step to the Dataflow pipeline.

C.

Export your model in TensorFlow format, deploy it on Vertex AI, and query the prediction endpoint from your streaming pipeline.

D.

Convert your model with TensorFlow Lite (TFLite), and add it to the mobile app so that the promo code and the incoming request arrive together in Pub/Sub.
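
To illustrate option B, a hedged sketch of adding an in-pipeline inference step with tfx_bsl; the Pub/Sub topic, model path, and the example-building function are placeholder assumptions:

    import apache_beam as beam
    import tensorflow as tf
    from tfx_bsl.public.beam import RunInference
    from tfx_bsl.public.proto import model_spec_pb2

    def build_tf_example(message: bytes) -> tf.train.Example:
        # Placeholder: parse the purchase request into a tf.train.Example
        # with the features the promo model expects.
        return tf.train.Example()

    inference_spec = model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(
            model_path="gs://my-bucket/promo_model/"
        )
    )

    with beam.Pipeline() as pipeline:
        _ = (
            pipeline
            | "ReadPurchases" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/tickets")
            | "ToExamples" >> beam.Map(build_tf_example)
            | "Predict" >> RunInference(inference_spec)
        )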

Questions 8

You are building an ML model to predict trends in the stock market based on a wide range of factors. While exploring the data, you notice that some features have a large range. You want to ensure that the features with the largest magnitude don’t overfit the model. What should you do?

Options:

A.

Standardize the data by transforming it with a logarithmic function.

B.

Apply a principal component analysis (PCA) to minimize the effect of any particular feature.

C.

Use a binning strategy to replace the magnitude of each feature with the appropriate bin number.

D.

Normalize the data by scaling it to have values between 0 and 1.
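
For illustration, option D (min-max normalization) in a few lines of Python, using made-up feature values:

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    # Two features with very different ranges (values are made up).
    X = np.array([[1_000.0, 0.5],
                  [5_000.0, 0.2],
                  [20_000.0, 0.9]])

    # Scale every feature into [0, 1] so large-magnitude features
    # do not dominate the optimization.
    scaler = MinMaxScaler(feature_range=(0, 1))
    X_scaled = scaler.fit_transform(X)
    print(X_scaled)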

Questions 9

You are an ML engineer at a manufacturing company. You are creating a classification model for a predictive maintenance use case. You need to predict whether a crucial machine will fail in the next three days so that the repair crew has enough time to fix the machine before it breaks. Regular maintenance of the machine is relatively inexpensive, but a failure would be very costly. You have trained several binary classifiers to predict whether the machine will fail, where a prediction of 1 means that the ML model predicts a failure.

You are now evaluating each model on an evaluation dataset. You want to choose a model that prioritizes detection while ensuring that more than 50% of the maintenance jobs triggered by your model address an imminent machine failure. Which model should you choose?

Options:

A.

The model with the highest area under the receiver operating characteristic curve (AUC ROC) and precision greater than 0.5.

B.

The model with the lowest root mean squared error (RMSE) and recall greater than 0.5.

C.

The model with the highest recall where precision is greater than 0.5.

D.

The model with the highest precision where recall is greater than 0.5.
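
Option C expressed as code, as a rough sketch: keep only models whose precision exceeds 0.5 on the evaluation set, then pick the one with the highest recall. The helper function and its arguments are illustrative:

    from sklearn.metrics import precision_score, recall_score

    def pick_model(models, X_eval, y_eval):
        """Return the model with the highest recall among those with precision > 0.5."""
        best_model, best_recall = None, -1.0
        for model in models:
            y_pred = model.predict(X_eval)
            precision = precision_score(y_eval, y_pred)
            recall = recall_score(y_eval, y_pred)
            if precision > 0.5 and recall > best_recall:
                best_model, best_recall = model, recall
        return best_model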

Questions 10

You want to migrate a scikit-learn classifier model to TensorFlow. You plan to train the TensorFlow classifier model using the same training set that was used to train the scikit-learn model, and then compare the performances using a common test set. You want to use the Vertex AI Python SDK to manually log the evaluation metrics of each model and compare them based on their F1 scores and confusion matrices. How should you log the metrics?

Options:

A.

B.

C.

D.

Questions 11

You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network. What should you do?

Options:

A.

Create a one-hot encoding of words, and feed the encodings into your model.

B.

Identify word embeddings from a pre-trained model, and use the embeddings in your model.

C.

Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model.

D.

Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs in your model.

Questions 12

You have been tasked with deploying prototype code to production. The feature engineering code is in PySpark and runs on Dataproc Serverless. The model training is executed by using a Vertex AI custom training job. The two steps are not connected, and the model training must currently be run manually after the feature engineering step finishes. You need to create a scalable and maintainable production process that runs end-to-end and tracks the connections between steps. What should you do?

Options:

A.

Create a Vertex AI Workbench notebook. Use the notebook to submit the Dataproc Serverless feature engineering job. Use the same notebook to submit the custom model training job. Run the notebook cells sequentially to tie the steps together end-to-end.

B.

Create a Vertex AI Workbench notebook. Initiate an Apache Spark context in the notebook, and run the PySpark feature engineering code. Use the same notebook to run the custom model training job in TensorFlow. Run the notebook cells sequentially to tie the steps together end-to-end.

C.

Use the Kubeflow Pipelines SDK to write code that specifies two components:

- The first is a Dataproc Serverless component that launches the feature engineering job.

- The second is a custom component wrapped in the create_custom_training_job_from_component utility that launches the custom model training job.

D.

Use the Kubeflow Pipelines SDK to write code that specifies two components:

- The first component initiates an Apache Spark context that runs the PySpark feature engineering code.

- The second component runs the TensorFlow custom model training code.

Create a Vertex AI Pipelines job to link and run both components.
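
For orientation, a generic two-component pipeline sketch with the Kubeflow Pipelines SDK; the real solution would swap in the Dataproc Serverless batch component and the create_custom_training_job_from_component utility from google-cloud-pipeline-components, and the lightweight components below are placeholders that only show how one step's output feeds the next:

    from kfp import dsl

    @dsl.component
    def feature_engineering(raw_data_uri: str) -> str:
        # Placeholder for submitting the Dataproc Serverless PySpark job.
        return raw_data_uri + "/features"

    @dsl.component
    def train_model(features_uri: str) -> str:
        # Placeholder for launching the Vertex AI custom training job.
        return features_uri + "/model"

    @dsl.pipeline(name="feature-engineering-and-training")
    def pipeline(raw_data_uri: str):
        features = feature_engineering(raw_data_uri=raw_data_uri)
        train_model(features_uri=features.output)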

Questions 13

Your company manages an application that aggregates news articles from many different online sources and sends them to users. You need to build a recommendation model that will suggest articles to readers that are similar to the articles they are currently reading. Which approach should you use?

Options:

A.

Create a collaborative filtering system that recommends articles to a user based on the user’s past behavior.

B.

Encode all articles into vectors using word2vec, and build a model that returns articles based on vector similarity.

C.

Build a logistic regression model for each user that predicts whether an article should be recommended to a user.

D.

Manually label a few hundred articles, and then train an SVM classifier based on the manually classified articles that categorizes additional articles into their respective categories.

Questions 14

You recently used BigQuery ML to train an AutoML regression model. You shared results with your team and received positive feedback. You need to deploy your model for online prediction as quickly as possible. What should you do?

Options:

A.

Retrain the model by using BigQuery ML, and specify Vertex AI as the model registry. Deploy the model from Vertex AI Model Registry to a Vertex AI endpoint.

B.

Retrain the model by using Vertex AI. Deploy the model from Vertex AI Model Registry to a Vertex AI endpoint.

C.

Alter the model by using BigQuery ML, and specify Vertex AI as the model registry. Deploy the model from Vertex AI Model Registry to a Vertex AI endpoint.

D.

Export the model from BigQuery ML to Cloud Storage. Import the model into Vertex AI Model Registry. Deploy the model to a Vertex AI endpoint.

Questions 15

As the lead ML Engineer for your company, you are responsible for building ML models to digitize scanned customer forms. You have developed a TensorFlow model that converts the scanned images into text and stores them in Cloud Storage. You need to use your ML model on the aggregated data collected at the end of each day with minimal manual intervention. What should you do?

Options:

A.

Use the batch prediction functionality of AI Platform.

B.

Create a serving pipeline in Compute Engine for prediction.

C.

Use Cloud Functions for prediction each time a new data point is ingested.

D.

Deploy the model on AI Platform and create a version of it for online inference.

Questions 16

You are developing an ML model that uses sliced frames from video feed and creates bounding boxes around specific objects. You want to automate the following steps in your training pipeline: ingestion and preprocessing of data in Cloud Storage, followed by training and hyperparameter tuning of the object model using Vertex AI jobs, and finally deploying the model to an endpoint. You want to orchestrate the entire pipeline with minimal cluster management. What approach should you use?

Options:

A.

Use Kubeflow Pipelines on Google Kubernetes Engine.

B.

Use Vertex AI Pipelines with TensorFlow Extended (TFX) SDK.

C.

Use Vertex AI Pipelines with Kubeflow Pipelines SDK.

D.

Use Cloud Composer for the orchestration.

Questions 17

You work for a multinational organization that has recently begun operations in Spain. Teams within your organization will need to work with various Spanish documents, such as business, legal, and financial documents. You want to use machine learning to help your organization get accurate translations quickly and with the least effort. Your organization does not require domain-specific terms or jargon. What should you do?

Options:

A.

Create a Vertex AI Workbench notebook instance. In the notebook, convert the Spanish documents into plain text, and create a custom TensorFlow seq2seq translation model.

B.

Create a Vertex AI Workbench notebook instance. In the notebook, extract sentences from the documents, and train a custom AutoML text model.

C.

Use Google Translate to translate 1,000 phrases from Spanish to English. Using these translated pairs, train a custom AutoML Translation model.

D.

Use the Document Translation feature of the Cloud Translation API to translate the documents.

Questions 18

You work for a manufacturing company. You need to train a custom image classification model to detect product defects at the end of an assembly line. Although your model is performing well, some images in your holdout set are consistently mislabeled with high confidence. You want to use Vertex AI to understand your model's results. What should you do?

Options:

A.

B.

C.

D.

Questions 19

You are a data scientist at an industrial equipment manufacturing company. You are developing a regression model to estimate the power consumption in the company’s manufacturing plants based on sensor data collected from all of the plants. The sensors collect tens of millions of records every day. You need to schedule daily training runs for your model that use all the data collected up to the current date. You want your model to scale smoothly and require minimal development work. What should you do?

Options:

A.

Develop a custom TensorFlow regression model, and optimize it using Vertex AI Training.

B.

Develop a regression model using BigQuery ML.

C.

Develop a custom scikit-learn regression model, and optimize it using Vertex AI Training.

D.

Develop a custom PyTorch regression model, and optimize it using Vertex AI Training.

Questions 20

You work for a bank with strict data governance requirements. You recently implemented a custom model to detect fraudulent transactions. You want your training code to download internal data by using an API endpoint hosted in your project's network. You need the data to be accessed in the most secure way, while mitigating the risk of data exfiltration. What should you do?

Options:

A.

Enable VPC Service Controls for peerings, and add Vertex AI to a service perimeter.

B.

Create a Cloud Run endpoint as a proxy to the data. Use Identity and Access Management (IAM) authentication to secure access to the endpoint from the training job.

C.

Configure VPC Peering with Vertex AI, and specify the network of the training job.

D.

Download the data to a Cloud Storage bucket before calling the training job.

Questions 21

You are working on a Neural Network-based project. The dataset provided to you has columns with different ranges. While preparing the data for model training, you discover that gradient optimization is having difficulty moving weights to a good solution. What should you do?

Options:

A.

Use feature construction to combine the strongest features.

B.

Use the representation transformation (normalization) technique.

C.

Improve the data cleaning step by removing features with missing values.

D.

Change the partitioning step to reduce the dimension of the test set and have a larger training set.

Questions 22

You work for a pet food company that manages an online forum. Customers upload photos of their pets on the forum to share with others. About 20 photos are uploaded daily. You want to automatically, and in near real time, detect whether each uploaded photo has an animal. You want to prioritize time and minimize the cost of your application development and deployment. What should you do?

Options:

A.

Send user-submitted images to the Cloud Vision API. Use object localization to identify all objects in the image, and compare the results against a list of animals.

B.

Download an object detection model from TensorFlow Hub. Deploy the model to a Vertex AI endpoint. Send new user-submitted images to the model endpoint to classify whether each photo has an animal.

C.

Manually label previously submitted images with bounding boxes around any animals. Build an AutoML object detection model by using Vertex AI. Deploy the model to a Vertex AI endpoint. Send new user-submitted images to your model endpoint to detect whether each photo has an animal.

D.

Manually label previously submitted images as having animals or not. Create an image dataset on Vertex AI. Train a classification model by using Vertex AutoML to distinguish the two classes. Deploy the model to a Vertex AI endpoint. Send new user-submitted images to your model endpoint to classify whether each photo has an animal.

Questions 23

You are an ML engineer at a large grocery retailer with stores in multiple regions. You have been asked to create an inventory prediction model. Your model's features include region, location, historical demand, and seasonal popularity. You want the algorithm to learn from new inventory data on a daily basis. Which algorithm should you use to build the model?

Options:

A.

Classification

B.

Reinforcement Learning

C.

Recurrent Neural Networks (RNN)

D.

Convolutional Neural Networks (CNN)

Questions 24

You work for a rapidly growing social media company. Your team builds TensorFlow recommender models in an on-premises CPU cluster. The data contains billions of historical user events and 100,000 categorical features. You notice that as the data increases, the model training time increases. You plan to move the models to Google Cloud. You want to use the most scalable approach that also minimizes training time. What should you do?

Options:

A.

Deploy the training jobs by using TPU VMs with TPUv3 Pod slices, and use the TPUEmbedding API.

B.

Deploy the training jobs in an autoscaling Google Kubernetes Engine cluster with CPUs.

C.

Deploy a matrix factorization model training job by using BigQuery ML.

D.

Deploy the training jobs by using Compute Engine instances with A100 GPUs, and use the tf.nn.embedding_lookup API.

Questions 25

You work for a hospital that wants to optimize how it schedules operations. You need to create a model that uses the relationship between the number of surgeries scheduled and the number of beds used. You want to predict how many beds will be needed for patients each day in advance, based on the scheduled surgeries. You have one year of data for the hospital, organized in 365 rows.

The data includes the following variables for each day:

• Number of scheduled surgeries

• Number of beds occupied

• Date

You want to maximize the speed of model development and testing. What should you do?

Options:

A.

Create a BigQuery table. Use BigQuery ML to build a regression model, with number of beds as the target variable and number of scheduled surgeries and date features (such as day of week) as the predictors.

B.

Create a BigQuery table. Use BigQuery ML to build an ARIMA model, with number of beds as the target variable and date as the time variable.

C.

Create a Vertex AI tabular dataset. Train an AutoML regression model, with number of beds as the target variable and number of scheduled minor surgeries and date features (such as day of the week) as the predictors.

D.

Create a Vertex AI tabular dataset. Train a Vertex AI AutoML Forecasting model, with number of beds as the target variable, number of scheduled surgeries as a covariate, and date as the time variable.

Questions 26

You recently trained an XGBoost model that you plan to deploy to production for online inference. Before sending a predict request to your model's binary, you need to perform a simple data preprocessing step. This step exposes a REST API that accepts requests in your internal VPC Service Controls and returns predictions. You want to configure this preprocessing step while minimizing cost and effort. What should you do?

Options:

A.

Store a pickled model in Cloud Storage. Build a Flask-based app, package the app in a custom container image, and deploy the model to Vertex AI Endpoints.

B.

Build a Flask-based app, package the app and a pickled model in a custom container image, and deploy the model to Vertex AI Endpoints.

C.

Build a custom predictor class based on the XGBoost Predictor class from the Vertex AI SDK, package it and a pickled model in a custom container image based on a Vertex built-in image, and deploy the model to Vertex AI Endpoints.

D.

Build a custom predictor class based on the XGBoost Predictor class from the Vertex AI SDK, and package the handler in a custom container image based on a Vertex built-in container image. Store a pickled model in Cloud Storage, and deploy the model to Vertex AI Endpoints.

Questions 27

You have trained a text classification model in TensorFlow using AI Platform. You want to use the trained model for batch predictions on text data stored in BigQuery while minimizing computational overhead. What should you do?

Options:

A.

Export the model to BigQuery ML.

B.

Deploy and version the model on AI Platform.

C.

Use Dataflow with the SavedModel to read the data from BigQuery.

D.

Submit a batch prediction job on AI Platform that points to the model location in Cloud Storage.

Questions 28

You trained a model on data that is stored in a Cloud Storage bucket. The model needs to be retrained frequently in Vertex AI Training by using the latest data in the bucket. Data preprocessing is required prior to the retraining. You want to build a simple and efficient near real-time ML pipeline in Vertex AI that will perform the data preprocessing when new data arrives in the bucket. What should you do?

Options:

A.

Use the Vertex AI SDK to preprocess the new data in the bucket prior to each model retraining. Store the processed features in BigQuery.

B.

Create a Cloud Run function that is triggered when new data arrives in the bucket. The function initiates a Vertex AI Pipeline to preprocess the new data and store the processed features in Vertex AI Feature Store.

C.

Create a pipeline by using the Vertex AI SDK. Schedule the pipeline with Cloud Scheduler to preprocess the new data in the bucket. Store the processed features in Vertex AI Feature Store.

D.

Build a Dataflow pipeline to preprocess the new data in the bucket and store the processed features in BigQuery. Configure a cron job to trigger the pipeline execution.
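
As a hedged sketch of option B, a storage-triggered function can submit a Vertex AI pipeline run for each new object; the project, region, bucket, and pipeline template path are placeholders:

    from google.cloud import aiplatform

    def on_new_data(event, context):
        """Triggered by a Cloud Storage 'finalize' event for each new object."""
        gcs_uri = f"gs://{event['bucket']}/{event['name']}"

        aiplatform.init(project="my-project", location="us-central1")
        job = aiplatform.PipelineJob(
            display_name="preprocess-new-data",
            template_path="gs://my-bucket/pipelines/preprocess.json",
            parameter_values={"input_uri": gcs_uri},
        )
        # Submit asynchronously; the pipeline preprocesses the file and
        # writes the features to Vertex AI Feature Store.
        job.submit()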

Questions 29

You are deploying a new version of a model to a production Vertex AI endpoint that is serving traffic. You plan to direct all user traffic to the new model. You need to deploy the model with minimal disruption to your application. What should you do?

Options:

A.

1. Create a new endpoint.

2. Create a new model. Set it as the default version. Upload the model to Vertex AI Model Registry.

3. Deploy the new model to the new endpoint.

4. Update Cloud DNS to point to the new endpoint.

B.

1. Create a new endpoint.

2. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model, and set it as the default version. Upload the model to Vertex AI Model Registry.

3. Deploy the new model to the new endpoint, and set the new model to 100% of the traffic.

C.

1. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model. Upload the model to Vertex AI Model Registry.

2. Deploy the new model to the existing endpoint, and set the new model to 100% of the traffic.

D.

1. Create a new model. Set it as the default version. Upload the model to Vertex AI Model Registry.

2. Deploy the new model to the existing endpoint.

Questions 30

Your company manages an ecommerce platform and has a large dataset of customer reviews. Each review has a positive, negative, or neutral label. You need to quickly prototype a sentiment analysis model that accurately predicts the sentiment labels of new customer reviews while minimizing time and cost. What should you do?

Options:

A.

Train a sentiment analysis model by using a BERT-based model, and fine-tune the model by using domain-specific customer reviews.

B.

Use the Natural Language API for real-time sentiment analysis.

C.

Use AutoML to train a multi-class classification model that predicts sentiment labels based on the training data.

D.

Use the Vertex AI Text embeddings API to vectorize the text, and train a regression model by using AutoML to predict sentiment scores.

Questions 31

Your team has been tasked with creating an ML solution in Google Cloud to classify support requests for one of your platforms. You analyzed the requirements and decided to use TensorFlow to build the classifier so that you have full control of the model's code, serving, and deployment. You will use Kubeflow Pipelines for the ML platform. To save time, you want to build on existing resources and use managed services instead of building a completely new model. How should you build the classifier?

Options:

A.

Use the Natural Language API to classify support requests.

B.

Use AutoML Natural Language to build the support requests classifier.

C.

Use an established text classification model on AI Platform to perform transfer learning.

D.

Use an established text classification model on AI Platform as-is to classify support requests.

Questions 32

You want to rebuild your ML pipeline for structured data on Google Cloud. You are using PySpark to conduct data transformations at scale, but your pipelines are taking over 12 hours to run. To speed up development and pipeline run time, you want to use a serverless tool and SQL syntax. You have already moved your raw data into Cloud Storage. How should you build the pipeline on Google Cloud while meeting the speed and processing requirements?

Options:

A.

Use Data Fusion's GUI to build the transformation pipelines, and then write the data into BigQuery.

B.

Convert your PySpark into SparkSQL queries to transform the data, and then run your pipeline on Dataproc to write the data into BigQuery.

C.

Ingest your data into Cloud SQL, convert your PySpark commands into SQL queries to transform the data, and then use federated queries from BigQuery for machine learning.

D.

Ingest your data into BigQuery using BigQuery Load, convert your PySpark commands into BigQuery SQL queries to transform the data, and then write the transformations to a new table.

Questions 33

You are analyzing customer data for a healthcare organization that is stored in Cloud Storage. The data contains personally identifiable information (PII). You need to perform data exploration and preprocessing while ensuring the security and privacy of sensitive fields. What should you do?

Options:

A.

Use the Cloud Data Loss Prevention (DLP) API to de-identify the PII before performing data exploration and preprocessing.

B.

Use customer-managed encryption keys (CMEK) to encrypt the PII data at rest, and decrypt the PII data during data exploration and preprocessing.

C.

Use a VM inside a VPC Service Controls security perimeter to perform data exploration and preprocessing.

D.

Use Google-managed encryption keys to encrypt the PII data at rest, and decrypt the PII data during data exploration and preprocessing.

Questions 34

You developed a custom model by using Vertex AI to forecast the sales of your company's products based on historical transactional data. You anticipate changes in the feature distributions and the correlations between the features in the near future. You also expect to receive a large volume of prediction requests. You plan to use Vertex AI Model Monitoring for drift detection, and you want to minimize the cost. What should you do?

Options:

A.

Use the features for monitoring. Set a monitoring-frequency value that is higher than the default.

B.

Use the features for monitoring. Set a prediction-sampling-rate value that is closer to 1 than 0.

C.

Use the features and the feature attributions for monitoring. Set a monitoring-frequency value that is lower than the default.

D.

Use the features and the feature attributions for monitoring. Set a prediction-sampling-rate value that is closer to 0 than 1.

Questions 35

You work for a company that sells corporate electronic products to thousands of businesses worldwide. Your company stores historical customer data in BigQuery. You need to build a model that predicts customer lifetime value over the next three years. You want to use the simplest approach to build the model. What should you do?

Options:

A.

Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create a deep neural network (DNN) regressor model.

B.

Create a Vertex AI Workbench notebook. Use IPython magic to run the CREATE MODEL statement to create a deep neural network (DNN) regressor model.

C.

Access BigQuery Studio in the Google Cloud console. Run the CREATE MODEL statement in the SQL editor to create an AutoML regression model.

D.

Create a Vertex AI Workbench notebook. Use IPython magic to run the CREATE MODEL statement to create an AutoML regression model.

Questions 36

You are using Kubeflow Pipelines to develop an end-to-end PyTorch-based MLOps pipeline. The pipeline reads data from BigQuery, processes the data, conducts feature engineering, model training, and model evaluation, and deploys the model as a binary file to Cloud Storage. You are writing code for several different versions of the feature engineering and model training steps, and running each new version in Vertex AI Pipelines. Each pipeline run is taking over an hour to complete. You want to speed up the pipeline execution to reduce your development time, and you want to avoid additional costs. What should you do?

Options:

A.

Delegate feature engineering to BigQuery and remove it from the pipeline.

B.

Add a GPU to the model training step.

C.

Enable caching in all the steps of the Kubeflow pipeline.

D.

Comment out the part of the pipeline that you are not currently updating.
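
For reference, a minimal sketch of option C: running the compiled Kubeflow pipeline on Vertex AI Pipelines with caching enabled so unchanged steps are reused between runs; the project, region, and template path are placeholders:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    job = aiplatform.PipelineJob(
        display_name="pytorch-mlops-pipeline",
        template_path="pipeline.json",   # compiled Kubeflow pipeline definition
        enable_caching=True,             # reuse results of steps whose inputs did not change
    )
    job.run()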

Questions 37

You work for a company that is developing a new video streaming platform. You have been asked to create a recommendation system that will suggest the next video for a user to watch. After a review by an AI Ethics team, you are approved to start development. Each video asset in your company’s catalog has useful metadata (e.g., content type, release date, country), but you do not have any historical user event data. How should you build the recommendation system for the first version of the product?

Options:

A.

Launch the product without machine learning. Present videos to users alphabetically, and start collecting user event data so you can develop a recommender model in the future.

B.

Launch the product without machine learning. Use simple heuristics based on content metadata to recommend similar videos to users, and start collecting user event data so you can develop a recommender model in the future.

C.

Launch the product with machine learning. Use a publicly available dataset such as MovieLens to train a model using the Recommendations AI, and then apply this trained model to your data.

D.

Launch the product with machine learning. Generate embeddings for each video by training an autoencoder on the content metadata using TensorFlow. Cluster content based on the similarity of these embeddings, and then recommend videos from the same cluster.

Questions 38

You are an ML engineer at a global shoe store. You manage the ML models for the company's website. You are asked to build a model that will recommend new products to the user based on their purchase behavior and similarity with other users. What should you do?

Options:

A.

Build a classification model.

B.

Build a knowledge-based filtering model.

C.

Build a collaborative-based filtering model.

D.

Build a regression model using the features as predictors.

Questions 39

You built a custom ML model using scikit-learn. Training time is taking longer than expected. You decide to migrate your model to Vertex AI Training, and you want to improve the model’s training time. What should you try out first?

Options:

A.

Migrate your model to TensorFlow, and train it using Vertex AI Training.

B.

Train your model in a distributed mode using multiple Compute Engine VMs.

C.

Train your model with DLVM images on Vertex AI, and ensure that your code utilizes NumPy and SciPy internal methods whenever possible.

D.

Train your model using Vertex AI Training with GPUs.

Questions 40

You work at a subscription-based company. You have trained an ensemble of trees and neural networks to predict customer churn, which is the likelihood that customers will not renew their yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is located in New York City, and became a customer in 1997. You need to explain the difference between the actual prediction, a 70% churn rate, and the average prediction. You want to use Vertex Explainable AI. What should you do?

Options:

A.

Train local surrogate models to explain individual predictions.

B.

Configure sampled Shapley explanations on Vertex Explainable AI.

C.

Configure integrated gradients explanations on Vertex Explainable AI.

D.

Measure the effect of each feature as the weight of the feature multiplied by the feature value.

Questions 41

You are the Director of Data Science at a large company, and your Data Science team has recently begun using the Kubeflow Pipelines SDK to orchestrate their training pipelines. Your team is struggling to integrate their custom Python code into the Kubeflow Pipelines SDK. How should you instruct them to proceed in order to quickly integrate their code with the Kubeflow Pipelines SDK?

Options:

A.

Use the func_to_container_op function to create custom components from the Python code.

B.

Use the predefined components available in the Kubeflow Pipelines SDK to access Dataproc, and run the custom code there.

C.

Package the custom Python code into Docker containers, and use the load_component_from_file function to import the containers into the pipeline.

D.

Deploy the custom Python code to Cloud Functions, and use Kubeflow Pipelines to trigger the Cloud Function.
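
To illustrate option A, a small sketch using the KFP v1 SDK; the function body, base image, and package list are illustrative:

    from kfp.components import func_to_container_op

    def normalize_features(input_csv: str, output_csv: str):
        # Placeholder preprocessing logic written as ordinary Python.
        import pandas as pd
        df = pd.read_csv(input_csv)
        df = (df - df.min()) / (df.max() - df.min())
        df.to_csv(output_csv, index=False)

    # Wrap the existing Python function as a reusable pipeline component
    # without writing a Dockerfile.
    normalize_op = func_to_container_op(
        normalize_features,
        base_image="python:3.10",
        packages_to_install=["pandas"],
    )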

Questions 42

You have created a Vertex AI pipeline that includes two steps. The first step preprocesses 10 TB of data, completes in about 1 hour, and saves the result in a Cloud Storage bucket. The second step uses the processed data to train a model. You need to update the model's code to allow you to test different algorithms. You want to reduce pipeline execution time and cost, while also minimizing pipeline changes. What should you do?

Options:

A.

Add a pipeline parameter and an additional pipeline step. Depending on the parameter value, the pipeline step conducts or skips data preprocessing and starts model training.

B.

Create another pipeline without the preprocessing step, and hardcode the preprocessed Cloud Storage file location for model training.

C.

Configure a machine with more CPU and RAM from the compute-optimized machine family for the data preprocessing step.

D.

Enable caching for the pipeline job, and disable caching for the model training step.

Questions 43

You need to analyze user activity data from your company’s mobile applications. Your team will use BigQuery for data analysis, transformation, and experimentation with ML algorithms. You need to ensure real-time ingestion of the user activity data into BigQuery. What should you do?

Options:

A.

Configure Pub/Sub to stream the data into BigQuery.

B.

Run an Apache Spark streaming job on Dataproc to ingest the data into BigQuery.

C.

Run a Dataflow streaming job to ingest the data into BigQuery.

D.

Configure Pub/Sub and a Dataflow streaming job to ingest the data into BigQuery.

Questions 44

You are investigating the root cause of a misclassification error made by one of your models. You used Vertex AI Pipelines to train and deploy the model. The pipeline reads data from BigQuery, creates a copy of the data in Cloud Storage in TFRecord format, trains the model in Vertex AI Training on that copy, and deploys the model to a Vertex AI endpoint. You have identified the specific version of that model that misclassified, and you need to recover the data this model was trained on. How should you find that copy of the data?

Options:

A.

Use Vertex AI Feature Store. Modify the pipeline to use the feature store, and ensure that all training data is stored in it. Search the feature store for the data used for the training.

B.

Use the lineage feature of Vertex AI Metadata to find the model artifact. Determine the version of the model, identify the step that creates the data copy, and search in the metadata for its location.

C.

Use the logging features in the Vertex AI endpoint to determine the timestamp of the model's deployment. Find the pipeline run at that timestamp. Identify the step that creates the data copy, and search in the logs for its location.

D.

Find the job ID in Vertex AI Training corresponding to the training for the model. Search in the logs of that job for the data used for the training.

Questions 45

You are training an object detection machine learning model on a dataset that consists of three million X-ray images, each roughly 2 GB in size. You are using Vertex AI Training to run a custom training application on a Compute Engine instance with 32 cores, 128 GB of RAM, and 1 NVIDIA P100 GPU. You notice that model training is taking a very long time. You want to decrease training time without sacrificing model performance. What should you do?

Options:

A.

Increase the instance memory to 512 GB and increase the batch size.

B.

Replace the NVIDIA P100 GPU with a v3-32 TPU in the training job.

C.

Enable early stopping in your Vertex AI Training job.

D.

Use the tf.distribute.Strategy API and run a distributed training job.
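
As a rough sketch of option D, model creation is wrapped in a distribution strategy scope so the custom training job can spread work across multiple GPUs or workers; the toy model below stands in for the real object detection network:

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()

    with strategy.scope():
        # Placeholder model; the real object detection network would be built here.
        model = tf.keras.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")

    # model.fit(train_dataset, epochs=...) then runs the replicated training loop.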

Questions 46

You are responsible for building a unified analytics environment across a variety of on-premises data marts. Your company is experiencing data quality and security challenges when integrating data across the servers, caused by the use of a wide range of disconnected tools and temporary solutions. You need a fully managed, cloud-native data integration service that will lower the total cost of work and reduce repetitive work. Some members of your team prefer a codeless interface for building Extract, Transform, Load (ETL) processes. Which service should you use?

Options:

A.

Dataflow

B.

Dataprep

C.

Apache Flink

D.

Cloud Data Fusion

Questions 47

You are collaborating on a model prototype with your team. You need to create a Vertex AI Workbench environment for the members of your team and also limit access to other employees in your project. What should you do?

Options:

A.

1. Create a new service account and grant it the Notebook Viewer role.

2. Grant the Service Account User role to each team member on the service account.

3. Grant the Vertex AI User role to each team member.

4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account.

B.

1. Grant the Vertex AI User role to the default Compute Engine service account.

2. Grant the Service Account User role to each team member on the default Compute Engine service account.

3. Provision a Vertex AI Workbench user-managed notebook instance that uses the default Compute Engine service account.

C.

1. Create a new service account and grant it the Vertex AI User role.

2. Grant the Service Account User role to each team member on the service account.

3. Grant the Notebook Viewer role to each team member.

4. Provision a Vertex AI Workbench user-managed notebook instance that uses the new service account.

D.

1. Grant the Vertex AI User role to the primary team member.

2. Grant the Notebook Viewer role to the other team members.

3. Provision a Vertex AI Workbench user-managed notebook instance that uses the primary user's account.

Questions 48

You trained a model on data stored in a Cloud Storage bucket. The model needs to be retrained frequently in Vertex AI Training using the latest data in the bucket. Data preprocessing is required prior to retraining. You want to build a simple and efficient near-real-time ML pipeline in Vertex AI that will preprocess the data when new data arrives in the bucket. What should you do?

Options:

A.

Create a pipeline using the Vertex AI SDK. Schedule the pipeline with Cloud Scheduler to preprocess the new data in the bucket. Store the processed features in Vertex AI Feature Store.

B.

Create a Cloud Run function that is triggered when new data arrives in the bucket. The function initiates a Vertex AI Pipeline to preprocess the new data and store the processed features in Vertex AI Feature Store.

C.

Build a Dataflow pipeline to preprocess the new data in the bucket and store the processed features in BigQuery. Configure a cron job to trigger the pipeline execution.

D.

Use the Vertex AI SDK to preprocess the new data in the bucket prior to each model retraining. Store the processed features in BigQuery.

Questions 49

You work with a team of researchers to develop state-of-the-art algorithms for financial analysis. Your team develops and debugs complex models in TensorFlow. You want to maintain the ease of debugging while also reducing the model training time. How should you set up your training environment?

Options:

A.

Configure a v3-8 TPU VM.

B.

Configure a v3-8 TPU node.

C.

Configure a c2-standard-60 VM without GPUs.

D.

Configure an n1-standard-4 VM with 1 NVIDIA P100 GPU.

Questions 50

You have recently used TensorFlow to train a classification model on tabular data. You have created a Dataflow pipeline that can transform several terabytes of data into training or prediction datasets consisting of TFRecords. You now need to productionize the model, and you want the predictions to be automatically uploaded to a BigQuery table on a weekly schedule. What should you do?

Options:

A.

Import the model into Vertex AI and deploy it to a Vertex AI endpoint. On Vertex AI Pipelines, create a pipeline that uses the DataflowPythonJobOp and the ModelBatchPredictOp components.

B.

Import the model into Vertex AI and deploy it to a Vertex AI endpoint. Create a Dataflow pipeline that reuses the data processing logic, sends requests to the endpoint, and then uploads predictions to a BigQuery table.

C.

Import the model into Vertex AI. On Vertex AI Pipelines, create a pipeline that uses the DataflowPythonJobOp and the ModelBatchPredictOp components.

D.

Import the model into BigQuery. Implement the data processing logic in a SQL query. On Vertex AI Pipelines, create a pipeline that uses the BigqueryQueryJobOp and the BigqueryPredictModelJobOp components.

Questions 51

You work for an advertising company and want to understand the effectiveness of your company's latest advertising campaign. You have streamed 500 MB of campaign data into BigQuery. You want to query the table, and then manipulate the results of that query with a pandas dataframe in an AI Platform notebook. What should you do?

Options:

A.

Use AI Platform Notebooks' BigQuery cell magic to query the data, and ingest the results as a pandas dataframe.

B.

Export your table as a CSV file from BigQuery to Google Drive, and use the Google Drive API to ingest the file into your notebook instance.

C.

Download your table from BigQuery as a local CSV file, and upload it to your AI Platform notebook instance. Use pandas.read_csv to ingest the file as a pandas dataframe.

D.

From a bash cell in your AI Platform notebook, use the bq extract command to export the table as a CSV file to Cloud Storage, and then use gsutil cp to copy the data into the notebook. Use pandas.read_csv to ingest the file as a pandas dataframe.
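
For illustration, option A in two notebook cells (separated by the blank line below); the project, dataset, and column names are placeholders, and the query result is returned as the pandas DataFrame df:

    %load_ext google.cloud.bigquery

    %%bigquery df
    -- Table and columns are illustrative placeholders.
    SELECT campaign_id, SUM(clicks) AS total_clicks
    FROM `my_project.marketing.campaign_events`
    GROUP BY campaign_id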

Questions 52

Your team is working on an NLP research project to predict political affiliation of authors based on articles they have written. You have a large training dataset that is structured like this:

You followed the standard 80%-10%-10% data distribution across the training, testing, and evaluation subsets. How should you distribute the training examples across the train-test-eval subsets while maintaining the 80-10-10 proportion?

A)

B)

C)

D)

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Questions 53

You have built a model that is trained on data stored in Parquet files. You access the data through a Hive table hosted on Google Cloud. You preprocessed this data with PySpark and exported it as a CSV file into Cloud Storage. After preprocessing, you execute additional steps to train and evaluate your model. You want to parametrize this model training in Kubeflow Pipelines. What should you do?

Options:

A.

Remove the data transformation step from your pipeline.

B.

Containerize the PySpark transformation step, and add it to your pipeline.

C.

Add a ContainerOp to your pipeline that spins a Dataproc cluster, runs a transformation, and then saves the transformed data in Cloud Storage.

D.

Deploy Apache Spark at a separate node pool in a Google Kubernetes Engine cluster. Add a ContainerOp to your pipeline that invokes a corresponding transformation job for this Spark instance.

Questions 54

You work for a food product company. Your company's historical sales data is stored in BigQuery. You need to use Vertex AI's custom training service to train multiple TensorFlow models that read the data from BigQuery and predict future sales. You plan to implement a data preprocessing algorithm that performs min-max scaling and bucketing on a large number of features before you start experimenting with the models. You want to minimize preprocessing time, cost, and development effort. How should you configure this workflow?

Options:

A.

Write the transformations in Spark using the spark-bigquery-connector, and use Dataproc to preprocess the data.

B.

Write SQL queries to transform the data in-place in BigQuery.

C.

Add the transformations as a preprocessing layer in the TensorFlow models.

D.

Create a Dataflow pipeline that uses the BigQueryIO connector to ingest the data, process it, and write it back to BigQuery.

Questions 55

You work for a company that captures live video footage of checkout areas in their retail stores. You need to use the live video footage to build a model to detect the number of customers waiting for service in near real time. You want to implement a solution quickly and with minimal effort. How should you build the model?

Options:

A.

Use the Vertex AI Vision Occupancy Analytics model.

B.

Use the Vertex AI Vision Person/vehicle detector model.

C.

Train an AutoML object detection model on an annotated dataset by using Vertex AutoML.

D.

Train a Seq2Seq+ object detection model on an annotated dataset by using Vertex AutoML.

Questions 56

You are an ML engineer on an agricultural research team working on a crop disease detection tool to detect leaf rust spots in images of crops to determine the presence of a disease. These spots, which can vary in shape and size, are correlated to the severity of the disease. You want to develop a solution that predicts the presence and severity of the disease with high accuracy. What should you do?

Options:

A.

Create an object detection model that can localize the rust spots.

B.

Develop an image segmentation ML model to locate the boundaries of the rust spots.

C.

Develop a template matching algorithm using traditional computer vision libraries.

D.

Develop an image classification ML model to predict the presence of the disease.

Questions 57

You work for a pharmaceutical company based in Canada. Your team developed a BigQuery ML model to predict the number of flu infections for the next month in Canada. Weather data is published weekly, and flu infection statistics are published monthly. You need to configure a model retraining policy that minimizes cost. What should you do?

Options:

A.

Download the weather and flu data each week. Configure Cloud Scheduler to execute a Vertex AI pipeline to retrain the model weekly.

B.

Download the weather and flu data each month. Configure Cloud Scheduler to execute a Vertex AI pipeline to retrain the model monthly.

C.

Download the weather and flu data each week. Configure Cloud Scheduler to execute a Vertex AI pipeline to retrain the model every month.

D.

Download the weather data each week, and download the flu data each month. Deploy the model to a Vertex AI endpoint with feature drift monitoring, and retrain the model if a monitoring alert is detected.

Questions 58

You recently deployed a model to a Vertex AI endpoint. Your data drifts frequently, so you have enabled request-response logging and created a Vertex AI Model Monitoring job. You have observed that your model is receiving higher traffic than expected. You need to reduce the model monitoring cost while continuing to quickly detect drift. What should you do?

Options:

A.

Replace the monitoring job with a Dataflow pipeline that uses TensorFlow Data Validation (TFDV).

B.

Replace the monitoring job with a custom SQL script to calculate statistics on the features and predictions in BigQuery.

C.

Decrease the sample_rate parameter in the RandomSampleConfig of the monitoring job.

D.

Increase the monitor_interval parameter in the ScheduleConfig of the monitoring job.
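
For context, a hedged sketch of the two cost levers referenced in options C and D, as they appear in the Vertex AI SDK; the numeric values are illustrative, and these objects would be passed to the model monitoring job configuration:

    from google.cloud.aiplatform import model_monitoring

    # Log a smaller fraction of the prediction requests (option C).
    sampling = model_monitoring.RandomSampleConfig(sample_rate=0.2)

    # Run the monitoring analysis less frequently, in hours (option D).
    schedule = model_monitoring.ScheduleConfig(monitor_interval=12)

    # Both objects are supplied when creating or updating the
    # ModelDeploymentMonitoringJob for the endpoint.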

Questions 59

You have trained a model on a dataset that required computationally expensive preprocessing operations. You need to execute the same preprocessing at prediction time. You deployed the model on AI Platform for high-throughput online prediction. Which architecture should you use?

Options:

A.

• Validate the accuracy of the model that you trained on preprocessed data.

• Create a new model that uses the raw data and is available in real time.

• Deploy the new model onto AI Platform for online prediction.

B.

• Send incoming prediction requests to a Pub/Sub topic.

• Transform the incoming data using a Dataflow job.

• Submit a prediction request to AI Platform using the transformed data.

• Write the predictions to an outbound Pub/Sub queue.

C.

• Stream incoming prediction request data into Cloud Spanner.

• Create a view to abstract your preprocessing logic.

• Query the view every second for new records.

• Submit a prediction request to AI Platform using the transformed data.

• Write the predictions to an outbound Pub/Sub queue.

D.

• Send incoming prediction requests to a Pub/Sub topic.

• Set up a Cloud Function that is triggered when messages are published to the Pub/Sub topic.

• Implement your preprocessing logic in the Cloud Function.

• Submit a prediction request to AI Platform using the transformed data.

• Write the predictions to an outbound Pub/Sub queue.

Questions 60

You work for a gaming company that manages a popular online multiplayer game where teams with 6 players play against each other in 5-minute battles. There are many new players every day. You need to build a model that automatically assigns available players to teams in real time. User research indicates that the game is more enjoyable when battles have players with similar skill levels. Which business metrics should you track to measure your model’s performance? (Choose One Correct Answer)

Options:

A.

Average time players wait before being assigned to a team

B.

Precision and recall of assigning players to teams based on their predicted versus actual ability

C.

User engagement as measured by the number of battles played daily per user

D.

Rate of return as measured by additional revenue generated minus the cost of developing a new model

Questions 61

You trained a text classification model. You have the following SignatureDefs:

What is the correct way to write the predict request?

Options:

A.

data = json.dumps({"signature_name": "serving_default", "instances": [['ab', 'bc', 'cd']]})

B.

data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b', 'c', 'd', 'e', 'f']]})

C.

data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b', 'c'], ['d', 'e', 'f']]})

D.

data = json.dumps({"signature_name": "serving_default", "instances": [['a', 'b'], ['c', 'd'], ['e', 'f']]})

Questions 62

You work as an ML engineer at a social media company, and you are developing a visual filter for users’ profile photos. This requires you to train an ML model to detect bounding boxes around human faces. You want to use this filter in your company’s iOS-based mobile phone application. You want to minimize code development and want the model to be optimized for inference on mobile phones. What should you do?

Options:

A.

Train a model using AutoML Vision and use the “export for Core ML” option.

B.

Train a model using AutoML Vision and use the “export for Coral” option.

C.

Train a model using AutoML Vision and use the “export for TensorFlow.js” option.

D.

Train a custom TensorFlow model and convert it to TensorFlow Lite (TFLite).

Questions 63

While running a model training pipeline on Vertex Al, you discover that the evaluation step is failing because of an out-of-memory error. You are currently using TensorFlow Model Analysis (TFMA) with a standard Evaluator TensorFlow Extended (TFX) pipeline component for the evaluation step. You want to stabilize the pipeline without downgrading the evaluation quality while minimizing infrastructure overhead. What should you do?

Options:

A.

Add tfma.MetricsSpec() to limit the number of metrics in the evaluation step.

B.

Migrate your pipeline to Kubeflow hosted on Google Kubernetes Engine, and specify the appropriate node parameters for the evaluation step.

C.

Include the flag -runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow.

D.

Move the evaluation step out of your pipeline and run it on custom Compute Engine VMs with sufficient memory.

Questions 64

You have been asked to build a model using a dataset that is stored in a medium-sized (~10 GB) BigQuery table. You need to quickly determine whether this data is suitable for model development. You want to create a one-time report that includes both informative visualizations of data distributions and more sophisticated statistical analyses to share with other ML engineers on your team. You require maximum flexibility to create your report. What should you do?

Options:

A.

Use Vertex AI Workbench user-managed notebooks to generate the report.

B.

Use Google Data Studio to create the report.

C.

Use the output from TensorFlow Data Validation on Dataflow to generate the report.

D.

Use Dataprep to create the report.

Questions 65

You are profiling the performance of your TensorFlow model training time and notice a performance issue caused by inefficiencies in the input data pipeline for a single 5 terabyte CSV file dataset on Cloud Storage. You need to optimize the input pipeline performance. Which action should you try first to increase the efficiency of your pipeline?

Options:

A.

Preprocess the input CSV file into a TFRecord file.

B.

Randomly select a 10 gigabyte subset of the data to train your model.

C.

Split into multiple CSV files and use a parallel interleave transformation.

D.

Set the reshuffle_each_iteration parameter to true in the tf.data.Dataset.shuffle method.
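
A minimal sketch of option C with the tf.data API, assuming the single CSV has already been split into shard files matching a placeholder pattern:

    import tensorflow as tf

    # List the CSV shards (the pattern is a placeholder).
    files = tf.data.Dataset.list_files("gs://my-bucket/train/part-*.csv")

    # Read several shards concurrently with a parallel interleave.
    dataset = files.interleave(
        lambda path: tf.data.TextLineDataset(path).skip(1),  # skip each shard's header row
        cycle_length=8,
        num_parallel_calls=tf.data.AUTOTUNE,
    )
    dataset = dataset.batch(1024).prefetch(tf.data.AUTOTUNE)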

Questions 66

You are an AI architect at a popular photo-sharing social media platform. Your organization’s content moderation team currently scans images uploaded by users and removes explicit images manually. You want to implement an AI service to automatically prevent users from uploading explicit images. What should you do?

Options:

A.

Develop a custom TensorFlow model in a Vertex AI Workbench instance. Train the model on a dataset of manually labeled images. Deploy the model to a Vertex AI endpoint. Run periodic batch inference to identify inappropriate uploads and report them to the content moderation team.

B.

Train an image clustering model using TensorFlow in a Vertex AI Workbench instance. Deploy this model to a Vertex AI endpoint and configure it for online inference. Run this model each time a new image is uploaded to identify and block inappropriate uploads.

C.

Create a dataset using manually labeled images. Ingest this dataset into AutoML. Train an image classification model and deploy it to a Vertex AI endpoint. Integrate this endpoint with the image upload process to identify and block inappropriate uploads. Monitor predictions and periodically retrain the model.

D.

Send a copy of every user-uploaded image to a Cloud Storage bucket. Configure a Cloud Run function that triggers the Cloud Vision API to detect explicit content each time a new image is uploaded. Report the classifications to the content moderation team for review.
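For context on the mechanism in option D, the Cloud Vision API exposes SafeSearch detection that scores an image for explicit content. A minimal sketch assuming the google-cloud-vision client library; the image URI is a placeholder.

from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="gs://uploads-bucket/photo.jpg"))

response = client.safe_search_detection(image=image)
annotation = response.safe_search_annotation

# Likelihood values range from UNKNOWN up to VERY_LIKELY.
if annotation.adult >= vision.Likelihood.LIKELY:
    print("Image flagged as likely explicit")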

Questions 67

You work for a retail company. You have been tasked with building a model to determine the probability of churn for each customer. You need the predictions to be interpretable so the results can be used to develop marketing campaigns that target at-risk customers. What should you do?

Options:

A.

Build a random forest regression model in a Vertex AI Workbench notebook instance. Configure the model to generate feature importances after the model is trained.

B.

Build an AutoML tabular regression model. Configure the model to generate explanations when it makes predictions.

C.

Build a custom TensorFlow neural network by using Vertex AI custom training. Configure the model to generate explanations when it makes predictions.

D.

Build a random forest classification model in a Vertex AI Workbench notebook instance. Configure the model to generate feature importances after the model is trained.
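To illustrate the feature-importance idea in options A and D, a scikit-learn random forest exposes per-feature scores after training. A minimal sketch; the CSV file and column names are placeholders, and the features are assumed to be numerically encoded.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("customer_churn.csv")   # hypothetical exported training table
X = df.drop(columns=["churned"])
y = df["churned"]

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X, y)

# Per-feature importance scores to guide marketing campaign targeting.
importances = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances.head(10))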

Questions 68

While performing exploratory data analysis on a dataset, you find that an important categorical feature has 5% null values. You want to minimize the bias that could result from the missing values. How should you handle the missing values?

Options:

A.

Remove the rows with missing values, and upsample your dataset by 5%.

B.

Replace the missing values with the feature’s mean.

C.

Replace the missing values with a placeholder category indicating a missing value.

D.

Move the rows with missing values to your validation dataset.
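To illustrate option C, the null values in the categorical column are replaced by an explicit category so the rows are kept and missingness itself becomes a learnable signal. A minimal sketch; the file and column names are placeholders.

import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset

# Keep the rows and encode missingness explicitly instead of dropping or imputing them.
df["important_category"] = df["important_category"].fillna("MISSING")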

Questions 69

You work for an auto insurance company. You are preparing a proof-of-concept ML application that uses images of damaged vehicles to infer damaged parts. Your team has assembled a set of annotated images from damage claim documents in the company's database. The annotations associated with each image consist of a bounding box for each identified damaged part and the part name. You have been given a sufficient budget to train models on Google Cloud. You need to quickly create an initial model. What should you do?

Options:

A.

Download a pre-trained object detection model from TensorFlow Hub. Fine-tune the model in Vertex AI Workbench by using the annotated image data.

B.

Train an object detection model in AutoML by using the annotated image data.

C.

Create a pipeline in Vertex AI Pipelines and configure the AutoMLTrainingJobRunOp component to train a custom object detection model by using the annotated image data.

D.

Train an object detection model in Vertex AI custom training by using the annotated image data.
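For reference on option B, an AutoML object detection model can be trained from the annotated images with a few calls to the Vertex AI SDK. A minimal sketch; the project, import file, and training budget are placeholders.

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Import file lists the image URIs together with their bounding-box annotations.
dataset = aiplatform.ImageDataset.create(
    display_name="damaged-parts",
    gcs_source="gs://my-bucket/annotations/import_file.jsonl",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.bounding_box,
)

job = aiplatform.AutoMLImageTrainingJob(
    display_name="damaged-parts-detector",
    prediction_type="object_detection",
)

model = job.run(dataset=dataset, budget_milli_node_hours=20000)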

Questions 70

You work for a startup that has multiple data science workloads. Your compute infrastructure is currently on-premises, and the data science workloads are native to PySpark. Your team plans to migrate their data science workloads to Google Cloud. You need to build a proof of concept to migrate one data science job to Google Cloud. You want to propose a migration process that requires minimal cost and effort. What should you do first?

Options:

A.

Create an n2-standard-4 VM instance and install Java, Scala, and Apache Spark dependencies on it.

B.

Create a Google Kubernetes Engine cluster with a basic node pool configuration. Install Java, Scala, and Apache Spark dependencies on it.

C.

Create a Standard (1 master, 3 workers) Dataproc cluster, and run a Vertex AI Workbench notebook instance on it.

D.

Create a Vertex AI Workbench notebook with instance type n2-standard-4.

Questions 71

You created a model that uses BigQuery ML to perform linear regression. You need to retrain the model on the cumulative data collected every week. You want to minimize the development effort and the scheduling cost. What should you do?

Options:

A.

Use BigQuery's scheduling service to run the model retraining query periodically.

B.

Create a pipeline in Vertex AI Pipelines that executes the retraining query, and use the Cloud Scheduler API to run the query weekly.

C.

Use Cloud Scheduler to trigger a Cloud Function every week that runs the query for retraining the model.

D.

Use the BigQuery API Connector and Cloud Scheduler to trigger Workflows every week to retrain the model.
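As a reference for the retraining step itself, the same CREATE OR REPLACE MODEL statement can be run from code or saved as a BigQuery scheduled query. A minimal sketch; the dataset, table, and model names are placeholders.

from google.cloud import bigquery

client = bigquery.Client()

retrain_sql = """
CREATE OR REPLACE MODEL `my_dataset.sales_forecast_model`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['label']) AS
SELECT * FROM `my_dataset.training_data`
"""

client.query(retrain_sql).result()  # retrains on the cumulative table collected to date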

Questions 72

You work for a manufacturing company. You need to train a custom image classification model to detect product defects at the end of an assembly line. Although your model is performing well, some images in your holdout set are consistently mislabeled with high confidence. You want to use Vertex AI to understand your model's results. What should you do?

Options:

A.

Configure feature-based explanations by using sampled Shapley. Set the number of feature permutations to the maximum value of 50.

B.

Create an index by using Vertex AI Matching Engine. Query the index with your mislabeled images.

C.

Configure example-based explanations by using integrated gradients. Set visualization type to pixels, and set clip_percent_upperbound to 95.

D.

Configure example-based explanations. Specify the embedding output layer to be used for the latent space representation.

Questions 73

You need to quickly build and train a model to predict the sentiment of customer reviews with custom categories without writing code. You do not have enough data to train a model from scratch. The resulting model should have high predictive performance. Which service should you use?

Options:

A.

AutoML Natural Language

B.

Cloud Natural Language API

C.

AI Hub pre-made Jupyter Notebooks

D.

AI Platform Training built-in algorithms

Questions 74

You have been asked to develop an input pipeline for an ML training model that processes images from disparate sources at a low latency. You discover that your input data does not fit in memory. How should you create a dataset following Google-recommended best practices?

Options:

A.

Create a tf.data.Dataset.prefetch transformation

B.

Convert the images to tf.Tensor objects, and then run Dataset.from_tensor_slices().

C.

Convert the images to tf.Tensor objects, and then run tf.data.Dataset.from_tensors().

D.

Convert the images into TFRecords, store the images in Cloud Storage, and then use the tf.data API to read the images for training.
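To illustrate option D, TFRecords in Cloud Storage can be streamed with tf.data so the full dataset never has to fit in memory. A minimal sketch; the file pattern, feature names, and image size are placeholders.

import tensorflow as tf

files = tf.data.Dataset.list_files("gs://my-bucket/images/train-*.tfrecord")

def parse_example(serialized):
    features = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(serialized, features)
    image = tf.image.resize(tf.io.decode_jpeg(parsed["image"], channels=3), [224, 224])
    return image, parsed["label"]

dataset = (
    tf.data.TFRecordDataset(files, num_parallel_reads=tf.data.AUTOTUNE)
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(64)
    .prefetch(tf.data.AUTOTUNE)   # overlap preprocessing with training
)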

Questions 75

You are developing ML models with AI Platform for image segmentation on CT scans. You frequently update your model architectures based on the newest available research papers, and have to rerun training on the same dataset to benchmark their performance. You want to minimize computation costs and manual intervention while having version control for your code. What should you do?

Options:

A.

Use Cloud Functions to identify changes to your code in Cloud Storage and trigger a retraining job

B.

Use the gcloud command-line tool to submit training jobs on AI Platform when you update your code.

C.

Use Cloud Build linked with Cloud Source Repositories to trigger retraining when new code is pushed to the repository

D.

Create an automated workflow in Cloud Composer that runs daily and looks for changes in code in Cloud Storage using a sensor.

Questions 76

You are developing a model to help your company create more targeted online advertising campaigns. You need to create a dataset that you will use to train the model. You want to avoid creating or reinforcing unfair bias in the model. What should you do?

Choose 2 answers

Options:

A.

Include a comprehensive set of demographic features.

B.

Include only the demographic groups that most frequently interact with advertisements.

C.

Collect a random sample of production traffic to build the training dataset.

D.

Collect a stratified sample of production traffic to build the training dataset.

E.

Conduct fairness tests across sensitive categories and demographics on the trained model.

Questions 77

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?

Options:

A.

Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset

B.

Create a custom training loop.

C.

Use a TPU with tf.distribute.TPUStrategy.

D.

Increase the batch size.
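To illustrate option D, MirroredStrategy splits each batch across the replicas, so the global batch size should be scaled by the number of GPUs to keep them all busy. A minimal sketch; the model and batch sizes are illustrative.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica_batch_size = 64
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync  # e.g. 256 on 4 GPUs

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# train_dataset is a placeholder tf.data.Dataset; batch it with the global size:
# model.fit(train_dataset.batch(global_batch_size), epochs=10)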

Questions 78

You are an ML engineer at a manufacturing company. You need to build a model that identifies defects in products based on images of the product taken at the end of the assembly line. You want your model to preprocess the images with lower computation to quickly extract features of defects in products. Which approach should you use to build the model?

Options:

A.

Reinforcement learning

B.

Recommender system

C.

Recurrent Neural Networks (RNN)

D.

Convolutional Neural Networks (CNN)

Questions 79

During batch training of a neural network, you notice that there is an oscillation in the loss. How should you adjust your model to ensure that it converges?

Options:

A.

Increase the size of the training batch

B.

Decrease the size of the training batch

C.

Increase the learning rate hyperparameter

D.

Decrease the learning rate hyperparameter
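For context on option D, lowering the optimizer's learning rate shortens the update steps that make the loss overshoot back and forth. A minimal sketch with an illustrative Keras model; the layer sizes and rates are placeholders.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Reduce the learning rate (e.g. from 1e-2 to 1e-4) so the loss can settle instead of oscillating.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss="mse")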

Questions 80

You work for an organization that operates a streaming music service. You have a custom production model that is serving a "next song" recommendation based on a user's recent listening history. Your model is deployed on a Vertex AI endpoint. You recently retrained the same model by using fresh data. The model received positive test results offline. You now want to test the new model in production while minimizing complexity. What should you do?

Options:

A.

Create a new Vertex AI endpoint for the new model and deploy the new model to that new endpoint. Build a service to randomly send 5% of production traffic to the new endpoint. Monitor end-user metrics such as listening time. If end-user metrics improve between models over time, gradually increase the percentage of production traffic sent to the new endpoint.

B.

Capture incoming prediction requests in BigQuery. Create an experiment in Vertex AI Experiments. Run batch predictions for both models using the captured data. Use the user's selected song to compare the models' performance side by side. If the new model's performance metrics are better than the previous model's, deploy the new model to production.

C.

Deploy the new model to the existing Vertex AI endpoint. Use traffic splitting to send 5% of production traffic to the new model. Monitor end-user metrics, such as listening time. If end-user metrics improve between models over time, gradually increase the percentage of production traffic sent to the new model.

D.

Configure a model monitoring job for the existing Vertex AI endpoint. Configure the monitoring job to detect prediction drift, and set a threshold for alerts. Update the model on the endpoint from the previous model to the new model. If you receive an alert of prediction drift, revert to the previous model.
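To illustrate the traffic-splitting approach in option C, the retrained model is deployed to the existing endpoint with a small share of traffic. A minimal sketch with the Vertex AI SDK; the project, endpoint, and model IDs are placeholders.

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

endpoint = aiplatform.Endpoint("projects/my-project/locations/us-central1/endpoints/1234567890")
new_model = aiplatform.Model("projects/my-project/locations/us-central1/models/9876543210")

endpoint.deploy(
    model=new_model,
    deployed_model_display_name="next-song-v2",
    machine_type="n1-standard-4",
    traffic_percentage=5,   # the remaining 95% keeps flowing to the current model
)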

Questions 81

You work on a growing team of more than 50 data scientists who all use AI Platform. You are designing a strategy to organize your jobs, models, and versions in a clean and scalable way. Which strategy should you choose?

Options:

A.

Set up restrictive IAM permissions on the AI Platform notebooks so that only a single user or group can access a given instance.

B.

Separate each data scientist's work into a different project to ensure that the jobs, models, and versions created by each data scientist are accessible only to that user.

C.

Use labels to organize resources into descriptive categories. Apply a label to each created resource so that users can filter the results by label when viewing or monitoring the resources.

D.

Set up a BigQuery sink for Cloud Logging logs that is appropriately filtered to capture information about AI Platform resource usage. In BigQuery, create a SQL view that maps users to the resources they are using.

Questions 82

Your data science team is training a PyTorch model for image classification based on a pre-trained ResNet model. You need to perform hyperparameter tuning to optimize for several parameters. What should you do?

Options:

A.

Convert the model to a Keras model, and run a Keras Tuner job.

B.

Run a hyperparameter tuning job on AI Platform using custom containers.

C.

Create a Kubeflow Pipelines instance, and run a hyperparameter tuning job on Katib.

D.

Convert the model to a TensorFlow model, and run a hyperparameter tuning job on AI Platform.
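For context on option B, the PyTorch training code inside the custom container only needs to accept the tuned parameters as arguments and report the objective metric back to the tuning service. A minimal sketch using the cloudml-hypertune helper; the argument names and metric are placeholders.

import argparse
import hypertune

parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float, default=0.01)
parser.add_argument("--momentum", type=float, default=0.9)
args = parser.parse_args()

# ... build and fine-tune the pre-trained ResNet with args.learning_rate / args.momentum ...
validation_accuracy = 0.0  # placeholder for the metric computed after training

hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag="accuracy",
    metric_value=validation_accuracy,
    global_step=1,
)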

Questions 83

You are developing models to classify customer support emails. You created models with TensorFlow Estimators using small datasets on your on-premises system, but you now need to train the models using large datasets to ensure high performance. You will port your models to Google Cloud and want to minimize code refactoring and infrastructure overhead for easier migration from on-prem to cloud. What should you do?

Options:

A.

Use Vertex AI for distributed training

B.

Create a cluster on Dataproc for training

C.

Create a Managed Instance Group with autoscaling

D.

Use Kubeflow Pipelines to train on a Google Kubernetes Engine cluster.

Questions 84

Your data science team has requested a system that supports scheduled model retraining, Docker containers, and a service that supports autoscaling and monitoring for online prediction requests. Which platform components should you choose for this system?

Options:

A.

Vertex AI Pipelines and App Engine

B.

Vertex AI Pipelines and AI Platform Prediction

C.

Cloud Composer, BigQuery ML, and AI Platform Prediction

D.

Cloud Composer, AI Platform Training with custom containers, and App Engine

Questions 85

You have a large corpus of written support cases that can be classified into 3 separate categories: Technical Support, Billing Support, or Other Issues. You need to quickly build, test, and deploy a service that will automatically classify future written requests into one of the categories. How should you configure the pipeline?

Options:

A.

Use the Cloud Natural Language API to obtain metadata to classify the incoming cases.

B.

Use AutoML Natural Language to build and test a classifier. Deploy the model as a REST API.

C.

Use BigQuery ML to build and test a logistic regression model to classify incoming requests. Use BigQuery ML to perform inference.

D.

Create a TensorFlow model using Google’s BERT pre-trained model. Build and test a classifier, and deploy the model using Vertex AI.

Questions 86

You need to design a customized deep neural network in Keras that will predict customer purchases based on their purchase history. You want to explore model performance using multiple model architectures, store training data, and be able to compare the evaluation metrics in the same dashboard. What should you do?

Options:

A.

Create multiple models using AutoML Tables

B.

Automate multiple training runs using Cloud Composer

C.

Run multiple training jobs on AI Platform with similar job names

D.

Create an experiment in Kubeflow Pipelines to organize multiple runs

Questions 87

You work with a data engineering team that has developed a pipeline to clean your dataset and save it in a Cloud Storage bucket. You have created an ML model and want to use the data to refresh your model as soon as new data is available. As part of your CI/CD workflow, you want to automatically run a Kubeflow Pipelines training job on Google Kubernetes Engine (GKE). How should you architect this workflow?

Options:

A.

Configure your pipeline with Dataflow, which saves the files in Cloud Storage. After the file is saved, start the training job on a GKE cluster.

B.

Use App Engine to create a lightweight Python client that continuously polls Cloud Storage for new files. As soon as a file arrives, initiate the training job.

C.

Configure a Cloud Storage trigger to send a message to a Pub/Sub topic when a new file is available in a storage bucket. Use a Pub/Sub-triggered Cloud Function to start the training job on a GKE cluster.

D.

Use Cloud Scheduler to schedule jobs at a regular interval. For the first step of the job, check the timestamp of objects in your Cloud Storage bucket. If there are no new files since the last run, abort the job.
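To illustrate option C, a Pub/Sub-triggered Cloud Function can start the Kubeflow Pipelines run as soon as the Cloud Storage notification arrives. A minimal sketch; the Kubeflow host, experiment, and pipeline IDs are placeholders, and the kfp client API may differ across SDK versions.

import base64
import json
import kfp

def trigger_training(event, context):
    # The Cloud Storage notification is delivered as a base64-encoded Pub/Sub message.
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    print(f"New file arrived: {message.get('name')}")

    client = kfp.Client(host="https://my-kubeflow-host/pipeline")
    client.run_pipeline(
        experiment_id="my-experiment-id",
        job_name="retrain-on-new-data",
        pipeline_id="my-pipeline-id",
    )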

Questions 88

You have been given a dataset with sales predictions based on your company’s marketing activities. The data is structured and stored in BigQuery, and has been carefully managed by a team of data analysts. You need to prepare a report providing insights into the predictive capabilities of the data. You were asked to run several ML models with different levels of sophistication, including simple models and multilayered neural networks. You only have a few hours to gather the results of your experiments. Which Google Cloud tools should you use to complete this task in the most efficient and self-serviced way?

Options:

A.

Use BigQuery ML to run several regression models, and analyze their performance.

B.

Read the data from BigQuery using Dataproc, and run several models using SparkML.

C.

Use Vertex AI Workbench user-managed notebooks with scikit-learn code for a variety of ML algorithms and performance metrics.

D.

Train a custom TensorFlow model with Vertex AI, reading the data from BigQuery featuring a variety of ML algorithms.

Exam Name: Google Professional Machine Learning Engineer
Last Update: Apr 7, 2026
Questions: 296