
Professional-Data-Engineer Google Professional Data Engineer Exam Questions and Answers

Questions 4

Given the record streams MJTelco is interested in ingesting per day, they are concerned about the cost of Google BigQuery increasing. MJTelco asks you to provide a design solution. They require a single large data table called tracking_table. Additionally, they want to minimize the cost of daily queries while performing fine-grained analysis of each day’s events. They also want to use streaming ingestion. What should you do?

Options:

A.

Create a table called tracking_table and include a DATE column.

B.

Create a partitioned table called tracking_table and include a TIMESTAMP column.

C.

Create sharded tables for each day following the pattern tracking_table_YYYYMMDD.

D.

Create a table called tracking_table with a TIMESTAMP column to represent the day.
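
For reference, a minimal sketch of the kind of partitioned table option B describes, assuming a hypothetical dataset name mjtelco and illustrative column names:

-- Hypothetical dataset and columns. Partitioning on the DATE of a TIMESTAMP
-- column lets a daily query scan only that day's partition, and streamed rows
-- land in the partition matching their event_ts value.
CREATE TABLE mjtelco.tracking_table (
  event_ts  TIMESTAMP,
  device_id STRING,
  payload   STRING
)
PARTITION BY DATE(event_ts);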

Questions 5

Which methods can be used to reduce the number of rows processed by BigQuery?

Options:

A.

Splitting tables into multiple tables; putting data in partitions

B.

Splitting tables into multiple tables; putting data in partitions; using the LIMIT clause

C.

Putting data in partitions; using the LIMIT clause

D.

Splitting tables into multiple tables; using the LIMIT clause

Questions 6

You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?

Options:

A.

Linear regression

B.

Logistic classification

C.

Recurrent neural network

D.

Feedforward neural network

Questions 7

You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?

Options:

A.

The message body for the sensor event is too large.

B.

Your custom endpoint has an out-of-date SSL certificate.

C.

The Cloud Pub/Sub topic has too many messages published to it.

D.

Your custom endpoint is not acknowledging messages within the acknowledgement deadline.

Questions 8

You need to compose visualizations for operations teams with the following requirements:

Which approach meets the requirements?

Options:

A.

Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.

B.

Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.

C.

Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.

D.

Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.

Questions 9

You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources. How should you adjust the database design?

Options:

A.

Add capacity (memory and disk space) to the database server by the order of 200.

B.

Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.

C.

Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-join.

D.

Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.

Questions 10

MJTelco needs you to create a schema in Google Bigtable that will allow for the historical analysis of the last 2 years of records. Each record that comes in is sent every 15 minutes, and contains a unique identifier of the device and a data record. The most common query is for all the data for a given device for a given day. Which schema should you use?

Options:

A.

Rowkey: date#device_id; Column data: data_point

B.

Rowkey: date; Column data: device_id, data_point

C.

Rowkey: device_id; Column data: date, data_point

D.

Rowkey: data_point; Column data: device_id, date

E.

Rowkey: date#data_point; Column data: device_id

Questions 11

You work for a large fast food restaurant chain with over 400,000 employees. You store employee information in Google BigQuery in a Users table consisting of a FirstName field and a LastName field. A member of IT is building an application and asks you to modify the schema and data in BigQuery so the application can query a FullName field consisting of the value of the FirstName field concatenated with a space, followed by the value of the LastName field for each employee. How can you make that data available while minimizing cost?

Options:

A.

Create a view in BigQuery that concatenates the FirstName and LastName field values to produce the FullName.

B.

Add a new column called FullName to the Users table. Run an UPDATE statement that updates the FullName column for each user with the concatenation of the FirstName and LastName values.

C.

Create a Google Cloud Dataflow job that queries BigQuery for the entire Users table, concatenates the FirstName value and LastName value for each user, and loads the proper values for FirstName, LastName, and FullName into a new table in BigQuery.

D.

Use BigQuery to export the data for the table to a CSV file. Create a Google Cloud Dataproc job to process the CSV file and output a new CSV file containing the proper values for FirstName, LastName and FullName. Run a BigQuery load job to load the new CSV file into BigQuery.
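
As an illustration of the view approach in option A, a sketch using a hypothetical dataset name hr_data:

-- Hypothetical dataset name. The view computes FullName at query time, so no
-- data in the Users table is copied or rewritten.
CREATE VIEW hr_data.UsersWithFullName AS
SELECT
  FirstName,
  LastName,
  CONCAT(FirstName, ' ', LastName) AS FullName
FROM hr_data.Users;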

Questions 12

You work for a manufacturing plant that batches application log files together into a single log file once a day at 2:00 AM. You have written a Google Cloud Dataflow job to process that log file. You need to make sure the log file is processed once per day as inexpensively as possible. What should you do?

Options:

A.

Change the processing job to use Google Cloud Dataproc instead.

B.

Manually start the Cloud Dataflow job each morning when you get into the office.

C.

Create a cron job with Google App Engine Cron Service to run the Cloud Dataflow job.

D.

Configure the Cloud Dataflow job as a streaming job so that it processes the log data immediately.

Questions 13

You are deploying a new storage system for your mobile application, which is a media streaming service. You decide the best fit is Google Cloud Datastore. You have entities with multiple properties, some of which can take on multiple values. For example, in the entity ‘Movie’ the property ‘actors’ and the property ‘tags’ have multiple values but the property ‘date released’ does not. A typical query would ask for all movies with actor= ordered by date_released or all movies with tag=Comedy ordered by date_released. How should you avoid a combinatorial explosion in the number of indexes?

Options:

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Questions 14

You work for an economic consulting firm that helps companies identify economic trends as they happen. As part of your analysis, you use Google BigQuery to correlate customer data with the average prices of the 100 most common goods sold, including bread, gasoline, milk, and others. The average prices of these goods are updated every 30 minutes. You want to make sure this data stays up to date so you can combine it with other data in BigQuery as cheaply as possible. What should you do?

Options:

A.

Load the data every 30 minutes into a new partitioned table in BigQuery.

B.

Store and update the data in a regional Google Cloud Storage bucket and create a federated data source in BigQuery

C.

Store the data in Google Cloud Datastore. Use Google Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Cloud Datastore

D.

Store the data in a file in a regional Google Cloud Storage bucket. Use Cloud Dataflow to query BigQuery and combine the data programmatically with the data stored in Google Cloud Storage.

Questions 15

Your company produces 20,000 files every hour. Each data file is formatted as a comma separated values (CSV) file that is less than 4 KB. All files must be ingested on Google Cloud Platform before they can be processed. Your company site has a 200 ms latency to Google Cloud, and your Internet connection bandwidth is limited to 50 Mbps. You currently deploy a secure FTP (SFTP) server on a virtual machine in Google Compute Engine as the data ingestion point. A local SFTP client runs on a dedicated machine to transmit the CSV files as is. The goal is to make reports with data from the previous day available to the executives by 10:00 a.m. each day. This design is barely able to keep up with the current volume, even though the bandwidth utilization is rather low.

You are told that due to seasonality, your company expects the number of files to double for the next three months. Which two actions should you take? (Choose two.)

Options:

A.

Introduce data compression for each file to increase the rate of file transfer.

B.

Contact your internet service provider (ISP) to increase your maximum bandwidth to at least 100 Mbps.

C.

Redesign the data ingestion process to use gsutil tool to send the CSV files to a storage bucket in parallel.

D.

Assemble 1,000 files into a tape archive (TAR) file. Transmit the TAR files instead, and disassemble the CSV files in the cloud upon receiving them.

E.

Create an S3-compatible storage endpoint in your network, and use Google Cloud Storage Transfer Service to transfer on-premises data to the designated storage bucket.

Questions 16

You are designing the database schema for a machine learning-based food ordering service that will predict what users want to eat. Here is some of the information you need to store:

The user profile: What the user likes and doesn’t like to eat

The user account information: Name, address, preferred meal times

The order information: When orders are made, from where, to whom

The database will be used to store all the transactional data of the product. You want to optimize the data schema. Which Google Cloud Platform product should you use?

Options:

A.

BigQuery

B.

Cloud SQL

C.

Cloud Bigtable

D.

Cloud Datastore

Questions 17

Your company is loading comma-separated values (CSV) files into Google BigQuery. The data is imported successfully; however, the imported data does not match the source file byte for byte. What is the most likely cause of this problem?

Options:

A.

The CSV data loaded in BigQuery is not flagged as CSV.

B.

The CSV data has invalid rows that were skipped on import.

C.

The CSV data loaded in BigQuery is not using BigQuery’s default encoding.

D.

The CSV data has not gone through an ETL phase before loading into BigQuery.

Questions 18

Your company has recently grown rapidly and is now ingesting data at a significantly higher rate than it was previously. You manage the daily batch MapReduce analytics jobs in Apache Hadoop. However, the recent increase in data has meant the batch jobs are falling behind. You were asked to recommend ways the development team could increase the responsiveness of the analytics without increasing costs. What should you recommend they do?

Options:

A.

Rewrite the job in Pig.

B.

Rewrite the job in Apache Spark.

C.

Increase the size of the Hadoop cluster.

D.

Decrease the size of the Hadoop cluster but also rewrite the job in Hive.

Questions 19

You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.

Which two actions should you take? (Choose two.)

Options:

A.

Ensure all the tables are included in a global dataset.

B.

Ensure each table is included in a dataset for a region.

C.

Adjust the settings for each table to allow a related region-based security group view access.

D.

Adjust the settings for each view to allow a related region-based security group view access.

E.

Adjust the settings for each dataset to allow a related region-based security group view access.

Questions 20

You have a streaming pipeline that ingests data from Pub/Sub in production. You need to update this streaming pipeline with improved business logic. You need to ensure that the updated pipeline reprocesses the previous two days of delivered Pub/Sub messages. What should you do?

Choose 2 answers

Options:

A.

Use Pub/Sub Seek with a timestamp.

B.

Use the Pub/Sub subscription clear-retry-policy flag.

C.

Create a new Pub/Sub subscription two days before the deployment.

D.

Use the Pub/Sub subscription retain-acked-messages flag.

E.

Use Pub/Sub Snapshot capture two days before the deployment.

Questions 21

You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)

Options:

A.

There are very few occurrences of mutations relative to normal samples.

B.

There are roughly equal occurrences of both normal and mutated samples in the database.

C.

You expect future mutations to have different features from the mutated samples in the database.

D.

You expect future mutations to have similar features to the mutated samples in the database.

E.

You already have labels for which samples are mutated and which are normal in the database.

Questions 22

Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?

Options:

A.

Issue a command to restart the database servers.

B.

Retry the query with exponential backoff, up to a cap of 15 minutes.

C.

Retry the query every second until it comes back online to minimize staleness of data.

D.

Reduce the query frequency to once every hour until the database comes back online.

Questions 23

Which action can a Cloud Dataproc Viewer perform?

Options:

A.

Submit a job.

B.

Create a cluster.

C.

Delete a cluster.

D.

List the jobs.

Questions 24

How would you query specific partitions in a BigQuery table?

Options:

A.

Use the DAY column in the WHERE clause

B.

Use the EXTRACT(DAY) clause

C.

Use the __PARTITIONTIME pseudo-column in the WHERE clause

D.

Use DATE BETWEEN in the WHERE clause
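
For context, the _PARTITIONTIME pseudo-column named in option C is referenced like an ordinary column in the WHERE clause of an ingestion-time partitioned table; the dataset, table, and dates below are hypothetical:

-- Restricts the scan to two daily partitions of an ingestion-time partitioned table.
SELECT *
FROM mydataset.partitioned_events
WHERE _PARTITIONTIME BETWEEN TIMESTAMP('2025-01-01') AND TIMESTAMP('2025-01-02');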

Questions 25

Which of the following statements is NOT true regarding Bigtable access roles?

Options:

A.

Using IAM roles, you cannot give a user access to only one table in a project, rather than all tables in a project.

B.

To give a user access to only one table in a project, grant the user the Bigtable Editor role for that table.

C.

You can configure access control only at the project level.

D.

To give a user access to only one table in a project, you must configure access through your application.

Questions 26

Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?

Options:

A.

Use a row key of the form .

B.

Use a row key of the form .

C.

Use a row key of the form #.

D.

Use a row key of the form >##.

Questions 27

You are building a model to predict whether or not it will rain on a given day. You have thousands of input features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?

Options:

A.

Eliminate features that are highly correlated to the output labels.

B.

Combine highly co-dependent features into one representative feature.

C.

Instead of feeding in each feature individually, average their values in batches of 3.

D.

Remove the features that have null values for more than 50% of the training records.

Questions 28

Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour.

The data scientists have written the following code to read the data for new key features in the logs.

BigQueryIO.Read

.named("ReadLogData")

.from("clouddataflow-readonly:samples.log_data")

You want to improve the performance of this data read. What should you do?

Options:

A.

Specify the TableReference object in the code.

B.

Use .fromQuery operation to read specific fields from the table.

C.

Use of both the Google BigQuery TableSchema and TableFieldSchema classes.

D.

Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.

Questions 29

You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change its data type to the TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do?

Options:

A.

Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type. Reload the data.

B.

Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column DT for each row. Reference the column TS instead of the column DT from now on.

C.

Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values. Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.

D.

Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.

E.

Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
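
A sketch of the view described in option C, assuming the column DT holds epoch seconds stored as strings (the dataset name web is hypothetical):

-- Casts the STRING epoch value to TIMESTAMP at query time; the underlying
-- table and its loaded data are left untouched.
CREATE VIEW web.CLICK_STREAM_V AS
SELECT
  * EXCEPT (DT),
  TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS
FROM web.CLICK_STREAM;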

Questions 30

Your company’s on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?

Options:

A.

Put the data into Google Cloud Storage.

B.

Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.

C.

Tune the Cloud Dataproc cluster so that there is just enough disk for all data.

D.

Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.

Questions 31

Your company handles data processing for a number of different clients. Each client prefers to use their own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other’s data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)

Options:

A.

Load data into different partitions.

B.

Load data into a different dataset for each client.

C.

Put each client’s BigQuery dataset into a different table.

D.

Restrict a client’s dataset to approved users.

E.

Only allow a service account to access the datasets.

F.

Use the appropriate identity and access management (IAM) roles for each client’s users.
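
As an example of the dataset-level access control that options B, D, and F point toward, BigQuery's SQL DCL can grant a role on one client's dataset to that client's group; the dataset and group names here are hypothetical:

-- Grants read-only access on a single client's dataset to that client's
-- analyst group, leaving other clients' datasets inaccessible to them.
GRANT `roles/bigquery.dataViewer`
ON SCHEMA client_a_data
TO "group:client-a-analysts@example.com";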

Questions 32

Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?

Options:

A.

Run a local version of Jupyter on the laptop.

B.

Grant the user access to Google Cloud Shell.

C.

Host a visualization tool on a VM on Google Compute Engine.

D.

Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.

Questions 33

You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling.

Which Google database service should you use?

Options:

A.

Cloud SQL

B.

BigQuery

C.

Cloud Bigtable

D.

Cloud Datastore

Questions 34

You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?

Options:

A.

Make a call to the Stackdriver API to list all logs, and apply an advanced filter.

B.

In the Stackdriver logging admin interface, enable a log sink export to BigQuery.

C.

In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.

D.

Using the Stackdriver API, create a project sink with advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.

Questions 35

Your company’s customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?

Options:

A.

Add a node to the MySQL cluster and build an OLAP cube there.

B.

Use an ETL tool to load the data from MySQL into Google BigQuery.

C.

Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.

D.

Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.

Questions 36

Which of these statements about BigQuery caching is true?

Options:

A.

By default, a query's results are not cached.

B.

BigQuery caches query results for 48 hours.

C.

Query results are cached even if you specify a destination table.

D.

There is no charge for a query that retrieves its results from cache.

Questions 37

Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose three.)

Options:

A.

Disable writes to certain tables.

B.

Restrict access to tables by role.

C.

Ensure that the data is encrypted at all times.

D.

Restrict BigQuery API access to approved users.

E.

Segregate data across multiple tables or databases.

F.

Use Google Stackdriver Audit Logging to determine policy violations.

Questions 38

You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. Initially, you designed the application to use streaming inserts for individual postings. Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data. How can you adjust your application design?

Options:

A.

Re-write the application to load accumulated data every 2 minutes.

B.

Convert the streaming insert code to batch load for individual messages.

C.

Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.

D.

Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.

Questions 39

Your company is using wildcard tables to query data across multiple tables with similar names. The SQL statement is currently failing with the following error:

# Syntax error : Expected end of statement but got “-“ at [4:11]

SELECT age

FROM

bigquery-public-data.noaa_gsod.gsod

WHERE

age != 99

AND _TABLE_SUFFIX = '1929'

ORDER BY

age DESC

Which table name will make the SQL statement work correctly?

Options:

A.

'bigquery-public-data.noaa_gsod.gsod'

B.

bigquery-public-data.noaa_gsod.gsod*

C.

'bigquery-public-data.noaa_gsod.gsod'*

D.

`bigquery-public-data.noaa_gsod.gsod*`
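
For context, a wildcard query over this public dataset is typically written with the table name backtick-quoted, for example:

-- Backticks quote the project-qualified wildcard table name; _TABLE_SUFFIX
-- then filters which yearly tables are scanned.
SELECT age
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY age DESC;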

Questions 40

Which of the following statements about Legacy SQL and Standard SQL is not true?

Options:

A.

Standard SQL is the preferred query language for BigQuery.

B.

If you write a query in Legacy SQL, it might generate an error if you try to run it with Standard SQL.

C.

One difference between the two query languages is how you specify fully-qualified table names (i.e. table names that include their associated project name).

D.

You need to set a query language for each dataset and the default is Standard SQL.

Questions 41

What Dataflow concept determines when a Window's contents should be output based on certain criteria being met?

Options:

A.

Sessions

B.

OutputCriteria

C.

Windows

D.

Triggers

Questions 42

Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?

Options:

A.

Store the common data in BigQuery as partitioned tables.

B.

Store the common data in BigQuery and expose authorized views.

C.

Store the common data encoded as Avro in Google Cloud Storage.

D.

Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.

Questions 43

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?

Options:

A.

Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.

B.

Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.

C.

Use the NOW() function in BigQuery to record the event’s time.

D.

Use the automatically generated timestamp from Cloud Pub/Sub to order the data.

Questions 44

What are two methods that can be used to denormalize tables in BigQuery?

Options:

A.

1) Split table into multiple tables; 2) Use a partitioned table

B.

1) Join tables into one table; 2) Use nested repeated fields

C.

1) Use a partitioned table; 2) Join tables into one table

D.

1) Use nested repeated fields; 2) Use a partitioned table
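
To illustrate the nested repeated fields mentioned in the options, a denormalized orders table might be declared as follows (all names are hypothetical):

-- One row per order; repeated line items are stored in place as an ARRAY of
-- STRUCTs instead of in a separate normalized table, avoiding a join at query time.
CREATE TABLE sales.orders_denormalized (
  order_id   STRING,
  customer   STRUCT<name STRING, email STRING>,
  line_items ARRAY<STRUCT<sku STRING, quantity INT64, unit_price NUMERIC>>
);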

Questions 45

The YARN ResourceManager and the HDFS NameNode interfaces are available on a Cloud Dataproc cluster ____.

Options:

A.

application node

B.

conditional node

C.

master node

D.

worker node

Questions 46

Which Cloud Dataflow / Beam feature should you use to aggregate data in an unbounded data source every hour based on the time when the data entered the pipeline?

Options:

A.

An hourly watermark

B.

An event time trigger

C.

The withAllowedLateness method

D.

A processing time trigger

Questions 47

By default, which of the following windowing behavior does Dataflow apply to unbounded data sets?

Options:

A.

Windows at every 100 MB of data

B.

Single, Global Window

C.

Windows at every 1 minute

D.

Windows at every 10 minutes

Questions 48

You are loading CSV files from Cloud Storage to BigQuery. The files have known data quality issues, including mismatched data types, such as STRINGs and INT64s in the same column, and inconsistent formatting of values such as phone numbers or addresses. You need to create the data pipeline to maintain data quality and perform the required cleansing and transformation. What should you do?

Options:

A.

Use Data Fusion to transform the data before loading it into BigQuery.

B.

Load the CSV files into a staging table with the desired schema, perform the transformations with SQL, and then write the results to the final destination table.

C.

Create a table with the desired schema, load the CSV files into the table, and perform the transformations in place using SQL.

D.

Use Data Fusion to convert the CSV files to a self-describing data format, such as Avro, before loading the data to BigQuery.

Questions 49

You want to migrate an Apache Spark 3 batch job from on-premises to Google Cloud. You need to minimally change the job so that the job reads from Cloud Storage and writes the result to BigQuery. Your job is optimized for Spark, where each executor has 8 vCPU and 16 GB memory, and you want to be able to choose similar settings. You want to minimize installation and management effort to run your job. What should you do?

Options:

A.

Execute the job in a new Dataproc cluster.

B.

Execute as a Dataproc Serverless job.

C.

Execute the job as part of a deployment in a new Google Kubernetes Engine cluster.

D.

Execute the job from a new Compute Engine VM.

Questions 50

You have a job that you want to cancel. It is a streaming pipeline, and you want to ensure that any data that is in-flight is processed and written to the output. Which of the following commands can you use on the Dataflow monitoring console to stop the pipeline job?

Options:

A.

Cancel

B.

Drain

C.

Stop

D.

Finish

Questions 51

What are two of the characteristics of using online prediction rather than batch prediction?

Options:

A.

It is optimized to handle a high volume of data instances in a job and to run more complex models.

B.

Predictions are returned in the response message.

C.

Predictions are written to output files in a Cloud Storage location that you specify.

D.

It is optimized to minimize the latency of serving predictions.

Questions 52

Does Dataflow process batch data pipelines or streaming data pipelines?

Options:

A.

Only Batch Data Pipelines

B.

Both Batch and Streaming Data Pipelines

C.

Only Streaming Data Pipelines

D.

None of the above

Questions 53

Flowlogistic’s CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they’ve purchased a visualization tool to simplify the creation of BigQuery reports. However, they’ve been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?

Options:

A.

Export the data into a Google Sheet for visualization.

B.

Create an additional table with only the necessary columns.

C.

Create a view on the table to present to the visualization tool.

D.

Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.

Questions 54

Which row keys are likely to cause a disproportionate number of reads and/or writes on a particular node in a Bigtable cluster (select 2 answers)?

Options:

A.

A sequential numeric ID

B.

A timestamp followed by a stock symbol

C.

A non-sequential numeric ID

D.

A stock symbol followed by a timestamp

Questions 55

You are choosing a NoSQL database to handle telemetry data submitted from millions of Internet-of-Things (IoT) devices. The volume of data is growing at 100 TB per year, and each data entry has about 100 attributes. The data processing pipeline does not require atomicity, consistency, isolation, and durability (ACID). However, high availability and low latency are required.

You need to analyze the data by querying against individual fields. Which three databases meet your requirements? (Choose three.)

Options:

A.

Redis

B.

HBase

C.

MySQL

D.

MongoDB

E.

Cassandra

F.

HDFS with Hive

Questions 56

You need to migrate a Redis database from an on-premises data center to a Memorystore for Redis instance. You want to follow Google-recommended practices and perform the migration for minimal cost, time, and effort. What should you do?

Options:

A.

Make a secondary instance of the Redis database on a Compute Engine instance, and then perform a live cutover.

B.

Write a shell script to migrate the Redis data, and create a new Memorystore for Redis instance.

C.

Create a Dataflow job to read the Redis database from the on-premises data center, and write the data to a Memorystore for Redis instance.

D.

Make an RDB backup of the Redis database, use the gsutil utility to copy the RDB file into a Cloud Storage bucket, and then import the RDB file into the Memorystore for Redis instance.

Questions 57

Flowlogistic’s management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?

Options:

A.

Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

B.

Cloud Pub/Sub, Cloud Dataflow, and Local SSD

C.

Cloud Pub/Sub, Cloud SQL, and Cloud Storage

D.

Cloud Load Balancing, Cloud Dataflow, and Cloud Storage

Questions 58

The data analyst team at your company uses BigQuery for ad-hoc queries and scheduled SQL pipelines in a Google Cloud project with a slot reservation of 2000 slots. However, with the recent introduction of hundreds of new non-time-sensitive SQL pipelines, the team is encountering frequent quota errors. You examine the logs and notice that approximately 1500 queries are being triggered concurrently during peak time. You need to resolve the concurrency issue. What should you do?

Options:

A.

Update SQL pipelines and ad-hoc queries to run as interactive query jobs.

B.

Increase the slot capacity of the project with baseline as 0 and maximum reservation size as 3000.

C.

Update SQL pipelines to run as a batch query, and run ad-hoc queries as interactive query jobs.

D.

Increase the slot capacity of the project with baseline as 2000 and maximum reservation size as 3000.

Questions 59

You work for a mid-sized enterprise that needs to move its operational system transaction data from an on-premises database to GCP. The database is about 20 TB in size. Which database should you choose?

Options:

A.

Cloud SQL

B.

Cloud Bigtable

C.

Cloud Spanner

D.

Cloud Datastore

Questions 60

You store historic data in Cloud Storage. You need to perform analytics on the historic data. You want to use a solution to detect invalid data entries and perform data transformations that will not require programming or knowledge of SQL.

What should you do?

Options:

A.

Use Cloud Dataflow with Beam to detect errors and perform transformations.

B.

Use Cloud Dataprep with recipes to detect errors and perform transformations.

C.

Use Cloud Dataproc with a Hadoop job to detect errors and perform transformations.

D.

Use federated tables in BigQuery with queries to detect errors and perform transformations.

Questions 61

You currently use a SQL-based tool to visualize your data stored in BigQuery. The data visualizations require the use of outer joins and analytic functions. Visualizations must be based on data that is no less than 4 hours old. Business users are complaining that the visualizations are too slow to generate. You want to improve the performance of the visualization queries while minimizing the maintenance overhead of the data preparation pipeline. What should you do?

Options:

A.

Create materialized views with the allow_non_incremental_definition option set to true for the visualization queries. Set the max_staleness parameter to 4 hours and the enable_refresh parameter to true. Reference the materialized views in the data visualization tool.

B.

Create views for the visualization queries. Reference the views in the data visualization tool.

C.

Create materialized views for the visualization queries. Use the incremental updates capability of BigQuery materialized views to handle changed data automatically. Reference the materialized views in the data visualization tool.

D.

Create a Cloud Function instance to export the visualization query results as Parquet files to a Cloud Storage bucket. Use Cloud Scheduler to trigger the Cloud Function every 4 hours. Reference the Parquet files in the data visualization tool.
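
As a rough sketch of the materialized view options named in option A (the dataset, tables, and query body are hypothetical, and the OPTIONS syntax should be checked against current BigQuery DDL documentation):

-- Non-incremental materialized view refreshed in the background; queries may
-- read results up to 4 hours stale instead of recomputing the join each time.
CREATE MATERIALIZED VIEW reporting.sales_summary_mv
OPTIONS (
  allow_non_incremental_definition = true,
  enable_refresh = true,
  refresh_interval_minutes = 60,
  max_staleness = INTERVAL "4:0:0" HOUR TO SECOND
)
AS
SELECT
  r.region,
  SUM(o.amount) AS total_amount
FROM reporting.orders AS o
LEFT JOIN reporting.regions AS r ON o.region_id = r.region_id
GROUP BY r.region;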

Questions 62

Your company is migrating its on-premises data warehousing solution to BigQuery. The existing data warehouse uses trigger-based change data capture (CDC) to apply daily updates from transactional database sources. Your company wants to use BigQuery to improve its handling of CDC and to optimize the performance of the data warehouse. Source system changes must be available for query in near-real time using log-based CDC streams. You need to ensure that changes in the BigQuery reporting table are available with minimal latency and reduced overhead. What should you do? Choose 2 answers

Options:

A.

Perform a DML INSERT, UPDATE, or DELETE to replicate each CDC record in the reporting table in real time.

B.

Periodically DELETE outdated records from the reporting table. Periodically use a DML MERGE to simultaneously perform DML INSERT, UPDATE, and DELETE operations in the reporting table.

C.

Insert each new CDC record and corresponding operation type into a staging table in real time

D.

Insert each new CDC record and corresponding operation type into the reporting table in real time and use a materialized view to expose only the current version of each unique record.
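
To sketch the staging-plus-MERGE pattern referenced in options B and C, a periodic MERGE can fold staged CDC records into the reporting table; the table names, key column, and op column are hypothetical, and the staging table is assumed to hold one latest change per key:

-- Applies each staged change, keyed on id: delete, update, or insert.
MERGE warehouse.reporting_table AS t
USING warehouse.cdc_staging AS s
ON t.id = s.id
WHEN MATCHED AND s.op = 'DELETE' THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET t.col_a = s.col_a, t.col_b = s.col_b
WHEN NOT MATCHED AND s.op != 'DELETE' THEN
  INSERT (id, col_a, col_b) VALUES (s.id, s.col_a, s.col_b);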

Questions 63

You have one BigQuery dataset which includes customers' street addresses. You want to retrieve all occurrences of street addresses from the dataset. What should you do?

Options:

A.

Create a deep inspection job on each table in your dataset with Cloud Data Loss Prevention and create an inspection template that includes the STREET_ADDRESS infoType.

B.

Create a de-identification job in Cloud Data Loss Prevention and use the masking transformation.

C.

Write a SQL query in BigQuery by using REGEXP_CONTAINS on all tables in your dataset to find rows where the word "street" appears.

D.

Create a discovery scan configuration on your organization with Cloud Data Loss Prevention and create an inspection template that includes the STREET_ADDRESS infoType.

Questions 64

You are collecting IoT sensor data from millions of devices across the world and storing the data in BigQuery. Your access pattern is based on recent data filtered by location_id and device_version with the following query:

You want to optimize your queries for cost and performance. How should you structure your data?

Options:

A.

Partition table data by create_date, location_id and device_version

B.

Partition table data by create_date. Cluster table data by location_id and device_version.

C.

Cluster table data by create_date, location_id, and device_version.

D.

Cluster table data by create_date, partition by location and device_version
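
For reference, a table partitioned on create_date and clustered on location_id and device_version, as described in option B, could be declared like this (the dataset name and extra columns are hypothetical):

-- Daily partitions are pruned by the create_date filter; clustering orders data
-- within each partition so filters on location_id and device_version scan fewer blocks.
CREATE TABLE iot.telemetry (
  create_date    DATE,
  location_id    STRING,
  device_version STRING,
  payload        STRING
)
PARTITION BY create_date
CLUSTER BY location_id, device_version;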

Questions 65

Your organization is modernizing their IT services and migrating to Google Cloud. You need to organize the data that will be stored in Cloud Storage and BigQuery. You need to enable a data mesh approach to share the data between sales, product design, and marketing departments. What should you do?

Options:

A.

1. Create a project for storage of the data for your organization. 2. Create a central Cloud Storage bucket with three folders to store the files for each department. 3. Create a central BigQuery dataset with tables prefixed with the department name. 4. Give viewer rights for the storage project to the users of your departments.

B.

1. Create a project for storage of the data for each of your departments. 2. Enable each department to create Cloud Storage buckets and BigQuery datasets. 3. Create user groups for authorized readers for each bucket and dataset. 4. Enable the IT team to administer the user groups to add or remove users as the departments request.

C.

1. Create multiple projects for storage of the data for each of your departments' applications. 2. Enable each department to create Cloud Storage buckets and BigQuery datasets. 3. Publish the data that each department shared in Analytics Hub. 4. Enable all departments to discover and subscribe to the data they need in Analytics Hub.

D.

1. Create multiple projects for storage of the data for each of your departments' applications. 2. Enable each department to create Cloud Storage buckets and BigQuery datasets. 3. In Dataplex, map each department to a data lake and the Cloud Storage buckets, and map the BigQuery datasets to zones. 4. Enable each department to own and share the data of their data lakes.

Questions 66

You need to create a SQL pipeline. The pipeline runs an aggregate SQL transformation on a BigQuery table every two hours and appends the result to another existing BigQuery table. You need to configure the pipeline to retry if errors occur. You want the pipeline to send an email notification after three consecutive failures. What should you do?

Options:

A.

Create a BigQuery scheduled query to run the SQL transformation with schedule options that repeat every two hours, and enable email notifications.

B.

Use the BigQueryUpsertTableOperator in Cloud Composer, set the retry parameter to three, and set the email_on_failure parameter to true.

C.

Use the BigQueryInsertJobOperator in Cloud Composer, set the retry parameter to three, and set the email_on_failure parameter to true.

D.

Create a BigQuery scheduled query to run the SQL transformation with schedule options that repeat every two hours, and enable notification to a Pub/Sub topic. Use Pub/Sub and Cloud Functions to send an email after three failed executions.

Questions 67

You have an Oracle database deployed in a VM as part of a Virtual Private Cloud (VPC) network. You want to replicate and continuously synchronize 50 tables to BigQuery. You want to minimize the need to manage infrastructure. What should you do?

Options:

A.

Create a Datastream service from Oracle to BigQuery, use a private connectivity configuration to the same VPC network, and a connection profile to BigQuery.

B.

Create a Pub/Sub subscription to write to BigQuery directly. Deploy the Debezium Oracle connector to capture changes in the Oracle database, and sink to the Pub/Sub topic.

C.

Deploy Apache Kafka in the same VPC network, use Kafka Connect Oracle Change Data Capture (CDC), and Dataflow to stream the Kafka topic to BigQuery.

D.

Deploy Apache Kafka in the same VPC network, use Kafka Connect Oracle change data capture (CDC), and the Kafka Connect Google BigQuery Sink Connector.

Questions 68

You want to build a managed Hadoop system as your data lake. The data transformation process is composed of a series of Hadoop jobs executed in sequence. To accomplish the design of separating storage from compute, you decided to use the Cloud Storage connector to store all input data, output data, and intermediary data. However, you noticed that one Hadoop job runs very slowly with Cloud Dataproc, when compared with the on-premises bare-metal Hadoop environment (8-core nodes with 100-GB RAM). Analysis shows that this particular Hadoop job is disk I/O intensive. You want to resolve the issue. What should you do?

Options:

A.

Allocate sufficient memory to the Hadoop cluster, so that the intermediary data of that particular Hadoop job can be held in memory

B.

Allocate sufficient persistent disk space to the Hadoop cluster, and store the intermediate data of that particular Hadoop job on native HDFS

C.

Allocate more CPU cores of the virtual machine instances of the Hadoop cluster so that the networking bandwidth for each instance can scale up

D.

Allocate additional network interface card (NIC), and configure link aggregation in the operating system to use the combined throughput when working with Cloud Storage

Questions 69

You’re training a model to predict housing prices based on an available dataset with real estate properties. Your plan is to train a fully connected neural net, and you’ve discovered that the dataset contains latitude and longitude of the property. Real estate professionals have told you that the location of the property is highly influential on price, so you’d like to engineer a feature that incorporates this physical dependency.

What should you do?

Options:

A.

Provide latitude and longitude as input vectors to your neural net.

B.

Create a numeric column from a feature cross of latitude and longitude.

C.

Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L1 regularization during optimization.

D.

Create a feature cross of latitude and longitude, bucketize it at the minute level, and use L2 regularization during optimization.

Questions 70

Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits well for the training data. However, when tested against new data, it performs poorly. What method can you employ to address this?

Options:

A.

Threading

B.

Serialization

C.

Dropout Methods

D.

Dimensionality Reduction

Questions 71

You need to give new website users a globally unique identifier (GUID) using a service that takes in data points and returns a GUID. This data is sourced from both internal and external systems via HTTP calls that you will make via microservices within your pipeline. There will be tens of thousands of messages per second, and these calls can be multithreaded, and you worry about the backpressure on the system. How should you design your pipeline to minimize that backpressure?

Options:

A.

Call out to the service via HTTP

B.

Create the pipeline statically in the class definition

C.

Create a new object in the startBundle method of DoFn

D.

Batch the job into ten-second increments

Questions 72

You want to migrate an on-premises Hadoop system to Cloud Dataproc. Hive is the primary tool in use, and the data format is Optimized Row Columnar (ORC). All ORC files have been successfully copied to a Cloud Storage bucket. You need to replicate some data to the cluster’s local Hadoop Distributed File System (HDFS) to maximize performance. What are two ways to start using Hive in Cloud Dataproc? (Choose two.)

Options:

A.

Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to HDFS. Mount the Hive tables locally.

B.

Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to any node of the Dataproc cluster. Mount the Hive tables locally.

C.

Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to the master node of the Dataproc cluster. Then run the Hadoop utility to copy them to HDFS. Mount the Hive tables from HDFS.

D.

Leverage Cloud Storage connector for Hadoop to mount the ORC files as external Hive tables. Replicate external Hive tables to the native ones.

E.

Load the ORC files into BigQuery. Leverage BigQuery connector for Hadoop to mount the BigQuery tables as external Hive tables. Replicate external Hive tables to the native ones.

Questions 73

You are operating a streaming Cloud Dataflow pipeline. Your engineers have a new version of the pipeline with a different windowing algorithm and triggering strategy. You want to update the running pipeline with the new version. You want to ensure that no data is lost during the update. What should you do?

Options:

A.

Update the Cloud Dataflow pipeline inflight by passing the --update option with the --jobName set to the existing job name

B.

Update the Cloud Dataflow pipeline inflight by passing the --update option with the --jobName set to a new unique job name

C.

Stop the Cloud Dataflow pipeline with the Cancel option. Create a new Cloud Dataflow job with the updated code

D.

Stop the Cloud Dataflow pipeline with the Drain option. Create a new Cloud Dataflow job with the updated code

Exam Name: Google Professional Data Engineer Exam
Last Update: Sep 15, 2025
Questions: 383