
Databricks-Generative-AI-Engineer-Associate Databricks Certified Generative AI Engineer Associate Questions and Answers

Questions 4

A Generative AI Engineer has created a RAG application to look up answers to questions about a series of fantasy novels that are being asked on the author's web forum. The fantasy novel texts are chunked and embedded into a vector store with metadata (page number, chapter number, book title), retrieved with the user's query, and provided to an LLM for response generation. The Generative AI Engineer used their intuition to pick the chunking strategy and associated configurations but now wants to more methodically choose the best values.

Which TWO strategies should the Generative AI Engineer take to optimize their chunking strategy and parameters? (Choose two.)

Options:

A.

Change embedding models and compare performance.

B.

Add a classifier for user queries that predicts which book will best contain the answer. Use this to filter retrieval.

C.

Choose an appropriate evaluation metric (such as recall or NDCG) and experiment with changes in the chunking strategy, such as splitting chunks by paragraphs or chapters. Choose the strategy that gives the best performance metric.

D.

Pass known questions and best answers to an LLM and instruct the LLM to provide the best token count. Use a summary statistic (mean, median, etc.) of the best token counts to choose chunk size.

E.

Create an LLM-as-a-judge metric to evaluate how well previous questions are answered by the most appropriate chunk. Optimize the chunking parameters based upon the values of the metric.
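
A minimal sketch of what option C's "pick a metric and compare strategies" approach can look like, assuming retrieval results have already been collected for each candidate chunking strategy against a small labeled set of question-to-chunk mappings (all data, chunk IDs, and names below are illustrative):

```python
from typing import Dict, List, Set

def recall_at_k(retrieved: Dict[str, List[str]],
                relevant: Dict[str, Set[str]],
                k: int = 5) -> float:
    """Average fraction of relevant chunks that appear in the top-k results."""
    scores = []
    for question, relevant_ids in relevant.items():
        top_k = set(retrieved.get(question, [])[:k])
        scores.append(len(top_k & relevant_ids) / max(len(relevant_ids), 1))
    return sum(scores) / len(scores)

# Hypothetical retrieval results produced by two different chunking strategies.
results_by_strategy = {
    "split_by_paragraph": {"Who forged the king's blade?": ["c12", "c07", "c33"]},
    "split_by_chapter":   {"Who forged the king's blade?": ["c02", "c90", "c12"]},
}
ground_truth = {"Who forged the king's blade?": {"c12"}}

for strategy, retrieved in results_by_strategy.items():
    print(strategy, recall_at_k(retrieved, ground_truth, k=3))
```

The same loop works with NDCG or with an LLM-as-a-judge score (option E); the point is that a chosen metric, rather than intuition, decides between chunking configurations.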

Questions 5

A Generative AI Engineer developed an LLM application using the provisioned throughput Foundation Model API. Now that the application is ready to be deployed, they realize their volume of requests is not high enough to justify creating their own provisioned throughput endpoint. They want to choose a strategy that ensures the best cost-effectiveness for their application.

What strategy should the Generative AI Engineer use?

Options:

A.

Switch to using External Models instead

B.

Deploy the model using pay-per-token throughput as it comes with cost guarantees

C.

Change to a model with fewer parameters in order to reduce hardware constraint issues

D.

Throttle the incoming batch of requests manually to avoid rate limiting issues
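
For context on option B, a minimal sketch of calling a Databricks pay-per-token Foundation Model endpoint through MLflow Deployments; it assumes the code runs inside a Databricks workspace, and the endpoint name is only an example to be replaced with whatever pay-per-token endpoint appears in the Serving UI:

```python
from mlflow.deployments import get_deploy_client

client = get_deploy_client("databricks")

# Pay-per-token endpoints are pre-provisioned by Databricks, so there is no
# dedicated endpoint infrastructure to stand up for low request volumes.
response = client.predict(
    endpoint="databricks-meta-llama-3-1-70b-instruct",  # example endpoint name
    inputs={
        "messages": [{"role": "user", "content": "Summarize our refund policy in one sentence."}],
        "max_tokens": 128,
    },
)
print(response)
```

Because billing is per token processed rather than per provisioned GPU capacity, a low-volume application only pays for what it actually uses.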

Questions 6

A company has a typical RAG-enabled, customer-facing chatbot on its website.

Select the correct sequence of components a user's question will go through before the final output is returned. Use the diagram above for reference.

Options:

A.

1. embedding model, 2. vector search, 3. context-augmented prompt, 4. response-generating LLM

B.

1. context-augmented prompt, 2. vector search, 3. embedding model, 4. response-generating LLM

C.

1. response-generating LLM, 2. vector search, 3. context-augmented prompt, 4. embedding model

D.

1. response-generating LLM, 2. context-augmented prompt, 3. vector search, 4. embedding model
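
A minimal sketch of the request flow described in option A, with toy stand-ins for each of the four components (none of the functions below are real Databricks or model APIs; they exist only to show the ordering):

```python
def embed(text: str) -> list[float]:                          # 1. embedding model
    return [float(ord(c)) for c in text[:8]]

def vector_search(query_vector: list[float]) -> list[str]:    # 2. vector search
    knowledge_base = ["Returns are accepted within 30 days of purchase."]
    return knowledge_base                                      # toy: return everything

def build_prompt(question: str, context: list[str]) -> str:   # 3. context-augmented prompt
    context_block = "\n".join(context)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {question}"

def generate(prompt: str) -> str:                              # 4. response-generating LLM
    return f"[LLM answer based on a prompt of {len(prompt)} characters]"

question = "What is the return policy?"
answer = generate(build_prompt(question, vector_search(embed(question))))
print(answer)
```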

Questions 7

What is the most suitable library for building a multi-step LLM-based workflow?

Options:

A.

Pandas

B.

TensorFlow

C.

PySpark

D.

LangChain
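
A minimal LangChain sketch of a multi-step workflow (option D), piping a summarization step into a translation step with LCEL; it assumes the langchain-openai package and an OPENAI_API_KEY, and the model name is only an example:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model name; swap for your provider

# Step 1: summarize the input text.
summarize = ChatPromptTemplate.from_template("Summarize in one sentence:\n{text}") | llm | StrOutputParser()
# Step 2: translate the summary.
translate = ChatPromptTemplate.from_template("Translate to French:\n{summary}") | llm | StrOutputParser()

# LCEL composition: the output of step 1 feeds step 2 as a multi-step chain.
workflow = {"summary": summarize} | translate
print(workflow.invoke({"text": "Databricks Vector Search indexes document embeddings for retrieval."}))
```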

Questions 8

A Generative AI Engineer has created a RAG application which can help employees retrieve answers from an internal knowledge base, such as Confluence pages or Google Drive. The prototype application is now working, with some positive feedback from internal company testers. Now the Generative AI Engineer wants to formally evaluate the system's performance and understand where to focus their efforts to further improve the system.

How should the Generative AI Engineer evaluate the system?

Options:

A.

Use cosine similarity score to comprehensively evaluate the quality of the final generated answers.

B.

Curate a dataset that can test the retrieval and generation components of the system separately. Use MLflow’s built in evaluation metrics to perform the evaluation on the retrieval and generation components.

C.

Benchmark multiple LLMs with the same data and pick the best LLM for the job.

D.

Use an LLM-as-a-judge to evaluate the quality of the final answers generated.
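
A minimal sketch of option B, assuming MLflow 2.9+ and that retrieved chunks and generated answers for a curated test set have already been collected into a DataFrame (the column names below are illustrative):

```python
import mlflow
import pandas as pd

eval_df = pd.DataFrame({
    "question": ["How many vacation days do new employees get?"],
    "retrieved_context": [["New employees accrue 15 vacation days per year."]],
    "ground_truth_context": [["New employees accrue 15 vacation days per year."]],
    "generated_answer": ["New employees get 15 vacation days per year."],
    "ground_truth_answer": ["15 vacation days per year."],
})

# Retrieval component: precision/recall/NDCG at k over the retrieved chunks.
retrieval_results = mlflow.evaluate(
    data=eval_df,
    predictions="retrieved_context",
    targets="ground_truth_context",
    model_type="retriever",
    evaluator_config={"retriever_k": 3},
)

# Generation component: built-in question-answering metrics on the final answers.
generation_results = mlflow.evaluate(
    data=eval_df,
    predictions="generated_answer",
    targets="ground_truth_answer",
    model_type="question-answering",
)
print(retrieval_results.metrics, generation_results.metrics)
```

Evaluating the two components separately shows whether poor answers come from retrieval misses or from the generation step.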

Questions 9

A Generative AI Engineer is developing a RAG application and would like to experiment with different embedding models to improve the application performance.

Which strategy for picking an embedding model should they choose?

Options:

A.

Pick an embedding model trained on related domain knowledge

B.

Pick the most recent and most performant open LLM released at the time

C.

Pick the embedding model ranked highest on the Massive Text Embedding Benchmark (MTEB) leaderboard hosted by HuggingFace

D.

Pick an embedding model with multilingual support to support potential multilingual user questions

Questions 10

Which indicator should be considered to evaluate the safety of the LLM outputs when qualitatively assessing LLM responses for a translation use case?

Options:

A.

The ability to generate responses in code

B.

The similarity to the previous language

C.

The latency of the response and the length of text generated

D.

The accuracy and relevance of the responses

Questions 11

When developing an LLM application, it’s crucial to ensure that the data used for training the model complies with licensing requirements to avoid legal risks.

Which action is NOT appropriate to avoid legal risks?

Options:

A.

Reach out to the data curators directly before you have started using the trained model to let them know.

B.

Use any available data you personally created which is completely original and you can decide what license to use.

C.

Only use data explicitly labeled with an open license and ensure the license terms are followed.

D.

Reach out to the data curators directly after you have started using the trained model to let them know.

Questions 12

A Generative AI Engineer is developing an agent system using a popular agent-authoring library. The agent comprises multiple parallel and sequential chains. The engineer encounters challenges as the agent fails at one of the steps, making it difficult to debug the root cause. They need to find an appropriate approach to research this issue and discover the cause of failure.

Which approach should they choose?

Options:

A.

Enable MLflow tracing to gain visibility into each agent's behavior and execution step.

B.

Run mlflow.evaluate() to determine the root cause of the failed step.

C.

Implement structured logging within the agent's code to capture detailed execution information.

D.

Deconstruct the agent into independent steps to simplify debugging.
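
A minimal sketch of option A, assuming MLflow 2.14+ where MLflow Tracing is available; `plan_step` is a hypothetical function standing in for one of the failing agent's custom steps:

```python
import mlflow

# If the agent is authored with LangChain, one line enables trace capture for
# every chain/agent invocation, including nested parallel and sequential steps.
mlflow.langchain.autolog()

# Custom steps that are not auto-instrumented can be traced explicitly.
@mlflow.trace(name="plan_step")
def plan_step(question: str) -> str:
    return f"plan for: {question}"

plan_step("Which suppliers are late this week?")
# Each invocation now appears in the Traces UI with per-step inputs, outputs,
# latencies, and errors, which is what makes the failing step visible.
```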

Questions 13

A Generative AI Engineer needs to design an LLM pipeline to conduct multi-stage reasoning that leverages external tools. To be effective at this, the LLM will need to plan and adapt actions while performing complex reasoning tasks.

Which approach will do this?

Options:

A.

Train the LLM to generate a single, comprehensive response without interacting with any external tools, relying solely on its pre-trained knowledge.

B.

Implement a framework like ReAct which allows the LLM to generate reasoning traces and perform task-specific actions that leverage external tools if necessary.

C.

Encourage the LLM to make multiple API calls in sequence without planning or structuring the calls, allowing the LLM to decide when and how to use external tools spontaneously.

D.

Use a Chain-of-Thought (CoT) prompting technique to guide the LLM through a series of reasoning steps, then manually input the results from external tools for the final answer.
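
A minimal ReAct sketch of the kind of framework option B describes, built with LangChain; it assumes the langchain, langchain-openai, and langchainhub packages plus an OPENAI_API_KEY, and the tool below is a stand-in for any real external tool:

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def lookup_order_status(order_id: str) -> str:
    """Return the shipping status for an order ID."""
    return f"Order {order_id} shipped on 2024-05-01."   # stand-in for a real API call

llm = ChatOpenAI(model="gpt-4o-mini")                    # assumed model name
prompt = hub.pull("hwchase17/react")                     # standard ReAct prompt template
agent = create_react_agent(llm, [lookup_order_status], prompt)
executor = AgentExecutor(agent=agent, tools=[lookup_order_status], verbose=True)

# The LLM interleaves Thought / Action / Observation steps, calling tools as needed.
executor.invoke({"input": "When did order 872 ship?"})
```

Unlike plain CoT prompting with manually pasted tool results (option D), the executor loops through reasoning traces and tool calls automatically.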

Questions 14

A Generative AI Engineer has successfully ingested unstructured documents and chunked them by document sections. They would like to store the chunks in a Vector Search index. The current format of the dataframe has two columns: (i) the original document file name and (ii) an array of text chunks for each document.

What is the most performant way to store this dataframe?

Options:

A.

Split the data into train and test set, create a unique identifier for each document, then save to a Delta table

B.

Flatten the dataframe to one chunk per row, create a unique identifier for each row, and save to a Delta table

C.

First create a unique identifier for each document, then save to a Delta table

D.

Store each chunk as an independent JSON file in Unity Catalog Volume. For each JSON file, the key is the document section name and the value is the array of text chunks for that section
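
A minimal PySpark sketch of option B, assuming it runs in a Databricks notebook where `df` is the two-column dataframe from the question (the column names `file_name` and `chunks` and the target table name are assumed); a Vector Search delta-sync index expects one chunk per row with a primary key column:

```python
from pyspark.sql import functions as F

flat_df = (
    df.select("file_name", F.posexplode("chunks").alias("chunk_pos", "chunk_text"))
      # Unique, deterministic id per row: file name plus the chunk's position.
      .withColumn("chunk_id", F.concat_ws("-", "file_name", "chunk_pos"))
)

(flat_df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("catalog.schema.document_chunks"))   # assumed Unity Catalog table name
```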

Questions 15

A Generative AI Engineer is tasked with developing an application that is based on an open source large language model (LLM). They need a foundation LLM with a large context window.

Which model fits this need?

Options:

A.

DistilBERT

B.

MPT-30B

C.

Llama2-70B

D.

DBRX

Questions 16

A Generative AI Engineer at an automotive company would like to build a question-answering chatbot to help customers answer specific questions about their vehicles. They have:

    A catalog with hundreds of thousands of cars manufactured since the 1960s

    Historical searches with user queries and successful matches

    Descriptions of their own cars in multiple languages

They have already selected an open-source LLM and created a test set of user queries. They need to discard techniques that will not help them build the chatbot. Which do they discard?

Options:

A.

Setting chunk size to match the model's context window to maximize coverage

B.

Implementing metadata filtering based on car models and years

C.

Fine-tuning an embedding model on automotive terminology

D.

Adding few-shot examples for response generation

Questions 17

A Generative AI Engineer received the following business requirements for an external chatbot.

The chatbot needs to know what type of question the user is asking and route it to the appropriate model to answer the question. For example, one user might ask about upcoming event details. Another user might ask about purchasing tickets for a particular event.

What is an ideal workflow for such a chatbot?

Options:

A.

The chatbot should only look at previous event information

B.

There should be two different chatbots handling different types of user queries.

C.

The chatbot should be implemented as a multi-step LLM workflow. First, identify the type of question asked, then route the question to the appropriate model. If it’s an upcoming event question, send the query to a text-to-SQL model. If it’s about ticket purchasing, the customer should be redirected to a payment platform.

D.

The chatbot should only process payments
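
A minimal sketch of the multi-step routing workflow described in option C. The classifier and downstream handlers are toy stand-ins; in practice the classification step would be an LLM call and the event handler a text-to-SQL model:

```python
def classify_intent(question: str) -> str:
    # Stand-in for an LLM classification prompt such as:
    # "Classify this question as 'event_info' or 'ticket_purchase': {question}"
    return "ticket_purchase" if "ticket" in question.lower() else "event_info"

def answer_event_question(question: str) -> str:
    return f"[text-to-SQL model queries the events table for: {question}]"

def redirect_to_payments(question: str) -> str:
    return "Redirecting you to the ticket payment platform..."

ROUTES = {"event_info": answer_event_question, "ticket_purchase": redirect_to_payments}

def chatbot(question: str) -> str:
    return ROUTES[classify_intent(question)](question)

print(chatbot("What time does the Saturday concert start?"))
print(chatbot("I want to buy two tickets for the Saturday concert."))
```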

Questions 18

A Generative AI Engineer is using an LLM to classify species of edible mushrooms based on text descriptions of certain features. The model is returning accurate responses in testing, and the Generative AI Engineer is confident they have the correct list of possible labels, but the output frequently contains additional reasoning in the answer when the Generative AI Engineer only wants to return the label with no additional text.

Which action should they take to elicit the desired behavior from this LLM?

Options:

A.

Use few-shot prompting to instruct the model on expected output format

B.

Use zero-shot prompting to instruct the model on expected output format

C.

Use zero-shot chain-of-thought prompting to prevent a verbose output format

D.

Use a system prompt to instruct the model to be succinct in its answer
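
A minimal sketch of option A: a few-shot prompt whose examples demonstrate the expected "label only" output format (the species labels and descriptions are illustrative):

```python
few_shot_prompt = """Classify the mushroom species from the description.
Respond with the label only.

Description: White cap, pink gills that turn brown, grows in grassy meadows.
Label: Agaricus campestris

Description: Yellow-brown cap with a slimy surface, grows in clusters on conifer stumps.
Label: Armillaria mellea

Description: {description}
Label:"""

prompt = few_shot_prompt.format(
    description="Large white cap with grey-brown scales, ring on the stem, found in pastures."
)
# `prompt` would then be sent to the chosen LLM endpoint; the worked examples
# anchor the model to return just the label with no extra reasoning.
print(prompt)
```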

Questions 19

A Generative AI Engineer is developing a RAG system for their company to perform internal document Q&A for structured HR policies, but the answers returned are frequently incomplete and unstructured. It seems that the retriever is not returning all relevant context. The Generative AI Engineer has experimented with different embedding and response-generating LLMs, but that did not improve results.

Which TWO options could be used to improve the response quality?

Choose 2 answers

Options:

A.

Add the section header as a prefix to chunks

B.

Increase the document chunk size

C.

Split the document by sentence

D.

Use a larger embedding model

E.

Fine tune the response generation model
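
A minimal sketch of option A: prepend each section header to its chunks so the retriever sees the policy context even when a chunk alone is ambiguous (`sections` is an assumed intermediate structure produced by splitting the HR documents):

```python
sections = {
    "Parental Leave": ["Eligible employees receive 16 weeks of paid leave.",
                       "Leave must be taken within 12 months of the birth."],
    "Expenses": ["Meal expenses are reimbursed up to $50 per day."],
}

chunks_with_headers = [
    f"{header}: {chunk}"   # e.g. "Parental Leave: Eligible employees receive..."
    for header, chunks in sections.items()
    for chunk in chunks
]
print(chunks_with_headers[0])
```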

Questions 20

A Generative AI Engineer has been asked to build an LLM-based question-answering application. The application should take into account new documents that are frequently published. The engineer wants to build this application with the least development effort and have it operate at the lowest cost possible.

Which combination of chaining components and configuration meets these requirements?

Options:

A.

For the application a prompt, a retriever, and an LLM are required. The retriever output is inserted into the prompt which is given to the LLM to generate answers.

B.

The LLM needs to be frequently fine-tuned with the new documents in order to provide the most up-to-date answers.

C.

For the question-answering application, prompt engineering and an LLM are required to generate answers.

D.

For the application a prompt, an agent and a fine-tuned LLM are required. The agent is used by the LLM to retrieve relevant content that is inserted into the prompt which is given to the LLM to generate answers.

Questions 21

A Generative AI Engineer is setting up a Databricks Vector Search index that will look up news articles by topic within 10 days of the date specified. An example query might be "Tell me about monster truck news around January 5th, 1992". They want to do this with the least amount of effort.

How can they set up their Vector Search index to support this use case?

Options:

A.

Split articles by 10 day blocks and return the block closest to the query.

B.

Include metadata columns for article date and topic to support metadata filtering.

C.

Pass the query directly to the vector search index and return the best articles.

D.

Create separate indexes by topic and add a classifier model to appropriately pick the best index.
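
A minimal sketch of option B using the databricks-vectorsearch client; it assumes a Databricks workspace, an existing delta-sync index with Databricks-managed embeddings (so `query_text` can be passed directly), and that `article_date` and `topic` were included as metadata columns. The endpoint, index, and column names are illustrative:

```python
from databricks.vector_search.client import VectorSearchClient

vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="news_vs_endpoint",
    index_name="catalog.schema.news_articles_index",
)

# Semantic search on the topic, with a metadata filter covering the 10-day
# window around the requested date (January 5th, 1992 in the example query).
results = index.similarity_search(
    query_text="monster truck news",
    columns=["title", "article_date", "topic"],
    filters={"article_date >=": "1991-12-26", "article_date <=": "1992-01-15"},
    num_results=5,
)
print(results)
```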
