OCI GEN AI Test 2
☁️How does the Retrieval-Augmented Generation (RAG) Token technique differ from RAG Sequence
when generating a model’s response?
RAG Token retrieves relevant documents for each part of the response and constructs the answer
incrementally.
RAG Token does not use document retrieval but generates responses based on pre-existing
knowledge only.
RAG Token retrieves documents only at the beginning of the response generation and uses those for
the entire content.
Unlike RAG Sequence, RAG Token generates the entire response at once without considering
individual parts.
Question 2 (Incorrect)
☁️Which is a key advantage of using T-Few over Vanilla fine-tuning in the OCI Generative AI service?
Question 3 (Incorrect)
☁️How do Dot Product and Cosine Distance differ in their application to comparing text embeddings
in natural language processing?
Dot Product is used for semantic analysis, whereas Cosine Distance is used for syntactic comparisons.
Dot Product assesses the overall similarity in content, whereas Cosine Distance measures topical
relevance.
Dot Product calculates the literal overlap of words, whereas Cosine Distance evaluates the stylistic
similarity.
Dot Product measures the magnitude and direction of vectors, whereas Cosine Distance focuses on
the orientation regardless of magnitude.
Question 4 (Correct)
It standardizes vector lengths for meaningful comparison using metrics such as Cosine Similarity.
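A small NumPy check of the Dot Product vs. Cosine Distance point above, which also shows why standardizing vector lengths (Question 4) makes the comparison meaningful:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])   # same direction as a, twice the magnitude

dot = np.dot(a, b)                                       # grows with magnitude
cos_sim = dot / (np.linalg.norm(a) * np.linalg.norm(b))  # orientation only
cos_dist = 1.0 - cos_sim

print(dot)       # 28.0 -- reflects both magnitude and direction
print(cos_sim)   # 1.0  -- identical orientation, magnitude ignored
print(cos_dist)  # 0.0

# After normalizing to unit length, dot product equals cosine similarity:
a_unit = a / np.linalg.norm(a)
b_unit = b / np.linalg.norm(b)
print(np.dot(a_unit, b_unit))  # 1.0
```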
☁️Which is NOT a category of pretrained foundational models available in the OCI Generative AI
service?
Generation models
Translation models
Embedding models
Summarization models
Question 6 (Correct)
T-Few fine-tuning involves updating the weights of all layers in the model.
Question 7 (Correct)
☁️Which statement is true about the "Top p" parameter of the OCI Generative AI Generation
models?
"Top p" limits token selection based on the sum of their probabilities.
"Top p" selects tokens from the "Top k" tokens sorted by probability.
Question 8 (Correct)
☁️How are fine-tuned customer models stored to enable strong data privacy and security in the OCI
Generative AI service?
☁️What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation
models?
It specifies a string that tells the model to stop generating more content.
It determines the maximum number of tokens the model can generate per response.
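A minimal illustration of the stop-sequence behavior in plain Python, not a specific SDK call; whether the stop string itself is kept varies by implementation, and here it is excluded:

```python
def apply_stop_sequence(generated_text, stop_sequence):
    """Truncate model output at the first occurrence of the stop sequence."""
    idx = generated_text.find(stop_sequence)
    return generated_text if idx == -1 else generated_text[:idx]

raw = "Step 1: mix the flour.\nStep 2: add water.\n##END##Step 3: ..."
print(apply_stop_sequence(raw, "##END##"))
# Only the text before "##END##" is returned.
```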
Question 10 (Correct)
☁️What does a higher number assigned to a token signify in the "Show Likelihoods" feature of the
language model token generation?
The token will be the only one considered in the next generation step.
The token is unrelated to the current token and will not be used.
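For context on the question above: a higher number means the token was more likely to follow the preceding text. A tiny sketch converting log-likelihoods (the usual reporting scale) back to probabilities, with invented values:

```python
import math

# Hypothetical per-token log-likelihoods, as a "Show Likelihoods"-style
# feature might report them (values invented for illustration).
token_log_likelihoods = {"Paris": -0.2, "London": -2.1, "banana": -9.8}

for token, ll in sorted(token_log_likelihoods.items(),
                        key=lambda kv: kv[1], reverse=True):
    print(f"{token}: log-likelihood={ll}, probability~{math.exp(ll):.4f}")
# Higher (less negative) numbers => the token is more likely to follow.
```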
Question 11 (Correct)
☁️Which Oracle Accelerated Data Science (ADS) class can be used to deploy a Large Language Model
(LLM) application to OCI Data Science model deployment?
GenerativeAI
TextLoader
RetrievalQA
ChainDeployment
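A hedged sketch of deploying a LangChain chain with ADS. The ads.llm.deploy.ChainDeployment and ads.llm.GenerativeAI import paths follow the ADS documentation, but the constructor and deploy() argument names below are assumptions that may differ by version:

```python
# Sketch only -- verify class and argument names against your ADS version.
from ads.llm import GenerativeAI            # LangChain-compatible OCI LLM wrapper
from ads.llm.deploy import ChainDeployment  # deploys a LangChain chain
from langchain_core.prompts import PromptTemplate

llm = GenerativeAI(compartment_id="<compartment_ocid>")  # placeholder OCID
chain = PromptTemplate.from_template("Answer briefly: {question}") | llm

# ChainDeployment wraps the chain for OCI Data Science Model Deployment;
# the display_name argument here is an assumption.
ChainDeployment(chain).deploy(display_name="my-langchain-app")
```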
Question 12 (Correct)
☁️Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI
service?
Updates the weights of the base model during the fine-tuning process
☁️What issue might arise from using small data sets with the Vanilla fine-tuning method in the OCI
Generative AI service?
Data Leakage
Overfitting
Underfitting
Model Drift
Question 14 (Incorrect)
PromptTemplate supports any number of variables, including the possibility of having none.
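A quick LangChain illustration of that point, assuming a recent version where PromptTemplate lives in langchain_core.prompts: templates work with several variables, one, or none at all.

```python
from langchain_core.prompts import PromptTemplate

# Two input variables, inferred from the template string.
two_vars = PromptTemplate.from_template("Translate '{text}' into {language}.")
print(two_vars.format(text="hello", language="French"))

# No input variables at all -- also valid.
no_vars = PromptTemplate.from_template("Tell me a short joke.")
print(no_vars.format())
```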
Question 15 (Incorrect)
☁️In LangChain, which retriever search type is used to balance between relevancy and diversity?
mmr
similarity_score_threshold
similarity
top k
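In LangChain, mmr (maximal marginal relevance) is selected when creating a retriever from a vector store. A sketch assuming vectorstore is an existing, already-populated LangChain vector store (e.g., FAISS or Chroma):

```python
# `vectorstore` is assumed to be an existing LangChain vector store
# already populated with embedded documents.
retriever = vectorstore.as_retriever(
    search_type="mmr",                       # balances relevancy with diversity
    search_kwargs={"k": 4, "fetch_k": 20},   # common MMR knobs
)
docs = retriever.invoke("What is a dedicated AI cluster?")
```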
Question 16 (Correct)
☁️How does the architecture of dedicated AI clusters contribute to minimizing GPU memory
overhead for T-Few fine-tuned model inference?
By loading the entire model into GPU memory for efficient processing
By sharing base model weights across multiple fine-tuned models on the same group of GPUs
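Back-of-the-envelope arithmetic for the second option above, with invented sizes: if each fine-tune carried a full model copy, GPU memory would scale with the full model size, whereas sharing the base weights duplicates only the small T-Few adapter weights.

```python
# Invented illustrative sizes, not OCI measurements.
base_model_gb = 30.0   # hypothetical base model footprint
adapter_gb = 0.3       # hypothetical T-Few adapter (small fraction of weights)
num_fine_tunes = 10

naive = num_fine_tunes * (base_model_gb + adapter_gb)  # full copy each
shared = base_model_gb + num_fine_tunes * adapter_gb   # shared base weights

print(f"full copies: {naive:.0f} GB, shared base: {shared:.0f} GB")
# full copies: 303 GB, shared base: 33 GB
```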
Question 17 (Correct)
☁️Given the following code:
Question 18 (Correct)
☁️What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation
models?
It determines the maximum number of tokens the model can generate per response.
It specifies a string that tells the model to stop generating more content.
Question 19 (Correct)
ConversationImageMemory
ConversationTokenBufferMemory
ConversationBufferMemory
ConversationSummaryMemory
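Of these, ConversationImageMemory is not a real LangChain memory class; the other three are importable. A quick check, assuming the classic langchain.memory module:

```python
# The three real classes import cleanly; ConversationImageMemory does not exist.
from langchain.memory import (
    ConversationBufferMemory,       # stores the raw conversation verbatim
    ConversationSummaryMemory,      # stores a running LLM-written summary
    ConversationTokenBufferMemory,  # keeps recent turns up to a token limit
)

memory = ConversationBufferMemory()
memory.save_context({"input": "Hi there"}, {"output": "Hello!"})
print(memory.load_memory_variables({}))
# {'history': 'Human: Hi there\nAI: Hello!'}
```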
Question 20 (Correct)
☁️Which technique involves prompting the Large Language Model (LLM) to emit intermediate
reasoning steps as part of its response?
Step-Back Prompting
In-context Learning
Least-to-most Prompting
Chain-of-Thought
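A minimal illustration of a Chain-of-Thought prompt; the wording is just one common pattern, with a worked example included so the model imitates the step-by-step format:

```python
cot_prompt = (
    "Q: A cafe sells 3 coffees for $12. How much do 7 coffees cost?\n"
    "A: Let's think step by step.\n"
    "1. 3 coffees cost $12, so one coffee costs $12 / 3 = $4.\n"
    "2. 7 coffees therefore cost 7 * $4 = $28.\n"
    "The answer is $28.\n\n"
    "Q: A train travels 180 km in 2 hours. How far does it go in 5 hours?\n"
    "A: Let's think step by step.\n"
)
# Sending this to an LLM encourages it to emit intermediate reasoning
# steps (the numbered lines) before stating the final answer.
```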