Generative AI Prompt Engineering MCQ

0
7

1. What is the primary purpose of prompt engineering in generative AI?
A) To train new AI models
B) To design input prompts that guide AI to produce desired outputs
C) To increase AI hardware performance
D) To reduce data size
Answer: B

2. Which prompting technique involves giving the AI a few examples to guide output?
A) Zero-shot prompting
B) Few-shot prompting
C) Chain-of-thought prompting
D) Reinforcement learning
Answer: B

3. What does zero-shot prompting mean?
A) Training the model from scratch
B) Asking the AI to perform a task without any example
C) Providing multiple examples in the prompt
D) Fine-tuning the AI model
Answer: B

4. How does chain-of-thought prompting improve AI responses?
A) By reducing model size
B) By encouraging the AI to reason step-by-step
C) By limiting output length
D) By training AI on larger datasets
Answer: B

5. What is a common challenge in prompt engineering?
A) Increasing data storage
B) Handling ambiguous or biased prompts
C) Improving GPU speed
D) Coding AI algorithms
Answer: B

6. How does adding context to a prompt help generative AI?
A) It increases processing speed
B) It makes AI responses more relevant and domain-specific
C) It compresses the prompt length
D) It limits the AI’s creativity
Answer: B

7. Which of the following best describes “prompt tuning”?
A) Modifying AI training parameters
B) Refining prompt wording and structure to improve output
C) Adding more data to the training set
D) Deploying AI on cloud platforms
Answer: B

8. What is the effect of poorly engineered prompts?
A) High-quality AI output
B) Irrelevant or ambiguous AI responses
C) Faster AI processing
D) Smaller model size
Answer: B

9. Why is prompt engineering considered iterative?
A) Because it involves multiple trial-and-error attempts to optimize prompts
B) Because it trains models repeatedly
C) Because it compresses data continuously
D) Because it reduces hardware requirements
Answer: A

10. Which prompt type asks the AI to solve problems without prior examples?
A) Few-shot prompting
B) Zero-shot prompting
C) One-shot prompting
D) Multi-shot prompting
Answer: B
11. Which factor is most important when crafting an effective prompt?
A) Length of the prompt
B) Clarity and specificity of instructions
C) Number of keywords used
D) Use of complex vocabulary
Answer: B

12. What role does “temperature” play in generative AI outputs?
A) Controls the randomness or creativity of the output
B) Controls the AI model size
C) Determines the speed of generation
D) Sets the number of output tokens
Answer: A

13. What is “one-shot prompting”?
A) Providing exactly one example in the prompt
B) Asking the AI to perform without examples
C) Training the AI on a single dataset
D) Generating a single output token
Answer: A

14. How can negative prompting be used in generative AI?
A) To instruct the model what to avoid generating
B) To increase output length
C) To reduce processing time
D) To train the AI faster
Answer: A

15. Why might prompt engineering require domain knowledge?
A) To design hardware better
B) To write relevant and precise prompts for specific applications
C) To compress datasets
D) To train AI models faster
Answer: B

16. What is a “prompt template”?
A) A pre-designed structure used to create consistent prompts
B) A method of training AI
C) A model evaluation metric
D) A data compression technique
Answer: A

17. How does prompt length affect generative AI models?
A) Longer prompts always improve output quality
B) Very long prompts may cause the AI to lose focus or run out of token limit
C) Short prompts are ignored by the model
D) Prompt length doesn’t affect AI output
Answer: B

18. Which of these is a tool or platform often used for prompt engineering?
A) TensorFlow
B) OpenAI Playground
C) Hadoop
D) Docker
Answer: B

19. How can prompt engineering help mitigate bias in AI outputs?
A) By ignoring problematic inputs
B) By carefully wording prompts to avoid triggering biased responses
C) By increasing training data size
D) By compressing AI models
Answer: B

20. What is the impact of ambiguity in prompts?
A) AI generates precise answers
B) AI may produce vague or irrelevant outputs
C) AI stops generating output
D) AI speeds up generation
Answer: B
21. What does “context window” refer to in generative AI models?
A) The time AI takes to generate output
B) The maximum number of tokens the AI can consider at once
C) The size of the training dataset
D) The hardware used for AI
Answer: B

22. Which of the following improves prompt clarity?
A) Using vague terms
B) Adding explicit instructions
C) Using slang
D) Making prompts ambiguous
Answer: B

23. What is an “instruction prompt”?
A) A prompt that contains explicit tasks or commands for the AI
B) A prompt that trains the model
C) A prompt that compresses data
D) A prompt with random words
Answer: A

24. How can prompt engineering be automated?
A) Using algorithms to test and optimize prompts systematically
B) Manually rewriting prompts every time
C) Increasing the model size
D) Reducing data input
Answer: A

25. What happens if the prompt is too long and exceeds token limits?
A) The AI ignores the entire prompt
B) The AI truncates or loses early parts of the prompt, impacting output quality
C) The AI processes faster
D) The AI generates multiple outputs
Answer: B

26. What is “output conditioning” in prompt engineering?
A) Restricting the AI to produce outputs in a certain style or format
B) Increasing training data
C) Decreasing model parameters
D) Upgrading hardware
Answer: A

27. Which of the following is NOT a goal of prompt engineering?
A) Enhancing output relevance
B) Reducing ambiguity
C) Increasing model training time
D) Guiding creativity in outputs
Answer: C

28. How does “prompt chaining” work?
A) Linking multiple prompts where the output of one prompt is input for the next
B) Using only one prompt repeatedly
C) Training AI with chained datasets
D) Compressing prompts
Answer: A

29. What does “multi-modal prompting” refer to?
A) Using text along with other inputs like images or audio as prompts
B) Using multiple AI models at once
C) Generating multiple outputs per prompt
D) Using prompts in different languages
Answer: A

30. What is the effect of using ambiguous language in prompts?
A) It produces more creative results
B) It often causes irrelevant or confusing AI responses
C) It improves response speed
D) It shortens output length
Answer: B

31. What is the primary challenge when designing prompts for large language models?
A) High memory consumption
B) Crafting prompts that are precise yet flexible
C) Writing code in Python
D) Downloading datasets
Answer: B

32. What is a “negative prompt”?
A) A prompt that guides AI to avoid certain words or topics
B) A prompt that trains the AI negatively
C) A prompt used to stop AI responses
D) A prompt that compresses data
Answer: A

33. Which metric can help evaluate prompt effectiveness?
A) Response relevance and coherence
B) CPU usage
C) Data transfer speed
D) Network bandwidth
Answer: A

34. How can cultural bias be minimized in prompt engineering?
A) Ignoring sensitive topics
B) Using diverse and inclusive prompt wording
C) Limiting the prompt length
D) Reducing AI parameters
Answer: B

35. What is “prompt injection”?
A) A security vulnerability where malicious input manipulates AI output
B) Training AI with new data
C) Optimizing prompts for better results
D) Increasing the AI model size
Answer: A

36. What is “prompt dropout”?
A) Ignoring certain parts of the prompt during training to improve robustness
B) Dropping entire prompts during AI training
C) Shortening output length
D) Increasing dataset size
Answer: A

37. Why is it important to specify output format in prompts?
A) To improve the clarity and usability of AI responses
B) To train the AI faster
C) To reduce data size
D) To increase randomness
Answer: A

38. How does reinforcement learning from human feedback (RLHF) relate to prompt engineering?
A) It helps improve prompts by incorporating human preferences in training
B) It compresses prompts
C) It increases AI processing speed
D) It is unrelated to prompt design
Answer: A

39. Which is NOT an example of prompt optimization?
A) Changing wording for clarity
B) Adding examples
C) Increasing model size
D) Adding explicit constraints
Answer: C

40. What is the “token” in generative AI models?
A) A unit of text such as a word or subword used in processing inputs and outputs
B) A security key
C) A data file
D) A hardware component
Answer: A

41. Why should prompts avoid overly complex vocabulary?
A) Because AI models cannot process advanced words
B) To ensure the AI interprets instructions correctly and avoids confusion
C) Because it slows down generation
D) Because it reduces token limits
Answer: B

42. How can prompt templates improve prompt engineering?
A) By providing reusable structures for consistent and effective prompting
B) By increasing dataset size
C) By compressing AI models
D) By reducing computational cost
Answer: A

43. What is “meta-prompting”?
A) Asking the AI to generate or improve prompts itself
B) Using multiple prompts chained together
C) Using prompts in different languages
D) Training AI on metadata
Answer: A

44. How does prompt specificity affect generative AI?
A) Too specific prompts limit creativity; too vague prompts reduce relevance
B) Specific prompts always produce better outputs
C) Vague prompts always produce better outputs
D) Specificity doesn’t affect outputs
Answer: A

45. What is “prompt drift”?
A) The tendency for AI to gradually deviate from intended output over multiple interactions
B) Reducing prompt length
C) Increasing prompt complexity
D) Training on irrelevant data
Answer: A

46. Which of these is a strategy to avoid prompt bias?
A) Including diverse examples in few-shot prompts
B) Ignoring prompt structure
C) Using only zero-shot prompts
D) Limiting output length
Answer: A

47. What is the benefit of “interactive prompting”?
A) Enables back-and-forth refinement of AI responses for better results
B) Compresses prompts
C) Trains AI faster
D) Increases randomness in outputs
Answer: A

48. Which of the following best describes “contextual prompting”?
A) Including relevant background info in the prompt to improve output relevance
B) Using prompts only in English
C) Limiting prompt length
D) Generating multiple outputs
Answer: A

49. How can you ensure reproducibility in generative AI responses?
A) By setting a fixed random seed and using consistent prompt phrasing
B) Using different prompts every time
C) Changing model parameters randomly
D) Using vague prompts
Answer: A

50. What is “prompt sensitivity”?
A) How much the AI output changes in response to small prompt modifications
B) AI’s ability to process sensory data
C) Prompt length variability
D) Number of tokens used
Answer: A
51. What is the effect of including contradictory information within a prompt?
A) AI resolves contradictions and provides a clear answer
B) AI may produce confusing or inconsistent outputs
C) AI ignores contradictions automatically
D) AI outputs are unaffected
Answer: B

52. Which approach helps in reducing hallucinations (incorrect facts) in AI outputs?
A) Using detailed, fact-based prompts with clear instructions
B) Making prompts very short
C) Avoiding context in prompts
D) Increasing temperature parameter
Answer: A

53. How does the “temperature” parameter influence prompt outputs?
A) Higher temperature produces more random and creative outputs
B) Higher temperature speeds up generation
C) Lower temperature causes the AI to generate longer outputs
D) Temperature affects only training
Answer: A

54. What is “prompt bias”?
A) Bias introduced by the wording or structure of the prompt that influences AI output unfairly
B) Bias in AI hardware
C) Bias in data preprocessing
D) Bias in model architecture
Answer: A

55. What is the significance of “prompt length” in transformer models?
A) Transformer models can handle infinite prompt length
B) Prompt length is limited by model’s context window size
C) Longer prompts always improve performance
D) Prompt length doesn’t matter
Answer: B

56. How can prompts be structured to better handle multi-step reasoning?
A) Using chain-of-thought prompting to guide stepwise explanation
B) Asking direct yes/no questions
C) Using only zero-shot prompting
D) Making prompts shorter
Answer: A

57. Why is it important to specify the output format in prompts?
A) To ensure the AI produces answers in the desired style or structure
B) To reduce token usage
C) To limit the AI’s creativity
D) To increase generation speed
Answer: A

58. What is an example of “interactive prompt engineering”?
A) Iteratively refining prompts based on AI responses during a session
B) Writing a prompt once and never changing it
C) Using only predefined prompts
D) Avoiding user feedback
Answer: A

59. What role does “prompt paraphrasing” play?
A) Rewording prompts to find versions that yield better AI outputs
B) Compressing prompts into shorter forms
C) Training new AI models
D) Reducing the number of tokens used
Answer: A

60. Which of these best describes “prompt ensembling”?
A) Using multiple prompts and combining their outputs for better results
B) Training AI on multiple datasets
C) Compressing prompts into one
D) Using only the first prompt given
Answer: A

61. How does “prompt priming” work?
A) Providing initial context or instructions to prepare the AI before the main task
B) Training the AI from scratch
C) Ignoring previous prompts
D) Compressing input data
Answer: A

62. Why might one use “negative examples” in few-shot prompting?
A) To show the AI what to avoid generating
B) To confuse the AI
C) To increase randomness
D) To reduce token usage
Answer: A

63. What does “prompt robustness” refer to?
A) The AI’s ability to produce consistent outputs despite variations in prompt phrasing
B) AI’s hardware resistance
C) Data security of the prompts
D) Size of the prompt
Answer: A

64. How does “context switching” impact prompt engineering?
A) It can cause the AI to lose track of the original task if prompts jump between topics
B) It improves output quality
C) It compresses prompts
D) It increases training speed
Answer: A

65. What is the benefit of “hierarchical prompting”?
A) Breaking down complex tasks into smaller sub-prompts for better AI handling
B) Making prompts longer
C) Using unrelated prompts
D) Increasing token usage
Answer: A

66. What is the impact of specifying constraints in prompts?
A) Limits AI output to meet specific criteria like length or style
B) Makes AI slower
C) Reduces model size
D) Has no impact
Answer: A

67. What is a “system prompt”?
A) A special prompt that sets behavior or rules for the AI before user input
B) A prompt to reboot AI
C) A prompt for model training
D) A prompt for compressing outputs
Answer: A

68. What does “prompt debugging” involve?
A) Testing and modifying prompts to fix issues like unclear or irrelevant AI outputs
B) Fixing AI hardware
C) Debugging AI source code
D) Data cleaning
Answer: A

69. Which strategy helps reduce overfitting to specific prompt patterns?
A) Varying prompt styles and wording during prompt engineering
B) Using one fixed prompt
C) Increasing model parameters
D) Ignoring prompt quality
Answer: A

70. What is “meta-learning” in relation to prompt engineering?
A) Training models to learn how to better understand or generate prompts
B) Compressing prompts
C) Increasing token limits
D) Data augmentation
Answer: A

71. Which of these is NOT a characteristic of a good prompt?
A) Clarity
B) Ambiguity
C) Specificity
D) Context relevance
Answer: B

72. How can user feedback improve prompt engineering?
A) By iteratively refining prompts based on user evaluations of AI outputs
B) By ignoring user input
C) By increasing model size
D) By limiting prompt length
Answer: A

73. What is the significance of “tokenization” in prompt engineering?
A) It splits text into units the AI model can process, affecting prompt length and meaning
B) Encrypts the prompt
C) Compresses the prompt
D) Generates tokens for security
Answer: A

74. What is the effect of “prompt truncation”?
A) Cutting off part of a prompt due to length limits, which may lose important context
B) Speeding up AI
C) Increasing output quality
D) Improving training speed
Answer: A

75. Which technique helps improve AI’s ability to follow instructions?
A) Instruction fine-tuning combined with prompt engineering
B) Ignoring prompt quality
C) Reducing dataset size
D) Increasing model randomness
Answer: A

76. How can “prompt conditioning” guide generative AI?
A) By embedding specific goals or styles within the prompt to shape output
B) By compressing data
C) By increasing token limits
D) By training on unlabeled data
Answer: A

77. What is the role of “exemplar diversity” in few-shot prompting?
A) Providing a range of example types to improve model generalization
B) Using identical examples only
C) Limiting example variety
D) Ignoring examples
Answer: A

78. What does “prompt interpolation” mean?
A) Combining multiple prompt styles or formats to improve AI response
B) Removing parts of a prompt
C) Shortening prompts
D) Compressing outputs
Answer: A

79. What does “prompt iteration” involve?
A) Repeatedly refining and testing prompts to optimize outputs
B) Writing prompts once
C) Ignoring prompt feedback
D) Increasing token size
Answer: A

80. Why is “clarity in instructions” important in prompts?
A) To reduce AI misunderstandings and generate precise outputs
B) To confuse the AI
C) To increase randomness
D) To reduce token length
Answer: A
81. What does “prompt specificity” refer to?
A) Using detailed and precise instructions in a prompt
B) Using vague and broad instructions
C) Making the prompt very short
D) Avoiding any instructions in the prompt
Answer: A

82. Which technique helps avoid repetitive or generic AI outputs?
A) Varying prompt phrasing and including creative constraints
B) Using the same prompt repeatedly
C) Increasing token limits only
D) Removing context from prompts
Answer: A

83. How does prompt engineering relate to AI explainability?
A) Clear prompts can help AI provide step-by-step explanations, improving interpretability
B) Prompts have no relation to explainability
C) Prompts reduce explainability
D) Only AI architecture affects explainability
Answer: A

84. What is the function of “prompt feedback loops”?
A) Continuously improving prompts based on AI output and user feedback
B) Feeding prompts into the AI repeatedly without change
C) Reducing training data size
D) Increasing model complexity
Answer: A

85. What is the impact of “prompt overfitting”?
A) AI produces outputs too narrowly tailored to a specific prompt format, reducing generalization
B) AI learns faster
C) AI output becomes more random
D) AI ignores the prompt
Answer: A

86. Which of the following is an example of a “conditional prompt”?
A) “If X happens, explain Y”
B) “Write a poem”
C) “Tell me a joke”
D) “Generate random text”
Answer: A

87. Why is it important to test prompts on different models?
A) Because different models may respond differently to the same prompt
B) To increase prompt length
C) To reduce token size
D) Testing is unnecessary
Answer: A

88. What does “prompt hierarchy” mean?
A) Organizing prompts in a layered structure, from general to specific
B) Using only one prompt
C) Randomizing prompts
D) Ignoring prompt order
Answer: A

89. What role does “prompt annotation” play?
A) Adding metadata or explanations to prompts to guide AI behavior
B) Encrypting prompts
C) Reducing prompt length
D) Training AI models
Answer: A

90. How does “prompt contrast” improve outputs?
A) By providing positive and negative examples in prompts to clarify expectations
B) By using only positive examples
C) By removing all examples
D) By shortening prompts
Answer: A

91. What is a “prompt embedding”?
A) A vector representation of the prompt text used internally by AI models
B) A summary of the prompt
C) Encryption of the prompt
D) The final output of the AI
Answer: A

92. How can ambiguity in prompts be reduced?
A) By using clear, specific language and providing examples
B) By shortening prompts only
C) By increasing randomness
D) By avoiding context
Answer: A

93. What is the “temperature” setting in AI generation used for?
A) Controlling randomness and creativity in the generated output
B) Cooling AI hardware
C) Increasing training speed
D) Reducing token limits
Answer: A

94. Why is it beneficial to include constraints in prompts?
A) To direct AI to produce outputs that meet specific criteria, such as format or tone
B) To confuse the AI
C) To increase token usage
D) To reduce AI creativity
Answer: A

95. What does “prompt tuning” involve?
A) Adjusting prompt wording to optimize AI response quality
B) Training the AI model
C) Compressing prompts
D) Increasing dataset size
Answer: A

96. What is the purpose of “few-shot learning” in prompting?
A) To give the AI a few examples to learn from before performing a task
B) To train the AI from scratch
C) To reduce the model size
D) To generate random outputs
Answer: A

97. What is a drawback of overly long prompts?
A) They may exceed the model’s context window, causing truncation and loss of important info
B) They always improve output
C) They reduce model speed
D) They increase randomness
Answer: A

98. How can prompt engineering help reduce hallucinations?
A) By providing clear, factual context and explicit instructions
B) By making prompts vague
C) By reducing training data
D) By increasing model randomness
Answer: A

99. What does “output conditioning” mean?
A) Guiding the AI to produce outputs with certain features or constraints
B) Increasing model size
C) Compressing outputs
D) Training the AI
Answer: A

100. Which of the following best describes “interactive prompting”?
A) A process where prompts are adjusted dynamically based on AI responses and user input
B) Writing one fixed prompt
C) Avoiding user interaction
D) Using only system prompts
Answer: A

101. Why is “token limit” important in prompt engineering?
A) It determines the maximum length of input and output combined that the AI can process
B) It sets the speed of AI
C) It compresses the model
D) It reduces training data
Answer: A

102. What is “prompt paraphrasing” used for?
A) To rewrite prompts differently to see which version yields better outputs
B) To shorten prompts
C) To encrypt prompts
D) To train AI
Answer: A

103. How can bias in prompt outputs be minimized?
A) By carefully selecting neutral and inclusive prompt wording
B) By ignoring prompt quality
C) By using biased training data
D) By reducing token limits
Answer: A

104. What is “prompt chaining”?
A) Feeding the output of one prompt as the input to the next prompt to handle complex tasks
B) Using a single prompt repeatedly
C) Training AI models with chains of data
D) Compressing prompts
Answer: A

105. What is the role of “exemplars” in few-shot prompting?
A) They are examples used to demonstrate desired outputs in the prompt
B) They are training datasets
C) They compress the prompt
D) They limit token usage
Answer: A

106. Which of the following affects prompt effectiveness the most?
A) Clear instructions and appropriate context
B) Prompt length alone
C) Hardware specifications
D) Data transfer speed
Answer: A

107. What does “prompt robustness” mean?
A) AI’s ability to produce good outputs despite slight variations in prompts
B) Strength of AI hardware
C) Length of the prompt
D) Model size
Answer: A

108. How does “meta-prompting” function?
A) AI generates or improves prompts itself to enhance task performance
B) Training AI on metadata
C) Compressing prompts
D) Increasing token limits
Answer: A

109. What is a “system prompt”?
A) Initial instructions that guide AI behavior before user prompts
B) A user’s question
C) Training data
D) Output format
Answer: A

110. Why is iterative testing important in prompt engineering?
A) To refine prompts and optimize AI outputs through trial and error
B) To train the AI
C) To compress data
D) To increase token limits
Answer: A
111. What is the main purpose of “zero-shot prompting”?
A) Asking the AI to perform a task without any examples
B) Providing many examples before asking a question
C) Training the AI model
D) Increasing prompt length
Answer: A

112. How does “few-shot prompting” differ from zero-shot prompting?
A) Few-shot includes some examples to guide the AI, zero-shot does not
B) Few-shot uses no examples
C) Zero-shot requires training
D) Both are the same
Answer: A

113. What is a “prompt template”?
A) A reusable structure or format for writing prompts
B) A type of dataset
C) AI hardware component
D) An output format
Answer: A

114. Which of these helps reduce AI hallucination during generation?
A) Providing clear, detailed context in the prompt
B) Using very short prompts
C) Increasing temperature setting
D) Avoiding constraints
Answer: A

115. What is “prompt injection” in the context of AI security?
A) A malicious attack inserting harmful instructions into prompts
B) A method for optimizing prompts
C) Training the AI
D) Reducing token usage
Answer: A

116. What is the role of “temperature” in text generation?
A) Controls the randomness of AI output—higher means more random
B) Measures AI hardware heat
C) Sets output length
D) Compresses the prompt
Answer: A

117. Why might one use “chain-of-thought prompting”?
A) To encourage the AI to reason through problems step-by-step
B) To shorten the prompt
C) To randomize outputs
D) To avoid context
Answer: A

118. What does “prompt conditioning” refer to?
A) Steering the AI to generate outputs with certain properties or styles via the prompt
B) Cooling the AI system
C) Compressing input data
D) Increasing token count
Answer: A

119. Which is a benefit of “interactive prompting”?
A) Allows real-time adjustment of prompts based on AI’s responses
B) Reduces the need for prompts
C) Eliminates AI errors completely
D) Speeds up training
Answer: A

120. What is the “context window” in generative AI?
A) The maximum length of input and output tokens the model can process at once
B) A user interface for AI
C) The training dataset size
D) A hardware component
Answer: A

121. How can prompt length affect AI output quality?
A) Too long can cause truncation; too short may lack context
B) Longer always means better
C) Shorter always means better
D) Length doesn’t matter
Answer: A

122. What is an “instruction prompt”?
A) A prompt that explicitly tells the AI what task to perform
B) A training dataset
C) A type of output
D) A hardware instruction
Answer: A

123. What does “prompt paraphrasing” help with?
A) Finding alternative ways to phrase prompts to improve results
B) Encrypting prompts
C) Compressing outputs
D) Speeding up AI
Answer: A

124. What does “prompt chaining” enable?
A) Breaking complex tasks into multiple sequential prompts
B) Compressing data
C) Randomizing outputs
D) Ignoring previous context
Answer: A

125. What is the risk of using ambiguous prompts?
A) AI may generate irrelevant or confusing responses
B) AI becomes faster
C) AI produces longer outputs
D) AI ignores the prompt
Answer: A

126. How can bias be introduced through prompts?
A) Through word choices or examples that favor certain viewpoints
B) By using short prompts
C) By increasing token limits
D) Bias is only from training data
Answer: A

127. What is the benefit of “exemplars” in few-shot prompting?
A) They provide examples that help guide AI’s output style and content
B) They compress data
C) They limit token size
D) They reduce training time
Answer: A

128. What does “output conditioning” mean in prompts?
A) Setting constraints or styles for the AI’s response
B) Compressing outputs
C) Increasing model parameters
D) Training AI
Answer: A

129. How does setting a fixed random seed impact AI outputs?
A) It makes outputs reproducible and consistent for the same prompt
B) It increases randomness
C) It changes model size
D) It trains the AI faster
Answer: A

130. What is the purpose of “negative prompting”?
A) To instruct AI on what to avoid generating in its response
B) To train AI negatively
C) To compress prompts
D) To increase output length
Answer: A

131. How can “prompt templates” enhance productivity?
A) By providing consistent and reusable prompt formats for common tasks
B) By reducing AI accuracy
C) By compressing prompts
D) By slowing generation
Answer: A

132. What is “meta-prompting”?
A) Using AI to generate or improve prompts automatically
B) Encrypting prompts
C) Training AI
D) Reducing token limits
Answer: A

133. Why is clarity essential in prompt design?
A) To minimize misunderstandings and improve output relevance
B) To increase output length
C) To reduce token limits
D) To randomize outputs
Answer: A

134. What happens if a prompt exceeds the AI’s context window?
A) Older parts of the prompt are truncated, possibly losing important info
B) AI ignores the prompt
C) AI speeds up
D) AI increases output randomness
Answer: A

135. What is a common method for prompt debugging?
A) Iteratively testing and refining prompts based on AI outputs
B) Re-training AI
C) Increasing dataset size
D) Changing hardware
Answer: A
136. What is the primary goal of prompt engineering?
A) To craft inputs that maximize the relevance and accuracy of AI outputs
B) To reduce model size
C) To speed up training
D) To increase randomness
Answer: A

137. Which parameter controls creativity vs. determinism in AI text generation?
A) Temperature
B) Token limit
C) Learning rate
D) Batch size
Answer: A

138. What is “contextual prompting”?
A) Including relevant background information in the prompt to improve AI understanding
B) Using only short prompts
C) Ignoring previous conversations
D) Compressing inputs
Answer: A

139. What is a disadvantage of too low a temperature setting?
A) The output can become repetitive and overly deterministic
B) The AI outputs random text
C) Outputs become too long
D) The AI ignores the prompt
Answer: A

140. What does “output length control” in prompts do?
A) Limits or specifies the maximum number of tokens in AI responses
B) Controls AI hardware
C) Trains AI faster
D) Compresses the prompt
Answer: A

141. Why is “prompt modularity” useful?
A) Breaking prompts into reusable components for flexible task design
B) Training AI faster
C) Compressing data
D) Increasing token limits
Answer: A

142. What is the role of “prompt bias mitigation”?
A) To reduce unwanted biases in AI outputs caused by prompt wording
B) To speed up generation
C) To increase randomness
D) To shorten prompts
Answer: A

143. How can “system prompts” be used?
A) To set the AI’s general behavior and tone before user interaction
B) To give user instructions only
C) To train the AI
D) To compress prompts
Answer: A

144. Which is true about “multi-turn prompting”?
A) It involves a sequence of prompts in a conversation to build context
B) It uses a single prompt only
C) It reduces AI accuracy
D) It compresses outputs
Answer: A

145. What is “prompt engineering automation”?
A) Using tools or AI to generate or optimize prompts automatically
B) Manually writing all prompts
C) Compressing prompts
D) Increasing token limits
Answer: A

146. What does “context window size” limit?
A) The total length of input plus output tokens the model can process at once
B) The model’s hardware size
C) The training dataset size
D) Output randomness
Answer: A

147. What is “instruction tuning”?
A) Fine-tuning AI models on datasets consisting of instructions and their responses to improve following prompts
B) Training the AI from scratch
C) Reducing prompt length
D) Increasing token limits
Answer: A

148. How does “prompt paraphrasing” improve prompt effectiveness?
A) By testing different wording to see which gets better AI responses
B) By encrypting prompts
C) By shortening prompts
D) By randomizing tokens
Answer: A

149. What is “output formatting” in prompts?
A) Specifying how the AI should present its answer (e.g., bullet points, JSON)
B) Compressing the prompt
C) Training AI
D) Increasing token limits
Answer: A

150. Why are “examples” important in few-shot prompting?
A) They demonstrate the expected input-output pattern to the AI
B) They compress data
C) They limit token usage
D) They reduce randomness
Answer: A

151. How can “negative examples” be used in prompts?
A) To show the AI what not to produce in its output
B) To confuse the AI
C) To increase randomness
D) To shorten prompts
Answer: A

152. What is “prompt robustness”?
A) The model’s ability to handle variations in prompt wording without major changes in output
B) Hardware durability
C) Prompt length
D) Model size
Answer: A

153. What is a benefit of “prompt templates”?
A) Consistency and efficiency in prompt creation for repeated tasks
B) Reduces output length
C) Encrypts prompts
D) Increases randomness
Answer: A

154. What is a “meta-prompt”?
A) A prompt that instructs the AI to generate or improve other prompts
B) The final output format
C) A training dataset
D) A hardware component
Answer: A

155. What is “prompt debugging”?
A) The process of testing and refining prompts to fix issues with AI responses
B) Training AI
C) Compressing data
D) Increasing token limits
Answer: A

156. What is the “temperature” value of zero likely to produce?
A) Completely deterministic outputs with no randomness
B) Completely random outputs
C) Longer outputs
D) Shorter outputs
Answer: A

157. What is the impact of increasing temperature?
A) Outputs become more diverse and creative but potentially less accurate
B) Outputs become shorter
C) Outputs become faster
D) No impact
Answer: A

158. How can prompts help reduce “AI hallucination”?
A) By providing clear, precise context and factual information
B) By using vague language
C) By shortening prompts
D) By increasing temperature
Answer: A

159. What is the “context window” measured in?
A) Tokens (units of text)
B) Bytes
C) Characters
D) Bits
Answer: A

160. What is the benefit of using “chain-of-thought” prompting?
A) Encourages the AI to explain reasoning steps, improving complex problem-solving
B) Shortens the prompt
C) Increases randomness
D) Reduces context window
Answer: A
161. What is “prompt personalization”?
A) Tailoring prompts to specific users or contexts for better relevance
B) Using generic prompts for everyone
C) Compressing prompts
D) Increasing output length
Answer: A

162. What does “prompt granularity” refer to?
A) The level of detail provided in a prompt
B) The length of AI output
C) Number of tokens used in training
D) Speed of AI response
Answer: A

163. Which of these improves prompt clarity?
A) Using simple, unambiguous language
B) Using jargon and complex words
C) Making prompts as vague as possible
D) Reducing token count only
Answer: A

164. What does “prompt sequencing” mean?
A) Ordering multiple prompts to guide AI through multi-step reasoning
B) Randomizing prompt order
C) Compressing prompts
D) Increasing token limits
Answer: A

165. How does “temperature” affect the predictability of AI outputs?
A) Higher temperature means less predictable and more creative responses
B) Temperature has no effect
C) Lower temperature means less random output
D) Both A and C
Answer: D

166. What is the benefit of “zero-shot prompting”?
A) The AI can generalize and perform tasks without prior examples
B) AI needs many examples
C) It requires retraining AI
D) It limits output length
Answer: A

167. What is “prompt calibration”?
A) Adjusting prompt wording and parameters to align AI output with desired goals
B) Training the AI model
C) Compressing prompts
D) Increasing token limits
Answer: A

168. How can “negative prompting” influence AI outputs?
A) By explicitly telling AI what content to avoid
B) By increasing output length
C) By compressing prompts
D) By randomizing outputs
Answer: A

169. What is a “prompt injection attack”?
A) A security vulnerability where harmful instructions are inserted into user inputs to manipulate AI outputs
B) A method to optimize prompts
C) An AI training technique
D) A way to shorten prompts
Answer: A

170. How does “prompt chaining” enhance complex task completion?
A) By breaking tasks into smaller prompts where each builds on the previous output
B) By compressing data
C) By randomizing prompts
D) By ignoring previous context
Answer: A

171. What is “prompt tuning”?
A) Fine-tuning prompts to optimize AI response quality without retraining the model
B) Training the AI model
C) Compressing prompts
D) Increasing token limits
Answer: A

172. How can “exemplars” help in prompt engineering?
A) They provide examples within prompts to guide AI behavior and output style
B) They compress data
C) They limit token usage
D) They reduce AI randomness
Answer: A

173. What is the function of a “system prompt”?
A) To set initial rules or behavior guidelines for AI before user interaction
B) To specify user questions
C) To compress prompts
D) To control output length
Answer: A

174. Why is it important to understand a model’s “context window”?
A) To ensure prompts plus expected outputs fit within the model’s token limit
B) To increase randomness
C) To reduce training data
D) To speed up AI
Answer: A

175. What is “interactive prompting”?
A) Dynamically adjusting prompts based on AI responses and user feedback during a session
B) Writing fixed prompts only
C) Ignoring AI outputs
D) Compressing prompts
Answer: A

176. What effect does increasing “max tokens” have on AI output?
A) Allows longer generated responses
B) Reduces AI speed
C) Compresses the prompt
D) Limits training data
Answer: A

177. What is the key goal of “prompt debugging”?
A) To identify and fix issues causing unsatisfactory AI responses
B) To train the AI
C) To increase output length
D) To reduce randomness
Answer: A

178. What is “meta-prompting”?
A) Having the AI generate or improve its own prompts for better results
B) Compressing prompts
C) Training AI models
D) Encrypting prompts
Answer: A

179. What does “prompt compression” mean?
A) Reducing prompt length while preserving meaning and context
B) Encrypting prompts
C) Increasing output randomness
D) Training AI
Answer: A

180. How can “prompt specificity” improve AI performance?
A) By providing detailed and clear instructions, reducing ambiguity
B) By making prompts shorter
C) By avoiding context
D) By randomizing tokens
Answer: A

181. What is a common cause of “prompt hallucination”?
A) Vague or insufficient context leading AI to generate inaccurate or fabricated info
B) Clear instructions
C) Using many examples
D) Increasing temperature
Answer: A

182. How does “temperature = 1” typically affect AI outputs?
A) Balances randomness and coherence in generation
B) Makes outputs deterministic
C) Makes outputs very repetitive
D) Truncates outputs
Answer: A

183. What is the benefit of “prompt modularity”?
A) Creating reusable prompt components to build complex prompts easily
B) Increasing randomness
C) Reducing token limits
D) Training AI faster
Answer: A

184. What is the impact of “few-shot prompting”?
A) Helps AI generalize tasks by giving a few examples in the prompt
B) Reduces output quality
C) Increases training time
D) Compresses data
Answer: A

185. How can you prevent “prompt injection” vulnerabilities?
A) By sanitizing user inputs and limiting commands in prompts
B) By increasing token size
C) By randomizing outputs
D) By shortening prompts
Answer: A

186. What does “prompt annotation” involve?
A) Adding explanations or metadata to prompts to clarify intent for better AI understanding
B) Compressing prompts
C) Increasing randomness
D) Training AI
Answer: A

187. What is “output conditioning” in prompt engineering?
A) Directing AI to produce outputs with specific traits or formats
B) Compressing outputs
C) Training AI
D) Increasing token count
Answer: A

188. What is “prompt iteration”?
A) Repeatedly refining prompts based on AI responses to improve results
B) Writing only one prompt
C) Randomizing tokens
D) Compressing prompts
Answer: A

189. How does “prompt paraphrasing” assist prompt engineering?
A) Testing different wording to find the most effective prompt version
B) Encrypting prompts
C) Compressing prompts
D) Increasing output length
Answer: A

190. What is the advantage of “prompt templates”?
A) Efficiency and consistency in creating similar prompts for repeated tasks
B) Reducing AI creativity
C) Increasing randomness
D) Training AI models
Answer: A
191. What is “prompt robustness”?
A) The AI’s ability to handle slight changes in prompt wording without major output changes
B) The AI’s hardware durability
C) The length of the prompt
D) The speed of AI response
Answer: A

192. Which of the following is a good practice in prompt engineering?
A) Using explicit instructions and examples
B) Using vague language
C) Keeping prompts as short as possible, even if unclear
D) Avoiding context
Answer: A

193. What does “prompt conditioning” enable?
A) Guiding AI to produce responses with specific style or constraints
B) Increasing AI randomness
C) Compressing data
D) Training AI models
Answer: A

194. What role do “exemplars” play in few-shot prompting?
A) Provide examples to teach AI the task format and expected response
B) Compress prompts
C) Increase temperature
D) Reduce output length
Answer: A

195. What is a drawback of very long prompts?
A) They may exceed the model’s context window, causing truncation
B) AI generates more accurate outputs
C) AI responds faster
D) They always improve performance
Answer: A

196. How can “prompt injection” attacks be mitigated?
A) By sanitizing user inputs and avoiding directly concatenating untrusted input into prompts
B) Increasing temperature
C) Increasing prompt length
D) Using vague prompts
Answer: A

197. What is the “context window” of a language model?
A) The maximum number of tokens the model can consider at once
B) The size of the model’s training data
C) The time it takes for AI to respond
D) The number of output tokens only
Answer: A

198. What is the effect of “temperature” being set to zero?
A) Outputs become deterministic and repeatable
B) Outputs become random
C) Outputs are shorter
D) Outputs are longer
Answer: A

199. What is the purpose of “chain-of-thought prompting”?
A) To encourage AI to explain reasoning step-by-step, improving complex problem solving
B) To reduce prompt length
C) To randomize outputs
D) To truncate responses
Answer: A

200. What is “instruction tuning”?
A) Fine-tuning models to better follow explicit instructions in prompts
B) Increasing prompt length
C) Compressing prompts
D) Increasing output randomness
Answer: A

201. How can “output formatting” be controlled via prompts?
A) By specifying the desired format like JSON, bullet points, or paragraphs in the prompt
B) By compressing the prompt
C) By training the AI
D) By increasing token count
Answer: A

202. What is the benefit of “interactive prompting”?
A) Allows iterative refinement of prompts based on previous AI responses
B) Limits prompt length
C) Speeds up model training
D) Randomizes output
Answer: A

203. What does “negative prompting” mean?
A) Specifying what AI should avoid in its output
B) Training AI to output negatives only
C) Reducing prompt length
D) Increasing randomness
Answer: A

204. What is “prompt paraphrasing” used for?
A) Testing different ways of wording a prompt to find the most effective one
B) Encrypting prompts
C) Compressing data
D) Increasing output length
Answer: A

205. How does “prompt chaining” benefit complex tasks?
A) By breaking tasks into a sequence of simpler prompts with incremental context
B) By compressing data
C) By randomizing prompt order
D) By ignoring previous context
Answer: A

206. What is a key feature of “few-shot prompting”?
A) Including a few examples to guide AI’s responses
B) Providing no examples
C) Training the model from scratch
D) Using only a single word prompt
Answer: A

207. What is “prompt tuning”?
A) The process of optimizing prompts for better AI output without retraining the whole model
B) Training a model on prompt data
C) Compressing prompts
D) Randomizing tokens
Answer: A

208. How does “prompt length” affect model performance?
A) Needs to be balanced—not too short to lack context, not too long to exceed context window
B) Always longer is better
C) Always shorter is better
D) No effect
Answer: A

209. Why is “clarity” important in prompt engineering?
A) To reduce ambiguity and improve relevance and accuracy of AI responses
B) To increase output randomness
C) To shorten outputs
D) To increase token limits
Answer: A

210. What is “prompt automation”?
A) Using tools or AI systems to generate or improve prompts automatically
B) Writing prompts manually only
C) Compressing prompts
D) Increasing token limits
Answer: A

211. What is the impact of “excessively vague prompts”?
A) Can lead to irrelevant or nonsensical AI outputs
B) Improves accuracy
C) Shortens output length
D) Improves speed
Answer: A

212. What is “system-level prompting”?
A) Providing global instructions that define AI behavior for a session or interaction
B) User query only
C) Compressing prompts
D) Training AI
Answer: A

213. What is the effect of “increasing max tokens”?
A) Allows longer AI-generated outputs
B) Shortens outputs
C) Increases randomness
D) Compresses prompts
Answer: A

214. What is “prompt personalization”?
A) Customizing prompts for specific users, domains, or contexts for better outputs
B) Using generic prompts only
C) Compressing prompts
D) Increasing randomness
Answer: A

215. How can “prompt annotation” help?
A) By adding metadata or comments to clarify prompt intent for developers or systems
B) By compressing data
C) By increasing output randomness
D) By training AI
Answer: A

216. What does “prompt iteration” involve?
A) Testing and refining prompts repeatedly to improve output quality
B) Writing a single prompt only
C) Randomizing tokens
D) Compressing data
Answer: A

217. What is the role of “exemplars” in prompt engineering?
A) Demonstrate examples of desired input-output pairs to guide AI
B) Compress data
C) Reduce token count
D) Randomize outputs
Answer: A

218. What is “meta-prompting”?
A) Using AI to generate or improve prompts autonomously
B) Encrypting prompts
C) Training AI
D) Increasing randomness
Answer: A

219. How does “temperature” affect output diversity?
A) Higher temperature increases diversity; lower decreases it
B) Temperature does not affect diversity
C) Lower temperature increases diversity
D) Temperature only affects output length
Answer: A

220. What is the best way to handle “prompt hallucination”?
A) Providing detailed, clear, and fact-based prompts with constraints
B) Using vague prompts
C) Increasing randomness
D) Reducing token limit
Answer: A
221. What does “contextual priming” in prompt engineering mean?
A) Including background information in the prompt to influence AI output
B) Reducing prompt length
C) Increasing randomness
D) Compressing tokens
Answer: A

222. Which of the following is NOT a typical method to improve prompt quality?
A) Adding clear instructions
B) Providing examples
C) Increasing model size
D) Specifying output format
Answer: C

223. What is a “prompt template”?
A) A reusable prompt structure with placeholders for dynamic content
B) A compressed prompt
C) A training dataset
D) A hardware specification
Answer: A

224. How can “few-shot prompting” help reduce hallucinations?
A) By showing the model correct examples to follow
B) By increasing randomness
C) By shortening the prompt
D) By training the model
Answer: A

225. What is the effect of “prompt specificity”?
A) Leads to more accurate and relevant AI responses
B) Increases output randomness
C) Reduces token limit
D) Shortens AI outputs
Answer: A

226. What does “prompt truncation” cause?
A) Loss of important context resulting in degraded AI output
B) Improved accuracy
C) Increased creativity
D) Longer outputs
Answer: A

227. What is “interactive prompting” often used for?
A) Refining prompts dynamically based on AI responses in real time
B) Writing fixed prompts only
C) Compressing prompts
D) Increasing token count
Answer: A

228. Why might “negative examples” be included in a prompt?
A) To illustrate undesired responses and steer AI away from them
B) To confuse AI
C) To shorten prompt length
D) To increase randomness
Answer: A

229. What is a potential problem with “prompt injection”?
A) Malicious users manipulating AI outputs by inserting harmful instructions
B) Improved prompt efficiency
C) Faster response times
D) Increased creativity
Answer: A

230. How does “prompt chaining” improve AI task handling?
A) By breaking down complex tasks into sequential smaller prompts
B) By randomizing prompt order
C) By reducing context window
D) By compressing data
Answer: A

231. What does “meta-prompting” involve?
A) Asking AI to generate or improve prompts itself
B) Encrypting prompts
C) Training models
D) Reducing token limits
Answer: A

232. What is a key consideration in “prompt tuning”?
A) Adjusting prompts without modifying the underlying model
B) Retraining the entire model
C) Compressing data
D) Randomizing outputs
Answer: A

233. What is the role of “exemplars” in prompt engineering?
A) Demonstrating example inputs and outputs within prompts to guide AI
B) Encrypting prompts
C) Compressing data
D) Increasing token count
Answer: A

234. Which parameter is adjusted to control randomness in output?
A) Temperature
B) Learning rate
C) Batch size
D) Context window
Answer: A

235. What is the best way to specify output format in a prompt?
A) Explicitly describe desired output style, such as JSON or bullet points
B) Use vague instructions
C) Leave format unspecified
D) Reduce prompt length
Answer: A

236. How can “prompt paraphrasing” benefit prompt design?
A) Testing different wording to find the most effective phrasing
B) Encrypting prompts
C) Compressing data
D) Reducing output length
Answer: A

237. What happens when the total input plus output tokens exceed the model’s context window?
A) Older parts of the conversation or prompt get truncated
B) AI crashes
C) Output shortens automatically
D) Output randomness increases
Answer: A

238. What does “prompt modularity” allow?
A) Reusing components of prompts for different tasks
B) Increasing randomness
C) Compressing data
D) Increasing token count
Answer: A

239. Why is “clarity” important in prompt writing?
A) To reduce ambiguity and improve relevance of AI responses
B) To increase output randomness
C) To shorten outputs
D) To increase token limits
Answer: A

240. What is “chain-of-thought” prompting designed to improve?
A) The AI’s reasoning and problem-solving capabilities
B) Output length
C) Output randomness
D) Token efficiency
Answer: A

241. How can “negative prompting” be used?
A) To specify content or styles the AI should avoid
B) To confuse the AI
C) To increase output randomness
D) To shorten prompt length
Answer: A

242. What is the risk of “excessive prompt length”?
A) Exceeding the model’s context window, leading to loss of information
B) AI producing better answers
C) Faster responses
D) Increasing randomness
Answer: A

243. What is “instruction tuning”?
A) Training or fine-tuning a model to better understand and follow instructions
B) Compressing prompts
C) Increasing output randomness
D) Reducing token count
Answer: A

244. Why is “system prompt” important?
A) It sets the AI’s overall behavior and tone for interactions
B) It compresses prompts
C) It shortens outputs
D) It randomizes responses
Answer: A

245. What is a “prompt injection attack”?
A) When a user inputs malicious instructions to manipulate AI behavior
B) A way to optimize prompts
C) A training technique
D) A data compression method
Answer: A

246. What is the advantage of “interactive prompting”?
A) Allows refining prompts dynamically for better output
B) Fixes prompt length
C) Compresses prompts
D) Randomizes tokens
Answer: A

247. How do “exemplars” improve AI output?
A) They demonstrate the pattern or format expected from the AI
B) They compress data
C) They increase output randomness
D) They reduce token limits
Answer: A

248. What is “prompt iteration”?
A) Repeatedly refining and testing prompts for improved performance
B) Writing a single prompt only
C) Randomizing tokens
D) Compressing data
Answer: A

249. Why is “temperature” important in prompt engineering?
A) It controls the creativity and randomness of AI responses
B) It changes model size
C) It compresses prompts
D) It increases token limits
Answer: A

250. How does “few-shot prompting” differ from “zero-shot prompting”?
A) Few-shot includes examples; zero-shot does not
B) Zero-shot includes examples; few-shot does not
C) Both require training the model
D) Neither includes examples
Answer: A
251. What is “prompt sensitivity”?
A) How small changes in a prompt affect the AI’s output
B) The length of the prompt
C) The AI model size
D) The number of tokens generated
Answer: A

252. What is a “context window” limitation in large language models?
A) The maximum number of tokens the model can process in one input
B) The number of layers in the neural network
C) The maximum output length only
D) The training dataset size
Answer: A

253. How does “explicit instruction” improve AI prompt responses?
A) By clearly stating what the AI should do, reducing ambiguity
B) By making prompts shorter
C) By increasing randomness
D) By compressing data
Answer: A

254. What is the purpose of “prompt decomposition”?
A) Breaking a complex task into simpler prompts for better processing
B) Compressing prompts
C) Randomizing output
D) Increasing token limits
Answer: A

255. Which technique is commonly used to reduce hallucination in AI outputs?
A) Providing detailed and precise prompts with examples
B) Using vague prompts
C) Increasing temperature to max
D) Shortening outputs
Answer: A

256. What is “prompt chaining” used for?
A) To perform multi-step reasoning by linking outputs from one prompt as input to the next
B) To randomize outputs
C) To shorten prompts
D) To increase token count
Answer: A

257. What is the benefit of “few-shot prompting” compared to zero-shot prompting?
A) It provides examples which improve accuracy on tasks
B) It shortens prompt length
C) It compresses data
D) It randomizes output
Answer: A

258. How does “prompt tuning” differ from model fine-tuning?
A) Prompt tuning optimizes prompts without changing the underlying model weights
B) It involves retraining the model
C) It compresses prompts
D) It reduces output length
Answer: A

259. What does “temperature” set to a high value do?
A) Increases randomness and creativity in AI output
B) Makes outputs deterministic
C) Shortens responses
D) Reduces vocabulary
Answer: A

260. Why is “prompt specificity” important?
A) Specific prompts guide the AI toward precise and relevant outputs
B) Specific prompts reduce output length
C) Specific prompts increase randomness
D) Specific prompts compress data
Answer: A

261. What is the risk of using “ambiguous prompts”?
A) AI might generate irrelevant or incorrect answers
B) AI generates better output
C) Output becomes shorter
D) Output randomness decreases
Answer: A

262. What is “negative prompting”?
A) Explicitly telling AI what not to include in the response
B) Training AI on negative data only
C) Shortening prompts
D) Increasing output randomness
Answer: A

263. What is a “system prompt”?
A) Instructions that set the AI’s overall behavior during a session
B) A user query
C) A data compression method
D) A training technique
Answer: A

264. How does “interactive prompting” enhance AI usability?
A) Allows users to refine prompts and get better results iteratively
B) Fixes prompt length
C) Compresses data
D) Randomizes output
Answer: A

265. What is “prompt paraphrasing”?
A) Rewording a prompt to find more effective ways of eliciting desired responses
B) Encrypting prompts
C) Compressing prompts
D) Increasing token counts
Answer: A

266. What does “output conditioning” mean in prompt engineering?
A) Directing AI to produce outputs with specific traits or formats
B) Compressing outputs
C) Increasing randomness
D) Training AI
Answer: A

267. What is “prompt annotation”?
A) Adding comments or metadata to clarify prompt intent
B) Compressing prompts
C) Encrypting prompts
D) Increasing output length
Answer: A

268. How does “temperature” affect determinism of output?
A) Lower temperature leads to more deterministic outputs
B) Temperature does not affect determinism
C) Higher temperature makes output deterministic
D) Temperature shortens outputs
Answer: A

269. What is the role of “exemplars” in prompts?
A) Providing examples to guide the AI’s responses
B) Compressing data
C) Encrypting prompts
D) Reducing token count
Answer: A

270. What does “prompt iteration” involve?
A) Continuously refining prompts to improve AI response quality
B) Writing only one prompt
C) Compressing prompts
D) Randomizing tokens
Answer: A

271. Why is “prompt clarity” essential?
A) It reduces ambiguity and improves relevance and accuracy of AI responses
B) It increases output randomness
C) It shortens output length
D) It increases token count
Answer: A

272. What is “prompt modularity”?
A) Creating reusable prompt components to build complex prompts efficiently
B) Increasing randomness
C) Compressing data
D) Training AI models
Answer: A

273. What is a “prompt injection attack”?
A) Malicious input designed to manipulate AI output behavior
B) A method for improving prompts
C) A training technique
D) A data compression method
Answer: A

274. What is the benefit of “chain-of-thought prompting”?
A) Helps AI reason through problems step-by-step for better accuracy
B) Shortens output
C) Increases randomness
D) Compresses prompts
Answer: A

275. What is “instruction tuning”?
A) Fine-tuning AI models to better follow user instructions
B) Compressing prompts
C) Increasing output randomness
D) Reducing token limits
Answer: A

276. Why include “system-level prompts”?
A) To set overall rules and behavior for AI interactions
B) To shorten user prompts
C) To increase output randomness
D) To compress data
Answer: A

277. How does “prompt compression” affect prompts?
A) Reduces prompt length while preserving meaning
B) Encrypts prompts
C) Randomizes outputs
D) Trains AI models
Answer: A

278. What does “prompt personalization” involve?
A) Tailoring prompts to specific users or contexts
B) Using generic prompts
C) Compressing prompts
D) Increasing output randomness
Answer: A

279. What is the effect of “excessively long prompts”?
A) They may exceed context window and cause truncation
B) They always improve output
C) They shorten output length
D) They reduce AI creativity
Answer: A

280. How does “max tokens” parameter influence AI outputs?
A) Controls the maximum length of the AI-generated response
B) Compresses prompts
C) Increases randomness
D) Reduces context window
Answer: A
281. What is “prompt diversity”?
A) Using varied prompt structures to achieve a range of AI outputs
B) Keeping prompts uniform
C) Reducing output randomness
D) Compressing prompts
Answer: A

282. Why is “temperature” adjustment useful in generative AI?
A) It controls the creativity versus determinism trade-off in outputs
B) It controls model size
C) It compresses prompts
D) It limits token count
Answer: A

283. What does “contextual relevance” in prompts ensure?
A) The prompt includes necessary background for accurate AI responses
B) The prompt is as short as possible
C) The output is randomized
D) The prompt is encrypted
Answer: A

284. What is the effect of “prompt ambiguity”?
A) It can cause the AI to produce vague or irrelevant answers
B) It improves AI accuracy
C) It shortens outputs
D) It increases output speed
Answer: A

285. What is the purpose of “output constraints” in prompt engineering?
A) To limit output length, format, or content for desired results
B) To compress outputs
C) To increase randomness
D) To train AI models
Answer: A

286. What is “zero-shot prompting”?
A) Asking AI to perform a task without any examples
B) Providing many examples in the prompt
C) Training the model on a new dataset
D) Compressing the prompt
Answer: A

287. What does “prompt calibration” refer to?
A) Adjusting prompts to achieve consistent and accurate outputs
B) Compressing prompts
C) Encrypting data
D) Increasing randomness
Answer: A

288. How can “prompt length” negatively impact AI performance?
A) If too long, it may exceed model context limits and lose important information
B) Longer prompts always improve performance
C) Prompt length has no effect
D) Shorter prompts confuse the AI
Answer: A

289. What is the function of “dynamic placeholders” in prompt templates?
A) To insert variable content dynamically at runtime
B) To compress prompts
C) To encrypt prompts
D) To randomize output
Answer: A

290. What is a “prompt feedback loop”?
A) Using AI outputs to iteratively refine and improve prompts
B) Compressing data
C) Encrypting prompts
D) Randomizing tokens
Answer: A

291. How does “meta-learning” relate to prompt engineering?
A) Models learn to generalize instructions better from varied prompts
B) It compresses prompts
C) It encrypts data
D) It randomizes outputs
Answer: A

292. What is “prompt anchoring”?
A) Providing a fixed reference point or example in a prompt to guide AI
B) Compressing prompts
C) Increasing randomness
D) Shortening outputs
Answer: A

293. What is the advantage of “progressive prompting”?
A) Gradually increasing complexity or detail in prompts for better AI understanding
B) Compressing prompts
C) Randomizing output
D) Reducing token count
Answer: A

294. What is “few-shot prompting” especially useful for?
A) Teaching AI to perform tasks with limited examples
B) Training the model from scratch
C) Compressing data
D) Encrypting prompts
Answer: A

295. Why might “negative examples” be included in a prompt?
A) To instruct AI what to avoid generating
B) To confuse AI
C) To reduce prompt length
D) To increase randomness
Answer: A

296. What does “prompt ensembling” involve?
A) Combining multiple prompt variants to generate diverse outputs
B) Compressing data
C) Encrypting prompts
D) Randomizing tokens
Answer: A

297. What is the impact of “high temperature” on AI-generated text?
A) Produces more creative and varied responses
B) Produces deterministic outputs
C) Shortens responses
D) Reduces vocabulary
Answer: A

298. What is “prompt sanitization”?
A) Removing malicious or harmful content from user inputs before using them in prompts
B) Compressing prompts
C) Encrypting prompts
D) Increasing token counts
Answer: A

299. What is a “prompt scaffold”?
A) Structured prompt design that supports step-by-step reasoning
B) Compressing prompts
C) Encrypting data
D) Randomizing outputs
Answer: A

300. What is the main goal of “instruction tuning”?
A) Making AI better at understanding and following natural language instructions
B) Compressing prompts
C) Increasing output randomness
D) Reducing token limits
Answer: A

LEAVE A REPLY

Please enter your comment!
Please enter your name here