LangChain MCQ

1. What is LangChain primarily used for?
A) Building blockchain applications
B) Developing applications powered by Large Language Models (LLMs)
C) Creating mobile applications
D) Designing databases
Answer: B) Developing applications powered by Large Language Models (LLMs)

2. Which of the following is NOT a core component of LangChain?
A) Chains
B) Agents
C) Prompts
D) SQL databases
Answer: D) SQL databases

3. In LangChain, what is a “Chain”?
A) A sequence of calls to LLMs and other components to perform a task
B) A data storage method
C) A user interface component
D) A cryptographic protocol
Answer: A) A sequence of calls to LLMs and other components to perform a task
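
The "chain" idea can be illustrated without LangChain at all: a minimal sketch in plain Python where each step's output feeds the next, with a hypothetical fake_llm standing in for a real model call.

```python
# Minimal illustration of the "chain" concept: a pipeline of steps where
# each step's output becomes the next step's input.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; any callable would do here.
    return f"SUMMARY({prompt})"

def extract_step(document: str) -> str:
    # Step 1: extract the part we care about (here, just the first line).
    return document.splitlines()[0]

def summarize_step(text: str) -> str:
    # Step 2: hand the extracted text to the (fake) LLM for summarization.
    return fake_llm(f"Summarize: {text}")

def run_chain(document: str, steps) -> str:
    result = document
    for step in steps:
        result = step(result)
    return result

print(run_chain("First line.\nSecond line.", [extract_step, summarize_step]))
# → SUMMARY(Summarize: First line.)
```

This mirrors the example in the explanations below: first extract information from a document, then ask the model to summarize it.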

4. What role do “Agents” play in LangChain?
A) They manage data storage
B) They autonomously decide which actions to take based on user input
C) They provide front-end UI components
D) They handle security and authentication
Answer: B) They autonomously decide which actions to take based on user input

5. Which of the following can LangChain integrate with to provide knowledge retrieval?
A) Vector databases
B) Blockchain ledgers
C) Email servers
D) Operating systems
Answer: A) Vector databases

6. What is a “Prompt Template” in LangChain?
A) A tool to format the output from an LLM
B) A predefined structure for inputs to an LLM
C) A database query language
D) A user interface design
Answer: B) A predefined structure for inputs to an LLM
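
Conceptually, a prompt template is a reusable input structure with placeholders. A sketch of the idea using plain str.format (the template text and variable names are illustrative, not LangChain's API):

```python
# A prompt template: fixed instructions plus placeholders filled at call time.
TEMPLATE = "You are a helpful assistant. Translate the following text to {language}:\n{text}"

def format_prompt(language: str, text: str) -> str:
    # Fill the placeholders to produce the final LLM input.
    return TEMPLATE.format(language=language, text=text)

prompt = format_prompt("French", "Good morning")
print(prompt)
```

The same structure can be reused with different inputs, which is exactly what makes templates easier to maintain than raw prompt strings.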

7. How does LangChain help in handling external data sources?
A) It does not support external data sources
B) Through document loaders and retrievers that ingest and query external documents
C) By providing in-built databases only
D) By only supporting static data files
Answer: B) Through document loaders and retrievers that ingest and query external documents

8. Which programming language is LangChain primarily developed in?
A) Java
B) Python
C) C++
D) Ruby
Answer: B) Python

9. What is a common use case for LangChain?
A) Automating email responses using LLMs
B) Real-time multiplayer gaming
C) Designing graphics editors
D) Managing hardware resources
Answer: A) Automating email responses using LLMs

10. Which of the following best describes the “Memory” module in LangChain?
A) It stores code snippets for reuse
B) It allows an application to maintain state or context between LLM calls
C) It is a caching mechanism for database queries
D) It handles user authentication tokens
Answer: B) It allows an application to maintain state or context between LLM calls

11. What is the purpose of “Document Loaders” in LangChain?
A) Load and parse documents into a format usable by the chain
B) Generate new documents from scratch
C) Manage user permissions for documents
D) Encrypt documents
Answer: A) Load and parse documents into a format usable by the chain

12. What does an “LLM Wrapper” do in LangChain?
A) Encrypts language model outputs
B) Provides a standardized interface to interact with various LLM APIs
C) Creates visualizations for LLM outputs
D) Stores LLM results in a database
Answer: B) Provides a standardized interface to interact with various LLM APIs

13. How does LangChain typically connect to external APIs or tools?
A) Using built-in API connectors
B) Through “Tool” abstractions used by Agents
C) By manually coding HTTP requests outside of LangChain
D) It does not support external APIs
Answer: B) Through “Tool” abstractions used by Agents

14. What is the main advantage of using “Vector Stores” in LangChain?
A) Fast similarity search for retrieving relevant documents or data
B) Storing blockchain transactions
C) Saving user interface components
D) Encrypting data
Answer: A) Fast similarity search for retrieving relevant documents or data

15. What feature allows LangChain to maintain a conversation context across multiple turns?
A) Prompt Templates
B) Memory modules
C) Agents
D) Chains
Answer: B) Memory modules

Quick Explanations:

Chains: Think of chains as workflows connecting multiple components—LLMs, tools, APIs—in a sequence to fulfill a task. For example, first extract info from a document, then ask the LLM to summarize it.
Agents: Agents are intelligent decision-makers that can choose actions based on user inputs and context, often calling different tools or chains dynamically.
Memory: This module keeps track of conversation history or previous interactions, enabling the app to maintain context over multiple turns or queries.
Vector Stores: These are databases optimized for storing embeddings (numerical vector representations of text or data), which allow efficient similarity search for document retrieval.
Tools: Tools are external utilities or APIs the LangChain agent can call—like searching a database, querying a knowledge base, or running code.
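
To make the vector-store idea concrete, here is a toy implementation in plain Python: texts are stored with their embeddings, and queries are ranked by cosine similarity. The character-frequency "embedding" is purely illustrative; real stores (FAISS, Pinecone) use learned embedding models and approximate nearest-neighbor indexes.

```python
import math

def embed(text: str) -> list[float]:
    # Stand-in embedding: a character-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    def __init__(self):
        self.items = []  # (text, embedding) pairs

    def add(self, text: str):
        self.items.append((text, embed(text)))

    def similarity_search(self, query: str, k: int = 1):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.add("cats and dogs")
store.add("stock market report")
print(store.similarity_search("dog", k=1))  # best match: "cats and dogs"
```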

16. What is the primary purpose of a “Retriever” in LangChain?
A) To execute code snippets
B) To fetch relevant documents or data based on a query
C) To train the LLM
D) To monitor system health
Answer: B) To fetch relevant documents or data based on a query

17. Which of the following is a commonly used vector store supported by LangChain?
A) MySQL
B) Pinecone
C) MongoDB
D) SQLite
Answer: B) Pinecone

18. What kind of tasks can LangChain’s “Prompt Templates” help automate?
A) Formatting data for LLM input
B) Data encryption
C) Network packet analysis
D) Compiling code
Answer: A) Formatting data for LLM input

19. In LangChain, how can developers customize the behavior of an LLM call?
A) By modifying the backend server
B) By adjusting parameters like temperature, max tokens, and stop sequences in the prompt or LLM wrapper
C) By changing the operating system
D) By using SQL queries
Answer: B) By adjusting parameters like temperature, max tokens, and stop sequences in the prompt or LLM wrapper

20. What does “zero-shot prompting” refer to in LangChain applications?
A) Prompting without any example demonstrations for the LLM
B) Prompting with thousands of examples
C) Ignoring prompts altogether
D) Prompting with database queries
Answer: A) Prompting without any example demonstrations for the LLM

21. Which LangChain component is best suited for chaining multiple prompt calls with varying inputs and outputs?
A) Memory
B) Chain
C) Retriever
D) Agent
Answer: B) Chain

22. How does LangChain handle large documents or datasets that cannot fit in one prompt?
A) It doesn’t support large documents
B) It splits documents into chunks and processes them individually
C) It compresses the document into a zip file
D) It uploads the document to the LLM server
Answer: B) It splits documents into chunks and processes them individually
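
The chunking strategy behind that answer can be sketched in a few lines. This toy splitter counts characters for simplicity (real splitters often count tokens) and overlaps consecutive chunks so that context is not lost at chunk boundaries:

```python
def split_text(text: str, chunk_size: int, overlap: int) -> list[str]:
    # Slide a window of chunk_size over the text, stepping back by
    # `overlap` characters each time so adjacent chunks share context.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

pieces = split_text("abcdefghij", chunk_size=4, overlap=1)
print(pieces)  # → ['abcd', 'defg', 'ghij', 'j']
```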

23. Which LangChain module helps integrate Python code execution during processing?
A) Agent with Tools
B) Chain
C) Memory
D) Prompt Template
Answer: A) Agent with Tools

24. LangChain can be used to build applications in which of the following domains?
A) Customer support chatbots
B) Content summarization
C) Knowledge retrieval systems
D) All of the above
Answer: D) All of the above

25. What is the advantage of using LangChain over directly calling LLM APIs?
A) LangChain provides reusable components and orchestration for complex workflows
B) LangChain is faster than any API
C) LangChain provides hardware acceleration
D) LangChain encrypts all LLM outputs by default
Answer: A) LangChain provides reusable components and orchestration for complex workflows

MCQs on LangChain

1. What does LangChain primarily help you build?
A) Mobile apps
B) Large Language Model (LLM) powered applications
C) Video games
D) Operating systems
Answer: B

2. Which programming language is LangChain mainly written in?
A) Java
B) Python
C) C++
D) JavaScript
Answer: B

3. In LangChain, what is a “Chain”?
A) A single API call
B) A sequence of calls and operations involving LLMs and other components
C) A database
D) A user interface module
Answer: B

4. What does an “Agent” do in LangChain?
A) Stores data
B) Makes autonomous decisions about which tools or chains to invoke
C) Creates UI elements
D) Encrypts data
Answer: B

5. What is a “Prompt Template” in LangChain?
A) Predefined input formatting for LLM prompts
B) An output formatter
C) A database query
D) A visualization tool
Answer: A

6. Which LangChain component allows keeping track of context or history between calls?
A) Chains
B) Memory
C) Agents
D) Tools
Answer: B

7. What is the main purpose of “Document Loaders”?
A) Load and parse documents for LangChain to process
B) Encrypt documents
C) Visualize documents
D) Delete documents
Answer: A

8. Which of the following is NOT a core concept in LangChain?
A) Agents
B) Chains
C) Tools
D) Blockchain
Answer: D

9. What type of database is commonly used with LangChain for similarity search?
A) Relational databases
B) Vector databases
C) Key-value stores
D) File systems
Answer: B

10. Which of these is a popular vector store supported by LangChain?
A) MySQL
B) Pinecone
C) OracleDB
D) SQLite
Answer: B

11. What is zero-shot prompting?
A) Prompting without any example demonstrations
B) Prompting with multiple examples
C) Prompting by skipping the prompt
D) Prompting with code execution
Answer: A

12. What role do “Tools” serve in LangChain?
A) External APIs or utilities an agent can call
B) User interface components
C) Encryption modules
D) Database connectors
Answer: A

13. How do Agents decide which tool or chain to use?
A) Randomly
B) Based on user input and context, using LLM reasoning
C) Predefined fixed flow
D) Based on time of day
Answer: B

14. Which LangChain component would you use to maintain a conversation over multiple turns?
A) Chains
B) Memory
C) Prompt Templates
D) Document Loaders
Answer: B

15. What does an “LLM Wrapper” do?
A) Wraps calls to different LLM APIs to provide a common interface
B) Stores LLM data
C) Creates UI components
D) Handles database connections
Answer: A

16. What is a common use case of LangChain?
A) Image editing
B) Chatbots and conversational agents
C) Video streaming
D) Operating system development
Answer: B

17. How does LangChain handle documents that are too large to fit in a single prompt?
A) Ignores extra content
B) Splits documents into chunks and processes them separately
C) Compresses the document
D) Converts the document into audio
Answer: B

18. What kind of similarity search does a vector store enable?
A) Exact matching
B) Semantic similarity search using embeddings
C) Random search
D) Binary search
Answer: B

19. How can you customize the behavior of an LLM call in LangChain?
A) By setting parameters like temperature and max tokens
B) By changing the operating system
C) By editing the database schema
D) By modifying network settings
Answer: A

20. What is the role of a Retriever?
A) To retrieve relevant documents or data from a vector store or knowledge base
B) To write new documents
C) To train LLMs
D) To handle user input
Answer: A

21. Which LangChain module enables calling external APIs or running Python code?
A) Chains
B) Agents with Tools
C) Memory
D) Prompt Templates
Answer: B

22. What is a key advantage of using LangChain over calling LLM APIs directly?
A) It offers an orchestration framework to build complex applications
B) It is always cheaper
C) It uses less memory
D) It has a built-in browser
Answer: A

23. How does LangChain help with prompt engineering?
A) By offering Prompt Templates to standardize and reuse prompts
B) By encrypting prompts
C) By converting prompts to code
D) By providing UI elements
Answer: A

24. Which of these is a reason to use memory in LangChain?
A) To store user preferences and conversation history
B) To store images
C) To encrypt data
D) To log errors
Answer: A

25. What does the term “embedding” mean in the context of LangChain?
A) Converting text or data into numerical vectors for semantic understanding
B) Compressing files
C) Encrypting data
D) Creating UI components
Answer: A

26. Which LangChain component is designed to execute a sequence of tasks?
A) Agent
B) Chain
C) Memory
D) Retriever
Answer: B

27. Which of the following is NOT an example of a Tool in LangChain?
A) A calculator API
B) A web search API
C) A UI button
D) A knowledge base query API
Answer: C

28. Can LangChain be integrated with OpenAI’s GPT models?
A) Yes
B) No
Answer: A

29. What is an example of a use case for Agents in LangChain?
A) Autonomous task planning and decision making
B) Data storage
C) UI rendering
D) Image processing
Answer: A

30. How does LangChain support document question answering?
A) By loading documents, splitting them into chunks, embedding, storing in vector stores, and retrieving relevant chunks for the LLM to answer
B) By directly searching Google
C) By converting documents to images
D) By translating documents to code
Answer: A
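
The pipeline named in question 30 (load → split → embed → store → retrieve → answer) can be sketched end to end in plain Python. Retrieval here uses simple word-overlap scoring instead of embeddings, and answer_llm is a stand-in for a real model, so this is a shape-of-the-flow illustration, not a working RAG system:

```python
def split(doc: str) -> list[str]:
    # Split a document into sentence-level chunks.
    return [p.strip() for p in doc.split(".") if p.strip()]

def score(query: str, chunk: str) -> int:
    # Toy relevance score: number of shared words.
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Return the k most relevant chunks.
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def answer_llm(context: str, question: str) -> str:
    # Stand-in for an LLM that answers using the retrieved context.
    return f"Based on '{context}', answering: {question}"

doc = "Paris is the capital of France. FAISS is a vector store"
chunks = split(doc)
context = retrieve("capital of France", chunks)[0]
print(answer_llm(context, "What is the capital of France?"))
```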

31. What does a “Callback Handler” do in LangChain?
A) Monitor or log events during execution, like LLM calls or agent actions
B) Send emails
C) Update databases
D) Manage user sessions
Answer: A

32. Which of these is a supported LangChain “Memory” type?
A) Conversation buffer memory
B) Disk storage memory
C) Cache memory
D) GPU memory
Answer: A

33. What is a “Retriever” typically paired with in LangChain?
A) Vector store
B) UI components
C) Encryption modules
D) Operating system
Answer: A

34. What is the benefit of “Chaining” calls in LangChain?
A) Allows complex workflows by linking multiple steps or calls together
B) Reduces network latency
C) Improves graphics rendering
D) Encrypts data
Answer: A

35. What is “Prompt Injection” in the LangChain context?
A) A technique for formatting prompts
B) A security vulnerability where malicious input overrides the intended instructions in a prompt
C) A UI bug
D) A type of database injection
Answer: B) A security vulnerability where malicious input overrides the intended instructions in a prompt

36. LangChain’s architecture supports integration with which of the following?
A) Various LLM APIs like OpenAI, HuggingFace, Cohere
B) Operating system kernels
C) Graphics processing units only
D) Video games
Answer: A

37. How can LangChain’s “Agents” enhance user interaction?
A) By dynamically choosing which tools or chains to call based on input
B) By encrypting user data
C) By rendering 3D graphics
D) By optimizing network throughput
Answer: A

38. What is a “Document Retriever”?
A) A component that fetches relevant chunks of documents for LLM input
B) A tool for writing documents
C) A type of database
D) A UI element
Answer: A

39. Can LangChain work with local LLM models?
A) Yes
B) No
Answer: A

40. What is the benefit of using a “Prompt Template” over raw prompts?
A) Reusability and easier maintenance of prompt structures
B) Better graphics
C) Faster network speed
D) More storage
Answer: A

41. How does LangChain support multi-step reasoning?
A) By chaining multiple LLM calls or tools to break down complex tasks
B) By encrypting data
C) By rendering UI
D) By managing files
Answer: A

42. What is an example of a “Tool” that an Agent might use?
A) Calculator API
B) Web search API
C) Database query API
D) All of the above
Answer: D

43. Which LangChain module stores and manages conversational context?
A) Memory
B) Chains
C) Agents
D) Tools
Answer: A

44. What kind of applications can be built using LangChain?
A) Chatbots
B) Summarization tools
C) Question answering systems
D) All of the above
Answer: D

45. Which LangChain component helps in splitting documents into smaller pieces?
A) Text Splitter
B) Memory
C) Agent
D) Retriever
Answer: A

46. Which of the following is true about LangChain’s Agents?
A) They require manual specification of all steps
B) They use LLMs to decide actions dynamically
C) They cannot call external APIs
D) They only work with OpenAI models
Answer: B

47. What is an embedding vector?
A) A numeric representation of text that captures semantic meaning
B) A graphical image
C) An encryption key
D) A database index
Answer: A

48. What does LangChain’s “Callback Handler” facilitate?
A) Observability and logging of system events
B) Database management
C) User authentication
D) Memory storage
Answer: A

49. What does “Semantic Search” mean in LangChain?
A) Searching based on the meaning of text rather than exact keywords
B) Searching only in HTML files
C) Searching by file size
D) Searching by creation date
Answer: A

50. What is the recommended way to handle LLM token limits in LangChain?
A) Split inputs into chunks and process separately
B) Ignore the limit
C) Send the entire document as is
D) Convert text to binary
Answer: A

51. What type of memory is designed to keep track of conversation history in LangChain?
A) Buffer Memory
B) Disk Memory
C) Cache Memory
D) GPU Memory
Answer: A
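
The buffer-memory idea is simple to sketch: keep the full dialogue in order and replay it as context on every turn. This mirrors the concept of LangChain's conversation buffer memory, not its exact API:

```python
class BufferMemory:
    def __init__(self):
        self.turns = []  # (speaker, text) pairs, in conversation order

    def save(self, user: str, ai: str):
        # Record one full exchange.
        self.turns.append(("Human", user))
        self.turns.append(("AI", ai))

    def history(self) -> str:
        # Render the whole history as text to prepend to the next prompt.
        return "\n".join(f"{who}: {text}" for who, text in self.turns)

mem = BufferMemory()
mem.save("Hi, I'm Ada.", "Hello Ada!")
mem.save("What's my name?", "Your name is Ada.")
print(mem.history())
```

Because the rendered history is included in the next prompt, the model can answer "What's my name?" even though each LLM call is stateless.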

52. How do LangChain Agents interact with external tools?
A) Agents call tools dynamically based on the task and input
B) Agents ignore external tools
C) Agents only use internal components
D) Agents require manual activation of tools
Answer: A

53. Which LangChain class manages persistent state across multiple calls?
A) Chain
B) Memory
C) Tool
D) Agent
Answer: B

54. How can you add a custom API as a tool for an Agent in LangChain?
A) By implementing a Tool interface that wraps the API
B) By modifying LangChain source code only
C) LangChain does not support custom APIs
D) By creating a new memory module
Answer: A
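
Conceptually, wrapping an API as a tool means giving a function a name and a description the agent's LLM can read when deciding what to invoke. A minimal sketch (lookup_weather is a hypothetical stand-in for a real API call, and this Tool class is illustrative, not LangChain's own):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str          # what the agent refers to the tool by
    description: str   # what the LLM reads to decide when to use it
    func: Callable[[str], str]

def lookup_weather(city: str) -> str:
    # Hypothetical stand-in for a real weather API request.
    return f"Sunny in {city}"

weather_tool = Tool(
    name="weather",
    description="Get the current weather for a city. Input: a city name.",
    func=lookup_weather,
)
print(weather_tool.func("Paris"))  # → Sunny in Paris
```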

55. What is the role of “Retriever” in a LangChain pipeline?
A) Fetch relevant documents or data chunks from knowledge bases or vector stores
B) Store user preferences
C) Visualize output
D) Encrypt documents
Answer: A

56. How does LangChain help with long conversations?
A) Through memory modules that store and recall previous interactions
B) By limiting conversation length
C) By resetting context every turn
D) By compressing conversations
Answer: A

57. Which component is responsible for breaking documents into chunks in LangChain?
A) Text Splitter
B) Memory
C) Chain
D) Agent
Answer: A

58. What programming paradigm best describes LangChain’s architecture?
A) Functional programming
B) Modular and pipeline-based architecture
C) Object-oriented programming only
D) Procedural programming only
Answer: B

59. What is an “Agent Executor” in LangChain?
A) Component that runs an agent, deciding which tools to use step by step
B) Database connector
C) UI Renderer
D) File system
Answer: A

60. How can LangChain agents decide on multiple steps dynamically?
A) By prompting the LLM to plan and choose next steps based on output
B) By fixed step execution
C) By random selection
D) By user input only
Answer: A

61. What is the “LLMChain” class used for?
A) To combine a prompt template and an LLM call into a reusable chain
B) To connect databases
C) To visualize results
D) To manage memory
Answer: A
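
The LLMChain idea, binding a prompt template to a model call so the pair becomes one reusable unit, can be sketched like this (fake_llm stands in for a real model; the class is a concept sketch, not LangChain's implementation):

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return prompt.upper()

class SimpleLLMChain:
    def __init__(self, template: str, llm):
        self.template = template
        self.llm = llm

    def run(self, **variables) -> str:
        # Fill the template, then send the finished prompt to the LLM.
        return self.llm(self.template.format(**variables))

chain = SimpleLLMChain("Tell me a joke about {topic}", fake_llm)
print(chain.run(topic="cats"))  # → TELL ME A JOKE ABOUT CATS
```

The same chain object can be invoked repeatedly with different variables, which is the reuse benefit the question points at.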

62. Which of the following is NOT a type of memory in LangChain?
A) Conversation Buffer Memory
B) Token Memory
C) Summary Memory
D) File System Memory
Answer: D

63. How does “Summary Memory” work in LangChain?
A) Summarizes the conversation history to reduce context length
B) Stores entire conversation without changes
C) Deletes old conversation data
D) Encrypts the conversation
Answer: A

64. Which LangChain feature helps to observe and debug calls made to the LLM?
A) Callback Handlers
B) Memory
C) Chains
D) Tools
Answer: A
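
The callback-handler pattern, hooking into the start and end of each LLM call for logging, looks roughly like this (a concept sketch, not LangChain's exact handler interface):

```python
class LoggingHandler:
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt: str):
        self.events.append(f"start: {prompt}")

    def on_llm_end(self, response: str):
        self.events.append(f"end: {response}")

def call_llm(prompt: str, handler: LoggingHandler) -> str:
    # Fire the start hook, do the (stand-in) model call, fire the end hook.
    handler.on_llm_start(prompt)
    response = prompt[::-1]  # stand-in for a real model call
    handler.on_llm_end(response)
    return response

handler = LoggingHandler()
call_llm("hello", handler)
print(handler.events)  # → ['start: hello', 'end: olleh']
```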

65. What can a “Tool” in LangChain do?
A) Provide additional capabilities like search, calculations, or API calls to an Agent
B) Store conversation history
C) Render UI elements
D) Encrypt data
Answer: A

66. Which method would you use to integrate OpenAI’s GPT API with LangChain?
A) Using the OpenAI LLM wrapper class
B) Direct REST API calls outside LangChain
C) LangChain does not support OpenAI
D) Using a database connector
Answer: A

67. How are “Prompt Templates” useful in LangChain?
A) They allow dynamic generation and reuse of prompts with variable inputs
B) They store user data
C) They run code snippets
D) They connect to databases
Answer: A

68. What is the main challenge LangChain solves for large document processing?
A) Managing token limits of LLMs by chunking and retrieval
B) Encrypting documents
C) Speeding up the network
D) Rendering UI
Answer: A

69. What does “Vector Embedding” do in LangChain?
A) Converts text into a numerical vector that captures semantic meaning
B) Compresses data
C) Encrypts text
D) Converts images
Answer: A

70. Which of the following is NOT a supported vector store integration with LangChain?
A) Pinecone
B) Weaviate
C) MySQL
D) FAISS
Answer: C

71. What is the main advantage of using an Agent over a Chain?
A) Agents can decide dynamically which tool or chain to invoke next
B) Agents are faster
C) Agents don’t require prompts
D) Agents are only used for UI rendering
Answer: A

72. How does LangChain’s “Memory” differ from regular variables?
A) Memory persists across calls and manages conversational context
B) Memory is stored only temporarily
C) Memory encrypts data
D) Memory manages UI states
Answer: A

73. What is a “Callback Handler” used for?
A) To hook into and track events like LLM calls and agent decisions
B) To encrypt data
C) To store conversation memory
D) To connect databases
Answer: A

74. Which LangChain component is responsible for orchestrating multiple calls to LLMs or other APIs?
A) Chain
B) Memory
C) Retriever
D) Tool
Answer: A

75. What is the typical workflow when building a question answering app using LangChain?
A) Load documents → split → embed → store in vector DB → retrieve → answer using LLM
B) Convert documents to images → upload
C) Encrypt documents → store
D) Render UI → fetch data
Answer: A

76. What is “Prompt Injection” in the context of LLM applications?
A) A technique for formatting prompts
B) A security attack where malicious input overrides the intended instructions in a prompt
C) UI bug
D) Database injection
Answer: B

77. How do LangChain Agents improve over basic prompt chains?
A) By enabling dynamic multi-tool usage and decision-making during runtime
B) By being faster
C) By encrypting prompts
D) By limiting token usage
Answer: A

78. Can LangChain be used with HuggingFace transformers?
A) Yes
B) No
Answer: A

79. What is the benefit of chaining LLM calls?
A) Breaking complex tasks into simpler sequential steps
B) Reducing network traffic
C) Encrypting data
D) Rendering UI
Answer: A

80. How does LangChain manage token limits when dealing with large contexts?
A) Uses memory summarization and chunking techniques
B) Ignores limits
C) Sends entire document without modification
D) Converts text to images
Answer: A

81. Which of these is a popular open-source vector store supported by LangChain?
A) FAISS
B) OracleDB
C) MongoDB
D) SQLite
Answer: A

82. How do LangChain Agents decide the next action to take?
A) By generating reasoning chains via LLM outputs
B) By random selection
C) Based on time only
D) Manual input only
Answer: A

83. What does a “Conversation Buffer Memory” keep track of?
A) The full history of the conversation in order
B) Only the last message
C) User authentication tokens
D) Encrypted data
Answer: A

84. What is a common use of “Callback Handlers”?
A) Logging and debugging LLM calls and agent actions
B) Encrypting data
C) Rendering UI
D) Storing documents
Answer: A

85. What is the function of “Chains” in LangChain?
A) To execute multiple LLM calls or other operations in a specified sequence
B) Encrypt data
C) Render user interface
D) Store documents
Answer: A

86. How can LangChain Agents access external data or APIs?
A) By calling Tools connected to APIs
B) They cannot access external data
C) By modifying LLM internals
D) Only via databases
Answer: A

87. Which LangChain class is used to define input-output mappings for chained components?
A) Chain
B) Memory
C) Agent
D) Callback
Answer: A

88. What is the benefit of using Prompt Templates with variables?
A) Allows flexible, reusable prompts that adapt to inputs
B) Encrypts prompts
C) Renders UI components
D) Stores data
Answer: A

89. How do Agents enhance user experience in LangChain apps?
A) By providing interactive, multi-step workflows and tool usage
B) By encrypting data
C) By managing UI layouts
D) By controlling hardware
Answer: A

90. What is a key consideration when integrating multiple LLM calls in LangChain?
A) Managing token limits and context
B) Avoiding UI errors
C) Reducing database load
D) Increasing encryption
Answer: A

91. What is the primary purpose of a vector store in LangChain?
A) Store embeddings for efficient similarity search
B) Store user passwords
C) Manage UI components
D) Encrypt data
Answer: A

92. Which of these is a vector store technology commonly used with LangChain?
A) FAISS
B) MySQL
C) Redis Cache
D) SQLite
Answer: A

93. How are embeddings typically created in LangChain?
A) Using embedding models like OpenAI’s text-embedding-ada-002
B) By hashing text
C) By encrypting text
D) By converting text to images
Answer: A

94. What is the advantage of semantic search over keyword search?
A) Finds results based on meaning, not just exact words
B) Uses less memory
C) Runs faster in all cases
D) Doesn’t require any data preprocessing
Answer: A

95. Which LangChain component is responsible for fetching relevant document chunks based on similarity?
A) Retriever
B) Memory
C) Chain
D) Agent
Answer: A

96. What is a common technique to reduce the size of conversation context in LangChain memory?
A) Summarization memory that condenses prior messages
B) Deleting all history after every call
C) Encrypting context
D) Ignoring token limits
Answer: A

97. What is an “Output Parser” in LangChain?
A) Component to process and extract structured output from LLM responses
B) UI renderer
C) Data encryptor
D) Database manager
Answer: A
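
A common output-parsing task is extracting a structured object from free-form model text. A small sketch that pulls the first JSON object out of a simulated response (the response text is illustrative):

```python
import json

def parse_json_output(response: str) -> dict:
    # Locate the JSON object inside surrounding prose and decode it.
    start = response.index("{")
    end = response.rindex("}") + 1
    return json.loads(response[start:end])

llm_response = 'Sure! Here is the result: {"name": "Ada", "score": 3}'
print(parse_json_output(llm_response))  # → {'name': 'Ada', 'score': 3}
```

Parsing to a dict lets downstream code work with fields directly instead of re-reading raw model text.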

98. Which technique can improve LLM responses by giving explicit examples?
A) Few-shot prompting
B) Zero-shot prompting
C) Encryption
D) UI layout changes
Answer: A

99. What does “Few-shot” prompting mean?
A) Providing a few examples in the prompt to guide the LLM’s output
B) No examples in the prompt
C) Encrypting the prompt
D) Changing the model architecture
Answer: A
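
Building a few-shot prompt is mostly string assembly: worked examples first, then the new query in the same format so the model continues the pattern. The translation pairs below are illustrative:

```python
examples = [
    ("sea otter", "loutre de mer"),
    ("cheese", "fromage"),
]

def few_shot_prompt(examples, query: str) -> str:
    # Render each example in a fixed format, then leave the last
    # completion blank for the model to fill in.
    lines = [f"English: {en}\nFrench: {fr}" for en, fr in examples]
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "bread"))
```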

100. How can you deploy a LangChain application?
A) As a web API, chatbot, or serverless function
B) Only on desktop apps
C) Only on mobile apps
D) Only on embedded devices
Answer: A

101. Which cloud service is commonly used to deploy LangChain applications?
A) AWS Lambda
B) Local files only
C) Desktop programs
D) Mobile apps
Answer: A

102. What is an important consideration when deploying LangChain apps?
A) Handling API rate limits and costs for LLM calls
B) Avoiding any network use
C) Using GPUs for UI rendering
D) Storing all data locally
Answer: A

103. How does LangChain help with “prompt chaining”?
A) By linking multiple prompts and LLM calls sequentially to solve complex tasks
B) Encrypting prompts
C) Rendering UI
D) Managing databases
Answer: A

104. What does “Retrieval-Augmented Generation” (RAG) mean?
A) Using retrieved documents as context to generate better LLM responses
B) Encrypting data
C) Rendering UI elements
D) Storing logs
Answer: A

105. Which of the following is an example of a real-world application of LangChain?
A) Customer service chatbot
B) Image editing tool
C) Video streaming service
D) Operating system
Answer: A

106. How does LangChain handle multi-modal inputs?
A) Currently mainly focused on text, but can integrate with other tools for multi-modal tasks
B) Supports images natively
C) Only supports audio
D) Only supports video
Answer: A

107. Which LangChain feature can be used to maintain state across sessions?
A) Persistent Memory
B) Temporary cache
C) UI component
D) Logging system
Answer: A

108. What is a common method for improving prompt effectiveness in LangChain?
A) Prompt Templates with variables and clear instructions
B) Ignoring token limits
C) Encrypting the prompt
D) Avoiding examples
Answer: A

109. How can LangChain be extended with custom functionality?
A) By creating custom Tools, Chains, or Memory modules
B) Modifying core LangChain code only
C) Only via plugins
D) It cannot be extended
Answer: A

110. What does a “Chain of Thought” prompting encourage?
A) Step-by-step reasoning in LLM output
B) Random answers
C) Skipping reasoning
D) Encrypting output
Answer: A

111. How do you handle token limits for large documents in LangChain?
A) Chunk the document and retrieve relevant chunks dynamically
B) Send the entire document at once
C) Ignore token limits
D) Convert document to image
Answer: A

112. Which vector store supports approximate nearest neighbor search?
A) FAISS
B) MySQL
C) Excel sheets
D) Text files
Answer: A

113. What is a “Toolkit” in LangChain?
A) A collection of tools bundled to solve specific types of problems
B) A UI framework
C) A database
D) An encryption library
Answer: A

114. What is the benefit of using “Agents with Toolkits”?
A) Enables agents to access multiple specialized tools dynamically
B) Speeds up database queries
C) Encrypts all data
D) Simplifies UI rendering
Answer: A

115. What does “embedding” mean when storing documents for search?
A) Representing documents as vectors capturing semantic content
B) Compressing documents
C) Encrypting documents
D) Converting documents to images
Answer: A

116. Which of the following is NOT a typical LangChain use case?
A) Automated code generation
B) Autonomous data analysis
C) 3D video rendering
D) Question answering
Answer: C

117. What’s a good practice when designing LangChain prompt templates?
A) Use clear instructions and placeholders for dynamic inputs
B) Use random text
C) Avoid variables
D) Use very long unstructured prompts
Answer: A

118. How can you improve the accuracy of LangChain outputs?
A) Use retrieval augmented generation (RAG) and refined prompt engineering
B) Increase network speed
C) Encrypt data
D) Use fewer tokens
Answer: A

119. What is the role of “Retriever” in RAG pipelines?
A) Fetch relevant documents to provide context to the LLM
B) Encrypt the query
C) Render the output
D) Store logs
Answer: A

120. What type of embedding models are commonly used with LangChain?
A) Transformer-based embedding models like OpenAI’s embedding APIs
B) Decision trees
C) Linear regressions
D) Support vector machines
Answer: A

121. How does LangChain facilitate fine-tuning prompt design?
A) By modular prompt templates and iterative testing
B) By restricting prompt length
C) By encrypting prompts
D) By disabling prompts
Answer: A

122. What kind of APIs can LangChain Agents use as Tools?
A) Any API exposed over HTTP that returns useful data or performs actions
B) Only LangChain-specific APIs
C) Only database APIs
D) Only UI APIs
Answer: A

123. How does LangChain support multi-turn conversations?
A) By using Memory modules to keep context across turns
B) By resetting context every turn
C) By ignoring previous inputs
D) By encrypting messages
Answer: A

124. What is an example of “zero-shot” prompting?
A) Asking the LLM to perform a task without any examples provided
B) Providing multiple examples
C) Encrypting prompts
D) Using images in prompts
Answer: A

125. What is the best way to handle retrieval from very large knowledge bases in LangChain?
A) Use vector stores with efficient similarity search
B) Use relational databases only
C) Search full text each time
D) Load entire knowledge base into memory
Answer: A

126. What is an “Agent’s scratchpad”?
A) Intermediate memory used by agents to keep track of reasoning and steps
B) User input field
C) UI component
D) Encryption key
Answer: A

127. How does LangChain enable tool usage in agents?
A) By providing interfaces for tools that agents can call via prompts and reasoning
B) By embedding tools inside LLM weights
C) Tools are unrelated to agents
D) Agents cannot call external tools
Answer: A

128. Can LangChain be used for automating workflows?
A) Yes, by chaining calls and using agents for decision-making
B) No
C) Only for UI rendering
D) Only for databases
Answer: A

129. What is the benefit of modular design in LangChain?
A) Easier testing, extension, and maintenance
B) Faster rendering
C) Lower network use
D) Data encryption
Answer: A

130. What is a “Chain-of-Thought” prompt used for?
A) Encouraging step-by-step reasoning in the model’s response
B) Encrypting prompts
C) Limiting output length
D) Storing conversation history
Answer: A

131. What is a “Multi-Agent” system in LangChain?
A) Multiple agents collaborating or competing to solve a task
B) A single agent with one tool
C) Agents that do not communicate
D) UI components
Answer: A

132. What is a benefit of using “Multi-Agent” systems?
A) Can solve complex problems via specialized agents working together
B) Slower processing
C) Less flexible
D) Encrypts data automatically
Answer: A

133. How can custom tools be added to a LangChain Agent?
A) By implementing a Tool interface with a name, description, and function
B) By editing the LangChain core library
C) By disabling agents
D) Tools cannot be customized
Answer: A

134. What should be included in the description of a custom tool?
A) Clear explanation of the tool’s purpose and expected input/output
B) Encryption key
C) UI layout
D) Database schema
Answer: A

135. What is the purpose of a “Toolkit” in LangChain?
A) Bundle multiple tools together for easier agent use
B) Manage UI components
C) Store conversation memory
D) Encrypt API keys
Answer: A

136. How can you handle API rate limits in LangChain deployments?
A) Use caching, retries, and throttling mechanisms
B) Ignore limits
C) Encrypt API keys only
D) Use multiple LLMs simultaneously
Answer: A

137. What is a common best practice when designing prompts for agents?
A) Provide clear instructions and constraints to guide tool usage
B) Use vague prompts
C) Avoid specifying tool names
D) Use very long prompts without structure
Answer: A

138. What does “ReAct” agent architecture combine?
A) Reasoning and acting (tool use) in a single loop
B) Encryption and decryption
C) UI rendering and database queries
D) Caching and logging
Answer: A

139. How does a ReAct agent decide what action to take next?
A) Based on reasoning from the LLM’s output including tool call instructions
B) Randomly
C) Predefined fixed order
D) User input only
Answer: A
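The ReAct loop in question 139 can be sketched in plain Python with a scripted stand-in for the LLM. Everything here (`run_react`, `fake_llm_outputs`, the `Action: tool[input]` format) is illustrative, not LangChain's actual agent parser:

```python
import re

def run_react(fake_llm_outputs, tools, question):
    """Minimal ReAct-style loop: parse 'Action: tool[input]' lines from the
    model output, call the tool, append the observation, stop on 'Final:'."""
    transcript = question
    for output in fake_llm_outputs:          # stands in for repeated LLM calls
        transcript += "\n" + output
        if output.startswith("Final:"):
            return output[len("Final:"):].strip()
        m = re.match(r"Action: (\w+)\[(.*)\]", output)
        if m:
            name, arg = m.group(1), m.group(2)
            observation = tools[name](arg)   # invoke the chosen tool
            transcript += f"\nObservation: {observation}"
    return transcript

tools = {"calc": lambda expr: str(eval(expr))}  # toy calculator tool
answer = run_react(
    ["Action: calc[2+3]", "Final: The result is 5"],
    tools,
    "What is 2+3?",
)
```

A real ReAct agent replaces the scripted list with fresh LLM calls that see the growing transcript, which is how the observation from one tool call informs the next action.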

140. What is “Toolformer”?
A) An approach where LLMs learn to invoke tools through self-supervised training
B) A UI framework
C) Database encryption tool
D) A logging system
Answer: A

141. Which of the following is NOT recommended when building LangChain applications?
A) Hard-coding prompts without templates
B) Modularizing code with chains and tools
C) Using memory to keep context
D) Implementing callback handlers for monitoring
Answer: A

142. What is “CallbackManager” used for?
A) Managing and triggering callbacks for LLM and agent events
B) Storing conversation history
C) Encrypting data
D) Handling API requests
Answer: A

143. How do you test LangChain applications effectively?
A) Unit test individual chains, tools, and memory modules separately
B) Only test entire applications
C) Avoid testing
D) Test only UI
Answer: A

144. Which LangChain class provides an interface for state persistence across runs?
A) Memory
B) Chain
C) Tool
D) AgentExecutor
Answer: A

145. What is “Streaming” in the context of LangChain and LLMs?
A) Receiving token-by-token responses from the LLM for faster interactivity
B) Streaming video data
C) Encrypting responses
D) Storing outputs
Answer: A

146. How can LangChain applications handle errors during LLM calls?
A) By using try-except blocks and callback handlers for error logging
B) Ignoring errors
C) Encrypting errors
D) Storing error messages in memory
Answer: A

147. What is the benefit of “Prompt Engineering” in LangChain?
A) Improve LLM response quality by crafting effective prompts
B) Encrypt prompts
C) Store prompts in databases
D) Reduce network latency
Answer: A

148. How do LangChain “Agents” differ from simple “Chains”?
A) Agents dynamically select tools and make decisions during execution, chains follow fixed sequences
B) Agents do not use LLMs
C) Chains can access external APIs, agents cannot
D) Agents only handle memory
Answer: A

149. What is the purpose of “Dynamic Tool Selection” in agents?
A) Allows agents to pick the best tool based on context dynamically
B) Fixes tool usage order
C) Prevents agents from using tools
D) Encrypts tool data
Answer: A

150. What does “Zero-shot” agent mean?
A) Agent performs tasks without prior training or examples by using LLM reasoning
B) Agent is pre-trained on specific data
C) Agent uses cached results
D) Agent requires human in the loop
Answer: A

151. How does LangChain support multi-step reasoning?
A) By chaining multiple prompts and LLM calls sequentially
B) By single-step calls only
C) By ignoring context
D) By encrypting reasoning
Answer: A

152. What is the role of “Tool Metadata”?
A) Helps agents understand what tools do and how to use them
B) Encrypts tools
C) Stores UI layout
D) Caches results
Answer: A

153. What does “LangChain Hub” refer to?
A) A repository of pre-built chains, agents, and tools shared by the community
B) A database system
C) Encryption library
D) UI framework
Answer: A

154. How can you improve agent interpretability?
A) Logging agent decisions and reasoning steps via callbacks
B) Encrypting agent logs
C) Avoiding logs
D) Disabling agents
Answer: A

155. Which is a common use case for callback handlers?
A) Real-time monitoring of LLM token usage and outputs
B) Encrypting data
C) Managing UI
D) Storing documents
Answer: A

156. What is “LangSmith”?
A) A tool/service for managing, monitoring, and analyzing LangChain applications
B) A UI framework
C) A vector store
D) Encryption library
Answer: A

157. How do LangChain agents handle ambiguous user queries?
A) They ask clarifying questions or choose tools to gather more info
B) Ignore the ambiguity
C) Encrypt the query
D) Return an error immediately
Answer: A

158. What is the role of “Prompt Validators”?
A) Ensure inputs to prompts are well-formed and valid before sending to the LLM
B) Encrypt prompts
C) Store prompts
D) Render prompts
Answer: A

159. How can you customize the behavior of LangChain’s default agent?
A) By subclassing and overriding methods or changing prompt templates
B) Cannot customize
C) By changing the LLM’s weights
D) Only by changing memory
Answer: A

160. What is “Self-Ask” prompting?
A) The model asks itself sub-questions to break down complex queries
B) Model answers immediately
C) Model ignores questions
D) Model encrypts answers
Answer: A

161. Which of the following can help reduce API costs in LangChain apps?
A) Using caching and local embeddings to minimize LLM calls
B) Increasing token length
C) Using multiple LLMs at once
D) Ignoring rate limits
Answer: A

162. How does LangChain help with building “AutoGPT”-style agents?
A) Provides modular tools, memory, and agents that can chain and self-direct tasks
B) Only supports single-step calls
C) Does not support automation
D) Only for UI apps
Answer: A

163. What is “Agent Planning”?
A) The process by which an agent outlines steps and tools to accomplish a task
B) Encrypting the agent’s decisions
C) Managing UI state
D) Storing logs
Answer: A

164. What kind of prompts are best for tool use instructions?
A) Clear, concise, and with examples on how to call tools
B) Vague and long
C) Random text
D) Encrypted
Answer: A

165. What is a “Router Agent”?
A) An agent that routes queries to specialized sub-agents or tools based on intent
B) A network device
C) UI component
D) Database indexer
Answer: A

166. How can LangChain agents be integrated with chat platforms?
A) Using webhooks, APIs, or SDKs to connect LangChain backend with chat frontends
B) Cannot integrate
C) Only via UI components
D) Only on desktop apps
Answer: A

167. What is a good way to monitor LangChain app performance?
A) Using callback handlers and logging frameworks
B) Ignoring logs
C) Encrypting logs only
D) Manually checking outputs
Answer: A

168. Which programming language is LangChain primarily developed in?
A) Python
B) Java
C) C++
D) JavaScript
Answer: A

169. How does LangChain simplify interaction with different LLM providers?
A) Provides unified wrappers and interfaces for various LLM APIs
B) Requires different code for each provider
C) Does not support multiple LLMs
D) Only supports OpenAI
Answer: A

170. What is “Memory Compression” in LangChain?
A) Reducing size of stored memory via summarization or selective retention
B) Encrypting memory
C) Ignoring memory
D) Storing memory as images
Answer: A

171. What is the role of Memory in LangChain applications?
A) To store and recall conversation or task context across interactions
B) Encrypt user data
C) Render UI components
D) Manage database transactions
Answer: A

172. Which type of memory stores recent conversation turns for immediate context?
A) ConversationBufferMemory
B) PersistentMemory
C) LocalStorage
D) CacheMemory
Answer: A
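The buffer-memory idea from questions 171–172 (and the windowed variant in question 245) fits in a few lines of stdlib Python. `WindowBufferMemory` is a toy stand-in, not LangChain's actual class:

```python
from collections import deque

class WindowBufferMemory:
    """Toy stand-in for a windowed conversation buffer: keeps only the
    last `k` (human, ai) turns so the prompt stays within token limits."""
    def __init__(self, k=3):
        self.turns = deque(maxlen=k)

    def save_context(self, human, ai):
        self.turns.append((human, ai))

    def load_memory(self):
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

mem = WindowBufferMemory(k=2)
mem.save_context("Hi", "Hello!")
mem.save_context("My name is Ada", "Nice to meet you, Ada")
mem.save_context("What's my name?", "Ada")
```

After the third turn the oldest exchange falls out of the window, which is exactly the trade-off question 174 warns about in reverse: bounded memory avoids token-limit overruns at the cost of forgetting old context.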

173. How can LangChain’s memory help maintain personalized user interactions?
A) By storing user preferences and prior inputs for future use
B) By encrypting messages
C) By resetting context each time
D) By ignoring past messages
Answer: A

174. What is a drawback of storing too much memory in LangChain?
A) Exceeding token limits for LLM input
B) Losing data
C) Encrypting data
D) Faster execution
Answer: A

175. What is the purpose of “Summarization Memory”?
A) Condensing long conversations into short summaries to save tokens
B) Encrypting summaries
C) Deleting old memory
D) Displaying UI
Answer: A

176. Which LangChain component helps with secure storage of API keys?
A) Environment Variables
B) Memory module
C) Chain module
D) UI component
Answer: A

177. Why is it important to manage API key security in LangChain apps?
A) To prevent unauthorized access and abuse of LLM APIs
B) To speed up processing
C) To improve UI
D) To cache results
Answer: A

178. How can you limit costs when using LLM APIs with LangChain?
A) Use caching and minimize unnecessary API calls
B) Increase model size
C) Encrypt API keys only
D) Use expensive models only
Answer: A

179. What is a recommended practice when integrating third-party APIs with LangChain?
A) Use retries and error handling to manage API failures gracefully
B) Ignore failures
C) Encrypt API responses
D) Avoid third-party APIs
Answer: A

180. How can LangChain applications improve latency?
A) Using streaming responses and caching intermediate results
B) Increasing token length
C) Using synchronous blocking calls only
D) Encrypting data
Answer: A

181. What is “Caching” in the context of LangChain?
A) Storing previous LLM outputs or API results to reuse and save costs
B) Encrypting data
C) Deleting data
D) Storing UI layouts
Answer: A
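The caching idea in question 181 can be shown with `functools.lru_cache` around a stand-in for an LLM call (the `cached_llm` function is illustrative; LangChain ships its own LLM cache layer):

```python
import functools

calls = {"count": 0}

@functools.lru_cache(maxsize=256)
def cached_llm(prompt):
    """Stand-in for an expensive LLM call; lru_cache reuses prior answers
    so repeated prompts cost nothing extra."""
    calls["count"] += 1
    return f"answer to: {prompt}"

cached_llm("What is LangChain?")
cached_llm("What is LangChain?")   # served from cache, no second "call"
```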

182. What is a “Retry Policy”?
A) Logic to automatically retry failed API calls with backoff strategies
B) Encrypt API keys
C) Store logs
D) Manage UI state
Answer: A
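A retry policy with exponential backoff (question 182) is a small wrapper; this is a minimal stdlib sketch with a deliberately tiny `base_delay`, not a production policy:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying on exception with exponential backoff
    (delays of 1x, 2x, 4x base_delay between attempts)."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                     # out of attempts: re-raise
            time.sleep(base_delay * (2 ** attempt))

# A flaky stand-in for an API call that fails twice, then succeeds.
state = {"failures_left": 2}
def flaky_api():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        raise ConnectionError("rate limited")
    return "ok"

result = with_retries(flaky_api)
```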

183. How can LangChain’s callback handlers assist in debugging?
A) By providing hooks to log inputs, outputs, and events during execution
B) Encrypting logs
C) Deleting errors
D) Managing database transactions
Answer: A

184. What is a common security concern in LangChain apps?
A) Leakage of sensitive data through prompts or outputs
B) Slow UI rendering
C) Excessive logging
D) Lack of multi-threading
Answer: A

185. How can you avoid data leakage in prompts?
A) Redact or sanitize sensitive information before sending to LLM
B) Ignore data privacy
C) Store all data locally
D) Encrypt UI components
Answer: A

186. What is an API key rotation?
A) Periodically changing API keys to reduce risk of compromise
B) Encrypting keys once
C) Deleting keys
D) Avoiding keys
Answer: A

187. Which LangChain class is used to integrate external APIs as tools?
A) Tool
B) Chain
C) Memory
D) AgentExecutor
Answer: A

188. How can LangChain support asynchronous API calls?
A) By using async functions and event loops in chains and tools
B) By blocking all calls
C) Only synchronous calls allowed
D) Encrypting API requests
Answer: A

189. What is the benefit of using async calls in LangChain?
A) Improved throughput and responsiveness for I/O-bound tasks
B) Reduced security
C) Increased cost
D) Poorer performance
Answer: A
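The throughput benefit of async calls (questions 188–189) comes from overlapping waits. A stdlib `asyncio` sketch with a fake I/O-bound call:

```python
import asyncio

async def fake_llm_call(prompt):
    """Stand-in for an I/O-bound LLM request."""
    await asyncio.sleep(0.01)        # pretend network latency
    return f"response to {prompt}"

async def main():
    # Issue three "calls" concurrently; total wait is ~one latency, not three.
    return await asyncio.gather(*(fake_llm_call(p) for p in ["a", "b", "c"]))

responses = asyncio.run(main())
```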

190. What does tokenization refer to in LangChain?
A) Breaking down text into tokens that the LLM processes
B) Encrypting text
C) Rendering UI elements
D) Caching data
Answer: A

191. Why is understanding token limits important?
A) LLMs have maximum tokens per request; exceeding causes errors or truncation
B) To encrypt tokens
C) To speed up UI rendering
D) To reduce logs
Answer: A

192. How does LangChain help manage token limits?
A) By chunking documents and summarizing context
B) By encrypting text
C) Ignoring limits
D) Deleting history
Answer: A

193. What is “Rate Limiting”?
A) Controlling the number of API calls allowed in a time window to avoid throttling
B) Encrypting API keys
C) Logging all calls
D) Deleting API calls
Answer: A

194. How can you secure user data in LangChain apps?
A) Use encryption and follow data privacy best practices
B) Store all data unencrypted
C) Display all data publicly
D) Avoid backups
Answer: A

195. What is an advantage of “Batching” API requests?
A) Reduces overhead by sending multiple inputs in a single call
B) Encrypts calls
C) Deletes old data
D) Slows response time
Answer: A

196. How can LangChain handle large document retrieval efficiently?
A) Using vector stores and approximate nearest neighbor search
B) Full scan every time
C) Using relational databases only
D) Loading all documents into memory
Answer: A

197. What is “Latency” in API calls?
A) The delay between sending a request and receiving a response
B) Amount of data encrypted
C) Size of database
D) Number of logs
Answer: A

198. How can you monitor API usage in LangChain apps?
A) Use callback handlers and logging to track call frequency and costs
B) Avoid monitoring
C) Encrypt logs only
D) Disable API calls
Answer: A

199. Which environment variable is typically used to store OpenAI API key?
A) OPENAI_API_KEY
B) API_SECRET_KEY
C) USER_PASSWORD
D) TOKEN_LIMIT
Answer: A
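Reading the key from the environment (questions 176 and 199) keeps secrets out of source code. The `setdefault` line below injects a fake key purely for demonstration:

```python
import os

# Demo only: in real apps the key is set in the shell or deployment config.
os.environ.setdefault("OPENAI_API_KEY", "sk-demo-not-a-real-key")

api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("Set OPENAI_API_KEY before running the app")
```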

200. What is a “Webhook” in integration with LangChain?
A) A URL endpoint to receive event notifications from external services
B) A database table
C) A UI widget
D) An encryption protocol
Answer: A

201. What is the primary purpose of a vector store in LangChain?
A) Store and search document embeddings for similarity search
B) Encrypt data
C) Render UI components
D) Manage API keys
Answer: A

202. Which of the following is NOT a popular vector store supported by LangChain?
A) Pinecone
B) FAISS
C) SQLite
D) Weaviate
Answer: C

203. What are document embeddings?
A) Numerical representations of text for similarity comparisons
B) Encrypted text
C) Raw text files
D) UI elements
Answer: A

204. How are embeddings typically created for LangChain vector stores?
A) Using embedding models like OpenAI’s text-embedding-ada-002
B) By hashing text
C) By encrypting text
D) By converting to images
Answer: A

205. What is the role of a Retriever in LangChain?
A) Query vector stores and retrieve relevant documents based on similarity
B) Store API keys
C) Encrypt data
D) Display UI
Answer: A

206. What is a common retrieval technique used by LangChain retrievers?
A) k-Nearest Neighbors (k-NN) search
B) Full text scan only
C) Random sampling
D) Encryption of documents
Answer: A
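The k-NN retrieval from question 206 reduces to "rank stored embeddings by cosine similarity to the query embedding." A brute-force stdlib sketch (real vector stores use approximate indexes, per question 239):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn(query_vec, docs, k=2):
    """docs: list of (text, embedding). Return the k texts most similar
    to the query embedding — the core of vector-store retrieval."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [
    ("about cats", [1.0, 0.0, 0.1]),
    ("about dogs", [0.9, 0.1, 0.0]),
    ("about tax law", [0.0, 1.0, 0.9]),
]
top = knn([1.0, 0.0, 0.0], docs, k=2)
```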

207. Which LangChain class loads data from PDFs?
A) PyPDFLoader
B) CSVLoader
C) JSONLoader
D) TextLoader
Answer: A

208. What is the main purpose of a Document Loader?
A) To load data from external sources into a LangChain-compatible format
B) Encrypt documents
C) Delete files
D) Manage memory
Answer: A

209. How can LangChain handle large documents that exceed token limits?
A) By splitting documents into smaller chunks using text splitters
B) Ignoring large documents
C) Encrypting documents
D) Loading entire document as one chunk
Answer: A

210. What does “RecursiveCharacterTextSplitter” do?
A) Splits text into chunks by recursively splitting on characters like paragraphs, sentences
B) Encrypts text
C) Combines text into one block
D) Deletes old text
Answer: A
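The recursive splitting strategy in question 210 can be sketched without LangChain: try the coarsest separator first, recurse into oversized pieces, then greedily merge small pieces back toward the limit. This toy `recursive_split` only approximates the real `RecursiveCharacterTextSplitter` (no chunk overlap, for instance):

```python
def recursive_split(text, max_len, separators=("\n\n", "\n", " ", "")):
    """Toy recursive character splitter: split on the coarsest separator
    that applies, recurse, then greedily merge pieces up to max_len."""
    if len(text) <= max_len:
        return [text]
    for sep in separators:
        parts = list(text) if sep == "" else text.split(sep)
        if len(parts) > 1:
            pieces = []
            for p in parts:
                pieces.extend(recursive_split(p, max_len, separators))
            # Greedily merge small pieces so chunks approach max_len.
            chunks, cur = [], ""
            for p in pieces:
                joined = (cur + sep + p) if cur else p
                if len(joined) <= max_len:
                    cur = joined
                else:
                    chunks.append(cur)
                    cur = p
            if cur:
                chunks.append(cur)
            return chunks
    return [text]

doc = "First paragraph here.\n\nSecond paragraph is a bit longer than the first."
chunks = recursive_split(doc, max_len=40)
```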

211. Why use chunking in LangChain?
A) To fit text within token limits and improve retrieval accuracy
B) To encrypt data
C) To reduce UI complexity
D) To delete old documents
Answer: A

212. What is a “Chain” in LangChain?
A) A sequence of calls to LLMs, tools, or other chains that produce an output
B) A security protocol
C) UI framework
D) Memory store
Answer: A

213. What is the difference between SimpleChain and SequentialChain?
A) SimpleChain calls one LLM; SequentialChain calls multiple chains in sequence
B) No difference
C) SimpleChain is encrypted
D) SequentialChain is deprecated
Answer: A

214. What is a “MapReduceChain” used for?
A) Process large documents by mapping over chunks and then reducing results into summary
B) Encrypt data
C) Manage UI state
D) Store API keys
Answer: A

215. Which class allows you to combine multiple chains and aggregate their outputs?
A) SimpleSequentialChain
B) ChainExecutor
C) ToolAggregator
D) MemoryManager
Answer: A

216. How can you pass outputs from one chain as inputs to another?
A) Using output keys and input keys in chain configuration
B) By encrypting outputs
C) Manually copying data
D) Cannot pass outputs between chains
Answer: A

217. What is the benefit of chaining chains in LangChain?
A) Build complex workflows by composing simple building blocks
B) Encrypt data
C) Reduce API calls only
D) Render UI
Answer: A

218. What is “LLMChain”?
A) A chain that wraps a prompt and an LLM call together
B) Memory storage
C) Vector store
D) UI widget
Answer: A

219. How does LangChain handle asynchronous document loading?
A) Supports async loaders to improve IO performance
B) Only synchronous loaders
C) Encrypts loaders
D) Does not support async
Answer: A

220. What is the purpose of a “PromptTemplate”?
A) Define parameterized prompts that can be reused with different inputs
B) Encrypt prompts
C) Store documents
D) Render UI
Answer: A

221. What does “LLMChain” require as input?
A) A prompt template and an LLM instance
B) Only an LLM
C) Only a prompt template
D) Memory module
Answer: A
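Questions 218–223 fit together in one small sketch: a chain wraps a prompt template and a model call, and a missing template variable fails fast. `MiniLLMChain` and `FakeLLM` are invented stand-ins, not LangChain's real classes:

```python
class FakeLLM:
    """Stand-in for a real model client; echoes the prompt it received."""
    def __call__(self, prompt):
        return f"LLM saw: {prompt}"

class MiniLLMChain:
    """Toy analogue of LLMChain: formats a template, then calls the LLM.
    A missing template variable raises KeyError, mirroring question 223."""
    def __init__(self, llm, template):
        self.llm = llm
        self.template = template

    def run(self, **inputs):
        prompt = self.template.format(**inputs)  # KeyError if a var is missing
        return self.llm(prompt)

chain = MiniLLMChain(FakeLLM(), "Translate '{text}' to {language}.")
out = chain.run(text="hello", language="French")
```

The `{variable_name}` placeholder style here is the same one question 244 describes for LangChain's PromptTemplate.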

222. What is the “output_key” parameter in a chain?
A) The key used to store the chain’s output in the result dictionary
B) Encryption key
C) API key
D) UI component ID
Answer: A

223. What happens if input variables required by a PromptTemplate are missing?
A) It throws an error or exception
B) Uses default values
C) Encrypts inputs
D) Skips chain execution
Answer: A

224. What is the role of “DocumentLoaders” in ingestion pipelines?
A) Extract raw text or data from source files for downstream processing
B) Encrypt documents
C) Delete files
D) Store API keys
Answer: A

225. How do LangChain chains handle errors by default?
A) Propagate exceptions up unless caught explicitly
B) Encrypt errors
C) Ignore errors silently
D) Store errors in memory
Answer: A

226. What is “ChainOfThought” prompting?
A) Prompting technique that encourages the LLM to reason step-by-step
B) Encrypting thoughts
C) Skipping reasoning
D) Compressing output
Answer: A

227. What is a common way to test chains?
A) Unit test with mock inputs and verify outputs
B) Only manual testing
C) Ignore testing
D) Encrypt test data
Answer: A

228. Which LangChain component allows parallel processing of chunks?
A) MapReduceChain
B) SimpleChain
C) SequentialChain
D) MemoryChain
Answer: A

229. How do you save and load chains for reuse?
A) Serialize chain config or use LangChain Hub for sharing
B) Encrypt chains
C) Only save LLM weights
D) Chains cannot be saved
Answer: A

230. What is a “RetrieverQAChain”?
A) A chain that combines a retriever and an LLM for question answering over documents
B) Encrypts QA data
C) Stores QA in memory
D) A UI widget
Answer: A

231. How does “StuffDocumentsChain” work?
A) Combines all retrieved documents into one input for the LLM
B) Encrypts documents
C) Deletes documents
D) Renders UI
Answer: A

232. What is a drawback of StuffDocumentsChain?
A) May exceed token limits with large documents
B) Encrypts data
C) Slow UI rendering
D) Ignores documents
Answer: A

233. Which chain type helps with summarizing many documents first?
A) MapReduceChain
B) StuffDocumentsChain
C) SimpleChain
D) SequentialChain
Answer: A

234. What does the “PromptTemplate.format()” method do?
A) Inserts input variables into the template to generate a prompt string
B) Encrypts the prompt
C) Deletes template
D) Stores template in memory
Answer: A

235. Can you create custom chains in LangChain?
A) Yes, by subclassing Chain and defining _call method
B) No, only built-in chains allowed
C) Only via UI
D) Only via API
Answer: A

236. What is the purpose of “Chain.run()”?
A) Execute the chain with given inputs and return output
B) Encrypt chain
C) Delete chain
D) Load chain
Answer: A

237. Which LangChain class is designed for question answering over a vector database?
A) VectorDBQA
B) Tool
C) AgentExecutor
D) Memory
Answer: A

238. What is a “Retriever” used for in VectorDBQA?
A) Fetch relevant documents based on query embeddings
B) Encrypt documents
C) Store keys
D) Render UI
Answer: A

239. How can you optimize vector search speed?
A) Use approximate nearest neighbor libraries like FAISS or Annoy
B) Encrypt vectors
C) Use full brute-force scan only
D) Load all vectors in memory without indexing
Answer: A

240. What does “Embedding” dimension affect?
A) The size of vector representation; impacts accuracy and performance
B) Encryption strength
C) UI complexity
D) Token limit
Answer: A

241. What is a “Retriever” chain input?
A) Query text or question
B) API key
C) User ID
D) Encrypted text
Answer: A

242. What is a benefit of “Hybrid Retrieval”?
A) Combines vector similarity and keyword matching for better results
B) Encrypts queries
C) Ignores queries
D) Deletes documents
Answer: A

243. What does “Streaming” mode do in LLMChain?
A) Sends tokens to client as soon as they are generated for faster response
B) Encrypts tokens
C) Stores tokens
D) Deletes tokens
Answer: A

244. How do you customize prompt templates?
A) Use variables enclosed in curly braces like {variable_name}
B) Hard-code text only
C) Encrypt templates
D) Use binary format
Answer: A

245. What is a “BufferWindowMemory”?
A) Memory that stores a sliding window of recent conversation turns
B) Encrypted memory
C) Deleted memory
D) Cached UI state
Answer: A

246. How does LangChain support document ingestion from web pages?
A) Using WebBaseLoader or custom HTML loaders
B) Manual copy-paste only
C) Encrypting web pages
D) Only PDFs supported
Answer: A

247. What is “TextSplitter” in LangChain?
A) Component to split large text into manageable chunks
B) Encrypt text
C) Delete text
D) Store text
Answer: A

248. What does the LCEL pipe operator (|) do?
A) Connects the output of one runnable to the input of another
B) Encrypts chains
C) Deletes chains
D) Saves chains
Answer: A

249. What is “ChainExecutor”?
A) Component that runs one or more chains and manages inputs and outputs
B) Encryption handler
C) UI manager
D) Memory store
Answer: A

250. Why is modular chain design recommended?
A) Easier testing, debugging, and reusability
B) More encryption
C) Slower execution
D) Complex UI
Answer: A

251. What is OpenAI function calling in LangChain used for?
A) Letting the LLM decide when and how to call predefined functions/tools
B) Encrypting functions
C) Rendering UI
D) Writing SQL queries only
Answer: A

252. How are OpenAI tools defined in LangChain?
A) As tool objects with name, description, and a function signature
B) As LLMChains
C) As database entries
D) As prompts
Answer: A

253. What does LangChain’s tool decorator help you do?
A) Easily expose Python functions as tools for agents
B) Encrypt Python functions
C) Export UI
D) Store chains
Answer: A
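The decorator idea in question 253 is just function registration: the decorator records a name, a description the agent can read, and the callable. This registry is a toy analogue, not LangChain's actual `@tool` implementation:

```python
TOOLS = {}

def tool(description):
    """Toy analogue of a tool decorator: register a plain Python function,
    with a human-readable description, under its function name."""
    def register(fn):
        TOOLS[fn.__name__] = {"description": description, "fn": fn}
        return fn
    return register

@tool("Multiply two integers and return the product.")
def multiply(a, b):
    return a * b

result = TOOLS["multiply"]["fn"](6, 7)
```

The description matters more than it looks: as question 134 notes, it is what the LLM reads when deciding whether this tool fits the current step.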

254. What is the primary benefit of OpenAI’s function calling feature?
A) Structured tool usage with predictable outputs
B) Faster model execution
C) UI simplification
D) Reduces token cost
Answer: A

255. Which format does OpenAI function calling use for tool schema?
A) JSON Schema
B) YAML
C) XML
D) HTML
Answer: A

256. What is LangGraph?
A) A graph-based framework to build multi-step, multi-agent workflows using LangChain
B) A vector store
C) A memory manager
D) A UI renderer
Answer: A

257. What are nodes in LangGraph?
A) Functions or chains that perform an action
B) UI components
C) Encrypted blocks
D) Tools
Answer: A

258. What is a LangGraph edge?
A) The transition between nodes, often based on outputs
B) An encryption key
C) Memory storage
D) UI template
Answer: A

259. Which execution model inspired LangGraph’s design?
A) Google’s Pregel message-passing model
B) TensorFlow’s eager execution
C) SQL transactions
D) FastAPI routing
Answer: A

260. What is LangGraph ideal for?
A) Conditional, looped, and branching flows between LangChain components
B) Simple prompts
C) Vector storage
D) Frontend frameworks
Answer: A

261. What does LangServe do?
A) Deploys LangChain apps as REST APIs using FastAPI
B) Encrypts apps
C) Manages memory
D) Hosts static files
Answer: A

262. What framework does LangServe use?
A) FastAPI
B) Flask
C) Django
D) React
Answer: A

263. How can LangServe expose your LangChain chain as an API?
A) from langserve import add_routes
B) from langchain import deploy_api
C) from langgraph import serve
D) from toolformer import route
Answer: A

264. What is the default API endpoint when exposing with LangServe?
A) /invoke
B) /llm
C) /data
D) /encrypt
Answer: A

265. Which command runs a LangServe app?
A) uvicorn main:app
B) serve_chain
C) langchain run
D) python deploy.py
Answer: A

266. How can you visualize LangGraph flows?
A) Use .draw() method to generate a graph diagram
B) Print node names
C) Use TensorBoard
D) Encrypt the graph
Answer: A

267. What type of chains can LangGraph manage?
A) Stateful, multi-agent, and conditional chains
B) Only single-step chains
C) Only encryption chains
D) UI chains only
Answer: A

268. Which of the following is NOT a valid LangChain deployment strategy?
A) Running as a browser extension
B) Using LangServe
C) Deploying as FastAPI microservice
D) Running in serverless cloud functions
Answer: A

269. What is a “tool call” in OpenAI’s model response?
A) A special JSON object instructing a tool to be invoked
B) Encryption key
C) UI directive
D) Memory deletion trigger
Answer: A

270. How do you ensure tool safety with function calling?
A) Validate inputs and use JSON Schema for strict input types
B) Encrypt all inputs
C) Skip validation
D) Only use POST requests
Answer: A

271. What is LangChain Expression Language (LCEL)?
A) A declarative way to compose chains and flows
B) A UI markup language
C) Encryption tool
D) A SQL dialect
Answer: A

272. What is the .invoke() method in LangServe API?
A) Method to send input to a chain and get output
B) Encrypt method
C) UI loader
D) Prompt renderer
Answer: A

273. Which LangChain module supports OpenAI Tools API natively?
A) langchain.agents.agent_toolkits
B) langchain.tools.render
C) langchain.ui.viewer
D) langchain.storage.encryptor
Answer: A

274. What is a clear advantage of LangGraph over vanilla agents?
A) Deterministic control flow and better state management
B) UI control
C) Faster vector storage
D) Better encryption
Answer: A

275. How can LangServe be deployed to the cloud?
A) Using Docker + Uvicorn + FastAPI on any cloud service
B) Only using LangChain Hub
C) Only on-premise
D) Requires LangGraph
Answer: A

276. What does the LangServe /docs endpoint provide?
A) Interactive Swagger API docs
B) Memory storage
C) Chain caching
D) LLM key encryption
Answer: A

277. How can you run LangGraph with cycles (loops)?
A) Define return paths between nodes explicitly
B) Enable loop=True in chain
C) Encrypt node IDs
D) Use memory chaining
Answer: A

278. What is an edge condition in LangGraph?
A) A function that determines which node to visit next based on state
B) Token limit
C) Encryption key
D) Memory reset
Answer: A

279. What can a LangServe route accept as input?
A) JSON body with input keys matching the chain’s variables
B) Encrypted blob
C) SQL query
D) React component
Answer: A

280. Why are function-calling agents preferred for tool-heavy applications?
A) They allow structured, secure, and predictable tool invocation
B) They encrypt tools
C) They remove tools
D) They are UI-native
Answer: A

281. What does LangChain’s Runnable interface do?
A) Standardizes execution of chains, tools, and graphs
B) Encrypts input
C) Renders UI
D) Stores vectors
Answer: A

282. How does LangServe handle multiple chains?
A) By assigning each to a different API route
B) Encrypts all chains
C) Stores chains in memory
D) Merges chains into one
Answer: A

283. What is the format of OpenAI’s tool-calling response?
A) JSON with tool_calls key
B) CSV
C) HTML
D) Raw prompt
Answer: A

284. What can cause function-calling to fail?
A) Missing or mismatched input schema
B) Too much memory
C) UI timeout
D) Token mismatch
Answer: A

285. How can you create dynamic workflows with LangGraph?
A) Use conditional edges that route based on previous node output
B) Use hardcoded logic
C) Encrypt graph edges
D) Avoid branching
Answer: A
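The conditional-edge pattern from questions 278 and 285 can be sketched as plain functions over a shared state dict. This executor, its node names, and the routing rule are all illustrative, not LangGraph's real API:

```python
# Nodes are plain functions that read and update a shared state dict.
def classify(state):
    state["route"] = "math" if any(c.isdigit() for c in state["query"]) else "chat"
    return state

def math_node(state):
    state["answer"] = "routed to the calculator tool"
    return state

def chat_node(state):
    state["answer"] = "routed to the small-talk model"
    return state

NODES = {"classify": classify, "math": math_node, "chat": chat_node}

def run_graph(state):
    """Toy LangGraph-style executor: a conditional edge inspects the state
    after `classify` and picks the next node dynamically."""
    state = NODES["classify"](state)
    next_node = state["route"]          # the conditional edge decision
    return NODES[next_node](state)

result = run_graph({"query": "what is 2+2"})
```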

286. What is the purpose of LangServe’s config.yml?
A) To define which chains/tools to expose via API
B) To encrypt keys
C) To render UI
D) To store cache
Answer: A

287. What is the role of input_schema in LangServe?
A) Validate incoming API request body
B) Encrypt input
C) Hide input
D) Store prompt history
Answer: A

288. What happens if a required input is missing in a LangServe request?
A) Returns a 422 Unprocessable Entity error
B) Encrypts missing field
C) Returns 200 OK
D) Ignores missing input
Answer: A

289. Can LangServe be containerized?
A) Yes, using Docker with uvicorn and FastAPI
B) No, it only runs on local environments
C) Only on Windows
D) Only via LangChain Hub
Answer: A

291. What is an example use case for LangGraph?
A) Multi-agent customer support workflow with memory and branching
B) Static prompt testing
C) Vector indexing only
D) UI styling
Answer: A

292. What is required to register a function for OpenAI function calling in LangChain?
A) A JSON schema with parameters and a callable function
B) A SQL table
C) Only a prompt
D) A UI layout
Answer: A

293. How does LangServe help scale LangChain applications?
A) By turning chains into production-ready APIs accessible over HTTP
B) By encrypting outputs
C) By skipping logs
D) By deleting inputs after execution
Answer: A

294. What does the .add_routes() function do in LangServe?
A) Maps chains or tools to REST endpoints
B) Encrypts endpoints
C) Creates memory logs
D) Auto-generates UIs
Answer: A

295. In LangGraph, what enables dynamic decisions during workflow execution?
A) Conditional edges based on function output or state
B) Static node-to-node mapping
C) UI flags
D) Token truncation
Answer: A

296. How do you ensure secure API access when deploying LangServe?
A) Add authentication, rate limiting, and HTTPS at deployment level
B) Disable logs
C) Use localhost only
D) Increase token size
Answer: A

297. Can LangServe support both GET and POST methods for APIs?
A) Yes, depending on the route config
B) Only POST
C) Only GET
D) Only via WebSocket
Answer: A

298. Which best practice improves performance of LangChain apps in production?
A) Caching LLM responses and limiting token size
B) Ignoring memory
C) Encrypting vector indexes
D) Storing full document in memory
Answer: A

299. How can LangChain workflows be made testable and maintainable?
A) Modular design using chains, tools, and prompt templates
B) Hardcoding everything in one chain
C) Avoiding memory
D) Skipping validation
Answer: A

300. What makes LangChain suitable for real-world LLM applications?
A) Modular architecture, tool and memory support, agents, and deployability
B) Only prompt engineering
C) Only for chatbots
D) Only works offline
Answer: A
