AashishAIHub committed on
Commit
1c7ee42
·
verified ·
1 Parent(s): 07077d5

Upload folder using huggingface_hub

Files changed (1)
  1. GenAI-AgenticAI/app.js +28 -1
GenAI-AgenticAI/app.js CHANGED
@@ -1090,6 +1090,16 @@ res = index.query(vector=query_emb, top_k=<span class="number">10</span>,
1090
1091    <h3>6. Agent Safety</h3>
1092    <p>Agents take <strong>real actions</strong>. Safety measures: (1) Human-in-the-loop for destructive actions. (2) Sandboxing code execution. (3) Permission models. (4) Max iterations. (5) Cost/budget limits.</p>
1093  +
1094  + <h3>7. The LangChain Ecosystem</h3>
1095  + <p>LangChain has evolved from a simple wrapper into a comprehensive suite for production AI:</p>
1096  + <table>
1097  + <tr><th>Component</th><th>Purpose</th><th>When to Use</th></tr>
1098  + <tr><td><strong>LangChain Core</strong></td><td>Standard interface for LLMs, prompts, tools, and retrievers</td><td>Building the basic components of your app</td></tr>
1099  + <tr><td><strong>LangGraph</strong></td><td>Orchestration framework for stateful, multi-actor applications</td><td>Building complex, cyclic agents and workflows</td></tr>
1100  + <tr><td><strong>LangSmith</strong></td><td>Observability, tracing, and evaluation platform</td><td>Debugging, testing, and monitoring in production</td></tr>
1101  + <tr><td><strong>LangServe</strong></td><td>REST API deployment using FastAPI</td><td>Serving chains/agents as production endpoints</td></tr>
1102  + </table>
1103    </div>`,
1104    code: `
1105    <div class="section">
@@ -1210,6 +1220,16 @@ result = crew.kickoff()</div>
1220
1221    <h3>5. Common Pitfalls</h3>
1222    <p>(1) <strong>Over-engineering</strong> — a single ReAct agent often outperforms a poorly designed multi-agent system. (2) <strong>Context leakage</strong> — agents sharing too much irrelevant context. (3) <strong>Error compounding</strong> — LLM summaries between agents introduce errors. Always pass structured data. (4) <strong>Cost explosion</strong> — N agents = N times the API calls.</p>
1223  +
1224  + <h3>6. Agent-to-Agent (A2A) Protocols</h3>
1225  + <p>For agents to collaborate at scale, they need standardized communication protocols (A2A). Unlike human-to-agent chat, A2A relies on strict payloads.</p>
1226  + <table>
1227  + <tr><th>Protocol / Concept</th><th>Description</th></tr>
1228  + <tr><td><strong>FIPA ACL</strong></td><td>Legacy agent communication language specifying performatives (e.g., REQUEST, INFORM, PROPOSE). Still conceptually relevant for modern A2A state machines.</td></tr>
1229  + <tr><td><strong>Agent Protocol (AI Engineer Foundation)</strong></td><td>A single open-source OpenAPI specification for agent interaction. Standardizes how to list tasks, execute steps, and upload artifacts.</td></tr>
1230  + <tr><td><strong>Message Passing Interfaces</strong></td><td>Modern frameworks use structured JSON schema validation for passing state. Agent A emits a JSON payload that matches the input schema expected by Agent B.</td></tr>
1231  + <tr><td><strong>Pub/Sub Eventing</strong></td><td>Agents publish events to a message broker (such as Kafka or Redis) and other agents subscribe, enabling asynchronous, decoupled A2A swarms.</td></tr>
1232  + </table>
1233    </div>`,
1234    code: `
1235    <div class="section">
@@ -1298,7 +1318,14 @@ graph.add_edge(<span class="string">"writer"</span>, END)</div>
1318    <p><strong>Parallel:</strong> Model outputs multiple independent tool calls in one response. Execute all simultaneously. GPT-4o supports this natively. <strong>Sequential:</strong> One call at a time, each depending on previous results. Parallel is 3-5x faster for independent operations.</p>
1319
1320    <h3>4. Model Context Protocol (MCP)</h3>
1301  - <p>MCP (Anthropic, 2024) is an open standard for connecting AI assistants to tools and data sources. Instead of each provider having their own format, MCP standardizes <strong>tool servers</strong> that expose capabilities. Supports: tools (actions), resources (read data), and prompts (templates). Adopted by Claude Desktop, Cursor, and a growing ecosystem.</p>
1321  + <p><strong>Model Context Protocol (MCP)</strong> (Anthropic, 2024) is an open-source standard solving the "N-to-M integration problem" between AI assistants and data sources. Instead of writing custom integrations, developers build reusable MCP servers.</p>
1322  + <table>
1323  + <tr><th>MCP Component</th><th>Role</th><th>Web Analogy</th></tr>
1324  + <tr><td><strong>MCP Host</strong></td><td>The AI application (e.g., Claude Desktop, Cursor)</td><td>Web Browser</td></tr>
1325  + <tr><td><strong>MCP Client</strong></td><td>Runs inside the Host, manages server connections</td><td>HTTP Client</td></tr>
1326  + <tr><td><strong>MCP Server</strong></td><td>Secure, lightweight program giving access to data/tools</td><td>Web Server</td></tr>
1327  + </table>
1328  + <p>Servers expose three primitives to the LLM: <strong>Resources</strong> (read-only data, such as files or DB schemas, that provide context), <strong>Tools</strong> (executable functions like "run query"), and <strong>Prompts</strong> (reusable system instruction templates). This separates the reasoning engine from the data layer, so integrations scale with the number of servers rather than the number of assistant-source pairs.</p>
1329
1330    <h3>5. Provider Comparison</h3>
1331    <table>
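The "Agent Safety" measures in the diff above (max iterations, cost/budget limits) can be sketched as a guarded agent loop. This is a minimal illustration, not any framework's API: `call_llm`, the `FINISH` sentinel, and the flat per-call cost are hypothetical stand-ins.

```python
COST_PER_CALL_CENTS = 1  # hypothetical flat cost per model call, in cents

def run_agent(task, call_llm, max_iterations=5, budget_cents=5):
    """Run an agent loop, stopping at the iteration cap or the budget cap."""
    spent = 0
    history = [task]
    for step in range(max_iterations):
        # Budget guard: refuse to make a call that would exceed the budget.
        if spent + COST_PER_CALL_CENTS > budget_cents:
            return {"status": "budget_exceeded", "steps": step, "spent": spent}
        action = call_llm(history)  # hypothetical model call
        spent += COST_PER_CALL_CENTS
        history.append(action)
        if action == "FINISH":      # model signals task completion
            return {"status": "done", "steps": step + 1, "spent": spent}
    # Iteration guard: a looping agent is forcibly stopped.
    return {"status": "max_iterations", "steps": max_iterations, "spent": spent}
```

A stub model that never finishes is stopped by whichever guard triggers first; human-in-the-loop and sandboxing would wrap the `call_llm` action execution in the same spirit.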
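The "Message Passing Interfaces" row of the A2A table (Agent A's JSON output validated against Agent B's input schema) can be sketched with a hand-rolled check. Production systems would use `jsonschema` or Pydantic; the writer-agent schema below is invented for illustration.

```python
import json

# Hypothetical input contract for a downstream "writer" agent:
# required keys mapped to their expected Python types.
WRITER_INPUT_SCHEMA = {"topic": str, "bullet_points": list, "tone": str}

def validate_payload(payload: dict, schema: dict) -> list:
    """Return a list of contract violations (empty list means valid)."""
    errors = []
    for key, expected_type in schema.items():
        if key not in payload:
            errors.append(f"missing key: {key}")
        elif not isinstance(payload[key], expected_type):
            errors.append(f"wrong type for {key}: got {type(payload[key]).__name__}")
    return errors

def handoff(raw_message: str, schema: dict) -> dict:
    """Parse Agent A's JSON output and reject it if it breaks Agent B's contract."""
    payload = json.loads(raw_message)
    errors = validate_payload(payload, schema)
    if errors:
        raise ValueError("handoff rejected: " + "; ".join(errors))
    return payload
```

Rejecting a malformed payload at the handoff boundary is what prevents the "error compounding" pitfall: structured data either matches the contract or never reaches the next agent.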
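The "Pub/Sub Eventing" row can be sketched with an in-memory broker. A real swarm would use Kafka or Redis and deliver asynchronously; the topic name and the two subscriber "agents" here are made up.

```python
from collections import defaultdict

class InMemoryBroker:
    """Toy stand-in for Kafka/Redis: topics map to subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Delivers synchronously; a real broker queues and delivers async.
        for handler in self.subscribers[topic]:
            handler(event)

broker = InMemoryBroker()
log = []

# Two decoupled "agents": each reacts to the event, neither knows the other exists.
broker.subscribe("research.done", lambda e: log.append(f"writer drafting: {e['topic']}"))
broker.subscribe("research.done", lambda e: log.append(f"critic reviewing: {e['topic']}"))

broker.publish("research.done", {"topic": "agent safety"})
```

The publisher never names its consumers, which is exactly the decoupling that lets A2A swarms grow without rewiring every agent.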
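The parallel-vs-sequential tool-calling contrast in the diff can be sketched with a thread pool executing independent calls simultaneously. The tool registry and its two functions are hypothetical stand-ins for calls a model would actually emit.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical tool registry; in practice these would wrap real APIs.
TOOLS = {
    "get_weather": lambda city: f"weather({city})",
    "get_stock": lambda ticker: f"stock({ticker})",
}

def execute_parallel(tool_calls):
    """Run independent tool calls simultaneously, preserving result order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(TOOLS[name], arg) for name, arg in tool_calls]
        return [f.result() for f in futures]

def execute_sequential(tool_calls):
    """One call at a time, as required when each call depends on the last."""
    return [TOOLS[name](arg) for name, arg in tool_calls]
```

For I/O-bound tools (HTTP calls), the parallel path overlaps the waits, which is where the quoted 3-5x speedup for independent operations comes from.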
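The three MCP primitives named in the diff (Resources, Tools, Prompts) can be illustrated with a toy dispatcher. This is not the real MCP wire protocol (which is JSON-RPC served via the official `mcp` SDK); the request shape, names, and registered examples below are all invented to show the roles of the three primitives.

```python
class ToyMCPServer:
    """Toy illustration of MCP's three server primitives; not the real protocol."""
    def __init__(self):
        self.resources = {}  # read-only data the host can load as context
        self.tools = {}      # executable functions the model can invoke
        self.prompts = {}    # reusable instruction templates

    def handle(self, request):
        """Dispatch a simplified request dict to the matching primitive."""
        kind, name = request["kind"], request["name"]
        if kind == "resource":
            return self.resources[name]                          # read data
        if kind == "tool":
            return self.tools[name](**request.get("args", {}))   # take action
        if kind == "prompt":
            return self.prompts[name].format(**request.get("args", {}))
        raise ValueError(f"unknown primitive: {kind}")

server = ToyMCPServer()
server.resources["db://schema"] = "users(id, name)"
server.tools["run_query"] = lambda sql: f"executed: {sql}"
server.prompts["summarize"] = "Summarize the table {table} in one line."
```

The point of the separation: a host asks for resources to build context, calls tools to act, and pulls prompts as templates, while the server alone knows how the underlying data source works.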