Prompt LLM to use parallel tool calls for independent operations
The MCP backend already executes multiple tool calls in parallel
(Promise.allSettled), but the system prompt never told the model it could
issue multiple calls at once. LLMs default to sequential tool use due to
training biases. Adding an explicit PARALLEL TOOL CALLS instruction
encourages models to batch independent tool calls in a single response,
reducing round-trips and latency, while warning against parallelizing
calls that depend on each other's results.
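A minimal sketch of the existing backend behavior the prompt change exploits
(`executeTool` and `runToolCalls` are hypothetical names, not the actual MCP
backend API): independent calls all start at once, and one failure does not
cancel the rest.

```javascript
// Stand-in for real tool dispatch; resolves with the call's result.
async function executeTool(call) {
  return { name: call.name, result: `ran ${call.name}` };
}

// All calls in a batch begin concurrently; Promise.allSettled waits
// for every one, so a single rejection surfaces as an error entry
// instead of aborting the batch.
async function runToolCalls(calls) {
  const settled = await Promise.allSettled(calls.map((c) => executeTool(c)));
  return settled.map((s) =>
    s.status === "fulfilled" ? s.value : { error: String(s.reason) }
  );
}
```

With this in place, a model that emits several independent calls in one
response (e.g. reading two unrelated files) pays one round-trip instead of
two; dependent calls still have to be issued sequentially, since a later
call's arguments may come from an earlier call's result.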
https://claude.ai/code/session_01GGYJ1UyJvkQL38oU7hNedP