Recommendation Engine
The recommendation stage is implemented in `recommender/engine.py` and `recommender/llm_reasoning.py`.
Inputs
- analyzed task (`TaskDefinition`)
- optional preferred provider
- optional Tavily API key
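The three inputs above can be sketched as a request object. This is a minimal illustration, assuming plausible field names; apart from `TaskDefinition`, none of these names are taken from the actual code in `recommender/engine.py`.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for the real TaskDefinition; fields are assumptions.
@dataclass
class TaskDefinition:
    description: str
    category: str = "general"

# Hypothetical container for the three recommendation inputs.
@dataclass
class RecommendationRequest:
    task: TaskDefinition                      # required: the analyzed task
    preferred_provider: Optional[str] = None  # optional, e.g. "anthropic"
    tavily_api_key: Optional[str] = None      # optional; enables web search

req = RecommendationRequest(task=TaskDefinition(description="summarize PDFs"))
```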
Data Sources
- web search via Tavily (`WebSearchProvider`)
- MCP registry data from:
  - official MCP registry (`registry.modelcontextprotocol.io`)
  - Smithery fallback (`registry.smithery.ai`)
- skills catalogs from GitHub:
  - `anthropics/skills`
  - `github/awesome-copilot`
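The registry list above implies a fallback order: try the official MCP registry first, then Smithery. A minimal sketch of that pattern, assuming a generic `fetch` callable (the actual client code may differ):

```python
# Registry URLs from the data-source list; tried in order.
REGISTRY_SOURCES = [
    "https://registry.modelcontextprotocol.io",
    "https://registry.smithery.ai",  # Smithery fallback
]

def fetch_mcp_servers(fetch):
    """Try each registry in order; return the first non-empty result."""
    for base_url in REGISTRY_SOURCES:
        try:
            servers = fetch(base_url)
            if servers:
                return servers
        except Exception:
            continue  # registry unreachable: fall through to the next one
    return []

# Usage with a stub fetcher: the official registry fails, Smithery answers.
def stub(url):
    if "smithery" in url:
        return [{"name": "filesystem"}]
    raise ConnectionError(url)

servers = fetch_mcp_servers(stub)  # → [{'name': 'filesystem'}]
```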
Parallel Fetch
The recommendation stage fetches three things in parallel:
- web results
- relevant MCP servers (filtered)
- full skills catalog
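The parallel fetch can be sketched with `asyncio.gather`; the coroutine names here are assumptions for illustration, not the actual functions in `engine.py`:

```python
import asyncio

# Hypothetical fetchers standing in for the real data-source calls.
async def fetch_web_results(task):
    return ["web result"]

async def fetch_relevant_mcp_servers(task):
    return ["filesystem-server"]

async def fetch_skills_catalog():
    return ["pdf-extraction"]

async def gather_context(task):
    # All three fetches run concurrently, so total latency is
    # bounded by the slowest fetch rather than their sum.
    web, servers, skills = await asyncio.gather(
        fetch_web_results(task),
        fetch_relevant_mcp_servers(task),
        fetch_skills_catalog(),
    )
    return {"web": web, "mcp_servers": servers, "skills": skills}

ctx = asyncio.run(gather_context("summarize PDFs"))
```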
LLM Output Contract
The LLM must return JSON with the following keys:
- `llm` (provider/model/reason)
- `framework`
- `mcp_servers`
- `skills`
- `prompt_strategy`
- `estimated_tokens`
- `estimated_cost`
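A payload satisfying the contract, with a minimal presence check for the required keys. All values here are illustrative assumptions, not output from the real system:

```python
import json

# Example response matching the contract above; values are made up.
raw = """{
  "llm": {"provider": "anthropic", "model": "claude-sonnet", "reason": "long context"},
  "framework": "langgraph",
  "mcp_servers": ["filesystem"],
  "skills": ["pdf-extraction (anthropics/skills)"],
  "prompt_strategy": "plan-then-act",
  "estimated_tokens": 12000,
  "estimated_cost": 0.04
}"""

REQUIRED_KEYS = {
    "llm", "framework", "mcp_servers", "skills",
    "prompt_strategy", "estimated_tokens", "estimated_cost",
}

data = json.loads(raw)
missing = REQUIRED_KEYS - data.keys()
assert not missing, f"LLM response missing keys: {missing}"
```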
Skill Filtering Rules
Returned skills are filtered against the fetched skill catalog:
- unknown skills are dropped
- ambiguous names without a `source` are dropped
- the canonical output form is `"name (source)"`
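The filtering rules above can be sketched as follows. The function and the `(name, source)` catalog shape are assumptions for illustration; only the rules themselves come from the text.

```python
def filter_skills(returned, catalog):
    """Keep only skills resolvable against the catalog of (name, source) pairs."""
    sources_by_name = {}
    for name, source in catalog:
        sources_by_name.setdefault(name, []).append(source)

    kept = []
    for skill in returned:
        if "(" in skill:
            # Already canonical "name (source)": keep only exact catalog matches.
            name, source = skill.rstrip(")").split(" (", 1)
            if (name, source) in catalog:
                kept.append(f"{name} ({source})")
        else:
            sources = sources_by_name.get(skill, [])
            if len(sources) == 1:
                # Bare name with exactly one catalog source: unambiguous, keep it.
                kept.append(f"{skill} ({sources[0]})")
            # Unknown names (no source) and ambiguous names (several sources)
            # are dropped.
    return kept

catalog = {
    ("pdf", "anthropics/skills"),
    ("pdf", "github/awesome-copilot"),  # "pdf" exists in two catalogs: ambiguous
    ("diagram", "anthropics/skills"),
}
result = filter_skills(["diagram", "pdf", "nonexistent"], catalog)
# → ['diagram (anthropics/skills)']
```

`pdf` is dropped because it appears in two catalogs and the LLM gave no source; `nonexistent` is dropped as unknown.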
