How Trackr Researches Your Tools
7 research agents. 25+ data sources. One structured report. Watch the entire pipeline in real time.
Map Site
Crawl the tool's website to discover all pages and build a sitemap. We identify key pages like pricing, features, and about — then select the most informative ones for deep scraping.
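The page-selection step above can be sketched as a simple keyword ranking over the crawled sitemap. The keyword list and scoring below are illustrative assumptions, not Trackr's actual heuristics:

```typescript
// Pick the most informative pages from a crawled sitemap.
// Keywords earlier in KEY_PAGES are treated as more informative.
const KEY_PAGES = ["pricing", "features", "about", "docs", "customers"];

function selectKeyPages(sitemap: string[], limit = 4): string[] {
  const score = (url: string): number => {
    const path = new URL(url, "https://example.com").pathname.toLowerCase();
    const i = KEY_PAGES.findIndex((k) => path.includes(k));
    return i === -1 ? -1 : KEY_PAGES.length - i;
  };
  return sitemap
    .filter((u) => score(u) > 0)       // drop uninformative pages
    .sort((a, b) => score(b) - score(a)) // best candidates first
    .slice(0, limit);                  // cap pages sent to the scraper
}
```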
~$0.01 per crawl · 15s avg

Scrape Pages
Scrape the selected pages in parallel, converting HTML to clean markdown. We extract text content, pricing tables, feature lists, and metadata — everything the AI needs to understand the tool.
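A minimal sketch of the parallel scrape, with the fetcher injected so the example stays self-contained. The naive tag-stripping here stands in for a real HTML-to-markdown converter:

```typescript
type Fetcher = (url: string) => Promise<string>;

// Crude stand-in for HTML -> markdown conversion: strip tags,
// collapse whitespace.
function htmlToText(html: string): string {
  return html.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();
}

async function scrapePages(urls: string[], fetchPage: Fetcher) {
  // Promise.all fetches every selected page concurrently.
  const bodies = await Promise.all(urls.map(fetchPage));
  return urls.map((url, i) => ({ url, text: htmlToText(bodies[i]) }));
}
```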
~$0.004 per page · 4 pages in parallel

Review Sites
Search major review platforms for real user feedback. We query G2, Capterra, TrustRadius, and Product Hunt to gather ratings, pros/cons, and sentiment from verified users.
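The aggregation might look like the sketch below: query each platform, skip the ones with no listing, and weight each rating by its review count. The `search` parameter is a stand-in, not a real API client, and the weighting is an assumption:

```typescript
interface ReviewResult { platform: string; rating: number; reviews: number }

const REVIEW_SITES = ["G2", "Capterra", "TrustRadius", "Product Hunt"];

async function gatherReviews(
  tool: string,
  search: (site: string, tool: string) => Promise<ReviewResult | null>,
) {
  // Query every platform in parallel; drop sites with no listing.
  const results = (await Promise.all(REVIEW_SITES.map((s) => search(s, tool))))
    .filter((r): r is ReviewResult => r !== null);
  // Weight each platform's rating by how many reviews back it.
  const total = results.reduce((n, r) => n + r.reviews, 0);
  const rating = results.reduce((n, r) => n + r.rating * r.reviews, 0) / (total || 1);
  return { results, rating };
}
```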
~$0.01 per search · 5 review sites

Trust & Reputation
Verify the tool's trustworthiness by checking funding data, company size, tech stack adoption, and ecosystem signals. This builds a composite trust score from multiple independent sources.
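A composite score like this is typically a weighted sum of normalized signals. The four signal names mirror the sources above; the weights here are illustrative assumptions:

```typescript
type TrustSignals = {
  funding: number;      // 0..1, from funding data
  companySize: number;  // 0..1
  techAdoption: number; // 0..1, tech stack adoption
  ecosystem: number;    // 0..1, ecosystem signals
};

function trustScore(s: TrustSignals): number {
  // Assumed weights; each signal comes from an independent source.
  const weights: [keyof TrustSignals, number][] = [
    ["funding", 0.3], ["companySize", 0.2],
    ["techAdoption", 0.3], ["ecosystem", 0.2],
  ];
  const raw = weights.reduce((sum, [k, w]) => sum + s[k] * w, 0);
  return Math.round(raw * 100); // composite score on a 0..100 scale
}
```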
~$0.01 per search · 4 trust sources

Reddit Deep Dive
Run three parallel Reddit searches to find real user experiences, competitive comparisons, and known pain points. Results are merged and deduplicated for the most authentic user signal.
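The merge-and-dedupe step can be sketched as follows, deduplicating by URL and keeping the higher-scored copy of each post. The field names are assumptions:

```typescript
interface RedditPost { url: string; title: string; score: number }

function mergeRedditResults(batches: RedditPost[][]): RedditPost[] {
  const byUrl = new Map<string, RedditPost>();
  for (const post of batches.flat()) {
    // Same URL may appear in several of the parallel searches;
    // keep whichever copy carries the higher score.
    const seen = byUrl.get(post.url);
    if (!seen || post.score > seen.score) byUrl.set(post.url, post);
  }
  // Highest-scored posts first: the strongest user signal on top.
  return [...byUrl.values()].sort((a, b) => b.score - a.score);
}
```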
~$0.03 total · 3 parallel queries

Competitive Intel
Use Perplexity's reasoning model to analyze the competitive landscape. We identify key competitors, map market positioning, and surface funding, team size, and recent developments.
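A sketch of how the competitive-landscape query might be assembled before being sent to the reasoning model. The prompt wording and parameters are illustrative, not Trackr's actual prompt:

```typescript
// Build the competitive-intel prompt for the reasoning model.
// `tool` and `category` would come from earlier pipeline steps.
function buildCompetitiveIntelPrompt(tool: string, category: string): string {
  return [
    `Analyze the competitive landscape for ${tool} (${category}).`,
    "Identify the key competitors and map their market positioning.",
    "Surface funding, team size, and recent developments for each.",
    "Answer with one section per competitor.",
  ].join("\n");
}
```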
~$0.02 per query · deep reasoning

AI Synthesis
Feed all collected data into GPT-4o with a structured Zod schema. The model synthesizes a comprehensive report with scores, pros/cons, competitor analysis, and a final recommendation.
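The report shape might look like the sketch below. The real step describes this shape to the model as a Zod schema; the hand-rolled check here is a dependency-free stand-in, and the field names are assumptions:

```typescript
interface ToolReport {
  score: number;           // 0..100 overall score
  pros: string[];
  cons: string[];
  competitors: string[];
  recommendation: string;  // the final recommendation
}

// Runtime guard over the model's structured output
// (a stand-in for schema validation with Zod).
function isToolReport(v: unknown): v is ToolReport {
  const r = v as ToolReport;
  return (
    typeof r === "object" && r !== null &&
    typeof r.score === "number" && r.score >= 0 && r.score <= 100 &&
    [r.pros, r.cons, r.competitors].every(Array.isArray) &&
    typeof r.recommendation === "string"
  );
}
```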
~$0.04 per report · structured output

Ready to research your first tool?
Submit any tool URL. Get a scored report in under 2 minutes.
No credit card required · Free plan available