# Bad Labels: AI Marketplace (llms.txt v2)

## Purpose

Bad Labels is a machine-readable AI marketplace for humans and agents. Use this index to discover catalog data, comparison endpoints, MCP surfaces, and policy docs.

## Snapshot

- generated_at: 2026-03-27
- seed_listing_count: 253
- seed_source_files: 18
- categories: agents, ai_coding, ai_work, datasets, full_stack, influencers, integrations, llms, lobster, mcp_servers, ml_dl_nlp, physical_ai, plugins, skills, video_gen
- biomes: Coding Biome, Professional Biome, Studio Biome, Care Biome, Sandbox Biome, Externa Executive Biome, Avenews Biome, DF Orbit Biome, Campus Biome, StayRelevant Biome, NBB Private Biome

## Core Surfaces

- Web: https://www.badlabels.com
- Catalog read API: https://www.badlabels.com/api/catalog/v1/listings
- Search API: https://www.badlabels.com/api/search/search
- Platform stats: https://www.badlabels.com/api/search/v1/platform/stats
- Agent discover: https://www.badlabels.com/api/agent/v1/discover
- LLMCopy API: https://www.badlabels.com/api/agent/v1/llmcopy
- Influencers watch feed: https://www.badlabels.com/api/agent/v1/agent/signals/influencers/watch
- MCP manifest: https://www.badlabels.com/api/agent/mcp/manifest
- Zone manifest: https://www.badlabels.com/api/agent/v1/agent/zone/manifest

## Machine Endpoints

- Listing manifest: https://www.badlabels.com/api/search/v1/listings/{id}/manifest
- Listing score history: https://www.badlabels.com/api/search/v1/listings/{id}/score-history
- Listing failures: https://www.badlabels.com/api/search/v1/listings/{id}/failures
- Listing FAQs: https://www.badlabels.com/api/search/v1/listings/{id}/faqs
- Listing JSON-LD: https://www.badlabels.com/api/search/v1/listings/{id}/jsonld
- Listing markdown: https://www.badlabels.com/api/search/v1/listings/{slug}/markdown
- Knowledge base nodes: https://www.badlabels.com/api/catalog/v1/knowledge-base/nodes
- Knowledge base search: https://www.badlabels.com/api/catalog/v1/knowledge-base/search
- Knowledge base use cases: https://www.badlabels.com/api/catalog/v1/knowledge-base/use-cases
- Knowledge base recommendations: https://www.badlabels.com/api/catalog/v1/knowledge-base/recommendations
- Influencer watch feed: https://www.badlabels.com/api/agent/v1/agent/signals/influencers/watch
- Runtime skills: https://www.badlabels.com/api/agent/v1/skills
- Avenews planner: https://www.badlabels.com/api/agent/v1/avenews/plan

## Top Listings (seed snapshot)

- OpenClaw (agents): https://www.badlabels.com/listings/openclaw
- NanoClaw (agents): https://www.badlabels.com/listings/nanoclaw
- CrewAI (agents): https://www.badlabels.com/listings/crewai
- LangGraph (agents): https://www.badlabels.com/listings/langgraph
- Microsoft AutoGen (Unified Agent Framework) (agents): https://www.badlabels.com/listings/microsoft-autogen-unified-agent-framework
- Pydantic AI (agents): https://www.badlabels.com/listings/pydantic-ai
- Temporal (agents): https://www.badlabels.com/listings/temporal
- OpenAI Agents SDK (agents): https://www.badlabels.com/listings/openai-agents-sdk
- Semantic Kernel (agents): https://www.badlabels.com/listings/semantic-kernel
- Cursor (ai_coding): https://www.badlabels.com/listings/cursor

## Human Docs

- What is Bad Labels?: https://www.badlabels.com/what-is-bad-labels
- Learn: https://www.badlabels.com/learn
- Use Agents: https://www.badlabels.com/use-agents
- FAQ: https://www.badlabels.com/faq
- Press: https://www.badlabels.com/press
- Influencers watch: https://www.badlabels.com/influencers
- Social share workbench: https://www.badlabels.com/use-agents#share-social
- Avenews biome: https://www.badlabels.com/biome/avenews
- API Access (for agent developers): https://www.badlabels.com/api-access
- BLAL license: https://www.badlabels.com/blal-license
- Royalty clearinghouse: https://www.badlabels.com/royalty-clearinghouse

## Architecture Reference (for AI agents and automated systems)

- L3/L4 Biome Autonomy (Brain, BMS, CMS, V2X, Context Manifold): docs/autonomous-vehicle-biomes.md
- Implementation Plan (Phases 1–12): docs/IMPLEMENTATION_PLAN.md
- Agent Standards + spec-driven APIs: docs/AGENT_STANDARDS_AND_SPEC.md
- Biome declarations (ODD, seed_tools, modalities): data/biomes.json
- MCP schema + registry: docs/MCP_SCHEMA_AND_REGISTRY.md

## Evidence Standards

- Every AI critic score traces to a golden-set benchmark run with a SHA-256 artifact hash.
- Human critic scores require verified_purchase_or_run=true.
- Rubric: https://www.badlabels.com/scoring-methodology
- Machine-readable rubric: https://www.badlabels.com/api/review/v1/rubrics/ai_critic_rubric_2026_v1
- Golden set: https://www.badlabels.com/api/review/v1/benchmarks/golden-set
- Category golden sets: https://www.badlabels.com/api/review/v1/benchmarks/golden-set/{category}
- Per-listing provenance: https://www.badlabels.com/api/review/v1/scores/listings/{id}/provenance
- Evidence types accepted: benchmark_file, eval_log, api_run_log, red_team, security_audit, third_party_report

## Notes

- Public machine endpoints are read-only.
- Mutating operations remain authenticated and policy-gated.
- For detailed schemas and examples, see /llms-full.txt.
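## Example: Resolving Machine Endpoints

A minimal sketch of how an agent might consume the machine endpoints listed in this index. The endpoint templates and the read-only note are taken from this file; the `listing_url` and `fetch_json` helper names, and the assumption that endpoints return JSON, are illustrative and not part of any documented Bad Labels client.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://www.badlabels.com"

# Endpoint templates copied from the "Machine Endpoints" section of this index.
TEMPLATES = {
    "manifest": "/api/search/v1/listings/{id}/manifest",
    "score_history": "/api/search/v1/listings/{id}/score-history",
    "failures": "/api/search/v1/listings/{id}/failures",
    "markdown": "/api/search/v1/listings/{slug}/markdown",
}


def listing_url(kind: str, ident: str) -> str:
    """Fill the {id} or {slug} placeholder in a machine-endpoint template."""
    template = TEMPLATES[kind]
    placeholder = "{slug}" if "{slug}" in template else "{id}"
    return BASE + template.replace(placeholder, quote(ident, safe=""))


def fetch_json(url: str) -> dict:
    """Plain unauthenticated GET: public machine endpoints are read-only.

    The JSON response shape is an assumption; see /llms-full.txt for schemas.
    """
    with urlopen(url) as resp:  # network call
        return json.load(resp)


print(listing_url("markdown", "openclaw"))
# → https://www.badlabels.com/api/search/v1/listings/openclaw/markdown
```

Slugs are percent-encoded with `quote(..., safe="")` so an identifier containing `/` or spaces cannot alter the URL path.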