AI Metadata

Every package published to LPM is automatically analyzed using AI. The analysis extracts structured metadata from your source code that powers semantic search, package page displays, and the MCP Server API.

What Happens on Publish

After a package is published, a background job:

  1. Extracts source code from the tarball (.js, .ts, .jsx, .tsx, .mjs, .cjs files - larger packages are processed in chunks)
  2. Runs AI analysis to generate a package summary, security scan, and error quality assessment
  3. Generates structured API docs - extracts functions, classes, interfaces, type aliases, enums, and variables with full signatures, parameters, return types, and descriptions
  4. Generates an LLM usage guide - an optimized context document with purpose, quick-start code, key exports, common patterns, gotchas, and when-to-use guidance
  5. Generates an embedding vector for semantic search
  6. Extracts compatibility data from package.json (types, module format, frameworks, runtime)
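
The extraction step above can be sketched roughly as follows. This is an illustrative sketch, not LPM's actual implementation; `selectSourceFiles` and `chunk` are hypothetical names.

```typescript
// Extensions mirror the list in step 1.
const SOURCE_EXTENSIONS = [".js", ".ts", ".jsx", ".tsx", ".mjs", ".cjs"];

// Select analyzable files from the unpacked tarball's path list.
function selectSourceFiles(paths: string[]): string[] {
  return paths.filter((p) => SOURCE_EXTENSIONS.some((ext) => p.endsWith(ext)));
}

// Larger packages are processed in chunks; a simple fixed-size split.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```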

Analysis typically completes within a few minutes of publishing. Results appear on your package page, in API responses, and through the MCP Server.

Package Summary

The AI reads your actual source code and generates:

  • description - 1-2 sentence plain-English summary, e.g. "Type-safe form validation with Zod schemas and React hook integration"
  • capabilities - 3-6 concrete things the package can do, e.g. "Form validation, Schema-based rules, React hooks, Custom error messages"
  • useCases - 2-4 real scenarios, e.g. "Validating user registration forms, Multi-step form wizards"
  • tags - 3-8 lowercase search keywords, e.g. ["validation", "forms", "react", "zod"]
  • complexity - codebase complexity: "simple", "moderate", or "complex"
  • targetAudience - who should use this, e.g. "React developers building forms"
  • bestFor - ideal use case in one sentence, e.g. "React apps that need schema-driven form validation with TypeScript"
  • notFor - when NOT to use this, e.g. "Server-side validation without React - use Zod directly"
  • quickStart - minimal working code snippet, e.g. import { useForm } from '...'

The AI bases all of this on the code's actual behavior - not on README claims, comments, or marketing text.
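
As a shape reference, the generated summary can be modeled like this. The field names come from the list above; the exact wire format shown here is an assumption.

```typescript
type Complexity = "simple" | "moderate" | "complex";

// Illustrative shape of the AI-generated package summary.
interface PackageSummary {
  description: string;
  capabilities: string[]; // 3-6 entries
  useCases: string[];     // 2-4 entries
  tags: string[];         // 3-8 lowercase keywords
  complexity: Complexity;
  targetAudience: string;
  bestFor: string;
  notFor: string;
  quickStart: string;
}

const example: PackageSummary = {
  description: "Type-safe form validation with Zod schemas and React hook integration",
  capabilities: ["Form validation", "Schema-based rules", "React hooks", "Custom error messages"],
  useCases: ["Validating user registration forms", "Multi-step form wizards"],
  tags: ["validation", "forms", "react", "zod"],
  complexity: "moderate",
  targetAudience: "React developers building forms",
  bestFor: "React apps that need schema-driven form validation with TypeScript",
  notFor: "Server-side validation without React - use Zod directly",
  quickStart: "import { useForm } from '...'",
};
```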

API Documentation

LPM generates structured API documentation directly from your source code and type definitions:

  • Functions - name, signature, parameters with types, return type, description
  • Classes - constructor, methods, properties, inheritance
  • Interfaces - properties, methods, type parameters
  • Type aliases - definition, type parameters
  • Enums - members with values
  • Variables - name, type, description

This powers the lpm_api_docs MCP tool and the API docs section of lpm_package_context. Having TypeScript types or JSDoc annotations in your source code significantly improves the quality of generated API docs.
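
For example, a single extracted function entry might look like the following. The schema here is an assumption for illustration, not LPM's exact output format.

```typescript
// Hypothetical shape for one extracted function entry.
interface ParamDoc {
  name: string;
  type: string;
  description?: string;
}

interface FunctionDoc {
  kind: "function";
  name: string;
  signature: string;
  parameters: ParamDoc[];
  returnType: string;
  description?: string;
}

const entry: FunctionDoc = {
  kind: "function",
  name: "formatDate",
  signature: "formatDate(date: Date, pattern: string): string",
  parameters: [
    { name: "date", type: "Date", description: "Value to format" },
    { name: "pattern", type: "string", description: "Output pattern" },
  ],
  returnType: "string",
  description: "Formats a Date using the given pattern.",
};
```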

LLM Usage Guide

A condensed, LLM-optimized context document generated for each package version:

  • Purpose - what the package does in 1-2 sentences
  • Quick start - minimal working code to get started
  • Key exports - most important exports with signatures
  • Common patterns - typical usage patterns with code examples
  • Gotchas - common mistakes and how to avoid them
  • When to use - ideal scenarios and alternatives

AI coding agents use this through lpm_llm_context or lpm_package_context to understand how to correctly use a package without reading the full README.

Compatibility Data

Extracted statically from package.json and the tarball - no AI needed, 100% deterministic:

  • hasTypes - from .d.ts files or the types field. Example: true
  • moduleType - from the type, module, and exports fields. Example: "esm", "cjs", or "dual"
  • minNodeVersion - from the engines.node field. Example: "18"
  • treeShakeable - from the sideEffects field and module format. Example: true
  • bundleSize - tarball size. Example: 24576 (bytes)
  • unpackedSize - extracted size. Example: 98304 (bytes)
  • frameworks - from peerDependencies. Example: ["react"]
  • runtime - from exports conditions. Example: ["node", "browser"]
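
Because this step is deterministic, it can be expressed as ordinary code. The heuristic below sketches how moduleType might be derived from package.json fields; LPM's actual rules may differ.

```typescript
interface PackageJson {
  type?: string;
  main?: string;
  module?: string;
  exports?: Record<string, unknown>;
}

// Illustrative heuristic for moduleType detection.
function deriveModuleType(pkg: PackageJson): "esm" | "cjs" | "dual" {
  // An exports map with both "import" and "require" conditions implies dual.
  if (pkg.exports) {
    const conds = JSON.stringify(pkg.exports);
    if (conds.includes('"import"') && conds.includes('"require"')) return "dual";
  }
  const hasEsm = pkg.type === "module" || pkg.module !== undefined;
  const hasCjs = pkg.type !== "module" && pkg.main !== undefined;
  if (hasEsm && hasCjs) return "dual";
  return hasEsm ? "esm" : "cjs";
}
```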

Security Scan

Detects risky patterns in source code:

  • eval() and new Function() usage
  • Prototype pollution patterns
  • Path traversal vulnerabilities
  • Regular expression denial of service (ReDoS)
  • Unsafe deserialization
  • Dynamic require() with user input
  • Hardcoded secrets or API keys

Findings include severity, location, and fix suggestions.
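
Two of the simpler checks above can be approximated with pattern matching. This is only an illustration; robust detection works on the syntax tree and AI analysis rather than regexes.

```typescript
interface Finding {
  pattern: string;
  line: number;
}

// Naive regex approximations of two risky patterns.
const RISKY_PATTERNS: Record<string, RegExp> = {
  "eval()": /\beval\s*\(/,
  "new Function()": /\bnew\s+Function\s*\(/,
};

function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const [pattern, re] of Object.entries(RISKY_PATTERNS)) {
      if (re.test(text)) findings.push({ pattern, line: i + 1 });
    }
  });
  return findings;
}
```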

Error Quality Assessment

Evaluates error handling patterns:

  • Are errors typed and specific?
  • Do catch blocks handle errors meaningfully?
  • Is async error handling correct?
  • Are user-facing error messages helpful?

Scored as good, fair, or poor.
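
As an illustration of the patterns the assessment rewards, here is a typed, specific error class paired with a check that fails loudly. The names are invented for this example.

```typescript
// A specific error type carries context the caller can act on.
class ConfigError extends Error {
  constructor(public readonly key: string) {
    super(`Missing required config key: ${key}`);
    this.name = "ConfigError";
  }
}

function readConfig(cfg: Record<string, string>, key: string): string {
  const value = cfg[key];
  if (value === undefined) throw new ConfigError(key);
  return value;
}
```

A caller can then branch on `instanceof ConfigError` instead of swallowing all errors in a bare catch block.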

How It's Used

  • Package page - AI insights section with summary, security findings, and quality indicators
  • Website search - semantic search powered by embeddings; finds packages by intent, not just keywords
  • MCP Server - lpm_package_info for metadata, lpm_api_docs for API reference, lpm_llm_context for the usage guide, or lpm_package_context for all three combined
  • API consumers - GET /api/registry/@lpm.dev/owner.package includes an ai field in the response
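
The documented registry response includes an ai field; the nested shape in this excerpt is an assumption for illustration only.

```typescript
// Hypothetical excerpt of a registry API response body.
const response = {
  name: "@lpm.dev/acme.forms", // placeholder package name
  version: "1.2.0",
  ai: {
    description: "Type-safe form validation with Zod schemas",
    tags: ["validation", "forms"],
  },
};

// Consumers should treat ai as optional: analysis may still be running.
const aiTags: string[] = response.ai?.tags ?? [];
```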

Opting Out

AI analysis is automatic by default for all published packages. You can opt out of AI processing for your private packages in your account settings.

When opted out, private packages skip AI analysis, security scanning, and AI-generated documentation after publishing. Deterministic data extraction (TypeScript types, compatibility info from package.json) still runs regardless of this setting.

Pool and marketplace packages always require AI processing. If you change a private package's distribution mode to Pool or Marketplace, AI analysis will automatically run on the latest version. This ensures all publicly distributed packages have quality metadata, security scans, and documentation for consumers.

Source code is extracted temporarily for analysis and discarded afterward - it is not stored or used for model training. If your package has no analyzable source files (e.g., binary-only packages), the analysis is skipped.

Improving Your AI Metadata

The quality of generated metadata depends on your source code clarity:

  • Use descriptive export names - formatDate is more informative than fmt
  • Include TypeScript types - improves hasTypes compatibility signal and IntelliSense coverage
  • Use ESM exports - enables tree-shaking detection
  • Add engines field - surfaces minimum Node.js version
  • Declare peerDependencies - enables framework detection (react, vue, etc.)
  • Set sideEffects: false - signals tree-shakeability to bundlers and LPM
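
Applied together, these recommendations produce a package.json along these lines. The name and file paths are placeholders.

```json
{
  "name": "@acme/forms",
  "version": "1.0.0",
  "type": "module",
  "types": "./dist/index.d.ts",
  "sideEffects": false,
  "engines": { "node": ">=18" },
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js"
    }
  },
  "peerDependencies": { "react": ">=18" }
}
```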

These also improve your quality score, so it's a double benefit.