AI Metadata
Every package published to LPM is automatically analyzed using AI. The analysis extracts structured metadata from your source code that powers semantic search, package page displays, and the MCP Server API.
What Happens on Publish
After a package is published, a background job:
- Extracts source code from the tarball (`.js`, `.ts`, `.jsx`, `.tsx`, `.mjs`, and `.cjs` files; larger packages are processed in chunks)
- Runs AI analysis to generate a package summary, security scan, and error quality assessment
- Generates structured API docs - extracts functions, classes, interfaces, type aliases, enums, and variables with full signatures, parameters, return types, and descriptions
- Generates an LLM usage guide - an optimized context document with purpose, quick-start code, key exports, common patterns, gotchas, and when-to-use guidance
- Generates an embedding vector for semantic search
- Extracts compatibility data from `package.json` (types, module format, frameworks, runtime)
Analysis typically completes within a few minutes of publishing. Results appear on your package page, in API responses, and through the MCP Server.
Package Summary
The AI reads your actual source code and generates:
| Field | Description | Example |
|---|---|---|
| `description` | 1-2 sentence plain-English summary | "Type-safe form validation with Zod schemas and React hook integration" |
| `capabilities` | 3-6 concrete things the package can do | "Form validation, Schema-based rules, React hooks, Custom error messages" |
| `useCases` | 2-4 real scenarios | "Validating user registration forms, Multi-step form wizards" |
| `tags` | 3-8 lowercase search keywords | `["validation", "forms", "react", "zod"]` |
| `complexity` | Codebase complexity | `"simple"`, `"moderate"`, or `"complex"` |
| `targetAudience` | Who should use this | "React developers building forms" |
| `bestFor` | Ideal use case in one sentence | "React apps that need schema-driven form validation with TypeScript" |
| `notFor` | When NOT to use this | "Server-side validation without React - use Zod directly" |
| `quickStart` | Minimal working code snippet | `import { useForm } from '...'` |
The AI bases all of this on the code's actual behavior - not on README claims, comments, or marketing text.
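As a sketch, the fields above might come back looking like this. This is a hypothetical example object mirroring the table, not actual LPM output:

```typescript
// Hypothetical example of the AI-generated summary fields listed above.
// The shape follows the table; the values are illustrative only.
const summary = {
  description:
    "Type-safe form validation with Zod schemas and React hook integration",
  capabilities: [
    "Form validation",
    "Schema-based rules",
    "React hooks",
    "Custom error messages",
  ],
  useCases: [
    "Validating user registration forms",
    "Multi-step form wizards",
  ],
  tags: ["validation", "forms", "react", "zod"], // 3-8 lowercase keywords
  complexity: "moderate" as const, // "simple" | "moderate" | "complex"
  targetAudience: "React developers building forms",
  bestFor: "React apps that need schema-driven form validation with TypeScript",
  notFor: "Server-side validation without React - use Zod directly",
};
```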
API Documentation
LPM generates structured API documentation directly from your source code and type definitions:
| Element | What's extracted |
|---|---|
| Functions | Name, signature, parameters with types, return type, description |
| Classes | Constructor, methods, properties, inheritance |
| Interfaces | Properties, methods, type parameters |
| Type aliases | Definition, type parameters |
| Enums | Members with values |
| Variables | Name, type, description |
This powers the `lpm_api_docs` MCP tool and the API docs section of `lpm_package_context`. Having TypeScript types or JSDoc annotations in your source code significantly improves the quality of generated API docs.
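For example, a fully typed and documented export gives the extractor everything in the Functions row above. This is an illustrative snippet, not a required format:

```typescript
/**
 * Formats a date as an ISO-8601 calendar date (YYYY-MM-DD).
 *
 * @param date - The date to format.
 * @returns The date portion of the ISO string.
 */
export function formatDate(date: Date): string {
  return date.toISOString().slice(0, 10);
}
```

Here the name, parameter types, return type, and description can all be read directly from the source; without the types and JSDoc, those fields have to be inferred and the docs come out weaker.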
LLM Usage Guide
A condensed, LLM-optimized context document generated for each package version:
| Section | Purpose |
|---|---|
| Purpose | What the package does in 1-2 sentences |
| Quick start | Minimal working code to get started |
| Key exports | Most important exports with signatures |
| Common patterns | Typical usage patterns with code examples |
| Gotchas | Common mistakes and how to avoid them |
| When to use | Ideal scenarios and alternatives |
AI coding agents use this through `lpm_llm_context` or `lpm_package_context` to understand how to correctly use a package without reading the full README.
Compatibility Data
Extracted statically from `package.json` and the tarball - no AI needed, 100% deterministic:
| Field | Source | Example |
|---|---|---|
| `hasTypes` | `.d.ts` files or `types` field | `true` |
| `moduleType` | `type`, `module`, `exports` fields | `"esm"`, `"cjs"`, or `"dual"` |
| `minNodeVersion` | `engines.node` field | `"18"` |
| `treeShakeable` | `sideEffects` field + module format | `true` |
| `bundleSize` | Tarball size | `24576` (bytes) |
| `unpackedSize` | Extracted size | `98304` (bytes) |
| `frameworks` | `peerDependencies` | `["react"]` |
| `runtime` | `exports` conditions | `["node", "browser"]` |
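As a minimal sketch, `moduleType` could be derived deterministically from `package.json` fields along these lines. This is an assumed reconstruction of the rule, not LPM's actual implementation:

```typescript
type PackageJson = {
  type?: "module" | "commonjs";
  main?: string;
  module?: string;
  exports?: Record<string, unknown>;
};

// Hypothetical rule: a package shipping both ESM and CJS entry points is
// "dual"; otherwise the `type` field (or a `module` field) decides.
function inferModuleType(pkg: PackageJson): "esm" | "cjs" | "dual" {
  const root = pkg.exports?.["."] ?? pkg.exports;
  const hasImport =
    pkg.type === "module" ||
    pkg.module !== undefined ||
    (typeof root === "object" && root !== null && "import" in root);
  const hasRequire =
    (pkg.main !== undefined && pkg.type !== "module") ||
    (typeof root === "object" && root !== null && "require" in root);
  if (hasImport && hasRequire) return "dual";
  return hasImport ? "esm" : "cjs";
}
```

For instance, `{ type: "module" }` yields `"esm"`, a bare `{ main: "index.js" }` yields `"cjs"`, and an `exports` map with both `import` and `require` conditions yields `"dual"`.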
Security Scan
Detects risky patterns in source code:
- `eval()` and `new Function()` usage
- Prototype pollution patterns
- Path traversal vulnerabilities
- Regular expression denial of service (ReDoS)
- Unsafe deserialization
- Dynamic `require()` with user input
- Hardcoded secrets or API keys
Findings include severity, location, and fix suggestions.
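As a toy illustration of pattern-based scanning - not LPM's scanner, which does far more than regex matching - the first two patterns might be flagged like this:

```typescript
type Finding = { pattern: string; line: number };

// Toy detector for two of the risky patterns listed above. A real scanner
// would work on the AST to avoid false positives (e.g. matches inside
// string literals or comments).
const RISKY_PATTERNS: Array<[string, RegExp]> = [
  ["eval() usage", /\beval\s*\(/],
  ["new Function() usage", /\bnew\s+Function\s*\(/],
];

function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const [pattern, re] of RISKY_PATTERNS) {
      if (re.test(text)) findings.push({ pattern, line: i + 1 });
    }
  });
  return findings;
}
```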
Error Quality Assessment
Evaluates error handling patterns:
- Are errors typed and specific?
- Do catch blocks handle errors meaningfully?
- Is async error handling correct?
- Are user-facing error messages helpful?
Scored as `good`, `fair`, or `poor`.
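To make the criteria concrete, here is the kind of contrast the assessment looks for. These are illustrative examples, not LPM's actual rubric:

```typescript
// Likely scored poorly: the error is swallowed and the caller learns nothing.
async function loadConfigPoor(read: (path: string) => Promise<string>) {
  try {
    return JSON.parse(await read("config.json"));
  } catch {
    return null; // silent failure, no context for the caller
  }
}

// Likely scored well: a typed, specific error with a helpful, user-facing
// message explaining what went wrong and what to check.
class ConfigError extends Error {
  constructor(message: string) {
    super(message);
    this.name = "ConfigError";
  }
}

async function loadConfigGood(read: (path: string) => Promise<string>) {
  try {
    return JSON.parse(await read("config.json"));
  } catch {
    throw new ConfigError(
      "Failed to load config.json - check that the file exists and contains valid JSON"
    );
  }
}
```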
How It's Used
| Consumer | What they see |
|---|---|
| Package page | AI insights section with summary, security findings, and quality indicators |
| Website search | Semantic search powered by embeddings - finds packages by intent, not just keywords |
| MCP Server | `lpm_package_info` for metadata, `lpm_api_docs` for API reference, `lpm_llm_context` for usage guide, or `lpm_package_context` for all three combined |
| API consumers | `GET /api/registry/@lpm.dev/owner.package` includes an `ai` field in the response |
Opting Out
AI analysis is automatic by default for all published packages. You can opt out of AI processing for your private packages in your account settings:
- Personal packages: Dashboard → Settings → Preferences
- Organization packages: Dashboard → Org Settings → General
When opted out, private packages skip AI analysis, security scanning, and AI-generated documentation after publishing. Deterministic data extraction (TypeScript types, compatibility info from package.json) still runs regardless of this setting.
Pool and marketplace packages always require AI processing. If you change a private package's distribution mode to Pool or Marketplace, AI analysis will automatically run on the latest version. This ensures all publicly distributed packages have quality metadata, security scans, and documentation for consumers.
Source code is extracted temporarily for analysis and discarded afterward - it is not stored or used for model training. If your package has no analyzable source files (e.g., binary-only packages), the analysis is skipped.
Improving Your AI Metadata
The quality of generated metadata depends on your source code clarity:
- Use descriptive export names - `formatDate` is more informative than `fmt`
- Include TypeScript types - improves the `hasTypes` compatibility signal and IntelliSense coverage
- Use ESM exports - enables tree-shaking detection
- Add an `engines` field - surfaces the minimum Node.js version
- Declare `peerDependencies` - enables framework detection (react, vue, etc.)
- Set `sideEffects: false` - signals tree-shakeability to bundlers and LPM
These also improve your quality score, so it's a double benefit.
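Taken together, a `package.json` that feeds all of the deterministic signals above might look like this (the package name and paths are illustrative):

```json
{
  "name": "@acme/forms",
  "version": "1.0.0",
  "type": "module",
  "types": "./dist/index.d.ts",
  "sideEffects": false,
  "engines": { "node": ">=18" },
  "peerDependencies": { "react": ">=18" },
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "require": "./dist/index.cjs"
    }
  }
}
```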