Quality Score
Every package on LPM gets a quality score out of 100 points. The score is computed from automated checks across four categories and displayed on the package page. The number and type of checks vary by ecosystem.
| Ecosystem | Checks |
|---|---|
| JavaScript | 29 |
| Swift | 25 |
| XCFramework | 21 |
Score Tiers
| Tier | Score |
|---|---|
| Excellent | 90+ |
| Good | 70-89 |
| Fair | 50-69 |
| Needs Work | Below 50 |
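The tier boundaries above can be expressed as a small function. This is a sketch of the mapping, not LPM's actual code:

```typescript
type Tier = "Excellent" | "Good" | "Fair" | "Needs Work";

// Map a 0-100 quality score to its display tier.
function tierFor(score: number): Tier {
  if (score >= 90) return "Excellent";
  if (score >= 70) return "Good";
  if (score >= 50) return "Fair";
  return "Needs Work";
}
```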
Check Before Publishing
Run quality checks locally without publishing:

```shell
lpm publish --check
```

Set a minimum score threshold to enforce quality in CI:

```shell
lpm publish --min-score 80
```

If the score is below the threshold, the publish is blocked.
JavaScript Checks (29)
Documentation (22 points)
| Check | Points | What it looks for |
|---|---|---|
| README exists | 8 | README file with at least 100 characters |
| Install instructions | 3 | Installation section in README |
| Usage examples | 3 | Code blocks showing how to use the package |
| API documentation | 2 | API reference section |
| CHANGELOG | 3 | CHANGELOG file |
| LICENSE | 3 | LICENSE file or license field in package.json |
Code Quality (29 points)
| Check | Points | What it looks for |
|---|---|---|
| Type definitions | 8 | TypeScript .d.ts files or types field |
| IntelliSense coverage | 4 | Full IntelliSense/autocomplete support |
| ESM exports | 3 | ESM module support (type: "module" or .mjs) |
| Tree-shakable | 3 | Named exports that enable tree-shaking |
| No eval patterns | 3 | Absence of eval() or new Function() |
| Engine requirements | 1 | engines field in package.json |
| Exports map | 3 | package.json exports field |
| Minimal dependencies | 3 | 7 or fewer production dependencies |
| Source maps | 1 | .map files included |
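For reference, here is a `package.json` sketch (illustrative names and paths) containing the fields several of these checks inspect: a `types` field, ESM support via `type: "module"`, an `exports` map, and an `engines` field:

```json
{
  "name": "my-package",
  "version": "1.0.0",
  "type": "module",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js"
    }
  },
  "engines": {
    "node": ">=18"
  }
}
```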
Testing (11 points)
| Check | Points | What it looks for |
|---|---|---|
| Test files | 7 | Test files (.test.js, .spec.js, etc.) |
| Test script | 4 | Valid test script in package.json |
Package Health (38 points, 12 checks)
| Check | Points | What it looks for |
|---|---|---|
| Description | 3 | Meaningful package description |
| Keywords | 1 | Keywords array in package.json |
| Repository URL | 2 | repository field |
| Homepage URL | 1 | homepage field |
| Reasonable size | 3 | Bundle under 1MB |
| No vulnerabilities | 5 | No known security vulnerabilities |
| No lifecycle scripts | 2 | No preinstall, install, postinstall, preuninstall, uninstall, or postuninstall scripts |
| Maintenance health | 4 | Published within the last 90 days |
| Semver consistency | 4 | No wild version jumps |
| Author verified | 3 | Author has linked GitHub or LinkedIn |
| Agent Skills | 7 | Package has 1+ approved Agent Skills in .lpm/skills/ |
| Comprehensive Skills | 3 | Package has 3+ approved Agent Skills |
Why lifecycle scripts affect your score
Lifecycle scripts (postinstall, preinstall, etc.) execute automatically when someone installs your package. They are the primary vector for supply chain attacks — malicious packages use them to run arbitrary code on install. Packages without lifecycle scripts score higher because they are safer to install. If your package legitimately needs a postinstall script, the 2-point deduction is minor and your score can still reach Excellent.
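As a sketch, the install-hook check can be reproduced locally in a few lines of TypeScript. The helper below is hypothetical, not LPM's implementation:

```typescript
// The six npm lifecycle hooks that run automatically on install/uninstall.
const INSTALL_HOOKS = [
  "preinstall", "install", "postinstall",
  "preuninstall", "uninstall", "postuninstall",
];

// Returns true if the manifest declares any automatic install hook.
function hasLifecycleScripts(pkg: { scripts?: Record<string, string> }): boolean {
  return INSTALL_HOOKS.some((hook) => hook in (pkg.scripts ?? {}));
}
```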
Swift Checks (25)
Documentation (22 points)
| Check | Points | What it looks for |
|---|---|---|
| README exists | 8 | README file with at least 100 characters |
| Install instructions | 3 | Mentions Package.swift, .package(, Swift Package Manager, or lpm add |
| Usage examples | 3 | Code blocks showing how to use the package |
| API documentation | 2 | API reference section |
| CHANGELOG | 3 | CHANGELOG file |
| LICENSE | 3 | LICENSE file |
Code Quality (31 points)
| Check | Points | What it looks for |
|---|---|---|
| Platform declarations | 6 | platforms array in Package.swift (e.g., iOS 16+, macOS 13+) |
| Recent tools version | 5 | Swift tools version 5.9 or later |
| Multi-platform | 4 | Support for 2+ Apple platforms |
| Public API | 5 | Swift source files in Sources/ directory |
| Doc comments | 7 | Documentation comments on public APIs (/// DocC style) |
| Minimal dependencies | 4 | 10 or fewer Swift package dependencies |
Testing (11 points)
| Check | Points | What it looks for |
|---|---|---|
| Test files | 7 | Test targets in manifest or files in Tests/ |
| Test script | 4 | Test targets defined in Package.swift |
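A minimal `Package.swift` sketch (names are illustrative) showing the manifest fields these checks read: a recent tools version, platform declarations covering two Apple platforms, and a test target:

```swift
// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "MyLibrary",
    // Platform declarations; two platforms satisfies the multi-platform check.
    platforms: [.iOS(.v16), .macOS(.v13)],
    products: [
        .library(name: "MyLibrary", targets: ["MyLibrary"]),
    ],
    targets: [
        .target(name: "MyLibrary"),
        // A test target counts for both testing checks.
        .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
    ]
)
```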
Package Health (36 points)
| Check | Points | What it looks for |
|---|---|---|
| Description | 3 | Meaningful package description |
| Keywords | 1 | Keywords metadata |
| Repository URL | 2 | Repository link |
| Homepage URL | 1 | Homepage link |
| Reasonable size | 3 | Package under 1MB |
| No vulnerabilities | 5 | No known security vulnerabilities |
| Maintenance health | 4 | Published within the last 90 days |
| Semver consistency | 4 | No wild version jumps |
| Author verified | 3 | Author has linked GitHub or LinkedIn |
| Agent Skills | 7 | Package has 1+ approved Agent Skills in .lpm/skills/ |
| Comprehensive Skills | 3 | Package has 3+ approved Agent Skills |
XCFramework Checks (21)
Documentation (22 points)
Same as Swift: README, install instructions, usage examples, API docs, changelog, and license.
Code Quality (40 points)
| Check | Points | What it looks for |
|---|---|---|
| Valid Info.plist | 10 | Parseable Info.plist with at least one platform slice |
| Multiple platform slices | 15 | ≥4 targets=15 · 3=11 · 2=7 · 1=3 · 0=0 |
| Reasonable binary size | 5 | ≤10MB=5 · ≤50MB=4 · ≤100MB=3 · ≤200MB=1 · >200MB=0 |
| Architecture support | 10 | Includes arm64 architecture |
XCFrameworks are pre-built binaries and cannot ship test targets, so there is no Testing category. The 40 code quality points instead reward platform and architecture coverage, the factors that actually determine whether the framework works for a given consumer.
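The tiered point scales in the table above can be written out as two small functions. This is my own reconstruction of the tables, not LPM's implementation:

```typescript
// Points for the "Multiple platform slices" check: >=4 targets earns full credit.
function sliceScore(sliceCount: number): number {
  if (sliceCount >= 4) return 15;
  if (sliceCount === 3) return 11;
  if (sliceCount === 2) return 7;
  if (sliceCount === 1) return 3;
  return 0;
}

// Points for the "Reasonable binary size" check, on an MB scale.
function binarySizeScore(sizeMB: number): number {
  if (sizeMB <= 10) return 5;
  if (sizeMB <= 50) return 4;
  if (sizeMB <= 100) return 3;
  if (sizeMB <= 200) return 1;
  return 0;
}
```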
Package Health (38 points)
| Check | Points | What it looks for |
|---|---|---|
| Description | 3 | Meaningful package description |
| Keywords | 1 | Keywords metadata |
| Repository URL | 2 | Repository link |
| Homepage URL | 1 | Homepage link |
| Reasonable size | 3 | ≤10MB=3 · ≤50MB=2 · ≤100MB=1 · >100MB=0 (MB scale - XCFrameworks are binary) |
| No vulnerabilities | 5 | No known security vulnerabilities |
| Maintenance health | 5 | Published within the last 90 days |
| Semver consistency | 4 | No wild version jumps |
| Author verified | 4 | Author has linked GitHub or LinkedIn |
| Agent Skills | 7 | Package has 1+ approved Agent Skills in .lpm/skills/ |
| Comprehensive Skills | 3 | Package has 3+ approved Agent Skills |
Server-Side Verification
Several checks are estimated locally and verified or augmented by the server after publishing:
| Check | Ecosystem | Server behavior |
|---|---|---|
| No eval patterns | JS | Server scans source tarball for eval( and new Function( |
| No lifecycle scripts | JS | Server checks scripts field for install hooks |
| IntelliSense coverage | JS | Server awards partial credit for JSDoc @param/@returns |
| Public API | Swift | Server scans for public declarations |
| Doc comments | Swift | Server scans for /// DocC comments |
| No vulnerabilities | All | Server checks vulnerability database |
| Maintenance health | All | Server checks publish history |
| Semver consistency | All | Server also checks version history for wild jumps |
| Author verified | All | Server checks linked GitHub or LinkedIn |
| Agent Skills | All | Server checks if 1+ approved skills exist |
| Comprehensive Skills | All | Server checks if 3+ approved skills exist |
Publish-Time Security
Beyond quality scoring, LPM performs additional security checks when you publish:
- Manifest validation — The server extracts `package.json` from inside your tarball and compares it against the metadata in your publish request. If the name, version, dependencies, scripts, or bin fields don't match, the publish is rejected. This prevents manifest confusion attacks.
- Vulnerability scanning — Dependencies are checked against the OSV vulnerability database.
- Skills validation — Agent Skills in `.lpm/skills/` are validated for correct frontmatter and structure.
If you see a "Manifest validation failed" error during publish, it means your local `package.json` doesn't match what's inside the tarball npm built. This usually resolves by running `npm pack` again or checking that no build step modified `package.json` after packing.
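The manifest comparison described above can be sketched as a field-by-field equality check. This is a hypothetical helper illustrating the idea, not the server's actual code:

```typescript
type Manifest = {
  name: string;
  version: string;
  dependencies?: Record<string, string>;
  scripts?: Record<string, string>;
  bin?: Record<string, string>;
};

// Compare the security-relevant fields of the request metadata
// against the package.json extracted from the tarball.
function manifestsMatch(request: Manifest, tarball: Manifest): boolean {
  const keys: (keyof Manifest)[] = ["name", "version", "dependencies", "scripts", "bin"];
  return keys.every(
    (k) => JSON.stringify(request[k] ?? null) === JSON.stringify(tarball[k] ?? null)
  );
}
```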
Maintenance Decay
Quality scores decay over time. If a package has not been updated in over 90 days, the maintenance health points are gradually deducted. This encourages authors to keep packages up to date.
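The decay could look something like the sketch below. The docs only say the points are "gradually deducted" after 90 days; the linear curve and the second 90-day window here are assumptions, not LPM's documented formula:

```typescript
// Hypothetical linear decay of the 4 maintenance-health points.
// Full credit inside the 90-day grace period, then a linear ramp
// down to zero over an assumed additional 90-day window.
function maintenancePoints(
  daysSinceLastPublish: number,
  max = 4,
  grace = 90,
  window = 90
): number {
  if (daysSinceLastPublish <= grace) return max;
  const overdue = daysSinceLastPublish - grace;
  return Math.max(0, max * (1 - overdue / window));
}
```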
AI Analysis
After publishing, LPM runs an automated AI analysis on your package source code. This generates:
- A plain-English summary, capabilities list, and target audience
- Decision-support fields (`bestFor`, `notFor`, `quickStart`) for AI agents
- Security pattern detection (eval, prototype pollution, path traversal, etc.)
- Error handling quality assessment
- Compatibility metadata (types, module format, frameworks, runtime)
- An embedding vector for semantic search
AI insights appear on your package page, power search results, and are available to AI coding agents via the MCP Server. See AI Metadata for the full list of generated fields and how to improve your results.