Hey HN. I've been working on askill, a CLI package manager for agent skills (SKILL.md files used by Claude Code, Codex, Cursor, etc.).
There are already several skill directories and installers out there (skills.sh, skillregistry.io, and others). I saw the Show HN for skills.sh a few weeks ago and noticed comments asking for version management, proper uninstalls, and more transparency around what gets installed. Those are exactly the problems I'd been working on, so I figured it was worth sharing.
What askill does differently:
1. AI safety scoring. Every skill indexed on askill.sh gets an automated review across five dimensions: safety, clarity, completeness, actionability, and reusability. The full breakdown is visible before you install. This was motivated by a simple concern — a SKILL.md tells your agent what to do, what commands to run, how to behave. Trusting random files from GitHub without any review felt like the early days of npm before anyone thought about supply chain security.
2. Real package management. askill publish lets authors release versioned skills with semver. askill add @scope/name@^1.0 resolves the range to a concrete version (there's a rough sketch of what that resolution looks like just after this list). askill update and askill remove do what you'd expect. Skills can declare dependencies on other skills. None of the existing tools I've seen handle versioning or dependency resolution.
3. Precise installs. askill add @scope/name installs exactly one skill. Most alternatives operate at the repo level: if a repo has 12 skills and you only want 1, you still get all 12. askill also lets you install straight from GitHub (askill add gh:owner/repo@skill-name) if the skill hasn't been published.
4. Cross-agent symlinks. Skills are written once to .agents/skills/ (the canonical location) and symlinked into each agent's expected directory (.claude/skills/, .codex/skills/, .cursor/skills/, etc.). One install, all agents see it. It also makes removal clean: delete the canonical copy and all the symlinks go away. The layout is sketched after this list too.
5. Open indexing. An automated crawler finds SKILL.md files across public GitHub repos and indexes them. Authors can also run askill submit <github-url> to trigger indexing of a specific repo. No manual curation.
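
To make the versioning in item 2 concrete: resolving a range like ^1.0 against a list of published versions can be done with the standard semver package. This is a simplified TypeScript sketch; the version list is made up and it's not askill's actual resolver code.

    import semver from "semver";

    // Hypothetical versions the registry might return for @scope/name.
    const published = ["1.0.0", "1.2.3", "1.3.0", "2.0.0"];

    // maxSatisfying picks the highest version that satisfies the range.
    const picked = semver.maxSatisfying(published, "^1.0");
    console.log(picked); // "1.3.0" -- 2.0.0 is outside the caret range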
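
And the cross-agent layout from item 4 boils down to something like this (again a sketch, not the real implementation; the skill name and agent list are placeholders):

    import { mkdirSync, symlinkSync } from "node:fs";
    import { dirname, join, relative } from "node:path";

    const canonical = join(".agents", "skills", "my-skill"); // single source of truth
    const agentDirs = [".claude/skills", ".codex/skills", ".cursor/skills"];

    for (const dir of agentDirs) {
      mkdirSync(dir, { recursive: true });
      const link = join(dir, "my-skill");
      // Relative link so the project can be cloned or moved without breaking it.
      symlinkSync(relative(dirname(link), canonical), link, "dir");
    }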
The AI scoring pipeline runs hourly. It re-evaluates whenever the source SKILL.md content changes. The scoring is done by an LLM with 11 heuristic rules as guardrails (detecting auto-generated content, internal config paths, hardcoded secrets, etc.). I'm under no illusions that LLM-based review is perfect, but it's a starting point and better than nothing.
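
For flavor, one of the heuristic guardrails can be as simple as a pattern check for hardcoded secrets. This is an illustrative sketch, not the actual rules:

    const secretPatterns: RegExp[] = [
      /AKIA[0-9A-Z]{16}/,                       // AWS access key ID
      /ghp_[A-Za-z0-9]{36}/,                    // GitHub personal access token
      /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // embedded private key
    ];

    function flagSecrets(skillMd: string): string[] {
      return secretPatterns
        .filter((p) => p.test(skillMd))
        .map((p) => `possible hardcoded secret: ${p.source}`);
    }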
The CLI is open source (MIT): https://github.com/avibe-bot/askill
Browse indexed skills: https://askill.sh
Happy to answer questions about the architecture, the scoring system, or anything else.