Your AI skills don't belong in your repo

We're riding the AI hype train.
We don't know exactly where it's going, but everyone's building while it moves.
And in the middle of that, we might be making a mistake: we're putting our entire AI layer inside our repositories.
The thing nobody notices (until it's too late)
You shipped your repo. Maybe it's open source. Maybe it's a private project with 50 contributors. Either way, you ran git push and moved on.
But somewhere in that push, sitting quietly between your source code and your README, there's a folder called .claude/. Maybe .cursor/rules/. Maybe both.
Inside: skills, agents, prompts, hooks. The files that tell your AI how to think about your code. How to review PRs. What patterns to enforce. What to ignore.
You committed them because that's the default. Every AI tool tells you to. "Just add this file to your project." So you did.
Nobody told you that you just published your AI's playbook.
Not a config file. Not a theme preference. The actual instructions that shape how your AI reads, writes, and judges your code, sitting in a public repo, downloadable with a single git clone.
For a private repo with a tight team, maybe that's fine.
For an open source project, you just gave away your playbook for free.
For a company with 200 developers, contractors, and CI pipelines pulling your repo, you've given everyone the same level of access to your AI logic as they have to your utility functions.
The not-so-open-source trap
Open source has a simple deal: you share your code, others use it, contribute, improve it. Everybody wins. But that deal was designed for code. Not for the layer that sits on top of it.
When you open source a project with AI artifacts committed to the repo, you're not just sharing how your software works. You're sharing how you work with AI. That's not code. That's competitive advantage.
"But it's open source, everything should be open."
Should it? We’ve seen a version of this before with E2E tests. At some point, they stop being “tests” and become a full blueprint of how your product behaves. Flows, edge cases, expected outputs.
And if you expose all of that, reproducing the system becomes trivial. AI artifacts are the same thing, just more explicit.
"We're not open source, so this doesn't apply to us."
It does. Just differently.
If someone gets access to your repo, authorized or not they don't just get your code. They get the full AI kit on top. Every prompt, every agent, every heuristic your team built. Not because they were looking for it. Because it was sitting right there, bundled with everything else.
The pattern you already know
Secrets? .env + vault. Never committed. Infrastructure? Terraform, separate state, different access controls.
Your AI artifacts? Committed alongside your CSS.
You already solved this problem for every other layer of your stack. You just haven't solved it for this one yet.
What you can do right now
Decide what's shareable and what's not. Some artifacts are generic ("use TypeScript strict mode"). Those are fine in the repo. Others encode how your team works, reviews, and makes decisions. Those probably shouldn't be public.
.gitignore the sensitive parts. It's not perfect: someone has to manually manage what goes in and what stays out, and it breaks the "just clone and go" workflow. But it's better than shipping everything by default.
Keep a separate repo. Move your AI artifacts to a dedicated repository with tighter access controls. Your team clones it separately. It's manual, it doesn't sync automatically, and you'll end up with a script or two to glue it together, but at least your AI layer isn't bundled with every git clone of your main project.
But none of this scales well. Managing .gitignore rules across tools, keeping artifacts in sync manually, onboarding new devs with "copy this folder from Slack": it all falls apart fast. Still, it's a start.
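A minimal sketch of the first two steps, assuming your tools keep their files in .claude/ and .cursor/rules/ (the folders mentioned earlier; swap in whatever your setup actually uses). The artifacts-repo URL is a placeholder, not a real repository:

```shell
# Sketch: keep the AI layer out of the main repo's history.

# 1) Ignore the sensitive folders in the main repo.
cat >> .gitignore <<'EOF'
.claude/
.cursor/rules/
EOF

# 2) Glue-script idea: pull artifacts from a separate, access-controlled
#    repo into a path the main repo ignores (placeholder URL below).
# git clone git@github.com:yourteam/ai-artifacts.git .ai-artifacts
```

New devs run the glue script once after cloning; the main repo's history never carries the artifacts themselves.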
What if this was automated?
This is what we asked ourselves: what if your AI artifacts worked like dependencies?
Your repo declares what you need. The content lives in a registry. You commit the manifest, not the artifacts, the same way you commit package.json, not node_modules.
That's grekt: an artifact manager for AI tooling (skills, agents, rules, MCPs...).
Your repo says "I use this code review agent at version 2.1.0." But the actual prompts, the logic, the heuristics, those live in the registry, not in your git history.
Anyone who clones your repo knows what you use. They don't get how it thinks.
# grekt.yaml — lives in your repo
# .grekt/artifacts doesn't (it's gitignored)
artifacts:
  "@yourteam/code-review-agent": 2.1.0
  "@yourteam/security-skill": 1.4.0
What we're building next
Today, grekt separates your artifacts from your repo. Your AI logic doesn't travel with git clone. That's the foundation.
But separation is step one. We're working on what comes after:
Granular access controls: not everyone who accesses the code should access every artifact
Audit logs: who installed what, when, and which version
Team-level permissions: different roles, different visibility
We're not there yet. But you can't control access to something that ships inside every git clone. First you separate it. Then you control it.
Know your AI stack. But don't let everyone else know it too.

