<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[grekt | Know your AI stack - Audit everything. Trust nothing]]></title><description><![CDATA[Updates, releases, and thoughts about the grekt ecosystem. Audit, manage, and version your MCPs, agents, skills, and AI tools across Claude, Cursor, and more. And let your projects or teams have reproducible configurations on every machine.]]></description><link>https://blog.grekt.com</link><image><url>https://cdn.hashnode.com/uploads/logos/69a84e90e55311e40f05fc27/2aef82f9-f759-4311-b328-0f7eab6ca2c0.png</url><title>grekt | Know your AI stack - Audit everything. Trust nothing</title><link>https://blog.grekt.com</link></image><generator>RSS for Node</generator><lastBuildDate>Thu, 23 Apr 2026 15:41:51 GMT</lastBuildDate><atom:link href="https://blog.grekt.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Your AI skills don't belong in your repo]]></title><description><![CDATA[We're riding the AI hype train. We don't know exactly where it's going, but everyone's building while it moves.
And in the middle of that, we might be making a mistake: we're putting our entire AI laye]]></description><link>https://blog.grekt.com/your-ai-skills-don-t-belong-in-your-repo</link><guid isPermaLink="true">https://blog.grekt.com/your-ai-skills-don-t-belong-in-your-repo</guid><dc:creator><![CDATA[grekt]]></dc:creator><pubDate>Sat, 21 Mar 2026 20:09:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/69a84e90e55311e40f05fc27/2ea9efc9-bb76-4b90-b5e7-a348659e4af9.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We're riding the AI hype train.<br />We don't know exactly where it's going, but everyone's building while it moves.</p>
<p>And in the middle of that, we might be making a mistake: we're putting our entire AI layer inside our repositories.</p>
<h2>The thing nobody notices (until it's too late)</h2>
<p>You shipped your repo. Maybe it's open source. Maybe it's a private project with 50 contributors. Either way, you ran <code>git push</code> and moved on.</p>
<p>But somewhere in that push, sitting quietly between your source code and your README, there's a folder called <code>.claude/</code>. Maybe <code>.cursor/rules/</code>. Maybe both.</p>
<p>Inside: <code>skills</code>, <code>agents</code>, <code>prompts</code>, <code>hooks</code>. <strong>The files that tell your AI how to think about your code</strong>. How to review PRs. What patterns to enforce. What to ignore.</p>
<p>You committed them because that's the default. Every AI tool tells you to. "<em>Just add this file to your project</em>." So you did.</p>
<p><strong>Nobody told you that you just published your AI's playbook.</strong></p>
<p>Not a config file. Not a theme preference. The actual instructions that shape how your AI reads, writes, and judges your code, sitting in a public repo, downloadable with a single <code>git clone</code>.</p>
<ul>
<li><p>For a private repo with a tight team, maybe that's fine.</p>
</li>
<li><p>For an open source project, you just gave away your playbook for free.</p>
</li>
<li><p>For a company with 200 developers, contractors, and CI pipelines pulling your repo, you've given everyone the same level of access to your AI logic as they have to your utility functions.</p>
</li>
</ul>
<h2>The not so open source trap</h2>
<p>Open source has a simple deal: you share your code, others use it, contribute, improve it. Everybody wins. <strong>But that deal was designed for code</strong>. Not for the layer that sits on top of it.</p>
<p>When you open source a project with AI artifacts committed to the repo, you're not just sharing how your software works. You're sharing how you work with AI. <strong>That's not code. That's competitive advantage.</strong></p>
<h3>"But it's open source, everything should be open."</h3>
<p>Should it? We’ve seen a version of this before with E2E tests. At some point, they stop being “tests” and become a full blueprint of how your product behaves. Flows, edge cases, expected outputs.</p>
<p>And if you expose all of that, reproducing the system becomes trivial. AI artifacts are the same thing, just more explicit.</p>
<h3><strong>"We're not open source, so this doesn't apply to us."</strong></h3>
<p>It does. Just differently.</p>
<p>If someone gets access to your repo, authorized or not, they don't just get your code. They get the full AI kit on top. Every prompt, every agent, every heuristic your team built. Not because they were looking for it. Because it was sitting right there, bundled with everything else.</p>
<h2>The pattern you already know</h2>
<p>Secrets? <code>.env</code> + vault. Never committed. Infrastructure? Terraform, separate state, different access controls.</p>
<p>Your AI artifacts? Committed alongside your CSS.</p>
<p>You already solved this problem for every other layer of your stack. <strong>You just haven't solved it for this one yet.</strong></p>
<h2><strong>What you can do right now</strong></h2>
<ul>
<li><p><strong>Decide what's shareable and what's not</strong>. Some artifacts are generic ("use TypeScript strict mode"). Those are fine in the repo. Others encode how your team works, reviews, and makes decisions. Those probably shouldn't be public.</p>
</li>
<li><p><strong><code>.gitignore</code> the sensitive parts</strong>. It's not perfect: someone has to manually manage what goes in and what stays out, and it breaks the "just clone and go" workflow. But it's better than shipping everything by default.</p>
</li>
<li><p><strong>Keep a separate repo</strong>. Move your AI artifacts to a dedicated repository with tighter access controls. Your team clones it separately. It's manual, it doesn't sync automatically, and you'll end up with a script or two to glue it together, but at least your AI layer isn't bundled with every git clone of your main project.</p>
</li>
</ul>
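<p>As a rough sketch of the <code>.gitignore</code> approach (the paths here are illustrative; adjust them to the tools and folders your team actually uses):</p>
<pre><code class="language-gitignore"># Ignore team-specific AI artifacts by default (illustrative paths)
.claude/*
.cursor/rules/

# Re-include the generic, shareable parts
!.claude/settings.json
</code></pre>
<p>The <code>.claude/*</code> pattern (rather than <code>.claude/</code>) matters: git can't re-include a file whose parent directory is excluded, so ignoring the directory's contents instead of the directory itself is what makes the <code>!</code> exception work.</p>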
<p><strong>But... this won't scale well</strong>. Managing <code>.gitignore</code> rules across tools, keeping artifacts in sync manually, onboarding new devs with "copy this folder from Slack": it all falls apart fast. But it's a start.</p>
<h2>What if this was automated?</h2>
<p>This is what we asked ourselves: what if your AI artifacts worked like dependencies?</p>
<p>Your repo declares what you need. The content lives in a registry. You commit the manifest, not the artifacts, the same way you commit <code>package.json</code>, not <code>node_modules</code>.</p>
<p><strong>That's</strong> <a href="https://grekt.com"><strong>grekt</strong></a><strong>. An artifact manager for AI tooling (skills, agents, rules, MCPs...)</strong></p>
<p>Your repo says "I use this code review agent at version 2.1.0." But the actual prompts, the logic, the heuristics, those live in the registry, not in your git history.</p>
<p>Anyone who clones your repo knows what you use. They don't get how it thinks.</p>
<pre><code class="language-yaml"># grekt.yaml — lives in your repo
# .grekt/artifacts/ doesn't (it's gitignored)

artifacts:
  "@yourteam/code-review-agent": 2.1.0
  "@yourteam/security-skill": 1.4.0
</code></pre>
<h2><strong>What we're building next</strong></h2>
<p>Today, grekt separates your artifacts from your repo. Your AI logic doesn't travel with git clone. That's the foundation.</p>
<p>But separation is step one. We're working on what comes after:</p>
<ul>
<li><p>Granular access controls: not everyone who accesses the code should access every artifact</p>
</li>
<li><p>Audit logs: who installed what, when, and which version</p>
</li>
<li><p>Team-level permissions: different roles, different visibility</p>
</li>
</ul>
<p>We're not there yet. But you can't control access to something that ships inside every git clone. First you separate it. Then you control it.  </p>
<p><strong>Know your AI stack</strong>. But don't let everyone else know it too.</p>
<p><a href="https://grekt.com">grekt</a> · <a href="https://docs.grekt.com">Docs</a> · <a href="https://github.com/orgs/grekt-labs">GitHub</a></p>
]]></content:encoded></item><item><title><![CDATA[You don't know your AI stack.]]></title><description><![CDATA[AI tooling has no supply chain. No audit trail. No npm audit equivalent. No lockfile you can trust.
You install MCPs, agents, skills, hooks. They get access to your files, your context, your code.  
A]]></description><link>https://blog.grekt.com/you-don-t-know-your-ai-stack</link><guid isPermaLink="true">https://blog.grekt.com/you-don-t-know-your-ai-stack</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[ai agents]]></category><category><![CDATA[AI-automation]]></category><category><![CDATA[aitools]]></category><category><![CDATA[skills]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[grekt]]></dc:creator><pubDate>Wed, 04 Mar 2026 16:04:26 GMT</pubDate><content:encoded><![CDATA[<p>AI tooling has no supply chain. No audit trail. No npm audit equivalent. No lockfile you can trust.</p>
<p>You install MCPs, agents, skills, hooks. They get access to your files, your context, your code.  </p>
<p>And then nobody checks again. Not you. Not your team. Not the tool that gave you access to them in the first place.</p>
<p>That's why grekt exists.</p>
<h2><strong>Why this matters now</strong></h2>
<p>The AI tooling ecosystem is growing faster than anyone's ability to verify it. New MCPs weekly. New agent frameworks monthly. Everyone's building, everyone's shipping, but the missing piece is auditing.</p>
<p>This isn't a criticism. It's the natural state of a young ecosystem. But "young" doesn't mean "safe to ignore." The tools are already in your projects. The question is whether you know what they're doing there.</p>
<h2>What grekt is</h2>
<p>A CLI (and soon a dashboard ;D). Runs on your machine or your infra. Looks at every AI artifact in your stack and tells you what it finds.</p>
<p>Version drift. Stale configurations. Security gaps. Tools you forgot were there. Then it points you in the right direction to fix it.</p>
<p>No cloud dependency. No account. Your machine, your data, your terminal.</p>
<h2>What this blog is for</h2>
<p>We're not here to predict the future of AI. This blog exists to share things that are useful.</p>
<p>Audit findings. Patterns across real stacks. And research on the state of AI tooling security and trust.</p>
<p>First real content drops soon.</p>
<p>Know your stack.</p>
]]></content:encoded></item></channel></rss>