Free Software Isn’t Free When Your Engineer Quits
Do you feel like you're picking up the pieces after your engineer quit? Read this to understand how open-source ERP actually works. Under 4 minutes, I promise.
Open-source ERP sounds like a smart call. No license fees. Total flexibility. Full control.
But what I’ve seen, especially with early-stage teams, is that the real cost shows up later.
You pick a self-hosted ERP because you want independence. But then the one engineer who set it up leaves. Documentation is half-written. Updates break your integrations. Suddenly, the “free” system is costing you developer time, velocity, and investor patience.
I’m not anti-open source. But if you choose it for core infrastructure, you have to treat it like infrastructure. Not a weekend project.
That means ownership. Internal clarity. Exit-proofing. Planning for the future version of your company, not just the current one.
The founders who get burned aren’t foolish. They’re just betting on tech without budgeting for transition.
A platform you can’t understand without the person who installed it is not a platform. It’s a liability.
Yes, You Can Use Open Source. No, You Can’t Ignore Patents.
Open-source tools move fast. That’s why early-stage founders love them.
But once you’re in deep tech, or doing anything defensible, patents come into play, and that’s where a lot of founders get caught off guard.
Just because a tool is open doesn't mean it's free from IP risk. Some licenses are fine. Some come with strings attached. Some tools use patented techniques, and if you're not watching, you might be the one who has to untangle it during diligence.
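As a starting point, you can at least inventory what licenses your stack declares. Here's a minimal sketch using only Python's standard library; note that the declared "License" metadata field is often missing or imprecise, so treat this as a first pass, not legal advice.

```python
from importlib import metadata


def license_inventory():
    """Return a mapping of installed package name -> declared license string.

    Packages that declare no license metadata are marked "UNKNOWN" so
    they stand out for manual review.
    """
    inventory = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name") or "unknown"
        lic = dist.metadata.get("License") or "UNKNOWN"
        inventory[name] = lic
    return inventory


if __name__ == "__main__":
    for pkg, lic in sorted(license_inventory().items()):
        print(f"{pkg}: {lic}")
```

Anything that prints "UNKNOWN", or a copyleft license you didn't expect, is worth a closer look before diligence does it for you.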
I’ve seen founders lose time, deals, and confidence over patent conflicts they didn’t even know they were exposed to.
The point isn’t to panic. The point is to know where the line is between free and protected, and to treat open-source as a strategic asset, not just a shortcut.
If your roadmap includes investors or acquirers, don’t wait to start thinking like someone they can trust.
If You Can’t Explain Your AI, Regulators Will Assume the Worst
AI used to be a back-end function. A quiet assist. Something that shaved time off a workflow. Now it's in healthcare, hiring, finance, and education, and it's centre stage. It's scoring resumes, allocating credit, drafting medical notes.
And in every sector, the same problem keeps showing up: the model works, but nobody can explain why.
Founders say, "It gets the job done." Investors might accept that. Regulators won't.
This isn’t about bad intentions. Most teams are moving fast, working with what they have. But the issue isn’t speed. It’s assumptions.
We assume that if the model performs, we’re fine. We assume that explainability is a UX bonus, not a compliance feature. We assume regulators will understand what we mean by “the system is continuously learning.”
But when decisions affect people, where they live, what job they get, whether they get a loan, those assumptions fall apart.
Trust doesn’t come from results. It comes from visibility, traceability, and the ability to answer questions before they’re asked.
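Traceability doesn't require exotic tooling; it starts with recording every consequential decision in a form you can replay later. Here's a rough illustration using only the standard library (the field names and reason codes are my own, not from any particular framework or regulation):

```python
import json
import time
import uuid


def record_decision(model_version, inputs, output, reason_codes,
                    log_path="decisions.jsonl"):
    """Append one model decision to an audit log so it can be traced later.

    Each entry captures what the model saw, what it decided, which model
    version decided it, and the stated reasons, in append-only JSON Lines.
    Returns the entry's unique id for cross-referencing.
    """
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # e.g. the top features that drove the score
        "reason_codes": reason_codes,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["id"]
```

When a regulator, or a customer, asks "why was this loan declined?", you answer by looking up the entry, not by guessing what the model probably did.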
If you’re serious about AI, get serious about how you explain it.
Not to impress anyone. Not to tick a box.
To stay in business.
Stay tuned for the next edition to learn more about open-source licensing, cybersecurity, AI, and Web3.


