Laboratories work differently from one another, requirements shift constantly, and a rigid off-the-shelf system can rarely capture every operational or clinical nuance.
New technologies, particularly AI-assisted development tools, are reshaping what is economically possible in software development. Code is getting cheaper to produce, development is becoming more iterative, and smaller or more specialised use cases are increasingly viable to build. For laboratories, this is good news on the whole. Meaningful individualisation is becoming more achievable, better value for money is within reach, and niche applications that were previously too complex or too low-volume to justify are now entering the conversation.
This is precisely why it matters more than ever not to mistake extensibility for a free-for-all. The real question is not simply whether something can be built quickly, but rather whether an extension can be embedded meaningfully and durably into the messy reality of a working laboratory.
What Has Not Changed
Despite all the new tools and new momentum, some things remain constant. Software is, and will always be, an expression of a business process. Extending a laboratory system does not just mean adding features: it means reaching into specialist workflows, lines of responsibility, and the routines people rely on every day.
There is an old saying from the software world that still rings true: If you have no clue, add a switch or two. In environments as complex as laboratories, it is tempting to patch over unclear processes with extra options, special-case rules, or configuration toggles. This feels flexible in the moment, but tends to accumulate into unnecessary complexity over time. The most important question, then, has not changed: what should the process actually look like? Only once that is answered properly does it make sense to ask what kind of extension — if any — is the right response.
Clean APIs, well-defined extension points, and a clear boundary between the stable core system and local customisations remain just as essential as ever. Extensibility only holds up over time when ownership is clearly established — who is responsible for which processes, and who is accountable for any given extension or integration. This is not a secondary concern: it sits at the heart of maintainability and smooth operations.
And as systems become more individualised, robust and well-documented core processes actually become more important, not less. The more local variation you want to support, the more you depend on stable foundations and clear traceability. Documentation is not a bureaucratic formality here — it is what makes extensions understandable, maintainable, auditable, and safe to build on over time.
The role of business analysts — or anyone filling that translator function between specialist knowledge and technical implementation — remains vital. Not every special requirement is strategically worth pursuing. Some genuinely create value and reflect real differentiation. Others are historical accidents: old workarounds that calcified into habits, or quirks driven by organisational rather than clinical logic. The ability to tell these apart, i.e., to know when a bespoke solution is genuinely justified and when standardisation would serve everyone better, is as important as it has ever been.
The Five Foundations of Sustainable Lab Software Extensions
- Clean APIs & extension points — Clearly defined interfaces between core system and local customisations.
- Stable, scalable core processes — The more local variation you support, the more you depend on solid foundations.
- Clear governance & ownership — Who is responsible for which processes, and who is accountable for each extension.
- Thorough documentation — The prerequisite for anything being maintainable and auditable long-term.
- Business analysis & process judgement — The ability to distinguish genuine differentiation from unnecessary complexity and knowing when a bespoke solution is justified and when standardisation serves everyone better.
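To make the first foundation concrete: a clean extension point can be as simple as a small contract that the core system owns, against which local customisations register themselves. The sketch below is a minimal illustration in Python, with invented names (`ResultHook`, `register_hook`, `flag_haemolysis`) standing in for whatever a real system would provide; it is not any particular product's API.

```python
from typing import Callable

# Core-owned extension point: the core defines the contract and the
# execution order; local customisations plug into it without the core
# ever importing site-specific code. All names here are illustrative.

ResultHook = Callable[[dict], dict]  # receives and returns a result record

_hooks: list[ResultHook] = []

def register_hook(hook: ResultHook) -> None:
    """Called by a local customisation at startup."""
    _hooks.append(hook)

def run_hooks(result: dict) -> dict:
    """Core pipeline step: apply all registered local hooks in order."""
    for hook in _hooks:
        result = hook(result)
    return result

# --- local customisation: lives in its own module, with its own owner ---
def flag_haemolysis(result: dict) -> dict:
    """Hypothetical site-specific rule: flag results from haemolytic samples."""
    if result.get("haemolysis_index", 0) > 2:
        result = {**result, "flag": "haemolysed"}
    return result

register_hook(flag_haemolysis)
```

The point is the boundary, not the mechanism: the core defines and documents the contract, while the site-specific rule lives outside the core codebase with a clearly named owner, which is exactly what makes it maintainable and auditable later.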
What Is Different
What has changed is how extensions can be built — both economically and methodologically. Specialised applications for niche use cases are increasingly realistic, as falling development costs make smaller-volume scenarios viable. This is particularly relevant for laboratories, where highly specific processes, unusual referrer arrangements, or local operational quirks have historically been handled through workarounds or manual steps.
Code is cheaper to produce, and closer alignment between software and real-world workflows is possible without prohibitive training costs. When systems reflect how people actually work, adoption tends to be stronger and efficiency gains more tangible. Individualisation is not becoming an end in itself — but it is becoming far more accessible than it once was.
The technical baseline is shifting too. Difficult-to-maintain custom scripts are increasingly recognised as a liability rather than a solution. What should replace them is real code, held to real quality standards: versioned, tested, documented, and properly integrated into the broader architecture. In complex laboratory environments, this matters enormously. A quick workaround without tests or documentation remains a risk — even when it can now be generated faster than ever with modern tools.
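What "real code, held to real quality standards" means in practice can be shown with a deliberately small example: the same logic that might otherwise live in an untested ad-hoc script becomes a documented function with a test versioned next to it. The rule and the threshold below are invented for illustration; in a real laboratory they would come from the method validation.

```python
def needs_dilution(value: float, upper_limit: float) -> bool:
    """Return True if a measured value exceeds the assay's linear range,
    meaning the sample should be diluted and re-run.

    Illustrative rule only; real limits come from method validation.
    """
    if value < 0:
        raise ValueError("measured value cannot be negative")
    return value > upper_limit

# The accompanying test lives in the same repository, runs in CI,
# and documents the intended behaviour as executable specification.
def test_needs_dilution():
    assert needs_dilution(1200.0, upper_limit=1000.0)
    assert not needs_dilution(800.0, upper_limit=1000.0)

test_needs_dilution()
```

Trivial as the function is, the surrounding discipline (version control, a test, a docstring, explicit error handling) is what separates it from the quick workaround that looks identical today and becomes unmaintainable in two years.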
There is also a meaningful shift in how much users and specialist teams can be involved in shaping extensions and processes. This is a real opportunity. It does not mean, however, that complex applications should be cobbled together through unstructured “vibe coding.” In a laboratory setting, that would be dangerously short-sighted. The more valuable use of AI is as an interactive thinking partner — a way to explore process options, variations, and possibilities iteratively alongside the people who know the work best. Operational knowledge can feed into development earlier and more precisely, without sacrificing architectural integrity, governance, or quality.
Used well, AI can genuinely improve productivity in software development and raise quality even in complex environments. For customers, that can translate into better value for money — and mean that niche use cases, long deprioritised because the numbers never quite added up, finally become worth pursuing.
In Summary
Extension and customisation will remain central to laboratory software for as long as laboratory software reflects business processes — which is to say, indefinitely. AI does not change that. What it does change is what is possible: development is cheaper, more iterative, and increasingly viable even for smaller and more specialised use cases.
That makes structure more important than ever, not less. Good extension concepts depend on stable, scalable, and well-documented core processes; clean APIs and extension points; clear governance around ownership; and the judgement to distinguish genuine differentiation from unnecessary complexity. The role of specialist knowledge and business analysis remains central to all of it.
The opportunity that new technology offers is not the ability to build complex laboratory software faster in an unprincipled way. It is the ability to build better processes — more precisely, more economically, and in genuine collaboration with the people who use them. When that balance is struck, the result is laboratory software that is not just flexible, but maintainable, trustworthy, and genuinely relevant to how laboratories actually work.