Five mistakes I see businesses make when buying contact centre technology
Most contact centre tech projects under-deliver. The reason is rarely the tech itself.
I’ve spent the last twenty-plus years on every side of these decisions. Sometimes as the buyer choosing a system. Sometimes as the leader inheriting one I didn’t pick. Sometimes as the consultant brought in eighteen months later to work out why the brilliant new platform isn’t delivering what was promised. The patterns are remarkably consistent. The tech is rarely the villain. The decisions made before, during and after the purchase are almost always where the value gets quietly lost.
These are the five mistakes I see most often, in roughly the order they happen.
1. Buying without living in the current experience first
Every contact centre tech engagement I take on starts the same way: I sit with advisors while they do their actual job. I watch which systems they reach for and which they avoid. I use the tools as a customer would — picking up the phone, opening the chat, navigating the IVR, completing the journey. I look at where the friction lives and where the workarounds have built up.
Five minutes of that is worth an hour of someone in head office telling you what they think is happening.
The reason this matters for tech decisions is simple: if you don’t know what’s actually broken in your current operation, you can’t know what to buy. You’ll buy what the vendor demonstrates well, or what the loudest internal stakeholder is asking for, or what looks impressive on a feature comparison spreadsheet. None of those are the same as buying what your team and your customers actually need. The result is a system that solves a problem you didn’t have while leaving the problem you did have firmly in place.
2. Not being clear what problem you’re trying to solve
Most tech buying decisions get framed too broadly. “We need a new contact centre platform.” “We need to consolidate our tech stack.” “We need to deploy AI.”
Those aren’t problems. They’re vague aspirations dressed up as briefs.
A real problem statement looks more like: “Our advisors are spending 25% of their handle time on after-call work, and our most experienced people are leaving partly because of that.” Or: “Our self-service deflects volume but increases complaints when it fails, because customers feel they’ve been pushed away rather than helped.” Or: “We’re growing 40% year on year, and our current platform charges per seat in a way that means we’re paying today for capacity we don’t yet need.”
Each of those statements points to a different solution. The first might be a summarisation tool. The second might be redesigning the self-service journey, not buying more of it. The third might be a renegotiation rather than a replacement. Without that level of specificity, you’re not buying technology — you’re buying optimism.
The test I apply: if you can’t explain the problem in two sentences to someone outside your business, you’re not ready to talk to vendors yet.
3. Buying for the operation you might be, not the one you are
This is the most expensive mistake of the five — and it’s worth being precise about.
Buyers will always project some sense of their future operation into the buying decision. That’s reasonable; nobody wants to replace a platform every eighteen months. The problem isn’t planning ahead. The problem is buying specific functionality for an imagined future you can’t actually predict, and locking yourself into it today.
It manifests in two ways. The first is the platform decision: all-in-one suite versus best-of-breed components. The all-in-one is seductive — one supplier, one contract, one data model, the promise of seamless integration across every channel and function you might ever need. The best-of-breed approach is more rigorous — you accept some integration complexity in exchange for genuinely strong tools in each area. Neither is universally right. What makes the decision wrong is choosing based on the operation you imagine running in three years rather than the one you actually run today.
The second is over-specifying within whatever you’ve chosen. Every modern platform comes with functionality you don’t currently need: workforce management modules, speech analytics, advanced reporting, automation suites, AI capabilities. The vendor’s commercial pitch is built around the idea that you’ll grow into it. Sometimes that’s true. More often, the unused functionality becomes a tax — you’re paying licence fees for capabilities sitting dormant, while your core use case underperforms because nobody’s optimising for it.
The discipline worth applying isn’t “don’t plan for the future” — it’s “buy for flexibility, not for predictions”. A platform that’s modular, that scales sensibly, that lets you add and remove components as your needs actually emerge, gives you the genuine optionality buyers think they’re getting from over-buying. A platform sold to you on the basis of what you’ll need in 2029 is selling you a forecast nobody can make accurately, and charging you for it today.
Buy what you can credibly use well in the near term. Make sure what you buy can grow with you. Treat any specific feature you’re buying for “the future” with healthy suspicion.
4. Underestimating the people change required
A new platform isn’t just a tech project. It’s a change project that happens to involve technology.
Every contact centre tech decision I’ve seen succeed has had three things alongside the platform itself: a properly resourced training plan, redesigned processes that actually use the new capabilities (rather than replicating the old ones on the new system), and a leadership team prepared to back the team through the inevitable performance dip in the first few months.
Every contact centre tech decision I’ve seen disappoint has skimped on at least one of those three. Usually two. Sometimes all three.
The reason this gets underestimated is structural. The business case for the technology is built around the cost of the technology — licences, implementation, integration. The people change is a different budget line, often owned by a different team, sometimes not budgeted at all. The result is that organisations spend hundreds of thousands on the platform and nothing on the change required to use it well. The platform then under-delivers, and the post-mortem typically blames the vendor rather than the buyer.
The test I’d apply: if your business case doesn’t include the cost of training, process redesign, and leadership time over the first six to twelve months, your business case is wrong.
5. Skipping the pilot — or running a pilot designed to succeed
The most experienced buyers I’ve worked with insist on a pilot before any major contact centre tech purchase. The least experienced ones either skip it entirely or run a pilot so carefully designed that it can’t fail.
A pilot designed to succeed looks like this: a small, motivated team using the new platform on hand-picked easy cases, with the vendor providing white-glove support, in a tightly controlled environment. The pilot delivers exactly what the demo promised. The decision is made. Twelve months later the production rollout struggles, and nobody can quite explain why.
A real pilot looks different. It runs against actual operational conditions — real customer mix, real volumes, real edge cases, real handoffs between systems. The vendor is involved but not over-involved. The team is representative rather than hand-picked. There’s a clear definition of what would constitute the pilot succeeding and what would constitute the pilot failing — and a credible willingness to walk away if the latter happens.
The point of the pilot isn’t to confirm the decision you’ve already made. It’s to test it.
A final reframe
If I had to compress all five into one principle, it would be this: the technology decision is the easy bit. The hard bits are everything around it.
Knowing what you’re buying for. Understanding the operation you’re buying into. Resisting the seductive promise of capability you don’t yet need. Resourcing the people change properly. Testing under real conditions before you commit.
Most contact centre tech projects don’t fail because the technology was wrong. They fail because the buyer didn’t do the work that surrounds the technology — and no system, however good, compensates for that.
If you’re navigating one of these decisions and would like to talk through what you’re seeing, I’m easy to find. mark@soothconsulting.xyz.
