<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Mark Adams]]></title><description><![CDATA[Honest writing on customer service operations — strategy, design, and the messy bits in between. By someone who's run them, not just advised on them.]]></description><link>https://www.markjadams.co.uk</link><image><url>https://www.markjadams.co.uk/img/substack.png</url><title>Mark Adams</title><link>https://www.markjadams.co.uk</link></image><generator>Substack</generator><lastBuildDate>Sat, 16 May 2026 21:20:23 GMT</lastBuildDate><atom:link href="https://www.markjadams.co.uk/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Mark Adams]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[markjadams@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[markjadams@substack.com]]></itunes:email><itunes:name><![CDATA[Mark Adams]]></itunes:name></itunes:owner><itunes:author><![CDATA[Mark Adams]]></itunes:author><googleplay:owner><![CDATA[markjadams@substack.com]]></googleplay:owner><googleplay:email><![CDATA[markjadams@substack.com]]></googleplay:email><googleplay:author><![CDATA[Mark Adams]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Zero customer service]]></title><description><![CDATA[It&#8217;s a bold statement, and a great place to start a debate among service leaders, service teams, and customers alike.]]></description><link>https://www.markjadams.co.uk/p/zero-customer-service</link><guid isPermaLink="false">https://www.markjadams.co.uk/p/zero-customer-service</guid><dc:creator><![CDATA[Mark Adams]]></dc:creator><pubDate>Sun, 15 Mar 2026 08:00:00 
GMT</pubDate><content:encoded><![CDATA[<p>It&#8217;s a bold statement, and a great place to start a debate among service leaders, service teams, and customers alike. It brings out views from all perspectives. Let&#8217;s take a look at the subject &#8212; which actually isn&#8217;t a new concept.</p><p>For decades, companies treated customer service as a cost centre, built around call queues, scripted responses, and rigid escalation paths. The dominant model assumed that problems were inevitable and that the goal was to resolve them efficiently. Success was measured in average handle time, closure rates, deflection performance, and of course the cost to serve.</p><p>Today, that mindset is outdated. Yet many organisations still use metrics that have been around forever &#8212; leaders aren&#8217;t sure where those metrics originated, and the metrics themselves provide diminishing insight. Consumers, meanwhile, are better informed and clearer about their expectations; they no longer passively accept whatever service experience they&#8217;re handed.</p><p>In their 2008 book <em>The Best Service is No Service</em>, Bill Price and David Jaffe offered a game-changing approach, showing how organisations were using the wrong metrics to measure customer service. Customer service, they argued, is only needed when something has already gone wrong &#8212; and eliminating the need for service is the best way to satisfy customers. While 2008 feels like a long time ago, they were onto something. And what they said still stands.</p><p>Today, customers want seamless digital experiences and expect frictionless journeys. They don&#8217;t want better apologies; they want fewer reasons to make contact. In a world where everyone seems busy all the time, and organisations are focused on reducing or deflecting contact, the idea that the best customer service is no customer service continues to gather momentum. A win-win, surely?</p><p>If only it were that simple.
Recent research shows that for all the digital channels that exist, and the belief that &#8220;people don&#8217;t want to speak to people,&#8221; being able to call an organisation still ranks high in importance for consumers. This can be to talk through a complex situation in detail, to get reassurance, or simply to hear it from a human. In these instances, service is a necessity for the customer and adds genuine value.</p><p>So it&#8217;s not really about zero customer service. It&#8217;s about scenario-based service. There are some moments &#8212; driven by customer demand, organisational benefit, or both &#8212; where service is the best option, and other moments where it&#8217;s a sign that something earlier in the journey hasn&#8217;t worked.</p><p>The idea of zero customer service doesn&#8217;t mean ignoring customers. It means designing products, policies, and processes so intuitive and reliable that support is rarely needed. Clear onboarding. Proactive communication. Transparent pricing. Self-serve tools that eliminate confusion before it becomes a ticket. By shifting focus from reactive support to preventive design, companies reduce costs, increase loyalty, and build trust. When nothing goes wrong, service becomes invisible.</p><p>But there&#8217;s a paradox worth acknowledging here. Service that&#8217;s invisible can also be service that&#8217;s unvalued. If a customer has never had a reason to interact with you &#8212; never had a problem solved, never had a question answered, never had a person on the other end of a call show up well &#8212; they may also have no felt sense of why they should stay. At the next renewal, the next switching opportunity, the next conversation with a friend asking for recommendations, your service simply isn&#8217;t part of their thinking. The very success of designing the operation away has taken away the moments where loyalty would have been built.</p><p>This is why scenario-based service matters. 
The goal isn&#8217;t to hide service &#8212; it&#8217;s to design products and journeys that don&#8217;t <em>need</em> service for the routine, while ensuring that the moments where service <em>does</em> show up are moments that visibly add value. A car insurance customer who never has a claim experiences nothing of your service operation across years of cover; the moment a claim does happen, the experience either reinforces every reason to renew or undermines them. Energy customers go years without contacting their supplier; one billing error or one query about a tariff change forms the entire emotional impression they&#8217;ll carry into the next switching decision.</p><p>When nothing goes wrong, service should be invisible. When something does go wrong &#8212; or when a customer reaches for service deliberately &#8212; that moment should be unmistakably good.</p><p>The art of great service design, as with many things, is getting the right balance. Removing unnecessary effort for both customer and advisor. Not feeling the need to offer every channel all the time. Demonstrating real value in the moments when you do directly interact with your customers.</p><p>If you&#8217;d like to talk through what this might look like in your business, I&#8217;m easy to find. <a href="mailto:mark@soothconsulting.xyz">mark@soothconsulting.xyz</a>.</p>]]></content:encoded></item><item><title><![CDATA[The dynamic operating model]]></title><description><![CDATA[Every business has an operating model, whether it&#8217;s consciously or unconsciously designed and deployed.]]></description><link>https://www.markjadams.co.uk/p/the-dynamic-operating-model</link><guid isPermaLink="false">https://www.markjadams.co.uk/p/the-dynamic-operating-model</guid><dc:creator><![CDATA[Mark Adams]]></dc:creator><pubDate>Sat, 15 Nov 2025 08:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Every business has an operating model, whether it&#8217;s consciously or unconsciously designed and deployed.
What&#8217;s your operating model, and does it deliver for you and your customers?</p><p>Almost all the work I undertake involves engaging with a client&#8217;s operating model (OM). This can include designing a new OM from the ground up for a pre-launch company or a start-up in its early growth stages, evaluating and updating an existing OM, or entirely reimagining a company&#8217;s operations and implementing a new OM.</p><p>Over the years I&#8217;ve seen the whole range: from very detailed Target Operating Model documentation running to tens of pages (TL;DR) to the other end of the scale &#8212; businesses that developed organically, where people just got stuff done. Unsurprisingly, the second approach is mostly seen in start-ups, where everyone needs to muck in on whatever the priority is. It&#8217;s cost-effective, builds teamwork (mostly), and focuses people on the things that are most important.</p><p>The thing is, as a business grows, that approach can turn into chaos, confusion, and cost &#8212; through mistakes, lack of clarity, and lack of coordination.</p><p>When I talk to clients about operating models, there can be concerns that an OM might slow down the quick changes the business needs, or that the result could be too rigid, or that it means lots of new roles and therefore cost.</p><p>In my career I have seen all of those pitfalls materialise. A recent McKinsey &amp; Company study on operating models found that &#8220;only 23% were highly successful&#8221; and &#8220;63% were somewhat successful.&#8221; That&#8217;s not great, particularly when you consider the effort and potential disruption the work may have involved.</p><p>I believe an OM review and the resulting changes can avoid all of those pitfalls if done correctly. It doesn&#8217;t need to be over-engineered, cost a lot of money, or take so long that opportunities are missed.
Done right, it can be freeing, build engagement, and be deployed on a &#8216;just in time&#8217; basis.</p><p>My approach is to get to know a business quickly through both qualitative and quantitative work. I review data, look at the goals of the business, and talk to the people inside it &#8212; the approach is the same whether the business has 10, 100, or 1,000 people. Understanding the now and the future objectives means I can identify and describe the gap. Once we&#8217;ve described that, we design the approach to closing it.</p><p>I build modular operating models that can be deployed over time as the business needs them &#8212; supporting growth, working more effectively, reducing cost to serve, all of these, or something else.</p><p>If any of this resonates and you&#8217;d like to talk it through, I&#8217;m easy to find. <a href="mailto:mark@soothconsulting.xyz">mark@soothconsulting.xyz</a>.</p>]]></content:encoded></item><item><title><![CDATA[Five mistakes I see businesses make when buying contact centre technology]]></title><description><![CDATA[Most contact centre tech projects under-deliver.]]></description><link>https://www.markjadams.co.uk/p/five-mistakes-i-see-businesses-make</link><guid isPermaLink="false">https://www.markjadams.co.uk/p/five-mistakes-i-see-businesses-make</guid><dc:creator><![CDATA[Mark Adams]]></dc:creator><pubDate>Sat, 15 Jul 2023 07:00:00 GMT</pubDate><content:encoded><![CDATA[<p>Most contact centre tech projects under-deliver. The reason is rarely the tech itself.</p><p>I&#8217;ve spent the last twenty-plus years on every side of these decisions. Sometimes as the buyer choosing a system. Sometimes as the leader inheriting one I didn&#8217;t pick. Sometimes as the consultant brought in eighteen months later to work out why the brilliant new platform isn&#8217;t delivering what was promised. The patterns are remarkably consistent. The tech is rarely the villain.
The decisions made before, during and after the purchase are almost always where the value gets quietly lost.</p><p>These are the five mistakes I see most often, in roughly the order they happen.</p><h3>1. Buying without living in the current experience first</h3><p>Every contact centre tech engagement I take on starts the same way: I sit with advisors while they do their actual job. I watch which systems they reach for and which they avoid. I use the tools as a customer would &#8212; picking up the phone, opening the chat, navigating the IVR, completing the journey. I look at where the friction lives and where the workarounds have built up.</p><p>Five minutes of that is worth an hour of someone in head office telling you what they think is happening.</p><p>The reason this matters for tech decisions is simple: if you don&#8217;t know what&#8217;s actually broken in your current operation, you can&#8217;t know what to buy. You&#8217;ll buy what the vendor demonstrates well, or what the loudest internal stakeholder is asking for, or what looks impressive on a feature comparison spreadsheet. None of those are the same as buying what your team and your customers actually need. The result is a system that solves a problem you didn&#8217;t have while leaving the problem you did have firmly in place.</p><h3>2. Not being clear what problem you&#8217;re trying to solve</h3><p>Most tech buying decisions get framed too broadly. &#8220;We need a new contact centre platform.&#8221; &#8220;We need to consolidate our tech stack.&#8221; &#8220;We need to deploy AI.&#8221;</p><p>Those aren&#8217;t problems. 
They&#8217;re vague aspirations dressed up as briefs.</p><p>A real problem statement looks more like: &#8220;Our advisors are spending 25% of their handle time on after-call work, and our most experienced people are leaving partly because of that.&#8221; Or: &#8220;Our self-service deflects volume but increases complaints when it fails, because customers feel they&#8217;ve been pushed away rather than helped.&#8221; Or: &#8220;We&#8217;re growing 40% year on year and our current platform charges per-seat in a way that means we&#8217;re paying for capacity we don&#8217;t need.&#8221;</p><p>Each of those statements points to a different solution. The first might be a summarisation tool. The second might be redesigning the self-service journey, not buying more of it. The third might be a renegotiation rather than a replacement. Without that level of specificity, you&#8217;re not buying technology &#8212; you&#8217;re buying optimism.</p><p>The test I apply: if you can&#8217;t explain the problem in two sentences to someone outside your business, you&#8217;re not ready to talk to vendors yet.</p><h3>3. Buying for the operation you might be, not the one you are</h3><p>This is the most expensive mistake of the five &#8212; and it&#8217;s worth being precise about.</p><p>Buyers will always project some sense of their future operation into the buying decision. That&#8217;s reasonable; nobody wants to replace a platform every eighteen months. The problem isn&#8217;t planning ahead. The problem is buying <em>specific functionality</em> for an imagined future you can&#8217;t actually predict, and locking yourself into it today.</p><p>It manifests in two ways. The first is the platform decision: all-in-one suite versus best-of-breed components. The all-in-one is seductive &#8212; one supplier, one contract, one data model, the promise of seamless integration across every channel and function you might ever need. 
The best-of-breed approach is more rigorous &#8212; you accept some integration complexity in exchange for genuinely strong tools in each area. Neither is universally right. What makes the decision wrong is choosing based on the operation you imagine running in three years rather than the one you actually run today.</p><p>The second is over-specifying within whatever you&#8217;ve chosen. Every modern platform comes with functionality you don&#8217;t currently need: workforce management modules, speech analytics, advanced reporting, automation suites, AI capabilities. The vendor&#8217;s commercial pitch is built around the idea that you&#8217;ll grow into it. Sometimes that&#8217;s true. More often, the unused functionality becomes a tax &#8212; you&#8217;re paying licence fees for capabilities sitting dormant, while your core use case underperforms because nobody&#8217;s optimising for it.</p><p>The discipline worth applying isn&#8217;t <em>don&#8217;t plan for the future</em> &#8212; it&#8217;s <em>buy for flexibility, not for predictions</em>. A platform that&#8217;s modular, that scales sensibly, that lets you add and remove components as your needs actually emerge, gives you the genuine optionality buyers think they&#8217;re getting from over-buying. A platform sold to you on the basis of what you&#8217;ll need in 2029 is selling you a forecast nobody can make accurately, and charging you for it today.</p><p>Buy what you can credibly use, well, in the near term. Make sure what you buy can grow with you. Treat any specific feature you&#8217;re buying for &#8220;the future&#8221; with healthy suspicion.</p><h3>4. Underestimating the people change required</h3><p>A new platform isn&#8217;t just a tech project. 
It&#8217;s a change project that happens to involve technology.</p><p>Every contact centre tech decision I&#8217;ve seen succeed has had three things alongside the platform itself: a properly resourced training plan, redesigned processes that actually use the new capabilities (rather than replicating the old ones on the new system), and a leadership team prepared to back the team through the inevitable performance dip in the first few months.</p><p>Every contact centre tech decision I&#8217;ve seen disappoint has skimped on at least one of those three. Usually two. Sometimes all three.</p><p>The reason this gets underestimated is structural. The business case for the technology is built around the cost of the technology &#8212; licences, implementation, integration. The people change is a different budget line, often owned by a different team, sometimes not budgeted at all. The result is that organisations spend hundreds of thousands on the platform and nothing on the change required to use it well. The platform then under-delivers, and the post-mortem typically blames the vendor rather than the buyer.</p><p>The test I&#8217;d apply: if your business case doesn&#8217;t include the cost of training, process redesign, and leadership time over the first six to twelve months, your business case is wrong.</p><h3>5. Skipping the pilot &#8212; or running a pilot designed to succeed</h3><p>The most experienced buyers I&#8217;ve worked with insist on a pilot before any major contact centre tech purchase. The least experienced ones either skip it entirely, or run a pilot so carefully designed that it can&#8217;t fail.</p><p>A pilot designed to succeed looks like this: a small, motivated team using the new platform on hand-picked easy cases, with the vendor providing white-glove support, in a tightly controlled environment. The pilot delivers exactly what the demo promised. The decision is made. 
Twelve months later the production rollout struggles, and nobody can quite explain why.</p><p>A real pilot looks different. It runs against actual operational conditions &#8212; real customer mix, real volumes, real edge cases, real handoffs between systems. The vendor is involved but not over-involved. The team is representative rather than hand-picked. There&#8217;s a clear definition of what would constitute the pilot succeeding <em>and</em> what would constitute the pilot failing &#8212; and a credible willingness to walk away if the latter happens.</p><p>The point of the pilot isn&#8217;t to confirm the decision you&#8217;ve already made. It&#8217;s to test it.</p><h3>A final reframe</h3><p>If I had to compress all five into one principle, it would be this: <strong>the technology decision is the easy bit. The hard bits are everything around it.</strong></p><p>Knowing what you&#8217;re buying for. Understanding the operation you&#8217;re buying into. Resisting the seductive promise of capability you don&#8217;t yet need. Resourcing the people change properly. Testing under real conditions before you commit.</p><p>Most contact centre tech projects don&#8217;t fail because the technology was wrong. They fail because the buyer didn&#8217;t do the work that surrounds the technology &#8212; and no system, however good, compensates for that.</p><p>If you&#8217;re navigating one of these decisions and would like to talk through what you&#8217;re seeing, I&#8217;m easy to find. <a href="mailto:mark@soothconsulting.xyz">mark@soothconsulting.xyz</a>.</p>]]></content:encoded></item></channel></rss>