Tech vs. Humanity Is the Wrong Debate: What the Data Says About the Future of the Agent Role
For the past few years, the conversation around AI in customer support has often sounded like a tug-of-war: technology vs. people, automation vs. empathy, efficiency vs. experience. But the latest CCW Digital market study suggests something more practical and more interesting is happening.
AI is not replacing the need for human support. It’s changing where humans create the most value.
That shift matters because many organizations are already seeing measurable upside from AI, but they are still early in defining how the human role should evolve alongside it. In other words, the opportunity is real, but the operating model is still catching up.
Read the full CCW Digital market study
AI is helping, but it’s not a magic fix
One of the most grounded takeaways in the report is that AI has not been a silver bullet for customer contact. It has not erased slow experiences or eliminated agent frustration. But leaders do see clear progress.
According to the study, nearly 89% of leaders say AI is improving operational efficiency, and 40% call those gains significant. AI is also making a strong impact on data and analytics (84% positive impact, with 40% saying significant). Customer experience and employee experience improvements are also notable, with 82% and 81% respectively reporting positive impact.
That combination is important because it shows leaders are not only chasing cost reduction. They are also pursuing better outcomes across quality and satisfaction. In fact, top AI investment goals include cost savings (87%), improved interaction quality (83%), improved interaction efficiency (81%), and stronger customer satisfaction scores (80%).
This is the headline many teams need to hear: the market is not choosing between efficiency and experience. It’s trying to improve both at the same time.
The “AI made things more human” claim is only partially true
This is where the report becomes especially useful.
A large majority of leaders believe AI has increased humanity in the customer experience to some degree. But the data shows that improvement is uneven. Only 21.3% say both self-service and agent-led interactions became more human. Another 33.0% say only self-service improved, and 28.7% say only agent-led interactions improved. Meanwhile, 17.0% say their experiences have not become more human.
This suggests many organizations are deploying AI, but not yet designing the full system around it. If self-service is more capable but agents are still constrained by fragmented tools, weak coaching, or rigid scripts, customers may not feel the promised leap in experience quality. The technology may improve one part of the journey while the rest of the operation still creates friction.
This is where AI maturity becomes a practical lens for leaders. Maturity is not just about adopting more advanced AI tools. It’s about aligning people, workflows, governance, measurement, and decision-making so AI can improve the full customer journey instead of isolated moments. In early stages, organizations often see gains in one area while other parts of the experience lag behind. As maturity grows, AI and human support work together more consistently, with clearer roles, stronger orchestration, and better outcomes across channels.
The real strategy question is not “AI or human?” It’s “when, where, and how do they work together?”
The study makes it clear that leaders do not share a single vision for frontline support, but most do believe AI should play a meaningful role.
In an ideal model, 29.8% say AI should handle most frontline communication with a human in the loop. Another 21.3% prefer a blended AI + human approach in most conversations. Only 19.2% prefer humans to handle most frontline interactions while AI supports behind the scenes.
What determines when a human should step in? The top factors are practical, not philosophical. Leaders prioritize issue complexity (89.4%), issue stakes or significance (80.4%), and real-time contact volume (75.5%).
That’s an important shift in mindset. Mature AI strategy is not about proving automation can handle everything. It’s about routing the right work to the right resource at the right time.
Human value is getting clearer, even if role design is not
The study strongly reinforces what many frontline teams already know: human agents still have differentiated strengths that matter in high-value moments.
Leaders overwhelmingly cite empathy, understanding the stakes of a situation, relating to real-world experiences, and making off-script decisions as core human advantages. In one of the strongest signals in the report, roughly 94% say humans are fundamentally better at showing emotional empathy/concern, and more than 90% also point to strengths in understanding stakes and making nuanced decisions.
At the same time, organizations are far less confident about how those strengths translate into a future-ready role.
Only 12.8% say they have a complete understanding of how agent roles and responsibilities will transform with AI. Most (63.8%) say they understand how core day-to-day tasks will change, but they have not mapped the new tasks and functions agents will own. Another 23.4% are still evaluating the impact.
This is one of the biggest takeaways in the entire study: companies may be investing in AI faster than they are redesigning the human role around it.
Buy-in is conditional, and that is not a bad thing
A lot of AI messaging assumes agents will automatically welcome “more complex work” once automation takes repetitive tasks off their plate. The report suggests the reality is more nuanced.
Only a small share of leaders say agents are fully all-in on that shift. The majority say agent interest is conditional on specifics like the nature of the work, changing processes, and compensation/career pathing.
That is not resistance for the sake of resistance. It’s a rational response to ambiguity.
If the role is becoming more emotionally demanding, more consultative, and more complex, people want to know what support they will receive, how success will be measured, and whether compensation and growth paths will reflect the change. That is a design challenge for leadership, not a mindset problem to pin on the frontline.
The skills gap is real, but the environment gap may be even bigger
The report also challenges a common assumption: if agents are struggling in an AI-enabled future, the issue must simply be skills.
Skills matter, but the data points to deeper operational friction.
Only 13.8% of leaders believe their current agents are suited for complex customer support plus adjacent work like sales, data analysis, and bot training. Most say agents can handle complex support conversations, but not necessarily the broader work slate AI transformation may introduce.
At the same time, leaders report significant barriers that would slow even capable agents:
- fragmented systems and tools (73.4%)
- insufficient data/knowledge frameworks (71.3%)
- inadequate coaching (69.2%)
- unsuitable compensation/career paths (69.2%)
- unprepared supervisors/managers (68.1%)
This is the heart of the issue. Many organizations are asking what kind of agent they need next, when they also need to ask what kind of environment enables that agent to succeed.
The most promising AI use cases are the ones that reduce friction and improve context
One of the strongest sections of the study focuses on agent augmentation, and the priorities are telling.
Leaders are not just asking for flashy AI features. They are prioritizing capabilities that remove internal friction and help agents show up better in the conversation.
Top agent-facing AI priorities for 2026 include:
- intelligent search/knowledge management (94%+)
- call summarization (89%)
- workflow automation and visualization (88%)
- copilot/next-best action guidance (87%)
- post-call automation (82%)
And when leaders think about what helps agents build customer relationships, they point to context-rich support: access to case summaries, next-best action recommendations, complete purchase/support history, and real-time quality coaching. Nearly all respondents rate these as important, with especially strong support for case summaries and next-best action guidance.
That should reframe how teams think about AI ROI. The best investments are not just the ones that deflect contacts. They are the ones that improve the quality, confidence, and consistency of the interactions that still require a human.
Measurement is the quiet risk most teams are still underestimating
Many organizations say AI is working. Fewer can prove exactly how, where, and why.
The report shows a meaningful measurement gap:
- 21.4% say they are not measuring or evaluating AI outcomes
- 33.3% assess some outcomes but lack a consistent framework
- 29.8% measure ROI but cannot always trace impact to specific initiatives
- only 15.5% say they can fully measure outcomes and attribute impact with precision
There is a similar gap in employee-level tool usage analysis. Just 19.2% actively monitor how agents use AI tools, while 36.2% are not evaluating utilization at all.
This matters because without measurement, AI can become a budget line and a belief system instead of a performance system.
What is holding AI transformation back?
Budget is the top barrier, with 80.7% citing cost as a challenge to scaling AI investments. But the report shows the bottleneck is broader than dollars alone.
Leaders also cite:
- concerns about existing tech/systems/data frameworks (79.8%)
- risk management concerns like hallucinations, compliance, and security (79.8%)
- employee adoption/enthusiasm (67.9%)
- uncertainty about how to measure and evaluate opportunities (66.7%)
In other words, the challenge is not simply “we need more AI.” It’s more like, “we need a more coordinated operating model for AI.”
That includes better cross-functional alignment, clearer prioritization, stronger change management, and a more practical proof-of-value framework.
The future of the agent role is more human, not less, but only if organizations design for it
The most important takeaway from this study is not that AI is winning or humans are winning. It’s that customer contact leaders are entering a design phase.
They are learning that technology can improve efficiency and analytics quickly. They are also learning that the human side of the transformation takes more intentional work: role clarity, coaching, workflow redesign, better tools, better measurement, and a culture that supports people doing more emotionally complex work.
Workplace culture, leadership quality, compensation/career paths, and flexibility all rank among the top factors for engagement and retention in this next chapter.
The organizations that get this right will stop treating AI as a replacement strategy and start using it as an amplification strategy.
Not “tech vs. humanity.”
Tech that removes friction. Humans who deliver judgment, empathy, and trust. And a system designed so each makes the other better.
AI in CX: Where Liveops fits in
As organizations move from AI experimentation to AI maturity, the challenge is rarely just choosing the right technology. The bigger challenge is making sure people, processes, and customer support operations evolve with it. That is where Liveops can help.
Liveops brings together scalable customer support solutions, experienced talent, and AI-enabled operational strategies to help brands reduce friction, improve consistency, and deliver better outcomes across both self-service and human-assisted interactions.
The goal is not to force AI into the experience. It is to build a smarter, more connected support model where technology and human expertise work together.
Stop outsourcing, start outsmarting
Join the brands redefining customer experience with Liveops. Empathetic agents, tech-powered delivery, and the flexibility to meet every moment. Let’s talk.
Explore flexible customer experience solutions


