#AIAdoption
February 2026

GenAI at Work - Listening to Concerns and Leading with Clarity

Rolling out Generative AI in the workplace is more about people than platforms. Over the past year and a half, I’ve helped a number of organisations launch GenAI initiatives - and nearly every one of them has surfaced questions, worries, or resistance from staff, often around the same common themes. These concerns are not signs of failure; they’re signs that people are paying attention. In this article, I want to share the most common concerns I’ve encountered - and how organisations can respond in ways that build trust, not tension.

By Steve Harris

What is it?

When any major technology shift takes place - especially one as transformative and hyped as GenAI (there is still plenty of hype around it, though the reality is catching up) - concerns naturally arise. Employees wonder:

  • Will my job be replaced?
  • Is this environmentally sustainable?
  • How do I know this is ethically and legally sound?
  • Will my role become less meaningful - or just more automated?

These are not trivial concerns. They deserve real, thoughtful, and specific responses, not just blanket reassurances. Organisational Change Management (OCM) plays a critical role in ensuring these conversations happen early, often, and with transparency. (Most of the organisations I work with use Microsoft’s platform, so the responses below reflect that.)

What does it mean from a business perspective?

These concerns aren’t just noise - they’re strategic indicators. They signal where the business needs to pay attention, provide clarity, and take responsible action.

Here are the most common concerns I’ve encountered, and how to approach them: first acknowledge them as valid, then provide a context-specific response.

Job Security

Acknowledge: We recognise that the introduction of new technology can create anxiety, especially around how work may change. Tasks will evolve as part of this transition.

Response: Our focus is on using GenAI to reduce manual, repetitive effort and free up time for higher-value, meaningful activities. This is backed by policy and shaped by conversations with staff about where AI can help - not replace - people.

Impact on the Environment

Acknowledge: We understand the concern about GenAI’s environmental footprint, including energy consumption and data centre impacts.

Response: Microsoft, our primary approved GenAI provider, is working toward being carbon-negative by 2030, using 100% renewable energy and implementing advanced designs like zero-water and closed-loop cooling. We’ll keep monitoring these developments to ensure our tools align with our sustainability goals, and share links to the relevant sources.

Use of Copyrighted Works in Training

Acknowledge: We hear concerns around how AI models are trained and the use of copyright or proprietary materials.

Response: We select and approve tools that align with legal standards, licensing requirements, and industry best practices. The legal landscape is evolving fast - we review our tools and guidance regularly to stay compliant with new regulations and norms.

Impact on My Role

This concern often reflects a deeper sense of identity tied to the work being done.

Acknowledge: New technology can reshape roles, and that uncertainty can be uncomfortable - even if jobs aren’t going away.

Response: The goal is to automate time-consuming, error-prone tasks - especially those that take time away from higher-value work. Where experienced staff already have effective workflows or templates, automation may not be worthwhile. We want GenAI to support and amplify your expertise - not displace it.

What do I do with it?

If you’re a leader, project sponsor, or GenAI champion, what’s your role when these concerns show up? A mix of empathy, transparency, and clear policy goes a long way. Here are some actionable steps:

  • Acknowledge, don’t dismiss. When concerns are raised, start with “That’s a valid point.” It signals respect and builds trust.
  • Anchor in values. Make it clear that your GenAI strategy aligns with people-first values and responsible tech use.
  • Use specifics, not slogans. Show examples of how GenAI has helped real teams - without cutting headcount or diminishing expertise.
  • Highlight responsible choices. Reinforce that tool selection includes sustainability, compliance, and staff input - not just cost or hype.
  • Keep feedback loops alive. Encourage open discussion and update your policies and practices as both technology and team needs evolve.

GenAI isn’t just a tool - it’s a shift in how work gets done. And like any shift, it brings uncertainty. But if we engage those concerns directly - without minimising them - we unlock trust, not just efficiency. The goal isn’t blind enthusiasm. It’s responsible adoption, with eyes wide open.

Want to Discuss This Topic?

Steve is always happy to have a direct conversation.