Guardrails, Not Gatekeepers: Scaling AI Responsibly

05.07.2026

Your employees are already using AI.

It’s not hypothetical; it’s a reality of modern work.

The question leaders should be asking isn’t “How do we control it?” It’s “How do we guide it?” Unmanaged AI adoption creates real risk, especially around data privacy and intellectual property. But responsible AI adoption can also unlock meaningful gains: faster, data-driven decisions and more time for higher-value work.

What Many People Miss About Public Generative AI Tools

Most popular consumer generative AI tools learn from what users type into the chat. Depending on the product and settings, that input may be retained and used to improve future versions of the model. In plain English: sensitive information (customer data, pricing, contracts, proprietary processes) can end up outside your control if employees paste it into the wrong place.

If you’re not providing employees a secure tool, they’re probably using their own. We polled more than 120 associates and found that 21% had no idea their data may not be secure once entered into a public AI tool. That risk is one reason we chose a “Copilot for all” strategy at Sonepar, putting Copilot in the hands of as many associates as possible. If they’re using AI, we want them to use a tool that protects our data.

AI Use at Work Is Already Here

When we surveyed a sample of our associates, the results matched what I’m hearing from peers across industries: AI use is already widespread in the workplace.

  • 78% of our surveyed associates said they already use AI daily or weekly for work. 77% said the same for their personal life.
  • Sentiment was largely positive, but there’s uncertainty out there: 52% said they’re “excited,” 13% were “unsure,” and about 34% said they feel “a little of everything.”

The most common use cases won’t surprise you: drafting text and summarizing information. In other words, employees are using AI in workflows where confidential context is likely to show up: emails, proposals, meeting notes, and internal documentation.

Responsible AI Starts with the Right Choices

The first step for responsible adoption is choosing the right platform. If you want your teams to use AI safely, you have to give them an option that’s designed for enterprise use, where data protection, identity management, and administrative controls are built in.

At Sonepar, we chose Microsoft Copilot because we believe Microsoft’s approach to enterprise security, compliance, and data boundaries best aligns with how a global distributor needs to operate. 

We have a strong partnership with Microsoft at the global level that’s enabling us to deploy secure, enterprise‑grade AI capabilities throughout our operations in 40 countries around the world.

How We’re Encouraging Responsible AI Adoption at Sonepar

Technology alone doesn’t change behavior. People do. That’s why we’re treating AI like any other meaningful transformation. We need to illustrate the possibilities, help associates understand how their role is impacted, communicate the urgency of adoption, address concerns, and enable grassroots champions.

1) “AI Day” 

Copilot isn’t just a new tool, and it shouldn’t be introduced like one. We launched AI Day to kick off our rollout of Copilot licenses across the organization, gathering senior leaders from all our U.S. operating companies to learn about Copilot and the plan for adoption. An open forum for discussion gave them the opportunity to ask questions, and it also tipped us off to concerns the wider organization might share, allowing us to address them proactively.

2) Champions, Training Resources and Grassroots Momentum

With the speed and scale of adoption needed, a large enterprise can’t have gatekeepers for AI. We need all hands on deck to get thousands of associates excited about, and equipped to use, Copilot. To expedite adoption, we identified champions at each of our U.S. operating companies. These champions received advanced training, act as local ambassadors for AI, and can quickly field questions from associates. They are also tasked with raising concerns from the field to leadership, so we hear from the front line about what’s working and what’s not. We’re also establishing centralized training resources for all associates, including a prompt library, and encouraging grassroots exchange.

People won’t adopt new tools because they’re inspired by abstract statements about big possibilities. They’ll adopt them when a colleague in a similar role looks at a process that used to take hours or days and says, “You’re still doing that manually? It takes me five minutes with Copilot!”

The Right Tools Meet the Right Culture

If you’re trying to drive AI adoption without creating avoidable risk, two areas are key: tools (what people use) and culture (what people do when no one’s watching).

Most employees aren’t eager to bypass policy. They’re just trying to get work done. If the approved tool is hard to access, slow, or unclear, people will default to what’s familiar. That’s why “responsible AI” is about making the right option the path of least resistance.

When instructing associates on what’s approved and what’s not, explain your reasoning. At Sonepar, we chose Microsoft Copilot because of Microsoft’s proven commitment to keeping enterprise data secure. When associates ask why they can’t use other tools, we explain the very real risks to data security in clear terms: what you type into an unsecured tool may be used to train it and could surface in responses to other users. You wouldn’t hand your Social Security number to a stranger, so don’t put it into an unprotected AI tool either.

Use practical examples whenever you can. Instead of generic guidance, give teams a prompt library that suggests starter workflows: summarize a long email thread, draft a customer follow-up, pull insights from a 20-page report, or extract action items from meeting notes. Encourage employees to share what they’ve used AI for. Real-world examples take AI from an abstract concept to an actionable tool.

Lastly, encourage a culture of experimentation and sharing. While our AI champions are important for answering questions and providing guidance at a local level, anyone can be an ambassador for AI. We modeled that behavior at the leadership level through a video where I sat down with one of our operating company presidents, Brendan O’Hare of Capital Electric. Brendan is a big proponent of AI. He shared his own experiences with Copilot, as well as how the sales teams at Capital are using it to analyze large volumes of data in just a few minutes and pull actionable insights about sales opportunities, a process that would’ve taken days without Copilot.

The Bottom Line

If there’s one message I’d leave you with, it’s this: AI will spread through your organization with or without your permission. The only real choice is whether that adoption happens in the open, with the right tools, guardrails, and support, or in the shadows.

  • Assume AI use is already happening and plan for it.
  • Educate teams on data risk in practical terms.
  • Provide an enterprise-grade tool so the safe choice is the easy choice.
  • Invest in change management (communication, champions, and training), not just licenses.

I’d love to hear what you’re seeing in your organization: Are employees using AI eagerly, or are they skeptical? What guardrails have worked, and what’s been harder than expected? How are you balancing the speed of adoption needed with a structured rollout? The core principles of successful technology adoption still apply, but the speed and scale of this innovation are bringing new challenges for digital leaders.