‘I Swear, I Didn’t Use AI’

Managers and colleagues are unsure whether they should reveal how much, or how little, they use AI for work. Why either option is risky.

June 09, 2025

The executive turned in a stellar marketing plan, full of clever pitches and execution details. When asked to make a few changes, she returned the revised plan within minutes. It was so clean and polished that everyone involved assumed she had relied on AI. But she said nothing.

Should she have?

It’s become a new wrinkle as AI continues its march into the workplace: to reveal or not to reveal. As more and more workers turn to ChatGPT and other platforms for help with daily tasks, they worry about how transparent to be with colleagues and clients. Admit to too much AI use, and they fear losing their jobs; admit to too little, and they fear the same outcome. Either way, a lack of guidance from most companies is creating a new source of AI anxiety. “It’s a really tricky situation,” says Louis Montgomery Jr., a principal in Korn Ferry’s Human Resources Center of Expertise. “It’s even more tricky when you don’t have any policies on approved ways to use AI.”

Interestingly, disclosure can backfire in unanticipated ways. A new study by University of Arizona researchers found that workers who disclose that they use AI to complete tasks are trusted less by colleagues. According to the study, colleagues perceive those who use AI for work tasks as lazy, less committed, and prone to misrepresenting their work. On the other hand, becoming proficient in AI signals to managers that you are staying up to date with new tech tools and investing in learning new skills to become more efficient and productive, says Bryan Ackermann, head of AI strategy and transformation at Korn Ferry. “People that lean heavily into AI can create space and capacity to add more value than those that do not,” says Ackermann.

Amid layoffs and cost cuts, however, experts say managers who see workers relying on AI too much could decide those workers’ roles are no longer needed. Brian Bloom, vice president of global benefits and mobility operations at Korn Ferry, calls this dynamic the “user’s dilemma.” He says people are rethinking whom they tell about their use of AI for job tasks, and how much they reveal, out of fear of repercussions, of being ostracized by colleagues, or of being laid off in favor of the technology. “Disclosing or not disclosing could hurt you either way,” he says.

So far, most firms are not providing any guidance, and potentially for good reason. Experts say there are risks for firms that encourage workers to be transparent about how they are using AI: Hallucinations have exposed firms to lawsuits, for instance, and reports abound of AI providing advice or suggestions that violate company policy. On the other hand, transparency lets early adopters who have gained proficiency help colleagues and teammates get up to speed.

For Bloom, using AI for work tasks isn’t the issue. It’s what you do with the time saved that prompts disclosure concerns. “Using AI to meet expectations and go on autopilot isn’t going to win any favor with managers and colleagues,” he says. “But if you are using the time saved to develop, grow, and improve, that will.” Korn Ferry senior client partner Lucy Bosworth agrees, saying that demonstrating how using AI helps maintain focus and free up time for higher-value work will shut down criticism. “Budgets are going down, layoffs are happening, but the work still needs to get done,” she says. “Disclosing how AI is helping you be more efficient and productive and do higher value work will put you in a much better position.”

Maneesh Dube, a senior client partner in the Executive Search practice at Korn Ferry, sees the issue as more clear-cut. He says workers who aren’t using AI for tasks face a much bigger concern than whether to disclose it. “In the future, not all jobs will be replaced by AI, but all people not using AI will be replaced by those who do.”

 

Learn more about Korn Ferry’s Assessment and Succession capabilities.