Ivanti Study Reveals Hidden Use of Generative AI in the Workplace and the Need for Updated Corporate Policies

An increasing number of employees are integrating generative artificial intelligence (AI) tools like ChatGPT into their daily work, often without their employers' knowledge. A recent Ivanti study found that 42% of office workers use generative AI technologies, and that one in three keeps this usage hidden from their organizations. The finding signals a major shift in workplace AI adoption and raises critical questions about corporate policies and employee behavior.

The secrecy around AI use stems from several factors. Many companies have unclear or inadequate AI policies, leaving workers uncertain about what is allowed. Others explicitly ban or restrict certain AI tools over security and data-privacy concerns, prompting employees to conceal their usage to avoid disciplinary consequences. Some employees also keep their AI use quiet to gain a competitive edge, relying on the tools to boost productivity, creativity, or problem-solving without disclosing the assistance behind the results.

Initially, many organizations responded to generative AI with caution or outright discouragement, fearing that sensitive information could leak through cloud-based AI platforms. That stance fostered a stigma around AI use at work and pushed employees to adopt the tools covertly, a practice known as "shadow AI" or BYOAI ("bring your own AI"). The result is a growing gap between employee behavior and organizational governance amid rapid technological change. Even so, the Ivanti study shows that frequent AI users generally accept their peers' use of similar tools, suggesting that direct experience builds appreciation and normalizes AI integration at work.
However, this acceptance contrasts with employers' lack of formal guidance and support, indicating a need for businesses to evolve their policies. Workplace technology and AI ethics experts stress the importance of developing adaptable policies that keep pace with AI advancements. As generative AI becomes more sophisticated and embedded in job roles, organizations must balance protecting sensitive data and ensuring compliance with enabling innovation and productivity.

Promoting open dialogue and collaboration on AI use can reduce secrecy and tension between employees and management. Clear, well-communicated AI guidelines can empower employees to use these tools responsibly and confidently. Policies might specify approved AI tools, define acceptable use cases, provide training on data privacy and ethics, and establish channels for reporting AI concerns. Creating a culture of transparency and trust allows companies to harness AI's benefits while minimizing risks.

Generative AI's rise as a core part of modern office work presents both challenges and opportunities. With employees increasingly using AI for tasks such as drafting communications, coding, and data analysis, the line between authorized and unauthorized AI use blurs. Employers who proactively address these issues can better attract and retain talent, boost efficiency, and maintain competitiveness in a tech-driven environment.

In summary, the Ivanti study highlights the widespread secret use of generative AI among employees, emphasizing the urgent need for companies to update and clarify AI policies, encourage openness about AI tools, and educate workers on responsible AI use. Doing so can ease employee concerns, reduce shadow AI practices, and enable more effective, ethical integration of AI into daily business operations.
Brief news summary
A recent Ivanti study found that 42% of office workers use generative AI tools like ChatGPT, with one-third hiding this use from employers—a practice known as “shadow AI.” This arises from unclear policies, privacy concerns, and employees seeking competitive edges. Initially, AI adoption faced stigma and security fears, causing a rift between employee actions and corporate rules. Frequent users are more accepting of peers’ AI use, highlighting a policy disconnect. Experts recommend flexible workplace guidelines that balance innovation and security. Promoting transparency and responsible AI training can reduce secrecy, empower staff, and help firms harness AI benefits while managing risks. As AI becomes essential for tasks like drafting, coding, and analysis, updating policies is crucial for competitiveness, trust, and ethical integration in the workplace.