State Governments Prioritize Data Privacy in AI Innovation and Sandbox Experiments

States nationwide are developing "sandboxes" and encouraging experimentation with AI to enable more effective and efficient operations, perhaps best described as AI with a purpose. However, promoting innovation within government carries inherent risks.

In Colorado, CIO David Edinger reported that his office has reviewed approximately 120 AI-related proposals for potential state government applications, and he detailed the process for vetting agency submissions. Among ideas deemed "high" risk under the NIST framework, most rejections share a common issue: data practices that fail to meet the state's data privacy standards.

Colorado is not unique in prioritizing data practices when assessing potential AI partners. During a discussion with Government Technology at last month's National Association of State Chief Information Officers (NASCIO) Midyear Conference, California Chief Technology Officer Jonathan Porat outlined three key factors guiding the state's evaluation of AI use cases. Beyond asking whether a use case is appropriate for state government, officials review the technology's track record and closely examine the data involved in the proposal. "Are the data that we're using appropriate for a GenAI system?" Porat asked. "Are they being properly governed and secured?"

Video transcript: "I would say we've reviewed maybe 120 proposals so far across every agency for all possible uses, following the NIST framework and categorizing each as medium, high, or prohibited risk. Prohibited uses we ban outright; medium-risk uses we deploy directly; and high-risk uses undergo more thorough evaluation. When we reject proposals, it's almost always not because of the intended use but because of data sharing concerns: specifically, data shared with providers under standard contracts in ways prohibited by state law, such as personally identifiable information (PII) or data governed by HIPAA or CJIS. We have to deny those proposals not because of how the tool would be used, but because the data sharing arrangements are unacceptable. That is really the core issue, and one that surprised us: the problem lies not in the use itself, but in how data privacy is handled."

Noelle Knell is the executive editor at e.Republic, overseeing the overall editorial strategy for e.Republic's platforms, including Government Technology, Governing, Industry Insider, Emergency Management, and the Center for Digital Education. She has been with e.Republic since 2011 and brings decades of experience in writing, editing, and leadership. A California native, Noelle has worked in both state and local government and holds degrees in political science and American history from the University of California, Davis.
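The triage workflow Edinger describes can be sketched as a simple decision procedure. This is a hypothetical illustration only, not an actual Colorado system; the names (`triage`, `RESTRICTED_DATA`, the tier labels) are assumptions chosen to mirror the transcript: prohibited uses are banned outright, medium-risk uses are deployed, and high-risk uses are denied when they would share restricted data with a provider, otherwise sent on for deeper evaluation.

```python
# Illustrative sketch of the risk-tier triage described in the transcript.
# Names and tier labels are hypothetical, not an actual state system.

# Data classes that state law bars from standard vendor contracts,
# per the transcript (PII, and data governed by HIPAA or CJIS).
RESTRICTED_DATA = {"PII", "HIPAA", "CJIS"}

def triage(risk_tier: str, shared_data: set) -> str:
    """Return the disposition for a proposal with the given NIST-style
    risk tier and the set of data classes it would share with a vendor."""
    if risk_tier == "prohibited":
        return "banned outright"
    if risk_tier == "medium":
        return "deploy"
    if risk_tier == "high":
        # Per the transcript, rejections are almost always driven by
        # data sharing, not by the intended use itself.
        if shared_data & RESTRICTED_DATA:
            return "deny: restricted data shared under standard contract"
        return "proceed to deeper evaluation"
    raise ValueError(f"unknown risk tier: {risk_tier}")

print(triage("high", {"PII"}))   # denied on data sharing grounds
print(triage("medium", set()))   # deployed directly
```

The key point the sketch captures is that the use case and the data sharing arrangement are evaluated separately: a high-risk proposal can fail purely on the data it would expose.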
Brief news summary
States across the U.S. are fostering AI innovation in government through sandboxes and pilot programs aimed at enhancing efficiency and effectiveness. However, these initiatives bring significant risks, especially regarding data privacy. In Colorado, CIO David Edinger reviewed approximately 120 AI proposals from state agencies, applying the NIST risk framework to categorize them as medium, high, or prohibited risk. High-risk proposals undergo rigorous scrutiny, with many denials stemming not from the AI technology itself but from insufficient privacy protections for sensitive information such as PII or data governed by HIPAA or CJIS. Similarly, California CTO Jonathan Porat emphasized evaluating AI based on appropriateness, technological reliability, and robust data governance to securely handle data used by generative AI. These examples demonstrate that successful AI adoption in government depends on balancing innovation with strict data privacy safeguards and regulatory compliance.