Oct. 9, 2023, 11:49 a.m.

Examining the Changes in Digital Governance: Who Holds the Power in the Digital Gilded Age?

Governments worldwide are enacting foundational policies to regulate artificial intelligence (AI) and algorithmic systems. While legislation is progressing, regulators should not passively wait for lawmakers to act. Instead, they should proactively educate themselves about algorithmic systems in their regulatory domain and assess those systems for compliance under existing statutory authority. Many regulatory agencies, including the U.S. Federal Trade Commission's Office of Technology and the Consumer Financial Protection Bureau, new algorithmic regulators in the Netherlands and Spain, and online platform regulators like the UK's Office of Communications and the European Centre for Algorithmic Transparency, have already begun implementing innovative approaches and policies for AI regulation.

Of particular interest is how oversight agencies can gather information about algorithmic systems, their societal impact, potential harms, and legal compliance. As these agencies experiment with information-gathering methods, an emerging AI regulatory toolbox for evaluating high-risk algorithmic systems is taking shape. This toolbox includes enhancing transparency, conducting algorithmic audits, establishing AI sandboxes, leveraging the AI assurance industry, and learning from whistleblowers. Each intervention has its own strengths and weaknesses depending on the type of AI system being governed, and each requires different internal expertise and statutory authority. To make informed AI policy, regulators should be familiar with these tools and their trade-offs.

Requiring corporate transparency is a critical function of many regulatory agencies, and it is equally essential in the realm of algorithmic systems. Algorithmic transparency is a well-researched aspect of AI, and that research has produced a variety of approaches. Transparency measures for affected individuals involve direct disclosure: notifying individuals when they interact with an algorithmic system. They also encompass "explainability," giving individuals insight into how an algorithmic system arrived at a specific decision or outcome. Public-facing transparency might involve sharing statistics on an algorithmic system's accuracy or fairness, descriptions of its underlying data and technical architecture, and comprehensive assessments of its impacts, also known as algorithmic impact assessments. Transparency can also extend to cross-organizational sharing, between businesses or with regulators, enabling more detailed information exchange that may help clients of AI developers adapt algorithmic systems, or allow regulators to acquire specific information privately, reducing the risk of intellectual property theft. Regulators may already possess authorities that can be used to mandate or encourage algorithmic transparency; the Consumer Financial Protection Bureau, for example, requires explanations for credit denials made by algorithmic systems. Similarly, the European Union's General Data Protection Regulation guarantees individuals the right to receive meaningful information about the logic of algorithmic systems. Although not yet enacted, the upcoming EU AI Act will likely introduce significant transparency requirements, including direct disclosure requirements for chatbots and public reporting on high-risk AI systems.
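To make "explainability" concrete, consider the credit-denial example above. The following minimal sketch shows how a simple linear scoring model can emit the principal reasons behind an adverse decision; the feature names, weights, and approval threshold are all illustrative assumptions, not any real lender's model.

```python
# Minimal sketch: adverse-action "reason codes" from a linear scoring model.
# All feature names, weights, and the threshold are illustrative assumptions.

FEATURE_WEIGHTS = {
    "credit_utilization": -2.0,    # higher utilization lowers the score
    "years_of_history": 0.8,
    "recent_delinquencies": -3.5,
    "income_to_debt": 1.5,
}
APPROVAL_THRESHOLD = 1.0  # hypothetical cutoff

def score(applicant: dict) -> float:
    """Linear score: sum of weight * feature value."""
    return sum(w * applicant[f] for f, w in FEATURE_WEIGHTS.items())

def reason_codes(applicant: dict, top_n: int = 2) -> list[str]:
    """Rank features by how much they pulled the score down,
    yielding the 'principal reasons' for an adverse decision."""
    contributions = {f: w * applicant[f] for f, w in FEATURE_WEIGHTS.items()}
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [f"{f} contributed {contributions[f]:+.2f} to the score" for f in worst]

applicant = {
    "credit_utilization": 0.9,
    "years_of_history": 2.0,
    "recent_delinquencies": 1.0,
    "income_to_debt": 0.4,
}

if score(applicant) < APPROVAL_THRESHOLD:
    print("Denied. Principal reasons:")
    for reason in reason_codes(applicant):
        print(" -", reason)
```

Real systems are far more complex, but the same idea, ranking each input's contribution to the outcome, underlies many of the explanation formats regulators can require.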
Transparency requirements are often straightforward for regulators to implement and offer an appealing first step in AI regulation. However, regulators must take care when formulating transparency requirements, defining them precisely enough to prevent companies from responding with self-serving statistics. Well-designed transparency requirements can deliver a range of benefits: helping individuals and businesses make informed choices about AI developers, motivating AI developers to improve their systems, assisting journalists and civil society organizations in identifying problematic systems, and contributing to better policymaking.

In-depth algorithmic investigations and audits are among the most impactful tools regulators have for evaluating algorithmic systems. Research has shown how algorithmic audits can be conducted and what flaws they can uncover. Audits have exposed inaccuracies, discrimination, manipulation of information environments, misuse of data, and other significant problems in algorithmic systems, and violations discovered through audits may be subject to regulatory action. The European Union's AI Act empowers regulators to demand information on high-risk algorithmic systems in order to assess compliance with the law. Already, authorities such as the Australian Competition and Consumer Commission, the UK's Information Commissioner's Office and Competition and Markets Authority, and the U.S. Federal Trade Commission have conducted algorithmic audits of platforms and algorithms using online data. Algorithmic audits enable regulators to analyze algorithmic systems directly, without relying heavily on developers' claims.
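As a minimal illustration of the kind of analysis such an audit can involve, the sketch below computes selection-rate disparities across demographic groups from a log of a system's decisions. The CSV layout, the column names, and the "four-fifths" flagging threshold are assumptions for illustration, not a statutory test.

```python
# Minimal audit sketch: selection-rate disparity from a decision log.
# The CSV layout ("group", "decision") and the 0.8 flagging threshold
# are illustrative assumptions.
import csv
from collections import defaultdict

def selection_rates(path: str) -> dict[str, float]:
    """Share of positive decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["group"]] += 1
            positives[row["group"]] += row["decision"] == "approved"
    return {g: positives[g] / totals[g] for g in totals}

def disparity_report(rates: dict[str, float], threshold: float = 0.8) -> None:
    """Compare each group's rate to the most-favored group's rate."""
    best = max(rates.values())
    for group, rate in sorted(rates.items()):
        ratio = rate / best if best else 0.0
        flag = "  <-- below threshold" if ratio < threshold else ""
        print(f"{group}: rate={rate:.2%}, ratio={ratio:.2f}{flag}")

rates = selection_rates("decision_log.csv")  # hypothetical audit export
disparity_report(rates)
```

An auditor would pair a summary statistic like this with significance testing and a review of the system's design before drawing any conclusion about discrimination.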

In-depth audits are more likely to uncover flaws and harmful behavior, but they require more technical expertise and capacity from regulators, including data scientists proficient in evaluating algorithmic systems and secure computing environments for analysis.

AI regulatory sandboxes aim to foster collaboration between regulators and regulated entities, typically AI developers. Participation in sandboxes is often voluntary; it simplifies regulatory compliance, offers legal certainty to companies, and deepens regulators' understanding of the design, development, and deployment of AI systems. Sandboxes can also help regulators identify potential legal issues during system development. Sandboxes take varied forms, ranging from documentation exchanges and feedback to shared computing environments. They require ongoing collaboration between regulators and companies, which may yield less adversarial relationships than algorithmic audits. However, sandboxes are more demanding for companies, which may need to share updated data or ensure their systems run within a government computing environment. From a regulator's perspective, sandboxes also require additional effort, including developing computing environments capable of accommodating various algorithmic software and testing different algorithmic systems across multiple domains. Given the substantial workload, sandboxes are best suited to high-stakes algorithmic systems.

AI assurance refers to a growing industry specializing in the monitoring, evaluation, and legal compliance of algorithmic systems. Companies in this space offer services ranging from software that aids algorithmic development to documentation and compliance services that involve no developer tools. Some focus on specific industry sectors, such as fairness and disparate impact analyses for financial institutions. AI assurance companies emphasize both profit-driven improvements to algorithmic systems and regulatory compliance, albeit in the context of evolving laws and regulations. Regulators should actively engage with the AI assurance industry to advance democratic goals. They can issue guidance encouraging regulated companies to use AI assurance tools and can treat such adoption as a potential signal of regulatory compliance. Regulators can also learn from and inform the AI assurance industry, fostering communication about technical functions, societal impacts, and the best ways to achieve compliance.

Regulators should also actively welcome information and complaints from affected individuals and whistleblowers, who possess unique insights into algorithmic systems and their potential harms. Individuals subjected to algorithmic systems may have specific knowledge of how those systems behave, though it can be difficult for any one person to recognize why an action was unfair or wrong. Groups of individuals, however, can collaborate to identify algorithmic harms, as demonstrated by the content creators who exposed YouTube's apparent demonetization bias against LGBTQ-related content. The EU AI Act, although subject to potential revision, includes provisions for redress for individuals harmed by algorithmic systems. Agencies should encourage individuals to come forward with complaints and concerns. Developers themselves, particularly data scientists and machine-learning engineers, have a deep understanding of algorithmic systems and their social impacts, potential harms, and legal implications.
Examples abound of developers providing inside information that contradicts tech companies' public statements. Regulators should recognize the value of direct reporting and whistleblowers for uncovering algorithmic harms.

Regulators should also assess the steps necessary to preserve their regulatory mission, including cataloging emerging uses of algorithmic systems, exploring existing statutory authorities, and hiring personnel with expertise in algorithmic systems. A gap analysis can identify areas where current authorities and capacities are inadequate, allowing regulators to advise legislators on necessary updates. Regulators may already have some authority for information gathering, and they should complement their own efforts by drawing on independent academic research. Some governments, such as the EU, are even mandating platform data access for independent researchers, which can contribute to regulatory investigations and enforcement actions.

Robust and persistent information-gathering strategies are crucial for regulators to make informed decisions about AI policymaking, oversight, and enforcement. As these agencies continue their work, their collective efforts will establish the regulatory toolbox on which future AI governance is built.

