California Supreme Court Demands Clarity on AI Use in February Bar Exam Questions

On Thursday, the California Supreme Court pressed the State Bar of California to clarify how and why it employed artificial intelligence (AI) to draft multiple-choice questions for its problematic February bar exam. The Court revealed on Tuesday that it had not been informed prior to the exam that the State Bar had permitted its independent psychometrician to use AI in creating a small portion of the questions. Amid increasing public pressure, the Court demanded detailed explanations about the AI's role in question development and the measures taken to ensure the questions' reliability.

The request coincides with the State Bar's petition to adjust scores for hundreds of candidates who reported technical issues and irregularities during the February exam. The controversy extends beyond AI use itself, focusing on how the State Bar integrated AI into question development and the thoroughness of its vetting process for an exam pivotal to the thousands aspiring to practice law in California each year. It also raises transparency concerns regarding the State Bar's move away from the widely adopted National Conference of Bar Examiners' Multistate Bar Examination toward a new hybrid in-person and remote testing format intended to reduce costs.

In its Thursday statement, the Supreme Court sought specifics on why and how AI was employed to draft or revise multiple-choice questions, what steps ensured their reliability before administration, whether any AI-derived questions were excluded due to unreliability, and the overall dependability of the scored questions. Last year, the Court approved an $8.25 million, five-year agreement with Kaplan to produce 200 test questions for the revamped exam, alongside hiring Meazure Learning to administer it. However, the State Bar disclosed only this week, nearly two months after the exam, that it had deviated from this plan by not relying solely on Kaplan for question creation.
A presentation revealed that out of 171 scored multiple-choice questions, Kaplan authored 100, 48 originated from a first-year law students’ test, and 23 were developed using AI by ACS Ventures, the Bar’s psychometrician. Despite this, Leah Wilson, the State Bar’s executive director, affirmed confidence in the questions’ validity and their fair assessment of candidates’ competence. Alex Chan, attorney and chair of the Committee of Bar Examiners overseeing the California Bar Examination, told The Times that AI was used only for a small set of questions and not necessarily to create them outright.
He noted that in October, the California Supreme Court had encouraged the State Bar to consider new technologies, including AI, to enhance testing reliability and cost-effectiveness, a process he said would require Court approval. However, Chan revealed Thursday that the Committee was not informed about AI use before the exam and thus could not review or approve it.

This lack of prior notification drew criticism from bar exam experts. Katie Moran, an associate law professor specializing in bar preparation, questioned who instructed ACS Ventures, an entity with no experience authoring bar exam questions, to create them, and what guidelines were provided. Mary Basick, assistant dean of academic skills at UC Irvine Law School, highlighted the significance of this unapproved change, emphasizing that the Committee and Supreme Court had authorized only Kaplan-drafted questions. She noted that Kaplan's expertise as a bar prep company was expected to ensure established question quality, and pointed out that substantial exam changes require two years' notice under California law. Basick added that question development typically spans years to guarantee validity and reliability through multiple review stages, which was not feasible here. She and other academics also expressed concern over a psychometrician without legal training crafting questions with AI and then judging their validity, suggesting a potential conflict of interest.

The State Bar refuted this, stating that its validation and reliability processes are objective and consistent regardless of question origin. It explained that prior to the exam, all questions, including those developed with AI assistance, underwent review by content validation panels and subject matter experts to assess legal accuracy, competence, and bias. Regarding reliability, the State Bar reported that the combined scored multiple-choice questions, irrespective of source, surpassed the psychometric reliability target of 0.80, affirming that performance standards were met despite the controversy.
Brief news summary
The California Supreme Court has sought detailed clarification from the State Bar of California regarding its use of artificial intelligence (AI) in creating multiple-choice questions for the February bar exam. The Court was unaware that the Bar had allowed its psychometrician, ACS Ventures, to develop 23 questions with AI assistance. It now requests information on AI's role, vetting procedures, reliability measures, and whether any AI-derived questions were excluded due to unreliability. The inquiry comes amid controversy over the Bar's shift from the traditional National Conference of Bar Examiners exam to a cost-saving hybrid format involving Kaplan-drafted questions and items from a first-year law students' exam. Critics contend that the AI-created questions lacked sufficient oversight, transparent validation, and adequate review time, and faculty experts and oversight members have voiced concern about the absence of prior approval for AI use. The Bar defends its approach, citing statistical reliability and expert review. The Supreme Court's scrutiny underscores ongoing disputes and potential score adjustments linked to broader exam irregularities.