Eric Schmidt’s Controversial Advice on AI Ethics Sparks Debate on Copyright and Fair Use in Silicon Valley
Brief news summary
In April 2024, former Google CEO Eric Schmidt advised Stanford students launching Silicon Valley startups to be prepared to cross ethical boundaries, particularly in AI development. The advice comes amid 19 lawsuits targeting generative AI companies such as OpenAI and Anthropic for alleged copyright infringement, because they trained AI models on copyrighted books and media without obtaining permission. Schmidt recommended first building AI prototypes by downloading large datasets and only seeking legal advice after achieving success, a stance that reflects Silicon Valley's prioritization of innovation over strict adherence to copyright law. These AI firms invoke "fair use" to justify their data practices while simultaneously enforcing strict restrictions on the reuse of AI-generated content, revealing a double standard. Internal company documents show awareness of opposition from creators and a dismissal of profit-sharing proposals. Critics argue that AI training unfairly exploits copyrighted materials without compensating the original creators and sometimes reproduces content nearly verbatim. Industry voices, such as former Stability AI VP Ed Newton-Rex, call for the use of licensed data instead. Meanwhile, major tech companies rigorously protect their own software copyrights while overlooking protections for the artists whose work fuels AI development. Overall, Silicon Valley's culture favors rapid innovation, often at the expense of ethical and legal norms.

In April 2024, former Google CEO and AI advocate Eric Schmidt delivered a private lecture at Stanford, telling aspiring Silicon Valley entrepreneurs to be ready to cross ethical lines. Despite 19 lawsuits against generative AI firms like Anthropic and OpenAI alleging copyright infringement over books and media used without permission to train AI models, Schmidt advised students to freely download content to build prototypes, suggesting that legal issues could be resolved later if the product succeeds. Stanford briefly posted the talk on YouTube in August 2024 but removed it the next day without comment.

Schmidt's blunt stance reflects a common Silicon Valley attitude that is usually masked by legal or philosophical arguments. His spokesperson cited Schmidt's belief in "fair use" as a driver of innovation, echoing the techno-libertarian slogan "information wants to be free," which treats information as a resource that should flow unrestricted. That principle, however, rarely applies to Silicon Valley's own proprietary information, namely personal data and software, which is heavily protected. Software like Photoshop and inventions such as Google's search algorithm or Apple's iPhone design are guarded by copyrights and patents and defended by high-powered legal teams. The tech industry frequently engages in high-stakes IP battles: Waymo settled a $245 million lawsuit against Uber over stolen self-driving car secrets, Apple won over $1 billion from Samsung in a seven-year patent fight, and Apple and Qualcomm have repeatedly sued each other worldwide.

In the race to develop generative AI, companies have aggressively targeted less prepared industries, training AI on vast datasets that often contain copyrighted content. Firms justify this in different ways: OpenAI claims it uses only publicly available data; Anthropic says it uses books but not commercially; Meta admits using books commercially but calls it "quintessential fair use." Yet these same companies reject comparable fair use claims when it comes to protecting their own creations.
OpenAI forbids users from training competing models on ChatGPT outputs; Anthropic, Google, and xAI impose comparable rules. The message, in effect, is "we can train on your work, but you cannot train on ours." While market pressures help explain these self-serving standards, the contradictions between actions and proclaimed values are glaring.
Meta, for example, calls its models "open" yet has demanded the removal of copies posted online, a stance at odds with typical open-source generosity. The value of training data is clear: in 2021, Anthropic CEO Dario Amodei wrote about compensating data producers with profit shares or equity to avoid a creator backlash that could slow AI progress. Yet Anthropic now claims that using copyrighted work is fair use, entitling creators to nothing, and it declined to comment on this inconsistency. Companies argue that AI outputs are original rather than derivative of their training data, but reports show that chatbots and image generators can reproduce near-exact copies of works such as Harry Potter or existing art. Firms have downplayed these issues, even invoking geopolitical "AI race" concerns to justify broad fair use claims; OpenAI warned that without such access, America would lose the AI competition.

Not all insiders agree. Ed Newton-Rex, a former Stability AI VP, resigned in late 2023, criticizing current AI training practices as incompatible with established copyright-based creative economies, and launched Fairly Trained, which certifies AI models trained on properly licensed data.

Meanwhile, Silicon Valley itself has long suffered IP theft through software piracy, prompting companies to change their distribution models: Adobe and Microsoft now require subscription access with license verification, and Google offers no downloads at all. These methods protect corporate IP but are unavailable to many of the creators whose work AI companies exploit. The double standard raises doubts about Silicon Valley's fair use claims: are they sincere principles or legal cover? Generative AI does pose novel copyright questions, but the industry's aggressive tactics, moving fast, breaking things, and counting on lawyers to resolve problems later, reflect longstanding Silicon Valley business norms rather than principled innovation.