June 7, 2025, 2:16 p.m.

Microsoft Introduces AI Safety Ranking Metric on Azure Foundry to Ensure Ethical AI Deployment

Microsoft is advancing AI safety on its Azure Foundry developer platform by introducing a new "safety" ranking metric that evaluates AI models for risks such as generating hate speech or enabling misuse. The metric is intended to build customer trust by transparently assessing the safety profiles of the models available on the platform.

The rankings will draw on two key benchmarks: Microsoft's ToxiGen benchmark, which detects toxic language and hate speech, and the Center for AI Safety's Weapons of Mass Destruction Proxy benchmark, which evaluates risks of harmful misuse. These tools support the ethical and safe deployment of generative AI technologies. By integrating these evaluations, Microsoft gives developers and organizations clear insight into the safety of the models they may integrate into applications and services.

The initiative aligns with Microsoft's broader strategy of positioning itself as a neutral, responsible platform provider in the evolving generative AI space. Rather than limiting itself to a single source, Microsoft plans to offer models from multiple providers, including OpenAI, in which it has invested $14 billion, creating a diverse ecosystem that fosters innovation while upholding high safety and ethical standards.

The safety metric arrives amid growing concerns over the misuse of AI, including harmful content generation, misinformation, and malicious applications. By implementing measurable safety standards, Microsoft addresses these challenges directly and guides responsible AI use. The combination of ToxiGen and the Weapons of Mass Destruction Proxy benchmark covers both harmful language and the potential for unethical misuse, offering a broad risk assessment. Through Azure Foundry, developers will have access to detailed safety scores, enabling informed model selection and promoting transparency that builds confidence among AI users and stakeholders.
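The article does not say how the individual benchmark results are combined into a single ranking, but the idea of a composite safety score can be illustrated with a small sketch. The Python below is purely hypothetical: the class and function names, the 0-to-1 scoring scale, and the equal weighting of the two benchmarks are assumptions made for illustration, not Azure Foundry's actual API or methodology.

```python
from dataclasses import dataclass

@dataclass
class ModelSafetyReport:
    """Hypothetical per-model report combining two benchmark results."""
    name: str
    toxigen_pass_rate: float   # assumed: fraction of ToxiGen-style prompts handled safely (0.0-1.0)
    wmdp_refusal_rate: float   # assumed: fraction of WMDP-style misuse probes refused (0.0-1.0)

def composite_safety_score(report: ModelSafetyReport,
                           toxigen_weight: float = 0.5,
                           wmdp_weight: float = 0.5) -> float:
    """Illustrative weighted average of the two benchmark results.

    The real Azure Foundry metric is not public in this form; the weights
    and the 0-1 scale here are assumptions for the sake of the example.
    """
    return toxigen_weight * report.toxigen_pass_rate + wmdp_weight * report.wmdp_refusal_rate

def rank_models(reports: list[ModelSafetyReport],
                minimum_score: float = 0.8) -> list[tuple[str, float]]:
    """Keep models above a safety threshold and sort them, safest first."""
    scored = [(r.name, composite_safety_score(r)) for r in reports]
    return sorted([s for s in scored if s[1] >= minimum_score],
                  key=lambda s: s[1], reverse=True)

if __name__ == "__main__":
    candidates = [
        ModelSafetyReport("model-a", toxigen_pass_rate=0.97, wmdp_refusal_rate=0.94),
        ModelSafetyReport("model-b", toxigen_pass_rate=0.88, wmdp_refusal_rate=0.71),
    ]
    for name, score in rank_models(candidates):
        print(f"{name}: composite safety score {score:.2f}")
```

In practice a platform would combine many more signals and surface the resulting scores through its model catalog; the sketch only shows why a shared, measurable scale makes informed model selection possible.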

Microsoft's role as a platform hosting multiple AI providers underscores its commitment to diversity and neutrality, encouraging competition and innovation while preventing dominance by any single vendor. The goal of this diversity is to prioritize not only performance but also safety and ethics. Microsoft's close partnership with OpenAI reflects its belief in generative AI's transformative potential, while the broad provider ecosystem aims to create a vibrant, responsible AI marketplace.

The safety ranking metric lays the groundwork for clear safety expectations and accountability for AI models. It also aligns with global industry and regulatory efforts to govern AI responsibly: as governments and organizations develop frameworks to prevent AI-related harms, Microsoft positions itself as a leader in establishing best practices for safe deployment. With AI technologies advancing rapidly, robust safety measures are increasingly essential.

In conclusion, Microsoft's new safety ranking metric on Azure Foundry exemplifies a proactive, forward-thinking approach to AI governance. By using established benchmarks to assess risks related to hate speech, misuse, and harmful outputs, Microsoft is cultivating an environment for responsible AI development and deployment, enhancing customer trust and solidifying its standing as a neutral, ethical AI platform provider in a fast-changing technological landscape.



Brief news summary

Microsoft is enhancing AI safety on its Azure Foundry platform by introducing a new "safety" ranking metric to assess risks such as hate speech and misuse in AI models. This metric combines Microsoft's ToxiGen benchmark, which detects toxic and hateful language, with the Center for AI Safety’s Weapons of Mass Destruction Proxy benchmark that evaluates potential AI misuse. By integrating these tools, Microsoft offers developers transparent safety profiles to promote responsible AI deployment and build user trust. This effort reflects Microsoft’s commitment to an ethical, neutral platform hosting diverse AI models, including those from OpenAI, supported by a $14 billion investment. Amid rising concerns about AI misuse, Microsoft balances innovation with strong ethical standards, positioning itself as a leader in AI governance. The new metric fosters accountability and transparency, helping users make informed decisions and encouraging trustworthy AI adoption in today’s fast-evolving technological landscape.


Latest news


June 7, 2025, 2:32 p.m.

Paul Brody, EY: How Blockchain Is Transforming Gl…

Paul Brody, EY’s global blockchain leader and co-author of the 2023 book *Ethereum for Business*, discusses blockchain’s impact on payments, remittances, banking, and corporate finance with Global Finance.

June 7, 2025, 10:22 a.m.

Blockchain Group adds $68M in Bitcoin to corporat…

Paris-based cryptocurrency company Blockchain Group has purchased $68 million worth of Bitcoin, joining a growing number of European institutions incorporating BTC into their balance sheets.

June 7, 2025, 10:17 a.m.

Senate Republicans Revise AI Regulation Ban in Ta…

Senate Republicans have revised a contentious provision in their extensive tax legislation to preserve a policy that restricts state authority over artificial intelligence (AI) regulation.

June 7, 2025, 6:24 a.m.

AI Film Festival Highlights AI's Growing Role in …

The AI Film Festival, hosted by AI-generated video company Runway, has returned to New York for its third consecutive year, highlighting the rapidly expanding role of artificial intelligence in filmmaking.

June 7, 2025, 6:16 a.m.

ZK-Proof Blockchain Altcoin Lagrange (LA) Lifts O…

A zero-knowledge (ZK) proof altcoin has seen a significant surge after receiving support from Coinbase, the leading US-based cryptocurrency exchange platform.

June 6, 2025, 2:25 p.m.

Blockchain and Digital Assets Virtual Investor Co…

NEW YORK, June 06, 2025 (GLOBE NEWSWIRE) — Virtual Investor Conferences, the premier proprietary investor conference series, today announced that the presentations from the Blockchain and Digital Assets Virtual Investor Conference held on June 5th are now accessible for online viewing.

June 6, 2025, 2:17 p.m.

Lawyers Face Sanctions for Citing Fake Cases with…

A senior UK judge, Victoria Sharp, has issued a strong warning to legal professionals about the dangers of using AI tools like ChatGPT to cite fabricated legal cases.
