Monitaur Expands AI Governance Software From Policy to Proof
ML Assurance explained: https://monitaur.ai/machine-learning-assurance
ML Assurance platform: https://monitaur.ai/products#MLAssurance
BOSTON, April 05, 2022--(BUSINESS WIRE)--Monitaur, an AI governance software company, today announced the general availability of GovernML, the latest addition to its ML Assurance platform, designed for enterprises committed to responsible AI. Offered as a web-based, SaaS application, GovernML enables enterprises to establish and maintain a system of record of model governance policies, ethical practices, and model risk across their entire AI portfolio.
As deployments of AI accelerate across industries, so too have efforts to establish regulations and internal standards that ensure fair, safe, transparent and responsible use.
Entities ranging from the European Union to New York City and the state of Colorado are finalizing legislation that codifies into law practices espoused by a wide range of public and private institutions.
Corporations are prioritizing the need to establish and operationalize governance policies across AI applications in order to demonstrate compliance and protect stakeholders from harm.
"Good AI needs great governance," said Monitaur founding CEO Anthony Habayeb. "Many companies have no idea where to start with governing their AI. Others have a strong foundation of policies and enterprise risk management but no real enabled operations around them. They lack a central home for their policies, evidence of good practice, and collaboration across functions. We built GovernML to solve for both."
The Importance of AI Governance Today
Effective AI governance requires a strong foundation of risk management policies and tight collaboration between modeling and risk management stakeholders. Too often, conversations about managing risks of AI focus narrowly on technical concepts like model explainability, monitoring, or bias testing. This focus minimizes the broader business challenge of life cycle governance and ignores the prioritization of policies and enablement of human oversight.
"While there are foundations for risk management and model governance in some sectors, the execution of these is quite manual," offered David Cass, former banking regulator for the Federal Reserve and CISO at IBM. "We are now seeing more models, with increasing complexity, used in more impactful ways, across more sectors that are not experienced with model governance. We need software to distribute the methods and execution of governance in a more scalable way. GovernML takes what is best of proven methods, adds for the new complexity of AI, and software-enables the entire life cycle."
"The emergence of and necessity for AI governance is not simply a result of AI investments or AI regulations; it is a clear example of a broader need to synergize risk, governance and compliance software categories overall," said Bradley Shimmin, chief analyst, AI Platforms, Analytics, and Data Management at Omdia. "Considering software as a stand-alone industry and comparing its regulation relative to other major sectors or industries, software’s impact-to-regulation ratio is an outlier. GovernML offers a very thoughtful approach to the broader AI problem; it also puts Monitaur in an attractive position for future expansion within this much broader theme."
GovernML for Building and Managing Policies for AI Ethics
Available today, GovernML’s integration into the Monitaur ML Assurance platform supports a full life cycle AI governance offering, covering everything from policy management through technical monitoring, testing, and human oversight.
By centralizing policies, controls and evidence across all advanced models in the enterprise, GovernML makes managing responsible, compliant and ethical AI programs possible.
Highlights enable business, risk and compliance, and technical leaders to:
Create a comprehensive library of governance policies that map to specific business needs, including the ability to immediately leverage Monitaur’s proprietary controls based on best practices for AI and ML audits.
Provide centralized access to model information and proof of responsible practice throughout the model life cycle.
Embed multiple lines of defense and appropriate segregation of duties in a compliant, secure system of record.
Gain consensus and drive cross-functional alignment around AI projects.
For more information on GovernML, please visit: https://monitaur.ai/products#GovernML.
AI Trust Library: https://monitaur.ai/ai-trust
Principles of ML Assurance: https://monitaur.ai/machine-learning-assurance-white-paper
About Monitaur
Monitaur, Inc. is an AI governance software company that goes beyond good intentions. The Monitaur ML Assurance platform is a SaaS suite of integrated products designed for companies using AI to make high-impact decisions. Global organizations use Monitaur to drive policies, collaboration, oversight, monitoring, and assurances of responsible and ethical life cycle AI governance. Founded in 2019 by a team of deep domain experts in the areas of corporate innovation, machine learning, assurance, and software development, Monitaur is committed to improving people’s lives by providing confidence and trust in AI. For more information, visit https://monitaur.ai, and follow us on LinkedIn.
Monitaur ML Assurance platform is a trademark of Monitaur, Inc. All other brand names and product names are trademarks or registered trademarks of their respective companies.
View source version on businesswire.com: https://www.businesswire.com/news/home/20220405005539/en/