AI governance company Credo AI has launched a new platform that integrates with third-party AI Ops and business tools to give enterprises better visibility into compliance with responsible AI policies.
Credo AI’s Integrations Hub, now generally available, lets enterprise clients connect the platforms where they build generative AI applications, such as Amazon SageMaker, MLflow and Microsoft Dynamics 365, to a centralized governance platform. Business tools where these applications are often deployed, such as Asana, ServiceNow or Jira, can also be added to the Integrations Hub.
The idea is that enterprises working on AI applications can use the Integrations Hub to connect to a central governance platform such as Credo AI’s. Instead of requiring teams to upload documentation proving that safety and security standards are met, the Integrations Hub collects metadata containing those metrics directly from the connected applications.
Credo AI said the Integrations Hub connects directly with existing model stores, whose contents are automatically uploaded to the governance platform for compliance checks. The hub also brings in datasets for governance purposes.
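As an illustration of what such a connection might look like in practice, here is a hypothetical sketch that pulls registered model metadata from an MLflow model registry (using MLflow’s actual client API) and forwards it to a placeholder governance endpoint. The endpoint URL and payload shape are our own assumptions for illustration, not Credo AI’s actual integration.

```python
# Hypothetical sketch: collect model metadata from an MLflow model
# registry and forward it to a governance service for compliance review.
# MlflowClient is MLflow's real API; GOVERNANCE_URL and the payload
# format are placeholders, not Credo AI's actual API.
import requests
from mlflow.tracking import MlflowClient

GOVERNANCE_URL = "https://governance.example.com/api/models"  # placeholder

client = MlflowClient()
for model in client.search_registered_models():
    for version in model.latest_versions:
        payload = {
            "model_name": model.name,
            "version": version.version,
            "run_id": version.run_id,
            "tags": dict(version.tags or {}),
        }
        # Upload the model metadata to the (placeholder) governance endpoint
        requests.post(GOVERNANCE_URL, json=payload, timeout=10)
```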
Navrina Singh, founder and CEO of Credo AI, told VentureBeat that the Integrations Hub was designed to make AI governance, whether that means following data disclosure rules or internal policies around AI usage, part of the development process from the very beginning.
“All the organizations that we work with, primarily Global 2000 [companies], are adopting AI at a very fast pace and are bringing in new breeds of AI tools,” Singh said. “When we looked across all the enterprises, one of the key things we wanted to enable for them was to extract the maximum value of their AI bets and make governance really easy, so they stop making excuses that it’s difficult to do.”
Credo AI’s Integrations Hub includes ready-made connections to Jira, ServiceNow, Amazon’s SageMaker and Bedrock, Salesforce, MLflow, Asana, Databricks, Microsoft Dynamics 365 and Azure Machine Learning, Weights & Biases, Hugging Face and Collibra. Custom integrations can be built for an additional fee.
Governance from the outset
Surveys have shown that responsible AI and AI governance, which typically look at how applications meet regulations, ethical considerations and privacy requirements, have become top of mind for many companies. However, the same surveys show that few companies have actually assessed these risks.
As enterprises grapple with how to use generative AI more responsibly, helping organizations identify risk and compliance issues has become a niche of its own. Credo AI is just one of the companies offering ways to make responsible AI more accessible.
IBM’s watsonx suite of products includes a governance platform that lets users evaluate models for accuracy, bias and compliance. Collibra has also released a suite of AI governance tools that create workflows to document and monitor AI programs.
Credo AI does check applications for potential brand risks such as accuracy issues. Still, it positions its platform more as a means of meeting current laws around automated systems and any new regulations that may emerge.
There are still very few regulations specific to generative AI, though long-standing policies governing data privacy and data retention mean many enterprises have already been following similar rules for machine learning and data use.
Singh said some jurisdictions already ask enterprises for reports on AI governance. She pointed to New York City’s Local Law 144, which regulates the use of automated tools in employment decisions.
“There is certain technical evidence you have to collect, like a metric called the demographic parity ratio. Credo AI takes this New York City law and codifies it to check your AI Ops system, and since it’s connected to your policies and to where you built your HR system, we can collect that metadata to meet the requirements of the law,” Singh said.
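For reference, the demographic parity ratio Singh mentions compares selection rates across demographic groups: the lowest group’s rate divided by the highest, with values near 1.0 indicating similar treatment. Below is a minimal illustrative sketch in Python of how the metric is computed; the function name and example data are our own, and this is not Credo AI’s implementation.

```python
# Minimal sketch of the demographic parity ratio: the ratio of the
# lowest group selection rate to the highest. Assumes every group
# has at least one observation. Not Credo AI's implementation.

def demographic_parity_ratio(selections, groups):
    """selections: list of 0/1 outcomes (1 = candidate selected).
    groups: list of group labels, same length as selections."""
    counts = {}
    for sel, grp in zip(selections, groups):
        n, k = counts.get(grp, (0, 0))
        counts[grp] = (n + 1, k + sel)
    # Per-group selection rate = selected / total in that group
    rates = [k / n for n, k in counts.values()]
    return min(rates) / max(rates)

# Example: group A selected 6 of 10 candidates, group B selected 3 of 10
outcomes = [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7
labels = ["A"] * 10 + ["B"] * 10
print(demographic_parity_ratio(outcomes, labels))  # 0.3 / 0.6 = 0.5
```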