Matt Calkins, the cofounder and CEO of Appian, a leading provider of low-code automation solutions, has thrown down the gauntlet to the AI industry. This week, Calkins unveiled a new set of guidelines designed to promote responsible AI development and build trust between AI providers and their customers. His message comes at a critical juncture, as concerns over data privacy, intellectual property rights, and the breakneck speed of AI advancement reach a fever pitch.
“We all want to make the most out of AI,” said Calkins in an exclusive interview with VentureBeat. “I come at this not as some sort of skeptic who’s trying to put roadblocks in the way of the development, but actually someone who wants the maximum flourishing of AI.”
Calkins pulled no punches as he criticized the current approach to AI regulation, which he believes fails to address critical issues such as data provenance and fair use. He singled out recent statements from the White House and Senator Schumer as examples of this shortsightedness.
“Big tech doesn’t want to talk about data provenance or fair use,” Calkins said. “So we’re in a gray zone, where the big tech firms are going to run until tackled, basically, and the rest of the industry and those who might have done business with AI, and all of us who want AI to reach its potential, stand back kind of horrified, saying, wait a second, how about we put in some rules here.”
Appian’s proposed guidelines aim to tackle these issues head-on. They consist of four key principles:

- Disclosure of data sources
- Use of private data only with consent and compensation
- Anonymization of, and permission for, personally identifiable data
- Consent and compensation for copyrighted information

Calkins believes these rules will help build trust between people and AI, enabling the technology to become more pertinent to individual users and organizations.
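To make the four principles concrete, here is a minimal sketch of what a machine-readable training-data disclosure might look like. This is purely illustrative: Appian has not published a technical specification, and the names and compliance checks below (DataSourceRecord, DisclosureManifest, violations) are hypothetical assumptions, not part of Calkins’ proposal.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ConsentStatus(Enum):
    """Whether the data owner has agreed to the use of their data."""
    NOT_REQUESTED = "not_requested"
    GRANTED = "granted"
    DENIED = "denied"


@dataclass
class DataSourceRecord:
    """One entry in a hypothetical training-data disclosure manifest."""
    source_name: str          # e.g. "public web corpus" or "customer CRM export"
    is_private: bool          # private data requires consent and compensation
    is_copyrighted: bool      # copyrighted data requires consent and compensation
    contains_pii: bool        # personally identifiable data must be anonymized
    consent: ConsentStatus = ConsentStatus.NOT_REQUESTED
    compensation_agreed: bool = False
    anonymized: bool = False
    notes: Optional[str] = None

    def violations(self) -> List[str]:
        """Return which of the four principles this record would breach."""
        problems = []
        if (self.is_private or self.is_copyrighted) and self.consent is not ConsentStatus.GRANTED:
            problems.append("consent missing for private or copyrighted data")
        if (self.is_private or self.is_copyrighted) and not self.compensation_agreed:
            problems.append("compensation not agreed for private or copyrighted data")
        if self.contains_pii and not (self.anonymized and self.consent is ConsentStatus.GRANTED):
            problems.append("PII not anonymized or permission not obtained")
        return problems


@dataclass
class DisclosureManifest:
    """Hypothetical disclosure document an AI provider could publish (principle 1)."""
    model_name: str
    sources: List[DataSourceRecord] = field(default_factory=list)

    def compliant(self) -> bool:
        """True if every listed source satisfies all four principles."""
        return all(not record.violations() for record in self.sources)
```

In practice, a provider publishing such a manifest alongside a model would be making the kind of provenance disclosure Calkins is calling for; the format itself is left open in his proposal.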
The next phase of AI: A race for trust, not just data
“The next phase of AI is not like that at all,” Calkins explained, referring to the current perception of AI development as a race to consume the most data. “The next phase of AI is actually a race to trust, believe it or not, and we’re about to start phase two right now. We’ve kind of maxed out phase one, where we’ve had that race. Now, the new race is a race to trust.”
Calkins’ vision for the future of AI is one where trust is the key currency. He argues that by building trust with users, AI systems will be able to access more personal and pertinent data, unlocking far greater value than the current model of indiscriminate data consumption. However, this trust will only be possible if AI providers embrace responsible development practices and prioritize user privacy and consent.
Appian poised to benefit from shift towards trustworthy AI
Appian itself stands to benefit from such a shift. The company’s low-code platform lets organizations quickly build and deploy AI-powered applications while maintaining strict control over data privacy and security, and a visible commitment to responsible AI development could become a competitive advantage as more enterprises seek out vendors that prioritize user trust.
Calkins’ announcement comes at a time when the AI industry is facing increasing scrutiny from regulators, lawmakers, and the general public. The rapid advancement of AI technology has raised concerns about job displacement, algorithmic bias, and the potential for misuse by bad actors. By proposing these guidelines, Calkins is not only attempting to address these concerns but also positioning Appian as a leader in the responsible AI movement.
While Calkins has not yet secured any launch partners for his proposed guidelines, he remains optimistic about their potential impact. “I’m going to reach out now. Basically, this is our launch moment,” he said. “So I’m going to lay out the terms and see who wants to join me. And I hope that if I can make this simple enough, I’ll gather some support.”
The stakes could not be higher for the AI industry. As Calkins explained, the industry has “maxed out phase one,” where the focus was on consuming as much data as possible. The next phase, he argues, will be defined by trust. The companies that can build trust with users and demonstrate a commitment to responsible AI development will be the ones that thrive in this new era.
Calkins’ proposed guidelines offer a roadmap for how the industry can navigate this transition. By prioritizing transparency, user consent, and respect for intellectual property, AI providers can build the trust necessary to unlock the full potential of this transformative technology. The question now is whether the rest of the industry will follow Appian’s lead.
As the AI race enters this new phase, one thing is clear: the winners will be those who can build not just the most powerful algorithms, but the most trustworthy ones. With his bold vision and commitment to responsible development, Matt Calkins has positioned Appian at the forefront of this movement. The rest of the industry would be wise to take note.