Reid acknowledged that errors are bound to occur as AI becomes more prevalent in Google’s search and other products. However, she encouraged employees to continue taking risks and pushing out new features, suggesting that the company can address issues as they are discovered by users and staff. “We should take [risks] thoughtfully. We should act with urgency. When we find new problems, we should do the extensive testing but we won’t always find everything,” she said at the meeting, according to a CNBC report.
“It is important that we don’t hold back features just because there might be occasional problems, but more as we find the problems, we address them,” Reid said.
Google needs employees’ help in improving its AI tools
Google has taken steps to address the issues that have arisen with its AI products, with Reid noting in a blog post that the company had made more than a dozen technical improvements to the AI Overviews feature.
Despite conducting extensive testing and “red teaming” to identify vulnerabilities before launching AI products, Reid acknowledged that these efforts have limits: “No matter how much red teaming we do, we will need to do more.” She also highlighted the challenges of understanding the quality and context of webpage content, and encouraged employees to report any issues they encounter.
She stated that some user queries were intentionally adversarial and that many of the worst examples circulating online were fake, noting, “People actually created templates on how to get social engagement by making fake AI Overviews so that’s an additional thing we’re thinking about.”
Google has faced setbacks with other AI products as well, such as its Gemini chatbot and image generation tool, which were criticised for inaccuracies and biases. However, Reid’s stance indicates that the company is determined to continue taking risks and refining its AI offerings based on user and employee input.