Google’s red team, which specialises in artificial intelligence systems, has released a report detailing typical attack types and the lessons learned. The company announced the formation of its AI Red Team only a few weeks after the launch of the Secure AI Framework (SAIF), which aims to provide a security framework for the development, use, and defence of AI systems.
In the new report, Google highlights the value of red teaming AI systems, the kinds of attacks red teams can simulate, and lessons for other organisations that may want to set up a team of their own. “Although closely associated with traditional red teams, the AI Red Team also possesses the requisite subject-matter knowledge of AI to execute sophisticated technical attacks against AI systems,” the company said.