Google is holding fruitful preliminary discussions with European Union regulators about the bloc’s revolutionary artificial intelligence laws and how it and other organisations can build AI safely and responsibly.
The AI Act is a proposed European law on artificial intelligence (AI), and it would be the first comprehensive AI law from a major regulator anywhere in the world. The law sorts AI applications into three risk categories. First, applications and systems that pose an unacceptable risk are banned outright. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, must follow strict legal requirements. Finally, applications that are not explicitly prohibited or listed as high-risk are largely unregulated.
How Has Google Approached the Newly Proposed AI Regulations?
Google is developing tools for dealing with a number of the EU’s concerns about AI, including the fear that it will become more difficult to distinguish between content generated by humans and content generated by AI.
Thomas Kurian, CEO of Google Cloud, said, “We’re having productive conversations with the EU government. Because we do want to find a path forward. These technologies have risk, but they also have an enormous capability that generates true value for people.”
Google is developing technologies to help people differentiate between human- and AI-generated content. At its I/O event last month, the company unveiled a “watermarking” solution that labels AI-generated images.
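Google has not published the details of its watermarking system, but the general idea of tagging an image so the label survives casual viewing can be illustrated with a classic, much simpler technique: least-significant-bit (LSB) embedding. The sketch below is purely illustrative and is not Google’s method; the `WATERMARK` label and the flat list of pixel values are assumptions for the example.

```python
# Illustrative LSB watermarking sketch (NOT Google's actual technique).
# Each bit of a short label is written into the least-significant bit of
# successive pixel intensity values, leaving the image visually unchanged.

WATERMARK = "AI"  # hypothetical label marking the image as AI-generated

def embed_watermark(pixels, label):
    """Return a copy of `pixels` with `label` encoded in the LSBs."""
    bits = [(ord(ch) >> i) & 1 for ch in label for i in range(7, -1, -1)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Read back `length` characters from the pixel LSBs."""
    chars = []
    for c in range(length):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (pixels[c * 8 + i] & 1)
        chars.append(chr(byte))
    return "".join(chars)

# Example: 16 grayscale pixel values (one per bit of the 2-character label).
pixels = [200, 201, 198, 197, 202, 203, 199, 200,
          150, 151, 148, 149, 152, 153, 150, 151]
tagged = embed_watermark(pixels, WATERMARK)
print(extract_watermark(tagged, len(WATERMARK)))  # → AI
```

Real production watermarks (such as the SynthID approach Google has described publicly) are far more robust, surviving cropping, compression, and re-encoding, which naive LSB embedding does not.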
AI systems are evolving at breakneck speed, with tools like ChatGPT and Stable Diffusion capable of producing results that exceed the capabilities of previous iterations of the technology. ChatGPT and similar tools are increasingly being used as companions by computer programmers, for example, to help them generate code.