The webpage reports on the entry into force of the European Artificial Intelligence Act (AI Act), the world's first comprehensive regulation on artificial intelligence (AI). The regulation is intended to ensure that AI developed and used in the EU is trustworthy and that people's fundamental rights are safeguarded, while encouraging the uptake of AI technology and fostering a supportive environment for innovation and investment.
The AI Act categorizes AI systems into four risk categories: minimal risk, specific transparency risk, high risk, and unacceptable risk. Minimal-risk AI systems, such as recommender systems and spam filters, face no obligations under the Act, while AI systems posing specific transparency risks, such as chatbots and deepfakes, must disclose their artificial nature to users. High-risk AI systems, such as those used for recruitment or loan assessments, must comply with strict requirements concerning data quality, user information, and cybersecurity, among others. AI systems posing an unacceptable risk, meaning a clear threat to people's fundamental rights, are banned.
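The four risk tiers and their headline obligations described above can be sketched as a simple lookup table. This is a hypothetical illustration only: the tier names and one-line obligation summaries are paraphrased from this summary, not taken from the Act's own text.

```python
# Hypothetical mapping of the AI Act's four risk tiers to the headline
# obligations summarized above (illustrative paraphrase, not legal text).
RISK_TIERS = {
    "minimal": "no obligations (e.g. spam filters, recommender systems)",
    "specific transparency": "must disclose AI nature to users "
                             "(e.g. chatbots, deepfakes)",
    "high": "strict requirements on data quality, user information, "
            "cybersecurity (e.g. recruitment, loan assessments)",
    "unacceptable": "banned (clear threat to fundamental rights)",
}


def obligations_for(tier: str) -> str:
    """Return the headline obligation summary for a given risk tier."""
    return RISK_TIERS[tier]
```

For instance, `obligations_for("unacceptable")` returns the one-line summary noting that such systems are banned.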
The Act also introduces rules for general-purpose AI models and establishes enforcement mechanisms, including the designation of national competent authorities for market surveillance. Non-compliance can result in fines of up to 7% of global annual turnover for violations of the prohibitions on certain AI applications.
The majority of the AI Act's rules will start applying on 2 August 2026. However, the prohibitions on AI systems deemed to present an unacceptable risk will already apply six months after the Act's entry into force, and the rules for general-purpose AI models after 12 months.
Lastly, the Commission is developing guidelines for the Act's implementation and has launched the AI Pact, encouraging AI developers to voluntarily adopt key obligations of the Act ahead of the legal deadlines.
SummaryBot via The Internet
Nov. 17, 2024, 12:49 a.m.