The European Union's first ever rules for artificial intelligence are expected to be approved by MEPs in Strasbourg.
The proposed law, which is likely to have widespread support in the European Parliament, will require AI models such as ChatGPT and general-purpose AI systems to comply with transparency obligations before they are put on the market.
The landmark legislation will see machine-learning systems divided into different risk categories.
Under the new rules, any AI model introduced in the EU will have to be shown to be trustworthy and safe before it is approved for the market.
Those considered high-risk – used for example in critical infrastructure, law enforcement, or elections – will be subject to stringent rules before they enter the market.
AI applications that pose a clear risk to fundamental rights – such as biometric systems based on sensitive characteristics, social scoring, or AI used to manipulate human behaviour – will be banned outright.
The law will also regulate governments’ use of AI in biometric surveillance, as well as systems such as ChatGPT.
Foundation models such as ChatGPT, along with other general-purpose AI systems, will have to meet those transparency obligations before they are put on the market, including drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used for training.
Under the law, chatbots and AI software that can create manipulated narratives and images, such as deepfakes, will have to clearly label their content as AI-generated.
The legislation, which follows almost two years of negotiations, is widely expected to be passed by MEPs later today.
The new rules will come into force by the end of this month.
Member states will have a year to comply with obligations for general-purpose AI, and two years for high-risk models.
Fine Gael MEP Deirdre Clune, one of the MEPs leading the drafting of the EU’s AI Act, described it as perhaps the most significant piece of legislation to come from the European Parliament in the past five years.
Ms Clune said: “We cannot allow AI to grow in an unrestricted and unfettered manner. This is why the EU is actively implementing safeguards and establishing boundaries.
“The objective of the AI Act is simple: to protect users from possible risks, promote innovation and encourage the uptake of safe, trustworthy AI in the EU.
“This will mean that companies developing large language models and generative AI will have to follow new transparency rules in Europe if they wish to continue operating in the 27 member states.”
Article source – MEPs set to approve new EU artificial intelligence legislation – RTE