
 Database index


Spain, Controlled testing environments (“regulatory sandboxes”) for Artificial Intelligence

Full reference
Real Decreto 817/2023, de 8 de noviembre, que establece un entorno controlado de pruebas para el ensayo del cumplimiento de la propuesta de Reglamento del Parlamento Europeo y del Consejo por el que se establecen normas armonizadas en materia de inteligencia artificial
Spanish Government - Ministry of Economic Affairs and Digital Transformation
Institutional level
National level
Type of Source
National legislation
Source national detail
Primary legislation
Stage of drafting
Territorial scope
Project area
Cross sector, general scope
Law area
Civil liability
Procedural law
Contract law
Criminal law
Privacy / data protection
Consumer protection
Fundamental rights protection
Product liability
Product safety
Medical devices
Biometric data

Summary of the law

The Spanish Government, in collaboration with the European Commission, has launched the first controlled testing environment (sandbox) to assess how the requirements applicable to high-risk AI systems can be implemented. The aim is to obtain evidence-based guidelines, grounded in experimentation, that will help entities align with the proposal for an Artificial Intelligence Act.

Specific provision(s) regarding AI and data protection
Article 16:

“1. AI system providers and users participating in the controlled testing environment shall comply with the provisions of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (GDPR) (...). This is without prejudice to the fact that some of the high-risk systems indicated in Annex II of this Royal Decree that may participate in the controlled test environment must have a prior analysis of the lawfulness of the processing.

2. Data processing carried out within the framework of the controlled test environment shall comply with the rules set out in the previous paragraph.

3. Acceptance of participation in the controlled test environment shall imply acknowledgement of compliance with data protection legislation”.
Personal scope of the instrument
Public Administrations and AI system providers/users who are domiciled or have a permanent establishment in Spain
Material scope of the instrument

According to article 1: “The purpose of this Royal Decree is to establish a controlled test environment to verify compliance with certain requirements by some AI systems that may pose risks to the safety, health and fundamental rights of individuals. It also regulates the procedure for selecting the systems and entities that will participate in the controlled test environment”.

In particular, article 11.1 provides that participants in the controlled test environment must fulfil the following requirements:

a)    The establishment, implementation, documentation and maintenance of a risk management system.
b)    In the case of AI systems involving training with data, it shall be guaranteed that the development has been or will be carried out on training, validation and test data sets that meet the quality criteria specified in the indications provided by the competent body.
c)    The technical documentation of the artificial intelligence system, as listed in Annex VI, shall be prepared in accordance with specifications to be provided by the competent body. This documentation shall be updated throughout the duration of the controlled test environment.
d)    AI systems shall technically allow for the automatic recording of events (logs) throughout the life cycle of the system. These logs shall be kept by the participant.
e)    The AI system has been or will be designed and developed in such a way as to ensure that its operation is sufficiently transparent for the users of the system to interpret the results and to avoid generating discriminatory biases.
f)    The AI system shall be accompanied by instructions for use in electronic format, including information on the system's features and performance that is concise, complete, correct, up-to-date, clear, relevant, accessible and understandable to the users of the system.
g)    AI systems shall have been or be designed and developed in such a way that they can be monitored by natural persons during periods of use. They shall include appropriate human-machine interfaces for this purpose. Where such monitoring cannot occur in real time, this shall be stated in the communications related to the transparency of the system.
h)    AI systems shall have been or be designed and developed in such a way as to achieve, taking into account their intended purpose, an adequate level of accuracy, robustness and cyber-security. These dimensions shall operate consistently throughout their life cycle.

AI system(s) involved
  • All types of AI systems
  • Ministry of Economic Affairs and Digital Transformation
Fundamental rights involved
  • Right to data protection
  • Right to health
  • Right to non-discrimination
Principles expressly applied
  • Accountability
  • Due process
  • Explainability
  • Non-discrimination
  • Proportionality
  • Transparency
Damages (article 17) and early termination of testing (article 25)

Case author
Laura Herrerías Castro
PhD Candidate
Universitat Pompeu Fabra