The Challenges of Traceability in the Era of Black Boxes
DOI: https://doi.org/10.35319/lawreview.202618139

Keywords: AI Act, Black Boxes, synthetic content, algorithmic opacity, responsibility, traceability

Abstract
This article offers a critical examination of the AI Act, centering its analysis on the functional framework the Act establishes and the way it allocates distinct roles and responsibilities among the various actors in the value chain of artificial intelligence systems. In particular, it examines the distribution of obligations throughout the lifecycle of such systems, with special attention to contexts of algorithmic opacity in Black Box systems, where internal decision-making processes are inaccessible or difficult to reconstruct even for their own developers. The article questions whether the AI Act can ensure the effective attribution of responsibilities as technology advances and synthetic content proliferates, since under those conditions technical traceability becomes increasingly difficult. The central argument is that a possible solution lies in strengthening the evidentiary framework and incorporating the figure of the Logger, which would preserve decision-making records and prevent obligations from being reduced to merely formal and ineffective categories. Nevertheless, it concludes that practical implementation poses a significant challenge: exhaustively decomposing the decision-making chain of an AI system whose design prioritizes efficiency over explainability would, in many cases, be incompatible with that very operational logic.
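To make the proposed Logger more concrete, the sketch below illustrates one way decision-making records could be preserved in practice: an append-only, hash-chained log in which each entry commits to the previous one, so that any later alteration of a past record is detectable. This is a minimal illustration in Python, assuming a hash-chained design in the spirit of the verifiable logs discussed by Alda Rodríguez et al. (2026); the names DecisionLogger, record_decision, and verify_chain are hypothetical, and nothing here is a mechanism prescribed by the AI Act or by the article itself.

```python
import hashlib
import json
import time

# Hypothetical sketch of a "Logger": an append-only, hash-chained record of
# AI decisions. Each entry embeds the hash of the previous entry, so any
# later tampering with a past record breaks the chain and is detectable.
class DecisionLogger:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the first entry

    def record_decision(self, model_id, inputs, output, model_version):
        entry = {
            "timestamp": time.time(),
            "model_id": model_id,
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "prev_hash": self._last_hash,
        }
        # Canonical JSON serialization so the hash is reproducible.
        payload = json.dumps(entry, sort_keys=True).encode("utf-8")
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry["hash"]

    def verify_chain(self):
        """Recompute every hash; returns False if any record was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode("utf-8")
            if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


# Usage example: log a decision and check the integrity of the trail.
log = DecisionLogger()
log.record_decision("credit-scorer", {"income": 42000}, "deny", "v1.3")
print(log.verify_chain())  # True while the log is intact
```

Note that such a log secures the evidentiary trail, documenting what a system decided, when, and under which model version; it does not, by itself, make an opaque model explainable, which is precisely the tension the article identifies.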
References
Alda Rodríguez, Á., Díaz López de la Llave, G., & Horrillo, P. (2026). Construyendo un log verificable, parte 1: Ideas principales [Building a verifiable log, part 1: Main ideas]. BBVA. https://www.bbva.com/es/innovacion/construyendo-un-log-verificable-parte-1-ideas-principales/
Ashkara, Z. (2025). AI liability directive withdrawn: EU impact 2025. AI Act Blog. https://www.aiactblog.nl/en/posts/ai-liability-directive-withdrawal
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1). https://doi.org/10.1177/2053951715622512
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning (arXiv:1702.08608). arXiv. https://doi.org/10.48550/arXiv.1702.08608
Ghiurău, D., & Popescu, D. E. (2024). Distinguishing reality from AI: Approaches for detecting synthetic content. Computers, 14(1), 1. https://doi.org/10.3390/computers14010001
Molnar, C. (2019). Interpretable machine learning: A guide for making Black Box models explainable. Leanpub. https://christophm.github.io/interpretable-ml-book/
European Commission. (2022). Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (COM(2022) 496 final) (AI Liability Directive). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52022PC0496
European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (AI Act). Official Journal of the European Union. https://eur-lex.europa.eu/legal-content/ES/TXT/?uri=CELEX%3A32024R1689
Song, P. (2026). Model cards and data sheets: Documentation standards for ML. ML Journey. https://mljourney.com/model-cards-and-data-sheets-documentation-standards-for-ml/
Varughese, J. (2026). ¿Qué son las redes generativas adversariales (GAN)? [What are generative adversarial networks (GANs)?]. IBM. https://www.ibm.com/mx-es/think/topics/generative-adversarial-networks
License
Copyright (c) 2026 UCB Law Review

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
The U.C.B. Law Review is an Open Access journal; it is therefore completely free to access, read, search, download, distribute, and lawfully reuse in any medium for non-commercial purposes only, provided the original work is properly cited.