The EU AI Act is Now in Force: What Built Environment Professionals Need to Understand

This informal CPD article ‘The EU AI Act is Now in Force: What Built Environment Professionals Need to Understand’ was provided by Karolina Juste, founder of BIM KARELA, a professional BIM training provider delivering CPD-certified training internationally and strategic BIM services across the UK and the European Union.

Artificial Intelligence is no longer a future concept in the workplace. Across many sectors, professionals are already using AI tools to analyse information, generate content, support decisions and accelerate delivery. What is less widely understood is that a new regulatory reality is now in place, and it does not apply only to AI developers.

The European Union AI Act entered into force in August 2024 [1]. While its obligations apply in stages, some requirements are already active, and others are approaching quickly. Crucially, the Act is not only about how AI systems are built; it also affects how AI is used in the workplace. For professionals whose work carries responsibility, this matters.

This is Not a Technology Issue, It is a Responsibility Issue

A common assumption around AI is that responsibility shifts to the tool, the vendor or the technology itself. In professional environments, that assumption has never been true. Using AI does not remove professional accountability.

Just as relying on software outputs has never absolved professionals from responsibility for decisions, costs, safety or outcomes, AI-generated outputs do not replace human judgement. They introduce new layers of risk, particularly where decisions are influenced by information that may be opaque, unverified or poorly understood. The AI Act reinforces a principle that already exists in professional practice: those who use tools remain responsible for how they are used and relied upon.

Why the Timeline Matters More Than Many Realise

One reason awareness is low is that the AI Act applies gradually [2]. This has created a false sense that compliance is “years away”.

The timeline looks like this:

  • August 2024: The Act entered into force. This marked the start of a new regulatory framework for Artificial Intelligence across the European Union. While most obligations apply later, the direction of travel was set: AI use would no longer be unregulated in professional environments.
  • February 2025: Rules on prohibited AI practices and AI literacy began to apply. AI literacy does not require technical expertise [3]. It refers to a basic understanding of how AI tools are used at work, what they can and cannot do, and when human judgement and validation are required. For many organisations, this is the first time awareness and training around AI use have become an explicit expectation rather than a voluntary initiative.
  • August 2025: Obligations related to general-purpose AI models start to take effect. General-purpose AI refers to tools that are not designed for a single task [2], but can be used across many activities, such as generating text, analysing information or supporting decision-making. These tools are already widely used in workplaces, often informally and without clear governance. From this point, expectations around transparency and responsible use begin to apply more explicitly.
  • August 2026: Most high-risk system requirements become fully applicable. High-risk AI systems are not defined by how advanced they are, but by the impact their outputs can have [2]. Systems that influence decisions related to people, safety, compliance, access to services or significant financial outcomes fall into this category. In professional environments, this means AI used to support decisions, not just to automate tasks, carries higher expectations around governance, documentation and oversight.

The key point is this: AI literacy is already an obligation [3].

Organisations are now expected to ensure that people using AI systems at work understand the implications, limitations and risks of those tools, especially where decisions or outputs have consequences. This is not about becoming a technical expert. It is about informed, responsible use.

The signal is this: organisations and professionals are expected to understand, govern, and take responsibility for how AI is used in their work, even when the tools are third-party, widely available, or perceived as “assistive”.


AI Introduces New Information Risks

AI tools are powerful because they abstract complexity. That same abstraction is where risk enters. Common issues professionals are already encountering include:

  • unclear data sources behind AI-generated outputs
  • lack of traceability in how conclusions were reached
  • undocumented assumptions embedded in prompts or models
  • overconfidence in outputs that appear polished or authoritative

These are not hypothetical problems. They mirror long-standing issues seen with unmanaged information, automated processes and poorly governed digital workflows [4]. The difference with AI is speed and scale: errors, bias or misunderstanding can propagate faster, and with greater apparent confidence, than they would through traditional tools.

Governance and Ethics are Professional Disciplines

Discussions about AI ethics are often framed as abstract or moral debates. In practice, ethics in professional environments is about discipline.

Responsible AI use means:

  • clarity on where and how AI may be used
  • understanding what AI outputs can and cannot be relied upon for
  • documenting decisions influenced by AI-generated information
  • ensuring people are trained to question outputs, not just accept them

This is not about restricting innovation. It is about ensuring that innovation does not quietly undermine professional standards, compliance or trust.

Where AI Risk Becomes Invisible

Most organisations do not consciously decide to use AI irresponsibly. Risk usually emerges quietly, through everyday practices.

For example:

  • AI is used to summarise, interpret or generate information that feeds directly into decisions
  • Outputs are shared without clarity on their origin or limitations
  • Teams assume someone else has assessed whether AI use is appropriate
  • No one is quite sure who is accountable when AI influences outcomes

In many cases, organisations struggle to answer basic questions such as:

  • Where is AI already being used in our workflows?
  • For what types of decisions are AI outputs relied upon?
  • Do people understand when AI outputs require validation?
  • Is there any shared understanding of acceptable use?

When these questions cannot be answered clearly, AI risk is not theoretical; it is already present. This is where many professionals first realise that AI governance is not a future problem, but a current capability gap.

Awareness is the First Step, Not the Final One

The AI Act does not require professionals to stop using AI. It requires them to use it with awareness, competence and accountability. For many organisations, the immediate challenge is not compliance paperwork; it is understanding where AI is already influencing work and whether people are equipped to manage that influence responsibly. AI itself is not the risk. Uninformed use, combined with professional responsibility, is where risk truly sits.

We hope this article was helpful. For more information from BIM KARELA, please visit their CPD Member Directory page. Alternatively, you can go to the CPD Industry Hubs for more articles, courses and events relevant to your Continuing Professional Development requirements.


References

[1] European Commission. Regulatory framework on Artificial Intelligence (AI Act).

[2] European Union. Artificial Intelligence Act – implementation timeline.

[3] European Commission. AI literacy and responsible use of Artificial Intelligence.

[4] International Organization for Standardization. ISO/IEC 42001 – Artificial Intelligence Management System.