The EU AI Act Gets a Practical Update: What Job Boards Need to Know

Disclaimer: This article is provided for general information and discussion purposes only. It does not constitute legal advice and should not be relied on as such. Job boards and talent platforms should seek independent legal advice to assess how the EU AI Act and related measures apply to their specific situation.


Regulation rarely moves at the same pace as the technology it is trying to govern. The EU AI Act is no exception. When it entered into force in August 2024 as a risk-based rulebook for artificial intelligence, it set out an ambitious timetable, one that, in hindsight, outpaced the practical infrastructure needed to support it. The technical standards were still being developed. National supervisory bodies were still getting organised. And the market, including the online recruiting sector, was still working out what compliance would actually look like in practice.

That is the context behind the Digital Omnibus on AI. Proposed by the Commission in November 2025 and adopted as a Council negotiating position on 13 March 2026, it is not a rollback of the AI Act. It is a recalibration. More realistic timelines, simpler administrative requirements, and stronger support for growing businesses, all while keeping the core protections firmly in place.

For job board operators and online recruiting platforms, the picture is now much clearer: this is primarily about giving the market more realistic time to prepare, not about removing the core obligations. The central change is the delay for high-risk AI obligations. Under the current draft, stand-alone high-risk AI systems would need to comply by 2 December 2027, while high-risk AI embedded in regulated products would follow by 2 August 2028. 

What the AI Act Actually Regulates in Recruiting

Before unpacking what changed, it helps to understand what was already there. The AI Act classifies AI systems by the risk they pose to people’s rights and safety. In the employment space, certain tools have always sat in the high-risk category: systems designed to place targeted job advertisements as part of a selection process, to filter applications, or to evaluate candidates.

This has never been purely about generative AI or chatbots. It is about algorithmic influence over who gets access to opportunities. As Jobiqo explored in the Job Board Revolution Report 2026, that question is becoming more pressing, not less. The market is shifting from passive listings toward outcome-driven recruiting: matching, ranking, behavioural targeting, and automated workflow support. The more a platform shapes who sees which role and who gets surfaced to an employer, the closer it moves to the regulatory boundary. And that boundary has not moved. What the Omnibus does is give platforms more time and clearer guidance to prepare for it responsibly.

The Timeline Has Shifted, The Regulations Have Not

The most tangible change is to the compliance schedule. Under the Omnibus proposal, stand-alone high-risk AI systems now face a backstop deadline of 2 December 2027, while high-risk AI embedded in regulated products must comply by 2 August 2028. Stand-alone systems are provided as their own service (for example, tools or algorithms used to rank candidates), whereas embedded systems are AI components built into a broader regulated product or suite that is certified as a whole.

AI in recruiting remains in the high-risk category where it directly influences employment opportunities, such as targeted job ads, application filtering, or candidate evaluation. The critical point is that classification depends on what a system is designed to do, not what technology it uses. A search tool that helps candidates find relevant roles by keyword or location looks very different, from a regulatory perspective, from an algorithm that scores, ranks and filters applicants on an employer's behalf. Getting that distinction right, and documenting it, is one of the most important things a platform can do right now.

The original timeline was always ambitious, given that the technical standards and supervisory bodies needed to support it were still being developed. The adjustment acknowledges that reality and gives the market more time to comply. For job boards, the extended runway is meaningful, but only if it is treated as a design window rather than a reason to pause. The platforms best positioned for what follows are those that use this period to weave AI governance into how their products work: documenting intended use, building in human oversight, and developing explainability for the employers and candidates who rely on their tools.

Deepfakes, Fairness and Bias

Two other strands of the Omnibus deserve attention from recruiting platforms specifically. The first is synthetic content. In a direct response to growing concerns about deepfake misuse, the Council has added an explicit prohibition on AI systems capable of generating non-consensual imagery or abusive material. For platforms that host video profiles, employer branding content or candidate introductions, this is a prompt to review content safeguards and moderation policies. Providers of synthetic content tools already on the market before August 2026 have until February 2027 to meet the new marking requirements, giving platforms a defined window to get their house in order. Those that move early, with clear policies and visible safeguards in place, will find that doing so strengthens their trust proposition with both candidates and employers.

The second is bias detection. The Omnibus clarifies a narrow, safeguarded legal path to use special categories of personal data for detecting and correcting bias in AI systems, under strict necessity and strong technical and organisational protections. This matters for any platform using AI in screening or matching. Done carefully and with proper governance, fairness auditing becomes something a platform can demonstrate to employers, candidates and regulators alike, turning a compliance requirement into a genuine differentiator.

At Jobiqo, this is not a new conversation. We have been exploring algorithmic fairness through dedicated product development, precisely because we believe fairer matching is not just a regulatory obligation; it is better recruiting.

Smaller Platforms Get More Support

One of the quieter but important changes in the Omnibus is the extension of small and medium-sized enterprise (SME) support measures to a wider category that covers many scaling job board businesses that have grown beyond classic SME thresholds but are not yet large enterprises. Simplified documentation templates, proportionate quality management requirements, lower fine ceilings and priority access to regulatory sandboxes all now extend to this group.

After all, some of the most meaningful product innovation happens at precisely this stage of growth, and it makes sense for regulation to reflect that.

Use the Time as a Design Window, Not a Pause Button

The Council position now moves into trilogue negotiations with the European Parliament and the Commission. Adjustments are still possible, but the direction is settled. The EU is not retreating from AI oversight in recruiting. It is building a framework designed to last and giving the market enough time to meet it properly.

  • Use the timeline to build, not just comply. The extended deadlines are an invitation to integrate AI governance into your product roadmap thoughtfully, starting with the areas that matter most: transparency in matching logic, candidate control over data usage, and clear communication about how AI supports rather than replaces human decisions in hiring. Document intended use, data flows and human oversight points now, well before the 2027 deadline creates pressure to rush. And if bias detection involving sensitive data is on your roadmap, design the governance framework from day one: strict access controls, minimal retention, and a clear justification record built in from the start.

  • Turn trust into differentiation. With new prohibitions on harmful synthetic content and clearer bias mitigation pathways, job boards that proactively address these areas can stand out. Introduce clear labelling where AI is used in job recommendations, candidate suggestions or branded content, especially when generative tools are involved. Consider publishing plain-language explainers about your AI practices or offering employers insights on how your tools support fair, efficient hiring.

  • Leverage support mechanisms early. Engage with national competent authorities, explore regulatory sandboxes for testing innovative features, and use the simplified documentation templates once they are published. National competent authorities are required to provide implementation guidance to SMEs and small mid-caps (SMCs).

At Jobiqo, we have never seen regulation as something to manage around. We see it as a prompt to build better and to lead by example. Platforms where AI genuinely supports human decision-making, where candidates feel seen and treated fairly, and where employers get tools they can trust. The Omnibus gives the market more time and more clarity. The question is simply what you choose to do with both.


Further reading:

  • Heise Online: Omnibus AI Act
  • EU AI Act Implementation Timeline
  • Council Press Release, 13 March 2026
