
Artificial intelligence in HR and recruitment

AI is transforming HR and recruitment, but legal, ethical, and operational risks remain significant, says Steph Marsh

ARTIFICIAL intelligence is rapidly reshaping recruitment and HR processes, with research indicating that a third of UK companies expect productivity gains from its adoption. However, this technological transformation brings substantial operational, ethical and legal risks that many businesses in the flooring industry remain unprepared to manage effectively.

For contractors sifting through numerous applications while simultaneously managing active installations across multiple sites, the appeal of involving AI in this process is easy to see: screening CVs, matching candidates to specifications, coordinating interview schedules and generating applicant assessments.

Some organisations have begun deploying AI screening technology to conduct initial candidate interviews virtually, with chatbots handling preliminary assessment stages without human involvement. Once employees are on board, AI systems can track performance metrics, monitor site attendance and even evaluate productivity patterns.

In practice, however, poorly managed AI systems create significant legal exposure that could prove costly for businesses operating on tight commercial margins.

The discrimination dilemma
Algorithms developed using biased training data risk perpetuating or amplifying discriminatory selection decisions, presenting particular challenges for flooring contractors seeking to build diverse teams and comply with modern equality standards.

Amazon’s abandoned AI recruitment platform was trained predominantly on male applicant data and went on to systematically disadvantage female candidates. This demonstrates how automated systems can entrench and magnify workforce inequalities rather than remedy them.

The Equality Act 2010 establishes that employers must prevent recruitment processes from producing discriminatory results, whether through direct or indirect discrimination.

Automated tools that disadvantage protected groups, whose characteristics include sex, race, age, disability and religion, may trigger employment tribunal claims. Crucially, legal accountability rests entirely with the employing contractor, not the technology supplier providing the AI platform.

Data protection requirements
The UK General Data Protection Regulation (GDPR) and Data Protection Act 2018 establish specific legal requirements for employers deploying AI technologies. GDPR explicitly safeguards data subjects, including job applicants and existing employees, from consequential decisions that rest solely upon automated processing.

Companies cannot depend exclusively on AI for recruitment decisions, performance evaluations, disciplinary actions or termination determinations; there must be practical human involvement.

Data security presents additional serious concerns requiring careful consideration. Flooring businesses must recognise the risks surrounding commercially sensitive information, including project specifications, client databases, pricing strategies, supplier relationships, and employee records, which could be unintentionally exposed through public AI systems.

For contractors operating in competitive tender environments where commercial confidentiality is paramount, such data breaches could prove catastrophic, potentially compromising competitive advantage and damaging hard-won client relationships built over years of reliable project delivery.

Essential safeguards
Businesses integrating AI technology into recruitment and HR operations should establish comprehensive written policies governing both organisational accountability and staff conduct regarding AI usage. These policies need not be overly complex, but they must be clear, accessible and consistently enforced.

Such policies must identify authorised tools and acceptable applications, alongside expressly prohibited activities, particularly uploading confidential client information, project specifications, pricing data or employee records to public AI platforms.

Clear governance frameworks establishing who monitors AI deployment and how often systems are reviewed are essential. Whilst these measures cannot eliminate risk completely, they substantially reduce vulnerability to discrimination claims and data protection breaches that could result in significant financial penalties and reputational damage.

Regular system audits are imperative to identify concealed bias within algorithms and verify security integrity, system robustness and ongoing compliance with equality, data protection and employment legislation. Algorithm training data requires systematic scrutiny to confirm it represents diverse candidate populations across ethnicities, genders, ages and educational backgrounds, not just historical patterns that may reflect past industry imbalances.

Continuous human review of all system outputs remains essential to verify alignment with organisational policies, industry standards and regulatory obligations. This oversight should extend beyond initial recruitment to encompass performance monitoring, disciplinary processes and any automated assessments affecting employees’ terms and conditions.

Fundamentally, human oversight must be embedded throughout every AI-dependent process. This encompasses scheduled reviews, explicit accountability structures for automated systems and thorough training for directors, contracts managers, HR personnel and anyone else utilising these technologies for workforce decisions.

Current legislation is unambiguous – significant decisions affecting individuals must remain under direct human control, with clear documentation demonstrating that meaningful human judgment was exercised.

In conclusion
Effective human supervision, systematic audits and transparent governance frameworks are mandatory for those embracing AI recruitment tools. Delegating decision-making authority entirely to automated systems generates substantial legal exposure and reputational jeopardy that could prove terminal for smaller businesses.

As courts and tribunals adopt increasingly stringent positions on AI-related failures, whether discriminatory recruitment practices, flawed automated dismissals or inadequate procedural fairness, businesses must ensure operational efficiencies never compromise legal compliance or equitable treatment of staff and applicants.

www.coodes.co.uk

Steph Marsh is an employment law specialist and head of employment team at Coodes Solicitors
