The introduction of AI in recruitment brings significant benefits to HR professionals. It is more time- and cost-effective: automated programmes can carry out the more mundane, time-consuming processes, freeing HR professionals to focus on more strategic or complex tasks. AI can also help counter unconscious bias. An HR professional may subconsciously select candidates with a familiar background or work experience that he or she relates to, whereas AI can be programmed to select candidates based on objective criteria only, widening the net of potential recruits.
But there are a number of risks associated with using AI in recruitment. What happens if the AI breaches the law when selecting a candidate? What protective measures should be put in place when using AI? How can we work alongside AI responsibly?
Legal pitfalls
We should be cautious about assuming an AI programme can be completely objective. After all, AI is still programmed by humans, and it may prove difficult, if not impossible, to eradicate all bias (conscious or not) when building an AI system. Machine learning, where computers learn to act based on past results or performance without being explicitly guided, can also lead to discriminatory decisions. For example, if the last five successful candidates selected by the AI programme for interview happen to be male, the programme may “learn” from past experience that male candidates are more likely to be successful, prompting it to select further male candidates. Of course, these types of issues could be addressed through careful programming and algorithm design, but where imperfect human programming or behaviour is involved, creating a completely objective selection system will be difficult.
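To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python – not any real recruitment product; the data and function names are invented for illustration – of a naive scorer that ranks candidates purely by the historical success rate of their group:

```python
from collections import Counter

# Hypothetical historical outcomes: (gender, was_successful).
# The last five successful interviewees all happened to be male.
past_outcomes = [
    ("M", True), ("M", True), ("M", True), ("M", True), ("M", True),
    ("F", False), ("F", False),
]

def success_rate_by_gender(outcomes):
    """Estimate the chance of success for each group purely from past results."""
    totals, successes = Counter(), Counter()
    for gender, ok in outcomes:
        totals[gender] += 1
        successes[gender] += ok  # True counts as 1
    return {g: successes[g] / totals[g] for g in totals}

def score(candidate_gender, rates):
    # The scorer never sees the word "discrimination": it simply ranks
    # candidates by the historical success rate of their group.
    return rates.get(candidate_gender, 0.0)

rates = success_rate_by_gender(past_outcomes)
print(rates)              # {'M': 1.0, 'F': 0.0}
print(score("M", rates))  # 1.0 -> male candidates ranked first
print(score("F", rates))  # 0.0 -> female candidates effectively filtered out
```

Nothing in the code mentions gender bias as a goal, yet the scorer faithfully reproduces the imbalance in its training data – exactly the pattern described above.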
Collecting information to identify a promising candidate could also raise discrimination and data privacy issues, especially if data is collected without the candidate’s knowledge: the candidate may not have been informed about, or consented to, the collection of their personal data. And once personal data has been accessed via social media, it may be difficult for HR professionals, at a later stage in the recruitment process, to make completely objective decisions about a candidate. What if, for example, the AI collects information from a candidate’s Facebook page suggesting the candidate is pregnant? That information may contribute to a decision, made by either the AI programme or the HR professional, not to hire the candidate, which could be discriminatory.
Other issues to consider
If an AI programme is selecting and ranking candidates based on their online profiles, it is likely to miss potential candidates with a smaller online footprint. A promising candidate may fall through the cracks simply because, for example, they do not keep their LinkedIn profile up to date.
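As a simple illustration, the following hypothetical sketch (the field names and weights are invented, not drawn from any real screening tool) ranks candidates by online visibility alone:

```python
# Hypothetical candidate records built from scraped online profiles.
candidates = [
    {"name": "A", "linkedin_updated": True,  "posts": 120, "followers": 900},
    {"name": "B", "linkedin_updated": False, "posts": 2,   "followers": 15},
]

def footprint_score(c):
    # Rank purely on online visibility; qualifications never enter the score.
    return (50 if c["linkedin_updated"] else 0) \
        + c["posts"] * 0.1 + c["followers"] * 0.01

for c in sorted(candidates, key=footprint_score, reverse=True):
    print(c["name"], round(footprint_score(c), 1))
# A 71.0
# B 0.3 -- candidate B may be the stronger hire, but with a stale profile
# and a small footprint they rank last and may never reach a human reviewer.
```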
AI in recruitment reduces the human factor in HR. Can AI truly determine whether a candidate will be the right fit for the company’s culture? During an interview, a recruiter’s connection and chemistry with the candidate can be just as important as the right qualifications. This intangible but intrinsically important aspect of recruiting would be difficult for a robot to replicate completely – for the time being, at least.
Liability under Hong Kong law
Hong Kong currently does not have a legal or regulatory framework governing AI. For a candidate or employee who has been treated unlawfully by AI, existing laws do not provide a clear path to compensation. A number of parties could be held liable if an AI system breaks the law and causes harm – the manufacturer who designed and built the product, the party who programmed the AI system, or the employer who bought and used it.
For example, under the Sex Discrimination Ordinance (SDO), it is unlawful for a person to be treated less favourably by a prospective employer because of his or her gender. Under the SDO, an employer can be vicariously liable for the discriminatory acts of its employees unless it can show that it took reasonably practicable steps to prevent any discrimination. Where the discriminatory acts are done by AI, an employer could argue that it should not be held vicariously liable because a robot is not an employee or a legal person under the SDO. Furthermore, the employer could argue that it had invested in the latest technology and had the robot programmed correctly, and so had taken all reasonably practicable steps to prevent discrimination. An employer also owes a more general duty of care to an employee, including to provide a safe workplace, but it is not clear whether damage caused by AI would amount to a breach of this duty.
The manufacturer could be held liable for selling a defective product that went on to discriminate against or otherwise cause harm to an individual. However, the manufacturer could argue that the programmer or employer effectively broke the chain of causation when it programmed or used the robot, so that liability should lie elsewhere. Should two or more parties be held jointly liable? Should there be a strict liability regime? Our current legal framework does not have answers to these complicated questions.
HR takeaways
With the rapid development and adoption of AI, HR professionals need to be equipped for when these issues arise. Although we must wait for legislation to catch up with technology, HR professionals can start preparing now for the impact of AI in recruitment.
Conclusion
AI will dramatically change the way we recruit and manage employees. HR professionals should welcome the benefits AI will bring, but must also be alert to its risks and limitations, especially while the law plays catch-up.