
Risk Insights: AI and Employment Decisions

Leveraging AI in Employment Decisions

Organizations are increasingly using artificial intelligence (AI) in employment-related decisions. According to the Society for Human Resource Management (SHRM), around 25% of organizations use AI for HR processes. This may include the following:

  • Recruitment and candidate screening—AI tools help match potential employees to open positions and scan resumes for keywords and phrases. In some instances, companies may utilize AI to assess candidates’ interview responses or even interact directly with potential hires.
  • Hiring and onboarding—AI streamlines processes during hiring and onboarding, reducing redundancies and checking for incomplete responses. It can also be utilized to personalize and improve the hiring and onboarding experience.
  • Performance evaluation and feedback—AI provides real-time assessments and collects and analyzes performance data. The information can then be used to provide targeted and timely feedback to employees and assist leaders in making promotion and retention decisions.

Although AI can offer several benefits—such as improved efficiency, objectivity and decision-making—it also presents challenges related to biases, transparency and ethical concerns.

To mitigate potential liability and reduce the risks associated with AI use, organizations can implement the following strategies:

  • Comply with applicable laws and regulations. Businesses need to ensure their use of AI in employment-related decisions is legally permissible. Working with the AI vendor to understand its algorithm, consulting with attorneys about the applicable laws and regularly monitoring the technology’s outputs for discriminatory results may help them do so.
  • Develop clear ethical guidelines. Internal policies should address appropriate usage, detail consent procedures for candidates and employees, and emphasize transparency of AI algorithms.
  • Ensure data quality to minimize bias. Because AI can perpetuate unlawful biases, it is imperative that the data fed into the system be accurate, diverse, relevant and complete. This helps the system produce stronger and more compliant results.
  • Implement human oversight and intervention. Human involvement in decision-making processes is crucial to ensure AI systems’ legal and proper functioning.
  • Audit and evaluate AI performance regularly to address emerging risks. Like other systems, AI tools should be audited on a regular basis so their outputs can be analyzed; adjustments and corrections can then be made to improve performance. A simple illustration of one such output check appears after this list.
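As a concrete example of the monitoring and auditing strategies above, the sketch below shows one way a team might periodically compare an AI screening tool's selection rates across applicant groups against the EEOC's "four-fifths" rule of thumb. The data, group labels and helper functions are illustrative assumptions rather than part of any particular vendor's tool, and passing such a check does not by itself establish legal compliance.

```python
# A minimal sketch of a disparate-impact check on an AI screening tool's
# outputs, using the EEOC's "four-fifths rule" as a rough screening metric.
# The records and threshold below are illustrative assumptions only.

from collections import defaultdict

# Hypothetical screening results: (applicant_group, advanced_by_ai_tool)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Compute the share of applicants in each group that the tool advanced."""
    advanced = defaultdict(int)
    total = defaultdict(int)
    for group, was_advanced in records:
        total[group] += 1
        if was_advanced:
            advanced[group] += 1
    return {g: advanced[g] / total[g] for g in total}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest group's rate."""
    highest = max(rates.values())
    if highest == 0:  # no one was advanced; nothing to compare against
        return {g: True for g in rates}
    return {g: (rate / highest) >= threshold for g, rate in rates.items()}

rates = selection_rates(results)
for group, passes in four_fifths_check(rates).items():
    status = "ok" if passes else "review for potential adverse impact"
    print(f"{group}: selection rate {rates[group]:.0%} -> {status}")
```

In practice, a check like this would draw on the organization's actual applicant-tracking data, and any flagged results would be reviewed with legal counsel, consistent with the human-oversight strategy above.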

This document is not intended to be exhaustive nor should any discussion or opinions be construed as legal advice. Readers should contact legal counsel or an insurance professional for appropriate advice. © 2023 Zywave, Inc. All rights reserved.

