
Daniel Glyn-Jones shares his thoughts in the first in a series of insights into the activity we are seeing in the technology and IT services market in 2022

Having collectively dusted ourselves down after 2021, a whirlwind year that saw some of the most frenzied technology hiring many of us have ever seen, we now turn our attention to the year ahead. This is the first in a series of pieces on the activity we are seeing in the technology and IT services market and its possible implications from an executive recruiting perspective.

It is widely agreed that technology regulation is often playing catch-up in a world led by technology innovation. The year ahead looks to be a turning point for the introduction of better-defined and more robust regulatory frameworks across AI ethics, information governance and data privacy, and below are a few thoughts on how a selection of these regulations may manifest.

The proposed regulations covered below: the ‘NYC Measure’, the FTC’s application of the FCRA, the European Commission’s proposed AI Regulation, and the ePrivacy Regulation.

EU

GDPR

Consent is not the only aspect of GDPR that companies should consider when working to stay compliant.

While 2021 was relatively slow for new data privacy regulations, it was quite the opposite for enforcement – with regulators enforcing the law and imposing fines at higher rates than ever before. Going forward, EU data privacy regulations may undergo another change depending on the outcome of the ePrivacy Regulation proposal, which would create additional challenges for privacy teams.

Hot on the heels of the GDPR’s arrival four years ago, additional EU regulation is imminently anticipated and thus the landscape stands to become increasingly complex to navigate.

The ePrivacy Regulation

The ePrivacy Regulation is a set of data privacy rules first tabled by the European Commission to complement GDPR. Originally meant to go into effect alongside GDPR in 2018, the proposal has concluded negotiations and is now due for implementation.

The regulation is centered on cookies and on consent for the storage and processing of data. If the ePrivacy Regulation proposal passes, it can be expected to take effect towards the end of 2023.
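In practical terms, the consent requirements are likely to resemble the consent gating many engineering teams already implement under GDPR: non-essential cookies are only set once a user has explicitly opted in. Purely as a minimal sketch, assuming a Flask application with hypothetical route and cookie names (the regulation itself does not prescribe any particular technical mechanism):

    # Minimal illustrative sketch of consent-gated cookie setting in a Flask app.
    # Route names, cookie names and the consent flag are hypothetical; the
    # ePrivacy proposal does not mandate any particular technical mechanism.
    from flask import Flask, make_response, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("hello")
        # Strictly necessary cookies (e.g. a session) may be set regardless.
        resp.set_cookie("session_id", "abc123", httponly=True)
        # Non-essential cookies (analytics, advertising) only after explicit opt-in.
        if request.cookies.get("consent_analytics") == "granted":
            resp.set_cookie("analytics_id", "xyz789")
        return resp

    @app.route("/consent", methods=["POST"])
    def grant_consent():
        # Record the user's explicit choice; absence of a choice means no consent.
        resp = make_response("consent recorded")
        resp.set_cookie("consent_analytics", "granted", max_age=60 * 60 * 24 * 180)
        return resp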

Enforcement could include fines of up to €30 million or 6 percent of global revenue, making penalties even heftier than those incurred for violations of GDPR.

Platform vendors

Proposed EU regulations pose a potential worry for platform vendors. The threat lies in the push towards greater transparency, which de facto shines a light on much of what was previously a black box: the algorithms at the heart of a platform’s value proposition.

From an information governance standpoint, both the threat and opportunity stem from the market’s attempts to keep pace with the expectation of regulators in exercising greater transparency around how AI-led businesses are making the decisions which affect users and customers.  AI legislation is part of a wider global conversation, with new and evolving regulations gathering pace across jurisdictions.  

The EU's proposal categorizes AI applications as high, medium, and low risk. Low-risk AI involves the use of AI in applications such as video games or spam filters; medium risk includes AI systems such as chatbots; and high risk involves AI used in infrastructure, employment, and private and public services.

A few specific examples of how this may play out include:

U.S.

‘NYC Measure’ and the Artificial Intelligence Video Interview Act

‘The NYC Measure’ has particular relevance for CHROs and recruitment professionals. The measure was passed in response to growing concerns about the use of automated decision-making tools to screen candidates during hiring or promotion processes.

Employers that intend to utilize an automated employment decision tool must first conduct a bias audit and must publish a summary of the results of that audit on their websites. They must also notify all NYC employees and/or job candidates (1) that the tool will be used in connection with the assessment or evaluation of their employment or candidacy, and (2) of the job qualifications and characteristics the tool will use to make that assessment or evaluation.

The category of automated decision-making tools targeted by the NYC measure is “automated employment decision tools,” which the measure defines as “any computational process, derived from machine learning, statistical modelling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” 
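The measure does not prescribe how a bias audit must be conducted. As an illustrative sketch only, one metric commonly used in such audits is the impact ratio, echoing the EEOC’s long-standing four-fifths rule of thumb; the data and function names below are hypothetical:

    # Illustrative only: the NYC measure does not prescribe an audit methodology.
    # This computes each group's selection rate and its impact ratio relative to
    # the highest-rate group; ratios below 0.8 echo the EEOC's four-fifths rule.
    from collections import defaultdict

    def impact_ratios(outcomes):
        """outcomes: iterable of (group, selected) pairs, selected a bool."""
        totals, hires = defaultdict(int), defaultdict(int)
        for group, selected in outcomes:
            totals[group] += 1
            hires[group] += selected  # True counts as 1, False as 0
        rates = {g: hires[g] / totals[g] for g in totals}
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical sample: group A selected at 40%, group B at 25%.
    sample = [("A", True)] * 40 + [("A", False)] * 60 \
           + [("B", True)] * 25 + [("B", False)] * 75
    for group, ratio in impact_ratios(sample).items():
        flag = " <- below the 0.8 threshold" if ratio < 0.8 else ""
        print(f"group {group}: impact ratio {ratio:.2f}{flag}")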

The NYC measure will take effect January 2, 2023.

The penalties levied on employers for use of an automated employment decision tool without first conducting a compliant bias audit are up to $500 on day one, followed by penalties of $500 to $1,500 every day thereafter. Failure to properly notify candidates or employees about the use of such tools constitutes a separate violation.

A related piece of legislation is the Artificial Intelligence Video Interview Act (“the AIVI Act”), HB2557, passed in Illinois in 2019, which imposes consent, transparency and data destruction requirements on employers that implement AI technology during the job interview process. The AIVI Act, the first state law to regulate AI use in video interviews, took effect January 1, 2020. Likewise, in 2020, Maryland enacted a law that requires notice and consent prior to the use of facial recognition technology during a job interview.

Similar legislation is expected at both state and federal levels, as firms increasingly make AI technologies a cornerstone of their hiring processes. Since as early as 2014, the Equal Employment Opportunity Commission (EEOC) has been scrutinizing applications of AI that may violate existing employment laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act.

The Fair Credit Reporting Act (FCRA)

Change in this instance is being driven by the FTC’s proposed changes to how the commercial application of AI algorithms is treated under the FCRA.

The FCRA is a legacy regulation protecting consumers when their data is shared with reporting agencies. The legislation bans companies from sharing consumers’ data with anyone without a legitimate reason to have it. It also requires credit, insurance, and employment agencies to notify consumers when an adverse action is taken against them, such as the rejection of a loan application.

In an April 2020 guidance blog, "Using Artificial Intelligence and Algorithms," the FTC signaled its intention to make the FCRA more relevant to this work, warning businesses that use AI that the agency can use the FCRA to prevent the misuse of data and algorithms in making decisions about consumers.

Look out for the next piece in this series in the coming weeks. Michael Drew, head of the technology & IT services practice, and I will be hosting a round-table lunch and several one-to-one conversations with global consulting leaders spanning data, AI ethics, risk analytics and cyber security over the coming months.
