Odgers Berndtson


AI & Board Governance: An Interview with Ray Guzman, Chief Executive Officer at SwitchPoint Ventures

John McFarland, Head of the US Healthcare Practice, speaks with Ray Guzman, Chief Executive Officer at SwitchPoint Ventures, on AI and what Boards should be considering from their governing seats.

John: While we hear every day about the many potential use cases for generative AI, and about early organizational strides to discern what’s realistic, reliable, and worth exploring, we aren’t hearing a lot about organizational governing bodies and their role in that discernment and in evolving the organization. In general, what role should a governing body have regarding this rapidly emerging technology?

Ray Guzman: Given the rapid evolution of AI, it's unrealistic for board directors to keep pace with every technological advancement and its implications for their role. Instead, the focus should be on ensuring that governance processes are robust and frequently updated.

I lead the AI governance committee of the board of a large organization with several billion dollars in assets. As we began thinking about how to govern AI within the organization, we concluded that AI is a tool – granted, a powerful one – and that governing it may prove to be an extension of our enterprise risk management framework rather than a standalone concern. If we find that to be the case, the committee may dissolve; or we may find that the pace of change requires an ongoing focus, and the committee needs to be a permanent one. With that in mind, the primary effort of our committee has been to review and enhance our risk management policies to adequately cover the use of AI within our operations.

It’s almost certain that AI is already in use within your organization, indirectly through partners, vendors, and suppliers whose systems you utilize. Consequently, a blanket prohibition on AI would be impractical and immediately place management in breach of such a policy. Instead, our strategy involves a thorough inventory of current AI applications, both internal and external, assessing them from an enterprise risk perspective and shaping new policies accordingly.

Governing bodies must navigate the complexities of AI adoption with a focus on alignment with the organization’s ethical standards and strategic goals. Boards should promote a culture of continuous learning and adaptability to stay abreast of AI developments, ensuring responsible and effective use of technology. Crucially, directors should aim to maintain comprehensive oversight over AI applications without hindering management's ability to compete in a market where AI is rapidly becoming a competitive differentiator. I can’t stress this enough: continuous learning is key. 

John: You are a problem solver, Ray, regardless of industry, and have been at the forefront of building and scaling companies through AI and data science. Knowing what you know, what elements (if any) around the adoption and implementation of these tools concern you that boards may not be aware of but should be?

Ray Guzman: The first concern relates to the operational risks associated with AI, such as data bias and privacy issues. Consider a hospital using AI to make admission decisions, which later discovers bias in its model, or a lending institution that employs AI for loan approvals only to find its algorithm contravenes fair lending laws. Therefore, it's crucial for boards to proactively understand these risks and collaborate with experts to establish robust governance frameworks that address these challenges.

The second major concern is the human element, which often represents the greatest obstacle to successfully deploying AI solutions. Resistance to change is common, particularly when it involves complex technologies that employees may not fully comprehend. To mitigate this, it is essential to continuously offer educational opportunities for all staff. This not only helps them understand how the AI operates and why it’s beneficial for the organization, but also how it aims to enhance their roles rather than replace them. Experience shows that top-down mandates for deploying new technologies often fail if the people most affected are not sufficiently engaged in the process. A critical part of engaging key stakeholders is continually articulating that the status quo is itself a critical risk. The world is evolving faster than ever, and if we remain flat-footed, we may find that doing so leads to irrelevance in the marketplace.

John: History shows that regulation, though typically slow, follows every technological advance. Are there any healthy guardrails you suggest boards consider now that keep organizational exploration and implementation honest, without constraining the organization’s need to rapidly explore and advance at the speed of change?

Ray Guzman: It's essential to ensure that your AI initiatives operate within the "guardrails" established by your enterprise risk management framework. In practical terms, this means not allowing AI to perform any actions that wouldn't be permissible for humans.

As board directors, it is our duty to ensure that management conducts rigorous impact assessments and incorporates transparency mechanisms in any AI-involved processes. This approach promotes ethical AI practices while allowing the flexibility needed for experimentation and rapid technological advancement. Our goal is to foster innovation, not hinder it.

Regulators are primarily concerned with whether organizations comply with relevant regulations and laws, uphold ethical decision-making standards, and whether the board maintains adequate oversight of these issues. The governance strategy I propose aims to maintain the organization’s good standing with regulatory bodies.

Furthermore, given that regulators themselves are striving to keep pace with AI advancements, there exists a significant opportunity for us to engage with them proactively. By collaborating with regulators, we can help inform and shape policies that are thoughtful, practical, and reasonable. Instead of reacting passively to new legislation, let’s actively participate in crafting it to ensure it serves the best interests of all stakeholders.

John: You have clearly laid out a governing body’s general role in understanding AI and its impact on the organization, both internally and externally. How does a governing body take that concept and put it into actual practice?

Ray Guzman: Implementing governance concepts effectively requires the establishment of clear AI governance frameworks that define roles, responsibilities, and accountabilities. Boards should prioritize regular training focused on the impacts of AI and ethical considerations to ensure everyone understands the potential consequences and benefits. Furthermore, forming partnerships with experts in AI ethics can provide ongoing guidance and ensure adherence to best practices within the organization.

It is essential to recognize that AI will significantly redefine how businesses operate. Embracing the value AI offers is crucial—not only because our competitors will, but because it enables smarter, more efficient operations. However, it is equally important to be aware of the risks associated with inadequate AI governance. These risks include compliance issues, regulatory challenges, and, notably, reputational damage.

Let’s go back to the scenario where a hospital or financial institution is found to have used a biased AI model in critical decision-making processes. The reputational damage from such an incident could be profound and long-lasting, affecting customer trust, community standing, and regulatory compliance. 

So, while it is important to move forward and leverage AI, we must do so thoughtfully and diligently. Remember the adage: be quick, but do not hurry. This approach will help us minimize risks while maximizing the benefits AI can bring to our operations.
