AI has been thrust with full force into the zeitgeist over this past year thanks to the transformative and media-worthy feats of generative AI, but putting aside the hype, AI is truly transforming the way we live and work. AI (both generative and classical) will undoubtedly revolutionise industries, massively improve efficiency, and create opportunities we couldn’t have foreseen.

However, as the oft-quoted part-time philosopher / superhero Spider-Man put it:

With great power comes great responsibility.

Don’t be left pointing fingers when it all goes wrong because you didn’t have governance in place.

As AI becomes more prevalent, it is essential to ensure that it is used ethically and responsibly. This is where AI governance comes in.

So What Exactly is AI Governance?

AI governance simply refers to a framework for ensuring that all AI is used ethically and responsibly. It’s an easy sentence to type and a very hard sentence to realise.

AI governance is the process of developing policies, procedures, playbooks and guidelines for the development, deployment, and use of AI - regardless of whether it’s AI you’re building yourself, or a third-party tool you’ve brought in that uses AI to make decisions or play a part in some kind of interaction.

Ultimately, the goal of AI governance is to ensure that AI is developed and used in a way that is transparent, accountable, and fair.

Do YOU Need AI Governance?

The answer is probably “no” if you’re looking at the horizon of today, but if you expand that horizon out by 6 to 12 months…the answer will very swiftly change to a resounding “Yes!”.

As I’ve stated, AI has the very real prospect of radically transforming industries and unlocking many opportunities. However, it also has the potential to cause significant harm at scale and speed. AI systems can be biased, discriminatory, and opaque - how many of you understand what is going on under the hood of ChatGPT? The correct answer is none of us do. It’s a black box. As soon as we begin injecting these black boxes into our processes, they have the ability to perpetuate existing inequalities and create new ones - again, at speed and scale.

Putting aside the “should we?” ethical conundrum, there is also the very real prospect that you will have a legal mandate to put a level of AI governance in place if your AI use cases fall into the categories that upcoming regulations (such as the EU AI Act) deem “high risk”.

In short - when you’re deciding whether you need to start developing your AI governance framework, think first of what you’ll need in 6 to 12 months’ time, because you will need that time to develop and refine it.

Let’s start at the very beginning, a very good place to start.

Building out a governance strategy for AI can seem like a daunting task - with the rate of change in the industry and the technical complexity involved, it can feel like climbing Mt Everest. However, it’s not dissimilar to the other governance you’ve put in place (privacy, legal, data, etc).

Here are some of the initial fundamental steps to follow:

  1. Identify the stakeholders: Identify the stakeholders who will be involved in the development, deployment, and use of AI. This may include data scientists, developers, business leaders, legal teams, and others. Remember that AI is more than just the engineers who build the models, it’s everyone involved in the chain from data, to decision, to interaction - there should definitely be some business folk in your stakeholder group, and some customer advocates as well.
  2. Develop a governance framework: Develop a governance framework that outlines the policies, procedures, and guidelines for the development, deployment, and use of AI. The framework should be designed to ensure that AI is developed and used in a way that is transparent, accountable, and fair. But don’t reinvent the wheel - first, look to your national government’s efforts in this space, as many are building out the foundations of recommendations and guidelines for Responsible AI; for example, the AI Ethics Guidelines from the Australian Government are a fantastic place to start.
  3. Establish an AI Governance Committee: This is where the rubber hits the road - pulling together an AI governance committee (or perhaps initially a less-formal AI Working Group) that will be responsible for overseeing the development, deployment, and use of AI, as well as the selection and evaluation of AI use cases across the organisation. The committee should be made up of representatives from the various stakeholders identified in step 1, but make sure not to over-populate the group - it needs to be small enough to remain effective and efficient. Rule of thumb - if you have more than 10 members, you have too many; effective group sizes are usually around the 6-8 person mark.
  4. Develop an AI Risk Management Plan: Your AI risk management plan will identify the potential risks associated with the development, deployment, and use of AI across the organisation. Develop a process and a regular cadence for reviewing these risks, along with a plan for risk rating and risk mitigation. In all likelihood you can piggyback on one of your existing risk management processes and tweak it to fit your AI use cases - wherever possible, don’t add complexity or reinvent the wheel.
  5. Develop a Training Program: Having new policies and processes in place is of little value if no one knows about them, understands them, or follows them. The change management of this new governance process is integral to embedding the change. Develop a training program that ensures all stakeholders are not only aware of the policies, procedures, and guidelines but understand why they matter to the organisation and to themselves. The training program should equip all stakeholders to use AI in a way that is ethical and responsible - this is no small task; do not underestimate the time and effort that will need to be spent bringing this governance to life.
  6. Establish a Feedback Mechanism: Is your process any good? Is it unused? Is it stifling innovation? You won’t know unless you establish a feedback mechanism that allows stakeholders to provide prompt and specific feedback. The feedback mechanism should be designed to ensure that stakeholders can provide feedback in a way that is transparent and accountable but also tracked so there is mandated follow-up and follow-through on any feedback given. As with other steps, you can likely reuse existing processes and tweak (if required) for AI Governance.
  7. Monitor and Evaluate: Don’t assume you’ve got it all right the first time - I guarantee you haven’t. Monitor the process and have regular retrospectives and evaluations to ensure that it is operating as expected. When you find opportunities for refinement or renewal, take them - your AI Governance process needs to be functional!
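To make step 4 concrete, here is a minimal sketch of what an AI risk register entry could look like, assuming a classic likelihood × impact scoring scheme borrowed from a generic risk matrix. The rating scales, thresholds, and example use cases are all illustrative assumptions - set the real ones with your risk team.

```python
from dataclasses import dataclass, field

# Hypothetical rating scales - adapt these to your existing risk framework.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"low": 1, "moderate": 2, "severe": 3}

@dataclass
class AIRiskEntry:
    """One line of an AI risk register, covering a single use case."""
    use_case: str
    likelihood: str                      # one of the LIKELIHOOD keys
    impact: str                          # one of the IMPACT keys
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Classic likelihood x impact scoring.
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]

    @property
    def rating(self) -> str:
        # Thresholds are illustrative only.
        if self.score >= 6:
            return "high"
        if self.score >= 3:
            return "medium"
        return "low"

# Example register: a customer-facing chatbot vs. an internal summariser.
register = [
    AIRiskEntry("customer support chatbot", "likely", "severe",
                ["human review of escalations", "content filters"]),
    AIRiskEntry("internal meeting summariser", "possible", "low"),
]

# Review the register highest-risk first, on a regular cadence.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.use_case}: {entry.rating} ({entry.score})")
```

The point isn’t the code itself - it’s that the structure (use case, likelihood, impact, mitigations, review cadence) is the same shape as the risk registers you already run, which is exactly why piggybacking on an existing process works.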

What do I do in the meantime? I’m too pretty for jail!

There is a lot of work listed above and I’ve openly estimated that this is a 6+ month effort if you include the time spent embedding and training. But AI is here now, and people will not wait…so what do you do in the meantime?

Take assurance in two aspects:

Firstly - there are likely very few NEW practical applications of AI that will land in production in a customer-facing way in the next 6-12 months. We are heading towards the peak of the hype cycle, and it will “chill out” and allow you time to consolidate your processes, thoughts and projects. You probably have more time than you think. For those that do get near production, run an ad-hoc risk acceptance process to see what you’re dealing with - chances are you can let some of the low-impact cases through without significant risk.

Secondly - you can keep a Human In The Loop. You govern people very well, you’ve been doing it for years - your peers, employees, bosses, they’re all held to policies, to agreements, to laws already. Don’t jump into AI all the way before you’re truly ready and have tried and tested governance - in the meantime make sure you have HITL assurances and rely on your existing governance.

Take it One Step at a Time. But it’s Time to Take the First Step.

There is a very good chance that 12 months ago you weren’t planning on having to do this - welcome to the wonderful world of data, where the rules are made up and the points don’t matter. I’m kidding, of course, but the reality is we need to move to where the technology has taken us, and this is where. You need to spend some time working out how to safely navigate AI before you’re doing it in response to a realised risk.

If you’d like to dip your toes in the water, do some further reading via the articles below, or get involved with your local government and tech industry discussion and research groups - this topic is widely discussed at the moment and you won’t have to try too hard to find some willing ears to be a sounding board.

This is a fast-evolving area, and I plan on reviewing this article 6 months after it is first published to update it with the latest.

Let’s both check back in 6 months’ time and see how we’ve progressed.

Till then, happy governance.