Generative AI has taken the world by storm and is being discussed in C-suites and boardrooms daily. Its power and potential are so significant that governments across the globe are trying to figure out how to regulate it. While this “overnight success” has been decades in the making, we’re just now getting a glimpse of the impact and implications of generative AI and the massive disruption that comes along with it. In many ways, it hearkens back to another disruptive innovation that not only changed everything but spawned entirely new industries: the automobile.
Of course, as with any “next big thing,” there’s also a lot of hype. On a recent episode of the Tech Whisperers podcast, Dr. Lisa Palmer and Anna Ransley, two leaders who have been living and breathing all things generative AI, joined me to unpack this story and help us separate the hype from what’s real. Dr. Palmer is one of the world’s top AI experts and a longtime industry veteran who is educating and advising companies on how to approach and harness this new technology. Anna Ransley, a CD&IO known for her work at Godiva and Heineken, among others, has been advising boards and C-suites about generative AI strategy, risks, and opportunities.
After the podcast, we spent some more time discussing how the rise of generative AI is impacting the CIO role, essential skills that must be developed across the workforce, and a playbook every organization needs to get in place now to be able to innovate in this complex and challenging environment. What follows is that conversation edited for length and clarity.
Dan Roberts: How does generative AI affect the CIO role, and what do today’s CIOs need to be thinking about in terms of the way they lead?
Dr. Lisa Palmer: For CIOs, this is arguably the largest inflection point in their careers. If you're someone who has felt forced into an administrative, backseat type of CIO role, this is your opportunity to change that, to step forward and be the visible voice of generative AI. Your organization needs someone to champion this. People need the psychological safety that comes from knowing that someone they trust, someone technologically capable, is in charge of this particular technology. We need that kind of leadership.
If you are somebody who has long been an innovative CIO, you’re probably already well down the path of embracing what’s possible with generative AI, and I encourage you to reach out to your peers, to those who report to you and to others in leadership roles in your organization, and please help to upskill them. Take them on this journey with you. It’s so important that we focus on education.
This is a fantastic opportunity to drive your own career forward while helping your organization maximize its success, serving your employees, and fostering an environment that produces the best possible outcomes for society. All of us want to have that kind of impact at this stage in our careers. So I just challenge you to embrace it.
Will Markow, the VP of applied research for talent at Lightcast, recently wrote about AI-driven skill change and why tech workers will need to develop higher-level skills to complement their technical acumen. What are your views on the new roles and skills for success?
Anna Ransley: First, we need to be very deliberate about the choices that we make as organizations, individuals, and as a society as we are implementing and embedding AI into our day to day. It is important to use AI to assist and amplify, with a human in the loop, as opposed to replacing our critical decision-making with a black-box algorithm.
In terms of new roles and skills for success, a number of new jobs are being created: some we are already familiar with, and some still hard to imagine that will become known as we go further in this journey. Two come to mind right off the bat. The first is the AI engineer, someone deeply technical who understands machine learning algorithms and how they work, data manipulation, and the programming languages AI is built with. The second is the non-technical role of prompt engineer, someone who knows how to effectively prompt generative AI to get the best possible result and to train it for even better results in the future. Those are two very different roles, but they're a natural progression.
Beyond that, there’s a set of skills that is going to be important for every person to have in this new world:
- Having an agile and adaptable mindset to take advantage of opportunities in a quickly changing environment. Having that openness and ability to pivot is key.
- Being driven and deliberate about where you choose to spend your mindshare. This means being a proactive participant in your environment.
- Fostering curiosity and a continuous learning mindset to be able to keep up with the many changes and evolutions and remain relevant.
- Having patience and empathy as others are going along the journey with us. Frankly, empathy and EQ [emotional intelligence] are the biggest competitive advantage that we as humans have.
- Finally, and probably most important, is critical thinking. We undoubtedly need to check and triple-check all the outputs we get from AI and know whether something is truly accurate, even when it sounds credible. Since we use the outputs from AI as inputs into our decision-making, it is our responsibility to recognize and validate the accuracy of those inputs to make quality decisions.
That focus on critical thinking skills is so pivotal, isn’t it?
Dr. Palmer: Yes, it is. I am excited about the opportunity that’s being created for us to teach people to think and leverage generative AI tools, to have back-and-forth dialogue, to challenge the way we think about things and dig into things more deeply, and that you can do it in a one-on-one environment between yourself and these powerful tools.
But if we don’t teach people to think critically, what ends up happening is that the artificial intelligence tells people what to do. We’re seeing this play out globally in China, where artificial intelligence is used to control the population through something known as social scoring. Whether or not you do what the AI expects and tells you to do impacts your entire life. That is not what we want to create, particularly in the United States. We want to have freedom and autonomy and to drive innovation with these tools. So the importance of teaching people to think, of continuing to educate them and elevate their critical thinking skills, cannot be overstated, in my opinion.
How does the education system need to adapt in light of all of this?
Ransley: I think the education system needs to be quite thoughtful about the fact that we are no longer training people for jobs that are static and well defined. We need to be training people for jobs that may or may not exist today and, at a minimum, will likely change tremendously over time. As a result, we need to be thinking about what generic skills and what adaptable mindsets we need to create and foster. Problem solving, creative thinking, and critical thinking are all absolutely important skills that universities and high schools need to be teaching students to prepare them for the unexpected events that will happen and the unexpected jobs they're going to have.
Dr. Palmer: The challenge in front of our education system today is that it’s largely based upon a series of tests that are designed to measure things that aren’t necessarily going to create value in our next workforce. Those tests and the requirements for those tests go all the way up to the federal level from a policymaking perspective. Adjusting what we are testing and what we are measuring is going to be absolutely critical so that teachers can be successful, because today they are required to stay within a very specific teaching framework that is largely based on these tests that were designed to create a workforce of the past.
We get what we measure. And right now, we’re measuring the wrong things to prepare our workforce for changes that are actively happening. So to me, that’s the starting place: Let’s talk about what we’re going to measure and what success looks like and then adjust all of those things appropriately so that we’re lining up the entire system to create a future successful workforce. Because if we don’t change the measures, we can’t change the behaviors and, therefore, we can’t change the outcomes.
Ransley: I completely agree that outcomes, how we evaluate success, and what types of incentives we put in place need to be revisited. And, to build on that, we also need to make sure that we are clear on the fundamentals that we still need to continue teaching, regardless of what happens in the future with the jobs. That fundamental knowledge that future generations still need to master is going to accelerate and ground their learning. If we skip that step, the decision-making is not going to be as high quality or as viable. So we need to define the knowledge and skills that are non-negotiable, that every student needs to master before they move on to the more advanced skills to future-proof themselves.
Why do you think AI should fall under the CIO’s remit?
Ransley: As a CIO, and really any technology leader, we have the broadest perspective in the entire organization: not only can we see what is happening in every department across the enterprise, but we also have the ability to act on it. That puts us in a unique position to see the opportunities any of these technologies can present, just as we can see the risks, because the cybersecurity mindset already predisposes us to weighing both the risks and the benefits of technology solutions.
CIOs also have tremendous experience running projects and initiatives from a project management perspective, where they can use a proper business case and think through risk mitigation. Typical IT organizations already have embedded processes for executing, even in experiments, while actively managing the risks. That matters because there are two ways this can go wrong: you go too fast and skip the risk thinking because the AI implementation happens outside the formal methodology (and formal doesn't necessarily mean slow; it just means deliberate), or you get slowed down and bogged down by the process altogether. Neither of those is the right path to take.
As a whole, I think it fits beautifully within the CIO’s remit because of the breadth of perspective that the CIO has and the discipline in the organization that exists to make things run fast, but in a deliberate and careful fashion.
What are your thoughts on the creation of a new Chief AI Officer role?
Dr. Palmer: I think of artificial intelligence as another technology, so I’d ask this: Do we need a Chief Internet Officer role? Because I see the technology as playing that same type of role. It needs to be embedded in your overall technology strategy. It needs to be used as a tool by your business units. So I’m not seeing a huge need for this particular title. I think there are some specific creative industries where it makes sense, but on the whole, I’m not a fan.
Ransley: A lot of our existing technologies in IT are going to be AI-enabled, so AI will be embedded in all of the existing tool sets that we have. Name a vendor, and they probably have an AI strategy that has already been implemented or will be implemented in the short term. So splitting that out into a separate silo is probably not the best approach.
Dr. Palmer: To that point, what I know absolutely, not only from my career path but from my research as well, is that silos are the exact opposite of what we need as we move into this next evolution of the technology journey.
From a practical perspective, what does the generative AI playbook look like? What are some of the things every company should have in place?
Ransley: We talked about it a bit in the podcast, but to expand on that, they need to:
- Establish a cross-functional governance review board to assess the impact of any generative AI use cases, whether as a standalone board or part of existing governance
- Set up clear generative AI use policies on when it can or cannot be used and with which data
- Have an AI literacy and upskilling program to incorporate generative AI education into InfoSec training or as a standalone, if needed
- Sanitize any sensitive data before training any of the models
- Incorporate generative AI into current risk assessment capabilities or implement new ones
- Revisit generative AI risks and policies and standards continuously because things are continually changing
- Implement a robust data governance process in the organization
- Review any regulations that are evolving, because things are happening that could impact companies very quickly, so they need to make sure that they’re compliant as new regulations are coming in
These are the things every company needs to do now, because this is happening now.
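One playbook item above, sanitizing sensitive data before it reaches any model, can be sketched in a few lines. This is a minimal, illustrative example that assumes PII detectable by simple regular expressions (emails, US Social Security numbers, US-style phone numbers); production pipelines would typically use dedicated PII-detection tooling rather than hand-rolled patterns.

```python
import re

# Simplified PII patterns for illustration only; real-world detection
# needs far more robust tooling and coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def sanitize(text: str) -> str:
    """Replace each detected PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-867-5309."
print(sanitize(record))  # prints: Contact Jane at [EMAIL] or [PHONE].
```

A step like this would sit at the front of a training-data pipeline, so that redaction happens once, before data is stored or fed to a model, rather than being left to individual teams.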
Dr. Palmer: I want to reiterate her last point about the legal environment. The complexities in the United States are growing extensively. There were 177 pieces of state-level legislation on the books just in this particular legislative cycle. So for anybody creating a product or service in the United States, compliance means knowing all of these different state-level laws. Then we've got the federal-level regulations, and the laws on top of that. And if you operate multinationally, you've got to deal with the laws of other countries as well.
There’s so much complexity in the legal and regulatory environment, and to Anna’s point, it is quite literally changing on a daily basis. The ability to continue to innovate within this complexity in such a way that you are not exposing your organization to risk — this is a challenging environment and it’s going to get more challenging over the short term and long term. So we need to make sure we have somebody that is really looking at that and staying abreast of it specific to their business needs.
Last up, words and messages matter when it comes to how people perceive things. Is there a better way to brand this new technology?
Ransley: In the beginning, I heard generative AI being called creative AI. It didn’t stick, but I thought that it very much described the essence of what it is — Gen AI shines as a way of sparking creativity where it may have been lacking before. I really enjoyed it being called that. Maybe we can have a bit of a resurgence.
Dr. Palmer: I love that creative AI term. From my perspective, the concept of humans plus artificial intelligence is the absolute foundation of the way we successfully move forward with AI. We can’t think about AI replacing humans. We’ve got to think about, how do we partner with AI to bring out the very best that machines can bring to the table and the very best that humans can bring to the table? I like to think about that as augmented intelligence. So it’s still AI, but in this case, it’s all about that partnership between humans and machines.
For more practical insights and advice on generative AI from Dr. Lisa Palmer and Anna Ransley, tune in to the Tech Whisperers podcast.