Ep.3: The Human Cost of AI

Remember, you can always listen here or follow us on Apple Podcasts or Spotify. Either way, thanks for supporting us.

About this episode. Finally, our first guest in the new era of The Better Way? podcast! In this episode, Hui and Zach are joined by Jutta Williams, an experienced compliance, privacy, and information security officer—and a noted responsible AI evangelist. Jutta shares her deep knowledge of this curious, and often misunderstood, topic in the most relatable and accessible way. And together, they define key terms; explore (some of) the ethical and strategic risks associated with AI; and then dive deep into a human-centric approach to AI governance—and to the design and validation of AI systems more broadly. Be sure to stick around for a striking story about the "human cost" of data. Oh, and of course, this also marks the return of the Better Way? questionnaire.

Who? Zach Coseglia + Hui Chen, CDE Advisors; and Jutta Williams


Full Transcript:

ZACH: Welcome back to the Better Way Podcast brought to you by CDE Advisors. Culture. Data. Ethics. This is a curiosity podcast for those who find themselves asking, “There has to be a better way, right? There just has to be.” I'm Zach Coseglia and I am joined, as always, by the one and only Hui Chen. Hi, Hui.

HUI: Hi, Zach. It's wonderful to be back again and I'm very excited about our conversation.

ZACH: Me too. Because we are joined for the first time since our return by an outside guest. Our guest today

HUI: Yay.

ZACH: Yes, we are very excited. Our guest today is Jutta Williams. Jutta is a career security, privacy and responsible AI evangelist and startup board advisor. She was the inaugural chairperson and head of the US delegation to ISO for AI standards. She holds a master's in information security, policy and management from Carnegie Mellon. And in her distinguished career, Hui, she's held senior privacy and assurance roles at Reddit and Bolt. She was a product lead for machine learning ethics and responsible machine learning at Twitter. And she's held other senior roles at Facebook and Google. And she's also a former chief privacy, security and compliance officer for multiple healthcare companies. Jutta, welcome to The Better Way?

JUTTA: You made that sound so much more exciting than living through all those transitions, pivots and changes. It's my pleasure to be here with you and to be your inaugural outsider as you've taken on this new endeavor.

ZACH: We're really happy and really excited to have you here. I'm going to ask you the question that I ask all of our guests when it's their first appearance on The Better Way?, and that is, notwithstanding the intro that I've just provided, to ask you to tell us who you are. So, who is Jutta Williams?

JUTTA: Yeah. At my core, I've been trying to think about how I would answer this question. At my core, I am an optimistic realist, as opposed to a realistic optimist. When I took the Myers-Briggs, I came out as this person who has this absolute excitement about all things abstract and theoretical, but with a 100% bent on reality and realism. And what that meant for me in my career is that I would, pie in the sky, have all these really great ideas, and then at the end of the conversation, before the period hit the sentence, I'd be like, "Yeah, but that'll never work." Or "No, that's never gonna be possible." And part of me thinks that that's really sad, and part of me thinks that that's been the key to success in my long career: that you can see the possibility but also focus on what's real. So, for my career, I chased this idea of data and people and how data and people collide in this new Internet world that we live in. I chased it from a security engineering point of view, to protect all that data from outside threats. Then to the privacy point of view, taking a look at all those internal use cases and making sure that those were appropriate and right. And now into the kind of long-term consequences of using data at scale to make decisions on behalf of and for people. And that's kind of where I've landed in responsible AI. It's just chasing data and people and how data impacts people in this new, technologically advanced world.

ZACH: I love that so much. You've set up the conversation that we're going to have so perfectly. Because we're going to talk about some things in the abstract. We're going to talk about some things conceptually. But we always want folks to have something that they can really take away. Now, you've listened to us before. You know one of our better ways is to talk about topics that are on everyone's mind and to bring in experts, real experts like you in these areas of interest—to dive deep and to help our community get a better understanding of topics that matter from voices that matter. So today, with your help, we're going to talk about AI governance. And we'll see where the conversation takes us, but we're particularly interested in addressing a couple of specific elements of AI governance. One is the intersection of the human and the machine. One is the role of ethics and compliance and ethics and compliance teams in AI governance. And we hope to talk about some of the strategic considerations that are going to help governance and assurance and compliance teams, I think, be a more modern advisor to their internal stakeholders. So, let's get started with some definitions. And my first question is: what is AI governance?

JUTTA: So governance across the board is relatively similar. Whether you're looking at governance in your financial transactions or governance in how you use data or how you make investments in people, in compliance cultures. Governance, at its core, is just looking at kind of a series of ethical considerations and decisions that you codify. That you codify in policies, perhaps in some standard practices. That you document relative to business risks and business advantages and make a determination about what are the thresholds by which you're going to make key decisions in your business.

With respect to AI, that includes things like: what kind of transparency are we going to deliver to our consumers? What kinds of data are we willing to leverage and use in order for us to build these automated decision systems? What kind of decisions should computers be able to make versus which sorts of decisions should only humans make? Where do we want to apply automation, if it means that you're going to reduce workforce or improve efficiencies? Sometimes they're pros, sometimes they're cons, sometimes they're both. It's just establishing the mechanisms, the methods by which you're going to make some of those key and strategic decisions. And then documenting them so that the company can adhere to them.

HUI: I want to take us back a little bit, even from those very helpful definitions. Perhaps you can help us even define what AI means. I ask this because I think I have seen lots of people using that term with very different understandings.

JUTTA: Yeah.

HUI: So I think it'll be helpful for us to sort of get on the same page about—when we're talking about AI here . . . AI, whether it's "AI governance" or AI "validation," which we may talk about later . . . what we mean by that.

ZACH: Such a good call, Hui. Absolutely.

JUTTA: I'm laughing because when we first established the ISO standards, the first thing you do with new standards is to define them, right? And to create a lexicon. And the term AI was hotly debated internationally, nationally, between technology companies, between industries; it was a leveraged term. I'll tell you, when I worked in healthcare, I went to the HIMSS Conference, which is an incredible conference. I don't ever walk the floor of HIMSS, but it's like 40 or 50 thousand people now. Maybe more. I haven't been in a few years; it's six football fields' worth of vendors and suppliers that are there presenting their wares. And the last year I went, every single booth had "AI enabled" written across every piece of marketing. And so, I started to just stop and talk to folks—I think I was working at Google Health at the time. So, it was very fascinating to me, and I asked them, "What does that mean for your product? I used to use your product when I worked in health. What does it mean to be AI enabled?" They're like, well, we're "AI ready." And I said, "Wow, OK. What does AI ready mean?" And they're like, well, we have structured data. And I was like, you have structured data and therefore you are AI enabled and/or ready? That is so fascinating. So, when we built the standards, we debated what AI is, and the definition of AI, for quite a long time. I think it was, like, two years into the definition.

HUI: That is a long debate.

JUTTA: It's a long debate; and everybody is leveraging and using that term because it's the buzz term of the century, right? And now it's Gen AI, Gen AI, right? But it's a field of computer science. It's not necessarily a specific thing. Most people, when they talk about AI, are actually talking about machine learning and machine learning algorithms; and we broadly classify a lot of those things as automated decision systems: technologies that make a prediction about what a human being would like to see as a response based on past learning from experience, analyzing data on a similar question. Or defining a similar thing.

When I talk with folks about the difference between big data and AI, I use the analogy of the Pythagorean theorem. Prior to understanding that a² + b² = c², we would have used a large data set that had a whole bunch of measurements for the left arm and the right arm of a right triangle, with a calculation for the hypotenuse. And it would be a big table. And when you wanted to find the hypotenuse for your triangle, you'd go look up the left arm and the right arm against this big table and find out what a very close approximation of a hypotenuse would be based on the triangle you're establishing. But over the course of time, we identified that there's a formula, there's a model, that explains how to calculate the hypotenuse without having to obtain all that data. So instead of keeping 10,000 records of a left and right arm measurement to identify what that hypotenuse would calculate to be, now you can just use the formula a² + b² = c². And you can, with hopefully 100% probability, identify what the hypotenuse would be for your triangle. And you don't have to retain that data or keep it in a really large data set and do heavy-duty analytics to look up an answer. That's beautiful because that means that your code is simplified. You don't have to have it live on the web and have it look up against a giant data set in the sky. You can literally embed the model. ML is very similar, but it's using really complicated algorithmic decision criteria to come up with a recommendation—a probabilistic match. When you ask, is this picture a zebra? It goes through a series of calculations on probabilities based on the color matching and the shape matching and all kinds of other criteria. And it evaluates and says it's a probabilistic match of X percentage that it is a zebra, and so it will label that image a zebra. That's machine learning at its core, and I think, for me and for most people, when they talk about AI, they're talking about ML. Or technologies that are based on machine learning principles.
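To make that analogy concrete, here is a minimal, illustrative sketch in Python (the numbers and function names are invented for this example, not anything discussed on the show) contrasting the "big lookup table" approach with the closed-form model a² + b² = c²:

```python
import math

# "Big data" approach: keep thousands of measured triangles and look up
# the closest match whenever someone asks for a hypotenuse.
measurements = [(a, b, math.hypot(a, b)) for a in range(1, 101) for b in range(1, 101)]

def hypotenuse_from_table(a, b):
    # Find the stored triangle whose legs are closest to the query;
    # the answer is only an approximation, as good as the table's coverage.
    _, _, c = min(measurements, key=lambda row: abs(row[0] - a) + abs(row[1] - b))
    return c

# "Model" approach: once you know a^2 + b^2 = c^2, the table can be discarded.
def hypotenuse_from_formula(a, b):
    return math.sqrt(a**2 + b**2)

print(hypotenuse_from_table(3.4, 4.2))    # approximate lookup, roughly 5.0
print(hypotenuse_from_formula(3.4, 4.2))  # exact: about 5.403
```

A trained ML model sits between these two extremes: it compresses the table into learned parameters, and its output is a probabilistic estimate (like the "X percent zebra" example) rather than an exact identity.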

ZACH: So, I want to ask you something to build off of what you just shared, particularly about seeing that conference floor filled with people talking about AI-enabled products. As we sit here today in 2025, do you think that the use of AI, as you just described it and defined it, is living up to the hype, or do you think that the hype is still outpacing the reality?

JUTTA: I think that it's becoming more and more hype appropriate every day. I'm leveraging AI in ways that I never have in the past, even as a person who worked with machine learning technologies long before they were really commonplace. Predictive text in e-mail strings. All kinds of ChatGPT-related use cases, of course. Generative AI. I mean, on a video call, you're probably using four types of machine learning algorithms in order for us to have this conversation. Hand gesture analysis, the fuzzing of the background in the picture, a lot of the audio softening and noise reduction. All of that is AI enabled. So, AI is embedded in a lot of things that we use every day these days, and so I would say yes, it's being leveraged, it's there and it's useful. And I think it's becoming more and more useful, but it's a tool like many others.

Now I'm old and have lots of gray hair, and I remember when e-mail first came to be. And it was the end of the world because everything was going to be networked. All this data was going to be digitized and everybody had to learn to type; and it was going to eliminate jobs; and it was going to do all these things when we had personal computers on desktops and the Internet became real. And I think of AI more as a tool than I do anything else in industry. It's an enabler of efficiencies and scale, but I don't see it as the existential risk that a lot of other people do. And maybe that's gonna come back to bite me one day. But for me, I see it more as a tool of enablement than as anything else. But unfortunately, it gets a lot more hype as an existential risk agent.

ZACH: When you described AI governance, what you described—I think—sounds very familiar to a lot of our listeners, especially those who are in the compliance or assurance space, even if they aren't dealing with AI governance directly. But the next terms I want to define are AI validation and red teaming.

JUTTA: Sure.

ZACH: What are those terms and what are the differences between the two?

JUTTA: Yeah. So, you know, if we continue the analogy of things that people may be super comfortable with, that they're very familiar with: if you wrote a policy statement, you'd be a little bit [closer] to the AI governance. But if you wrote yourself a standard or procedure, you'd be a little bit closer to AI validation. It's more of the "what you're going to do" to ensure things like accuracy or to determine whether there's bias in a prediction. And I mean statistical bias—we talk about bias, and it's become synonymous with social bias, and that's very important to assess—but just generally, bias is a statistical term for mistakes. So, whether you're making a mistake in predicting somebody's financial health, or whether or not they belong to a specific race, bias is a statistical term. So, it doesn't always mean social bias. I say that just as a caveat, that bias is something we should always analyze in our algorithmic outputs.

ZACH: It's a really important distinction to make because I think we sadly are living in a world where the term bias has been politicized in ways that conjure something that actually may seem more controversial than it really is. And certainly, there is an element of that statistical bias that hits on those social topics that we care about, but it lives as a statistical concept independent of that as well.

JUTTA: Right. And so, I just encourage everyone, when they hear the word bias, not to automatically assume a discriminatory outcome, which is often something that occurs because there's bias in an algorithm. But it's not always an indication of a discriminatory or harmful consequence to people. You still need to understand bias in your algorithms: if your training data only included horses and two giraffes, and your algorithm is predicting a giraffe 80% of the time, then there's clearly something wrong with the algorithm. There's a bias toward giraffes. So you still want to look at bias, even if it's not necessarily from a social good standpoint. So, these are the mechanisms, the methods. Validation is the exercise that you perform in order for you to analyze whether or not you're meeting the criteria you established in your governance program. So if you care about fairness, you're going to want to understand and explain how your algorithms are making decisions. You're gonna wanna look at reliability of those predictions to make sure that they're meeting an expectation. So, if you establish a baseline and you say, I'm going to give a credit card to people who have a credit score of X, and over time you start to drift away from that, and you're now giving credit cards to people who have a different credit score, or are disqualifying, or qualifying, people improperly . . . you're gonna wanna be able to ensure accuracy of your algorithm over time. And the mechanisms, the methods by which you would perform those acts, is what is often referred to as AI validation.
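As a rough illustration of that credit-score example (the thresholds, data, and function names here are hypothetical assumptions, not a reference implementation of any particular validation framework), a simple monitoring check might compare recent automated decisions against the baseline documented in the governance policy:

```python
GOVERNANCE_MIN_SCORE = 680   # hypothetical floor set in the governance policy
ALERT_DRIFT = 0.05           # hypothetical tolerance before someone gets paged

def validate_decisions(decisions, baseline_approval_rate):
    """decisions: list of (credit_score, approved) tuples from the live model."""
    approved = [score for score, ok in decisions if ok]
    findings = []

    # Policy check: no approvals below the documented governance threshold.
    below = [s for s in approved if s < GOVERNANCE_MIN_SCORE]
    if below:
        findings.append(f"{len(below)} approvals below the {GOVERNANCE_MIN_SCORE} policy floor")

    # Drift check: has the approval rate moved away from the validated baseline?
    rate = len(approved) / len(decisions)
    if abs(rate - baseline_approval_rate) > ALERT_DRIFT:
        findings.append(f"approval rate {rate:.2%} drifted from baseline {baseline_approval_rate:.2%}")

    return findings or ["no findings"]

# Example run with made-up decisions: one approval sits below the policy floor.
print(validate_decisions([(700, True), (650, True), (720, False)], baseline_approval_rate=0.5))
```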

ZACH: And it's interesting. I think the analog for AI validation is just monitoring, like a monitoring program. You know, continuously looking at the model, the performance of it, to see whether or not it is consistent with the standards that . . . 

JUTTA: Yes, absolutely. Yep.

ZACH: . . . we've set in that governance stage.

JUTTA: And just like any other compliance program, you have to have auditing and monitoring as part of your programmatic approach to managing risks. AI is so new that I would say monitoring is a more appropriate way to look at them. Auditing usually implies that you have at least a couple rounds of improvement and optimization within your program before you can truly audit, or you'll just have lots of false positives. Or false negatives in your auditing program. But I think we are definitely in a state of adoption where monitoring makes really good sense. If you're using these algorithms for any kind of commercial purpose, you should be monitoring; and, over time, once you become confident in how your program is working, you're going to want to go in and audit these same sorts of technologies and tools.

HUI: I'm learning so much here. I'm gonna say, this is, you know, my layperson's example of what AI validation is. So the other day, I asked ChatGPT to generate some research for me. You know, say, "Please, you know, give me some citations on this topic." And it generated a whole bunch of citations; and I picked out the ones that I liked and wanted to use . . . and I separately went online and checked that those citations actually did exist, and that they actually say what ChatGPT tells me that they say. So, I'm thinking that was my own personal AI validation. And for organizations, they need to just have a more systemic way of doing that. Is that sort of a layperson's explanation / illustration?

JUTTA: It's . . . that is exactly right. Because if you ask some of these generative AI models to do something nefarious, like create me 500 fake citations that suggest that COVID was a hoax, it will. It'll produce a lot of material. And it'll caveat it. So, some of them have better safeguards and kind of guardrails than others, but they'll produce what you ask them to produce, whether or not they're real. So, this happens more often than not for people . . . they may not appreciate or realize that their prompt, their engineering—prompt engineering is a whole new school of study—they may not know that the way they prompted the algorithm suggests that it would be OK to create fake news or fake content. And so, learning how to use these tools is part of our civil society obligation, to ensure that we're not running afoul of some of the easy ways that these algorithms are going to respond to your query in ways that you maybe didn't intend.

HUI: Yeah. So let me jump from that to something that I was recently, you know, talking to an organization about. This organization, like many organizations, has people who are already using tools like ChatGPT, but the organization itself currently has no policy, procedures, statement, anything about the use of these tools. So, a group of people gathered together and they're thinking, what should we do? It seems like a very daunting task to now try to build this governance around, you know, the use of these tools. So, one of the things that we talked about, my suggestion at the meeting, was that we have to think about this as a journey. This is not a project we can just, you know, plan and execute in a month and be done with. This is going to be something that we start, and we're going to have to continuously keep at it and add things and change things along the way. So if we think about it, what is the very first step that we need to take when it comes to this governance? And I came up with two things, and I would love to hear your view on whether that was the right initial couple of considerations.

So, one is making sure people understand that they can use this tool / they can use whatever legal tools that’re out there, but they are accountable for their own work product. So use whatever you want, but when you present something based on the usage of tools, you can't say “oh I didn't check that because ChatGPT told me that it was this.” It's you . . . you are the one who put your name on the final product. You're accountable for the accuracy of the content of your work product. So that's consideration No. 1.

Two is: beware of the stuff that you put into these sorts of public forums or platforms. So that you don't say, let me input all my colleagues', you know, addresses and birthdays into it—so you can create whatever . . . something that helps us, you know, organize office parties for people's birthdays. Because you are now dealing with people's personal confidential information. And when you put it into public platforms like this, it goes to places that you might never imagine.

So those were the two things that I said, you know, let's maybe start there and then we grow this governance step by step. What advice would you give—love to hear your thoughts.

JUTTA: Yeah, it's such a tough balance right now. And I have to imagine a lot of the listeners here are trying to balance on the edge of promoting innovation and also managing this completely unknown risk. You see it even at the national level and the international level: we want to create regulation, but we also can't stifle innovation because we're in a competitive race with other nations over AI dominance. It's the same at the business level. You don't want to create too many barriers, because these tools and technologies are proliferating quickly, and you won't be able to assess and analyze every potential use case that people in your business bring to you. So, you have to find those principles that are gonna help every individual person make informed decisions every day—and be able to hold themselves accountable for managing the risks that are represented to the business.

Now, I don't know that I would ever go so far as you did, Hui, and say "use anything you want." I think the one recommendation I'd make, to just amend what you said, just barely, is to say, "Make sure that whatever you're using, we have a contract and an agreement with." Because if you're using a consumer license for things, and it's free, then you're not the customer. And you have to analyze what those products are getting from you, if not your money. In most cases, in this age of gen AI, they're getting human-curated content for free. And that means they're gonna keep anything and everything you give them, and maybe a whole bunch of data you didn't realize you're giving to them, from a heuristic standpoint, or from the way that you're asking questions, that helps them build better conversational models. Or things like that. And if you're OK with them getting that from you in lieu of cash? Perfect. Use it—but just understand that if you're not paying for something, you're not the customer, and you are giving them something of value or the product would not be available to you.

HUI: That is such a great reminder.

JUTTA: And in most cases, most cases it's about human curated content, and I believe that's the new gold rush. I think that companies that have human generated and curated content that . . . the value of that is going to go through the roof. I was joking with one company that I was advising that this is like Bitcoin in 2008—and don't buy a pizza with 800 Bitcoin. Don't give some companies all of your human curated content and data for something free because someday that is going to be incredibly valuable because there's going to be less and less of it. So just understand that there is a trade-off between free services and the long term value of the data that you're sharing.

ZACH: It's such a good reminder, and it actually is a really good segue into the next topic, because what you've just articulated is actually a really big risk that companies have, especially when it comes to the creation of and the protection of their own intellectual property. You said, Jutta, at the outset that you're not a doomsdayer when it comes to AI. So talk to us a little bit more about your risk philosophy around AI and what you see as some of the leading risks and ethical considerations that companies should have top of mind.

JUTTA: So, my best advice comes from marriage counseling. Sorry in advance, but it was to weigh all of the things that you put your mind and thoughts and efforts toward against three kind of Venn-diagram overlapping circles: Is this important? Is this something I care about? And is this something I can do something about?

So, when we talk about these doomsday kind of outcomes, like, someday, autonomous AI systems will siphon all the hydrogen from the sun and the solar system will collapse. That's a common one; it shows up in almost every existential risk conversation at AI conferences I attend.

Is that important? Yes.

Is there anything I can do about that? No.

Is this something that I actually personally care about? No.

So, I can't make my time and effort investment there. I'm glad somebody else wants to do that, but that's not me. Most businesses should not, probably, unless you are actually building like adversarial AI systems that would prevent another AI system from siphoning all the hydrogen from the sun . . . unless that's your business model, it's probably not something you should put your effort and attention toward.

From a "what can businesses do now" kind of standpoint, it's to inventory, to understand use cases—to see where there's actual ROI on the use of some of these technologies. I will say from personal experience that people want to use AI-enabled technologies to do lots of things in a business. But there have to be criteria by which you evaluate whether it was an actual return and a benefit for the business or not. So, marketing wants to now have AI create personalized messaging through your CRM product for all communications with your customers. Well, how would you validate that that's actually resulting in better customer interactions?

I have a non-profit that I helped co-found that I don't participate in a lot anymore, but I still have an e-mail address. And I get AI-generated proposals to that e-mail address all the time. And there's one company, in particular, that sends me a message almost every two days that, at the bottom, says, "this was AI generated outreach." And every one of those messages irritates the hell out of me. And I am angry about it. So, from a validation standpoint, should you not consider a survey for your customers to find out if they actually enjoy this content, if it's actually reaching them, if it's actually resulting in some sort of outcome? Is there data and analytics for you to bring to bear to evaluate its impact on improvement for your customer outreach? So just make sure that you're evaluating whether you should have used AI for some of these new-fangled approaches. Not all AI applications are actually of benefit.

ZACH: So just . . . just to put a finer point on that, I mean, and this goes back to your experience walking the floor of that conference . . .

JUTTA: Yeah.

ZACH: I mean, I can't think of a business meeting that I've been in, or a set of priorities that I've heard from a client over the course of the past several years, that hasn't included, in many cases, an obligatory line about AI. It's as though folks feel like we need to say that AI is on our mind and that it's part of our strategy, because if it isn't, we feel like we're going to get left behind.

But we're talking about risk here, we're talking about ethics here, but before we even get into the risk or the ethics, it's about being sure that the use case . . . that the opportunity . . . actually has a strategic value. Which, like, seems so obvious, but in the hype, I think we can kind of get lost in that very basic concept. Hui, what do you think?

HUI: It just reminds me of all the things that we do. This is part of our "better way": asking the question, is this thing, whatever it is that you're doing, doing what you think it's doing, or doing what you designed it to do? It's really asking that question. So, you know, going back to your example, Jutta: people have this AI-generated content—their goal is not to annoy their customers, I presume. So, what are they doing to make sure that this AI-generated content, these messages, are accomplishing the goal of engaging the customer as opposed to annoying them? What are they doing to, well, I'm going to go back to a different use of the word, validate it? I mean, I think we ask that question of almost everything people do, because we hear all the time: "We think our training is great." Well, how do you know that? "We think our culture is great." Well, how do you know that? It's really asking for the evidence that whatever you set out to do, once you have articulated a goal for something, for a tool or a project or an activity, really does what you hope it will do.

JUTTA: Well, it's particularly interesting from an AI perspective, because built into the technology itself is this concept that you're going to be improving model performance, which means you have to go collect feedback and you have to integrate that feedback back into the product. And the focus, in my experience so far, is almost exclusively at the technical layer, where they're looking at labeled data and a reward mechanism, where they can tell the machine that it performed a function well based on a measurement, a metric that is very analytical in nature. But the intention—the need—for human feedback loops, and the ability for your customers and your sales reps and your marketers to be able to also provide structured human feedback back to the model developers, is just as critically important. Because it's not just about whether the model is performing against the criteria that the developers know about; it's also a function of the core governance role to go back and look at the outcome from a very human-centric point of view and to share that feedback back as part of improving model performance, as well. All very core to that whole data validation question. It's also part of the whole red teaming dialogue that you hear a lot about. The red teaming . . . AI red teaming is kind of a leveraged term governance uses . . .

ZACH: Yeah. Give us, give us a definition for that as well because we talked about validation.

JUTTA: Yeah.

ZACH: Let's close the loop on red teaming.

JUTTA: Yes. So red teaming is a security concept that was co-opted, appropriately, for the AI space; and it's intended to be an adversarial effort where you're acting as an adversary trying to break the system and make it do something unintended, or behave in a way that might be misaligned to your goals. Whether that's inappropriate responses or something more statistically biased.

There was a case recently where a chatbot was leveraged, using kind of adversarial means, to offer flights at a greatly reduced rate, and then the airline was required to honor those fares, because the AI agent was acting on behalf of the airline. It was a prompt engineering exercise to try to get it to do something inappropriate, and that was a mistake that perhaps a human would have caught, but the AI bot did not. So that adversarial effort, that attempt to perform adversarial acts, to identify vulnerabilities or any of these safety risks that might be inherent to these models, is what red teaming traditionally would be called.

But it's also a leveraged term. It's often used for structured human feedback. Red teaming something could be just getting a whole bunch of people who are expert in a subject to use a tool and to provide very structured feedback that demonstrates where the tool was successful or not. For example, if you had an image analysis clinical tool, you would want to hire a whole bunch of clinicians to use the tool to identify whether or not it's performing in the way that medical care practices would expect and require: that might be called a red teaming activity. It's not necessarily adversarial in nature. It's more evaluative in nature, but it's a term that's been used to describe that as well. But again, it's humans intending to use the tool for purposes of identifying where mistakes occur. Again, that bias, and sometimes just true error. And giving that feedback in a structured form so that developers can improve the models themselves.

ZACH: Let's go back to risk. So, if we were to compile a list of potential risks that you might be mindful of or on the lookout for, you might see misinformation, disinformation, privacy violations, lack of transparency and explainability, bias, as you mentioned before. What are some of the things that you would say to our listeners as risk professionals or as internal advisors on risk? What are some of the things that they should be asking to try to really get at the risk of any particular use case or any particular platform that an organization may be leveraging?

JUTTA: Right. So, a lot of these functions and roles are rolling over to a privacy organization or a privacy leader for the simple reason that a lot of the risks associated with AI are about the data. The data used to either prompt them or the data used to train them. Whether or not it's consented for that use case, whether or not data leakage is a potential outcome. So, understanding the provenance of the data being used, either to use a system you purchase or to build a new model within your organization, is super important. You need to know where it came from. Under what conditions was it collected? Are you allowed to use it for this use case?

These are super normal questions that a privacy impact assessment or a DPIA would ask—and so a lot of privacy functionaries within a business are being asked to take a look at AI governance as part of their roles and responsibilities, and that seems to be appropriate.

Also, a really important consideration for businesses, one they can take a stab at right away, is updating their third-party risk assessment methodologies. Asking a few extra questions as part of due diligence when you're buying products or services is a good idea. Understanding what it means to be able to use information for the operational improvement of the products and services. Does that mean leveraging everything I put into Zoom to train its models? Those terms used to be kind of questionable, but now they've become very questionable. If you're a SaaS provider, software as a service provider, or an infrastructure as a service or a platform as a service provider: what data are you collecting, and how is it being leveraged for the operational improvement of that product or service? These are questions you should ask relative to the AI, its development, and whether or not your data is being used for that purpose. Ask the questions, and maybe it's OK. Maybe that's not a problem. But it's something you should be aware of, and perhaps something you should assess as a separate risk.

So: improving your third-party risk assessment questionnaires; identifying the criteria by which you would evaluate whether the answers to those questions represent more risk; and then just assigning the accountability to someone.
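As a sketch of what those "few extra questions" could look like in practice (the wording, weights, and scoring rule below are illustrative assumptions, not an established standard or anything Jutta prescribes), a team might extend its vendor questionnaire with an AI section like this:

```python
# Hypothetical additions to a third-party risk questionnaire; the questions
# and weights are made up for illustration only.
AI_DUE_DILIGENCE = [
    {"q": "Is customer data used to train or improve your models?",            "risk_if": "yes",     "weight": 3},
    {"q": "Can we opt out of 'operational improvement' / model training use?",  "risk_if": "no",      "weight": 2},
    {"q": "Where did your training data come from, and was it consented?",      "risk_if": "unknown", "weight": 3},
    {"q": "Do you monitor deployed models for bias and accuracy drift?",        "risk_if": "no",      "weight": 2},
]

def score_vendor(answers):
    """answers: dict mapping question text to 'yes' / 'no' / 'unknown'."""
    # Sum the weights of every question whose answer matches its risky value.
    return sum(item["weight"] for item in AI_DUE_DILIGENCE
               if answers.get(item["q"], "unknown") == item["risk_if"])

# Example: a vendor that trains on customer data and does no drift monitoring.
answers = {AI_DUE_DILIGENCE[0]["q"]: "yes", AI_DUE_DILIGENCE[3]["q"]: "no"}
print(score_vendor(answers))  # higher score -> escalate to whoever owns the accountability
```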

ZACH: Let's take a slight turn and hit on something that is very much related to our entire risk and compliance philosophy: and that is, the human. In a discussion about artificial intelligence, we've actually talked quite a bit already about human intelligence. About the role of the human in designing and validating AI use cases . . . in challenging its output . . . and in setting the strategy around its use. But let's dive deeper. You mentioned earlier the non-profit you co-founded; it's called Humane Intelligence. Using that as a jumping-off point, what is your philosophy on the humanity of artificial intelligence?

JUTTA: So the real brain trust for the nonprofit is Dr. Rumman Chowdhury. So, I'm gonna . . . I'm gonna channel my inner Rumman for this answer—and suggest that if you really want to understand this kind of field in science, you should talk with her.

But Humane Intelligence kind of came from a project that we worked on together at Twitter, where we had an algorithm that was used for image cropping on the platform. So, when you uploaded a picture to your Twitter account, it would crop the image using an algorithm that was called—that was based on—a saliency evaluation. So, saliency means what's important in a picture. And it was trained, and it was an open-source product, and it was the same saliency model that was used for all kinds of use cases, from surveillance systems to image cropping on Twitter.

And it was trained on a college campus by CS (computer science) students, who typically are guys of a certain age, from particular kinds of backgrounds. And it was an involuntarily trained model, meaning that they looked at eyeballs—and when they popped a picture up, they measured where the eyeball looked. And not surprisingly, when you crop an image using that saliency model, and it's a picture of a woman, it cropped us from neck to navel, because that's where the teenage boys in CS would look first at a picture of a woman. Was the model particularly gender biased? It wasn't intended to be, but it was a model that was trained without human intentionality, and as a consequence, it did these unsavory things.

So, we had retired that algorithm; and [then] we had a competition to see "how else was that algorithm biased?" And we ran that competition publicly, and we had a couple dozen submissions. And it turns out that that algorithm didn't just crop out women. It also cropped out Black people. It cropped out anybody with white hair. It cropped out people with headdresses. It cropped out anybody in a wheelchair. It cropped out military members because of camouflage. Why did we even use this algorithm in the first place? And the answer was, well, it saved real estate on the feed, because it cropped every image to be only so high instead of being however tall the picture originally was. But mostly it was just because it seemed cool, and people would like it. But in our analysis, we also identified that people really hated that we auto-cropped their images, and they wanted human control over cropping an image. If we needed to save real estate, people who posted a picture would rather have been able to move the box up and down and say, "this is where I want you to crop it for my feed." So, it kind of came out that humans are sometimes the best agents for making decisions. So Humane Intelligence kind of came from that idea: that humans should have a say and be able to provide direct feedback to model developers about those automated decision systems—those ML models, that AI, however you want to define those things—and the effect that they have on them.

HUI: So, one of the human considerations is: how do organizations find the right humans to help them build a data governance structure? Some thoughts on, you know, what types of skill sets and backgrounds should we be looking for, and where should they be placed in the organization? Would love to have your high-level thoughts on that.

JUTTA: Sure. You know, the simple answer is whoever's going to care the most about it, right? So honestly, I have seen AI governance fall under data science teams. I've seen it fall under policy teams. I've seen it fall under legal counsel. I've seen it fall under a dedicated engineering leader. What matters most is that that person is going to be invested in its outcome. Is going to be connected enough within the business to be able to create collaboration across the many different dimensions of your business. And, for the most part, that they have an intellectual curiosity. We started this podcast by talking about intellectual curiosity. Nobody's going to have all of that knowledge on board. I certainly didn't, right? But I had to be intellectually curious enough to say, "Wow, what does it mean for marketing to use AI in a way that's super effective?" I'm going to go study it. I'm going to go learn it. I'm going to go figure this out. And it might mean that you need a foundation in compliance and learn the technology. Or it might be that you have a foundation in technology and will learn the regulatory considerations or the ethical considerations. But really, you're crossing so many disciplines that you just need to find somebody who cares a lot; is positioned well enough that they can create a collaborative, committee-based approach to these sorts of things—because it's going to take a village; and, lastly, cares about it enough to make it their full-time job, or at least a full-time additional duty as assigned.

ZACH: And Jutta, what about on the point about where they should be placed within an organization? What are your thoughts on that?

JUTTA: You know, I've seen this work and not work in different ways and places. Influence is the most important consideration. If the role is buried inside of an organization, underneath a data science team that's under a product team that's under a CIO that's under an XYZ VP, it's going to be very hard for them to have enough influence to be able to instill the desire and need to change within an organization.

Compliance, in my opinion, should fall under whoever is most closely aligned to the core of the business. So, if you're an industrial safety business, you're probably gonna have [an] industrial safety lead as your compliance leader. And you're gonna wanna make sure that the reporting structure is such that that industrial safety person is  reporting up to whoever is the penultimate authority for that part of the business. If you're a software company, [it] probably needs to roll up under who has the decision rights for changing your product road map. If you're a financial services company, maybe they're reporting up to a CFO because financial bottom line is the most critical risk that you're managing at that business. I find that if you align to whatever is core to the offering that you're making, then you're probably aligned for effective change.

ZACH: I really, really like that. It is in some ways very common sense, although I think in a lot of ways people who are listening to that may also find it to be fairly provocative.  At the end of the day, I think what you're saying is that the person who is leading compliance efforts needs to have a deep, deep, deep understanding of the business.

HUI: And influence.

ZACH: And influence. So, now I want to talk about a topic that’s really important to us. It’s in the name—our name; and that is “culture.” How do we shape a culture of data ethics and integrity? Now, I know that is a big topic, so it would be great, maybe, if you could share the really beautiful story that you shared with me in advance of our discussion today about humanizing data in the healthcare context.

JUTTA: Yeah, you know, I pivoted from healthcare compliance as a full-time job, which included clinical compliance and claims compliance, and, you know, Medicare fraud and "my food arrived at my hospital bed cold" and wrong-site surgery stuff. I pivoted from the full gamut of compliance back to technology in 2016. I moved to California and joined Google. And I had spent about 10 years away from Silicon Valley at that point—a little less than 10 years in the healthcare world. And the whole industry had shifted and changed. It was much bigger, and it was much faster, and there was just . . . it was just more intelligent and there was so much more money involved. And I joined the company, and I was in immediate culture shock, because the difference between Google and my health system was a matter of zeros. We had 10 billion user accounts at Google at the time, compared to 2 million patients at my health system. And so, there's a couple extra zeros between those two things.

But the intimacy of the data that we had in healthcare was 1,000 times more intimate than the data that we had at Google. And so, for me, there wasn't a lot of difference between these two businesses from a human perspective, right? We were a smaller number of people, but much more intimate data, compared to much larger numbers—and we had to build controls and governance and capability at such a hugely massive scale at Google that every change was expensive, and every change was hard, and, in most cases, it was inventing a new way of approaching the controls completely. So, what made absolute sense in healthcare couldn't happen at Google because of scale, not because of cost, but because of scale.

So, it was very humbling.  It was very hard, and it was a big culture shock because the proliferation of data was so significant that nobody really honored the data the way that I was used to in healthcare. It didn't come from sick children. It just like populated itself and repopulated itself every day because everybody was using these platforms at such scale. So, when we went to go build technology for healthcare delivery at Google, I was meeting with people who had literally built things like search; and they expected data to be as available. It wasn't a scarce resource to them, ever. And now we were working with clinical trial data, because that was the only way to get what we needed in order to train these models—was to go through the proper channels through HIPAA and clinical trial processes to collect enough of this intimate data to train models.

And at one point, we were looking at . . . it was very hard for me to get records. I was a data hunter for them in some regard, because I had the connections in health; and we had to build, kind of, almost a separate infrastructure in order for us to meet HIPAA obligations for that data. And so, it was kind of a contained space; and it was hard to get the data [and] for people to trust the brand Google with their healthcare data. And so, every record was really important to me. And at one point somebody said, well, this is just throwaway data, we'll get more data. And I got really offended by that statement. Throwaway data.

These are full clinical records for cancer patients. And I'm like, throwaway data? It was so offensive, and it took me a hot minute to figure out why it was so offensive to me. And I remember having a conversation with one of these engineering leaders, and I said, "When you say throwaway data, I'm truly offended." And he's like, "Why—why are you offended by us throwing away some of this data?" I said, "Well, have you ever had to sit with a patient and their family—a patient who's likely to die well before your solution comes to bear, and ask them for the privilege of having access to their data so that we could possibly, maybe, find a better clinical process and/or cure for somebody in the future? This isn't gonna help you. You will probably die. But if you'll give us your data, maybe we can help someone else."

And he was just shocked that that's what it took to get data, because, from a culture perspective, data was so prolific at that business. It was a profound moment for us both, because they didn't realize that it's actually harder to donate your data to science than it is to donate your body to science. For us, in that moment, that was a really important and profound connection between humans and their data.

HUI: That is such an incredible story. I can't tell you all the images and thoughts that ran through my mind as I heard it. One of the phrases that popped up for me is that we need to be mindful that there's a human cost to data.

JUTTA: There is—and for the collection of it, right? Some human, such as a clinical trial administrator, has to have every one of those conversations, too. That's a full-time job: data collector, labeler, cleaner.

HUI: Yep.

JUTTA: So yeah, there is a human connection. And there are many, many hands and many emotional events that result in that data being available for whatever your training purpose is.

HUI: That is incredible.

ZACH: All right, Jutta, I've been looking forward to this for 10 months. We have been on hiatus, and we have not had a new guest until today, and so it is the return of The Better Way? questionnaire. Hui, I'll take the odds. You can take the evens. How's that?

HUI: Sounds good.

ZACH: All right, Jutta, question number one: you get to choose from one of two. So, you can answer, first: if you could wake up tomorrow having gained any one quality or ability, what would it be? Or you could answer: is there a quality about yourself that you're currently working to improve, and if so, what?

JUTTA: So, I would choose the latter: the quality about yourself that you're currently working to improve. For me, that is my active listening skills. I've been reading a lot about it. I've been reading a lot about how to have conversations where my part of the conversation ends with more question marks than periods. To be more inquisitive and a little less statement oriented.

ZACH: I love that. It's great.

HUI: That is so . . . The second question is also a choose-one-out-of-two question. You can answer either: who is your favorite mentor, or who do you wish you could be mentored by?

JUTTA: I have had mentors and I've had sponsors, and I think that there's a big difference between the two. Mentors are great advisors. Sponsors are people who put their own political capital on the line for your benefit, and to help you improve in your life and career. I had a mentor named Mac McMillan, through a company called CynergisTek, early in my healthcare career. He was profoundly impactful and important to me in my career development. And I say that with a shout-out, because without him, I don't know that I would have been successful in my technical pivot. And I wouldn't be where I am today. He actively worked for my benefit and helped me find roles and gave me opportunities to speak on a stage with him and leveraged his reputation to help me. In fact, both of the people I would consider sponsors in my life were men. And I've had great female leaders who have been my mentors and wonderful advocates for me, but my two sponsors, the people who helped me the most, were both men.

ZACH: Amazing. Well, hi Mac. Hopefully you're listening.

JUTTA: OK.

ZACH: Question #3. What is the best job, paid or unpaid, that you've ever had?

JUTTA: Oh well, hands down, when I was in college, I had every minimum wage job there was.  But one of them was as a coffee barista at a place called Cyberspace, which was a virtual reality entertainment center where you had to stand at a pod and shoot aliens. And it was the best job ever because their hazing ritual was to make us drink everything on the menu—and then lock us in the game ‘til we won. And I'll tell you, it's really hard to shoot aliens when you're twitching from 15 shots of this stuff though. Great job. Love that job.

HUI: Wow, that sounds . . . that is something.

ZACH: That's something.

HUI: Next question is what is your favorite thing to do? Other than shooting aliens?

JUTTA: Oh well, you know, I'm not a big gamer these days, but that was . . . those were good days. I am working on my left brain. I think that's the right framing for the creative juices. I've been ones and zeros for a lot of my career; and later in life, I decided that I wanted to really hone some of my creative juices. So, I've been learning to paint and learning to sew, and I'm working on a mosaic right now. So, I've been doing a lot of art, as both stress management and also to exercise more of my brain than I have in the past.

HUI: So cool.

ZACH: Terrific, I love that. All right. Question #5. What is your favorite place?

JUTTA: I have two beagles. I have a super comfortable, very expensive bed; and every morning, my husband will bring me a cup of coffee and set it on the nightstand, and the dogs snuggle up while I read my LinkedIn news and/or just watch YouTube Shorts. And so that is my favorite place right now—to just be snuggled up in bed with my two dogs, with a warm cup of coffee that my husband has graciously left for me so that I can start my day off right.

ZACH: I love everything about that except for the reading LinkedIn.

JUTTA: LinkedIn is fascinating.

ZACH: It is. It is that. True.

HUI: What makes you proud?

JUTTA: When I can see knowledge transferred to another human being, and I see their eyes light up and their understanding really come to fruition—and for them to ask a really great follow-up question that demonstrates that they understood what I was trying to communicate, no matter the topic. When I see understanding in somebody else based on something I've shared, whether it's personal or professional is immaterial.

HUI: That's beautiful.

HUI: That's a good one. All right, we go from the beauty of that to the mundane, which is: what e-mail sign-off do you use most frequently?

JUTTA: "Warmest regards" is my go-to; but I will say that my first career in government meant it was always V/R. Very respectfully. So, for the longest time it was always V/R. But these days it's "warmest regards."

HUI: Interesting. What trend in your field is most overrated in your opinion?

JUTTA: Oh, certainly AI as an existential risk. I think it's insanity. So, you know, risk is equal to probability times impact, right? So when you throw out an impact like "the solar system collapses," it doesn't matter how minute the probability, it will always rise as a huge risk in your risk assessment. And that's why it gets so much play. But those outcomes have an astronomically low probability, so that one just kills me. But it doesn't matter, 'cause the impact is so significant that it will always end up as a high risk. Boo.
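To see why that formula behaves the way Jutta describes, here is a tiny worked example (the probabilities and impact scores are made up purely for illustration): an astronomically small probability multiplied by a near-infinite impact still produces a score that tops a naive risk register.

```python
# Illustrative only: invented probabilities and impact scores.
risks = {
    "vendor uses our data to train its models": (0.30, 1_000),   # likely, moderate impact
    "biased model denies qualified applicants": (0.10, 10_000),  # plausible, serious impact
    "AI siphons the hydrogen from the sun":     (1e-12, 1e18),   # absurdly unlikely, "infinite" impact
}

# Rank by probability x impact, highest first.
for name, (probability, impact) in sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name}: score = {probability * impact:,.0f}")

# The doomsday scenario still tops the list, which is the point about why
# existential risk dominates naive probability-times-impact rankings.
```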

ZACH: Yes, it's like whenever anything bad happens, we always say it's not the end of the world . . . until it is, I guess. All right. And the final question for you: what word would you use to describe your day so far?

JUTTA: I would have to say "chaotic." Every day, my inbox is full of really interesting and different problems . . . I don't really have an ability to plan a day, so it's a constant effort in crisis management of some flavor or another, which gives me energy. I love change and I love that sort of thing, and some people hate that sort of thing. For me, that's something I thrive in, but it's always a bit of organized chaos, which is two words, not one. And I apologize for that.

ZACH: We’ll take it. Jutta, thank you so much for joining us on The Better Way? We've thoroughly enjoyed having you, and hopefully this isn't the last time we see you. We'd love to have you back.

JUTTA: And it was my pleasure. Thank you.

HUI: Thank you so much.

ZACH: And thank you all for tuning in to The Better Way? Podcast. For more information about this or anything else that's happening with CDE Advisors, visit our website at www.CDEAdvisors.com, where you can also check out the Better Way blog. And please like and subscribe to this series on Apple Podcasts or Spotify. And, finally, if you have thoughts about what we talked about today, the work we do here at CDE, or just have ideas for Better Ways we should explore, please don't hesitate to reach out—we'd love to hear from you. Thanks again for listening.
