Hi, I’m Greg Schaffer, and welcome to the Virtual CISO Moment. I’m so happy to have Peter Gregory back, a good friend of mine. He’s been on before. He’s a best-selling author; most likely, if you’ve been in cybersecurity for any length of time, you have encountered at least one of his books. He’s an educator, a servant leader, and a keynote speaker who has spent decades advising CISOs, CIOs, and boardrooms on security, privacy, and risk management. And that means he’s as seasoned as I am, too, which is another word for saying old. He’s published over fifty books and training courses, and he has served as a strategic advisor and contributed to major cybersecurity education and certification programs, including Accolade, I hope I pronounced that right, and the University of Washington. Peter, thank you so much for joining me today.
Peter Gregory:
Greg, it’s great to see you. Thanks for having me on and I was just rocking out to your intro music there.
Greg:
You know, sometimes people ask me, is that you? And I’m like, no, no, I can’t play like that. I can do cowboy chords.
Peter:
You’d be in a different line of work if that was you, right?
Greg:
Well, yes, maybe. But, you know, it’s kind of interesting how there’s a crossover, because, I’m missing his name for a moment, but isn’t the bassist of Queen [guitarist Brian May, in fact] also a Ph.D. in astrophysics?
Peter:
You know, I do vaguely remember something about that. Now, there’s a good friend of mine, Mike Hamilton, who was the CISO for the city of Seattle for many years. And he’s a drummer in a punk band and has been for like decades.
Greg:
There is definitely a crossover with regards to that. And I guess maybe that’s a good way of spreading out the risk, if you will, of your career. If you start out in different areas, it’s like, well, which one is actually going to pay and which one is going to be my hobby? So obviously, I did not take the rock star route. I kept my day job. I actually did a demo tape when I first moved to Tennessee, not Nashville, Knoxville. I wrote some songs and recorded them, and the name of that demo tape is Keeping My Day Job. And I’m glad I did. One of these days I’ll have to digitize it and publish it, just for the heck of it. I mean, it’s horrible. It’s what you would expect from someone who didn’t know what they were doing.
Well, you know, I’ve got a lot of things I want to chat about. First of all, I appreciate you coming on and joining. I’ve been getting away from my standard format on occasion, the usual tell-me-about-yourself and then we get into some other stuff. I want to dive straight in and talk about AI governance, because that’s such a huge issue going on right now, and it seems like AI is being blamed for everything. First of all, and this was not in our show notes as we prepped, did you happen to catch on LinkedIn recently, or somewhere else, that one paper from MIT that was claiming that, like, eighty percent of all malware was AI related? Did you catch that yet?
Peter:
I don’t think I saw that one, Greg. I mean, I do a lot of reading and, you know, in our business, if we aren’t spending a few hours every week reading, you know, the latest tools, techniques, standards, laws, etc. I mean, we fall behind and become irrelevant in an instant. I haven’t seen that one.
Greg:
You’ll see it, because it’s caused quite a stir in the industry. Mainly because, as one late, great radio host here in the Nashville area, Phil Valentine, would have said, it’s bovine scatology. It’s basically claiming AI where there is no AI, way overusing it as marketing. But we do have to think about governance, and we do have to think about how we’re using it. And so, thinking about organizations that are using AI, I like to think that even if an organization says they’re not using AI, they are. I use AI to help prep for this podcast, for example. It helps me assemble notes. In fact, now I’m going off on a tangent like I always do: this was the first time using ChatGPT where it said, hey, would you like to put that in the show note prep for your podcast? I’m like, oh, I didn’t know you did show note preps. I just kind of did my own. So it’s got the whole segment thing laid out. I’m like, man, I could be a professional podcaster. This could actually be my next career. I don’t know. We’ll see.
Peter:
I use ChatGPT a lot too, Greg, for a lot of different tasks related to writing and creating educational content and so forth. And anymore, just based on what it does well and what it doesn’t do well, I liken ChatGPT and the others to a really eager, kind of brown-nosing research assistant who’s actually not very smart, but very self-confident. You get a lot of...
Greg:
And it does like to be a people pleaser, doesn’t it? Whenever you do a thing, at the end it comes up, and maybe this is a setting you can change, with, hey, would you like me to do this for you as well? And sometimes I’m like, no, just leave me alone. But sometimes it comes up with things where I’m like, yeah, sure, go ahead. It’s almost like people pleasing, where it wants to be your buddy. And I don’t know, that kind of scares me.
Peter:
Well, it says, oh, that’s very insightful, or, that’s a great question. And I saw a comic the other day where a hospital patient is lying in bed and the AI-powered robot surgeon is standing over him. The patient had an appendectomy, apparently, but the scar was on the left side instead of the right side. And the only thing you hear is the AI surgeon going, oh, that’s very insightful. I must have operated on the wrong side. Would you like me to try again?
Greg:
I mean, for things that are really consequential, I don’t know. I’m going to fact check the heck out of it, which in some cases may end up taking just as long as if I had just figured out whatever it is, you know, on my own.
Peter:
Well, and I often point out, too, that I think the best way to use AI is as a prompt for you to do better. It’s a tool. Don’t let it do the work for you; let it get you to think a little bit. At least, I have found that having discussions back and forth has helped me think through problems. Ultimately, it’s almost like counseling: the counselor can’t solve your problems, but the counselor can help you solve your own problems.
Greg:
Yes. Maybe, I don’t know, maybe AI should go into counseling. So we talk about organizations using it. Obviously, they want to use it safely, they want to use it effectively, and they want to use it legally. But do they really? Or is it like the internet in the late nineties, where the executives were running around screaming, we’ve got to get online, we’ve got to build a website, we’ve got to do all this stuff, without really thinking through whether it’s even financially or operationally viable? And yet it was the gold rush. And you were around, Peter, and I was; that was that old thing I was talking about earlier. And then, you know, ten years later it was the cloud, and now it’s AI.
Peter:
We see companies rushing headlong into implementing this new technology without guardrails, without governance. They just want to do something because they feel like they’ll be at a competitive disadvantage if they don’t. One of the main drivers is they don’t want to get left behind.
Greg:
And a huge good point there. I’m thinking about a client I was talking with just recently, where it was almost the same thing. They were saying, how are we going to leverage this tool? And the question that I posed, and that the risk management team also posed, was, okay, what problem are we trying to solve here? Don’t bring in a technology and look for a problem for it to solve. Look at what you’re doing. Just because the technology’s there, maybe the way you’re doing it is fine, maybe you can do it better, but don’t blindly say, oh, we’ve got to go jump on AI. Because to your point, you’re right: back in the nineties it was all about, we have to build a website. Why? Because everybody else is.
Peter:
Yeah. I mean, otherwise, FOMO, fear of missing out. It’s like, we’re missing out on profits and business and gold and silver. And it’s like, well, can you tell me some more specifics? No. Is your business running well? Yes. What’s the website going to solve? We’ll have a website; we’ll figure it out when we get there. It’s like, you have to pass it to know what’s in it, right? There are all kinds of phrases for: we don’t know, but we’re so blindly and emotionally invested in this thing that we are determined to do this thing.
There’s another force that happens also, Greg. Having been in IT and cybersecurity my entire career, since the late seventies, I’ve seen considerable pressure from within organizations to implement new technologies. With the internet and the cloud, for instance, IT departments, their engineers and management and architects, wanted to get involved in those technologies to learn them, to stay relevant professionally. Whether the business they worked for needed it or not, they felt, as individual professionals: I need to learn this thing or I’m going to be irrelevant. It’s the same thing with AI. People in IT, and actually across the business, as individuals, especially those who read, are thinking: AI may not replace my job, but someone with AI skills might replace me if I don’t have them. So there’s this grassroots pressure from people who want to get experience with AI. And unless the organization is just airtight, they’re going to start using it for things even if the organization hasn’t approved it. And if they can’t do it through work, they’ll do it at home and try to solve work problems at home. So yeah, it’s happening. Shadow AI is probably the biggest shadow technology we’ve ever seen.
Greg:
And that’s very valid to do. I’m thinking back to some of the other things we would try to implement and learn about even if the organization wasn’t doing them. The first thing that comes to mind is building a home network twenty, twenty-five, thirty years ago, learning about routing and switching and all that. Well, I can’t play on my company’s network, but I can learn it at home. And I can see AI being very relevant as a topic folks are going to need in order to stay competitive in the industry. Just like when cloud first came about: why do we care about cloud? It’s just a computer out there. Well, no, you need to figure out some nuances. But let’s just say an organization is doing it the right way. They say, we have this problem in this process, call it Process A, only because I haven’t thought this thing through totally. We think we can gain some efficiencies if we apply some AI to what’s a very manual process for looking up information. So they’ve gotten through the first gate: they’re looking at AI as a tool for solving an existing problem. Yay on that. What do they need to do to make sure they’re going to implement this thing in the correct way, so that they’re not exposing their information? You mentioned shadow AI. Just like shadow IT, it’s one of the bigger risks in organizations, because if the organization is not going to give you the tools and the resources necessary, people are going to find a way to do their jobs. What are your thoughts on that?
Peter:
Well, Greg, implementing an AI-enabled system or an AI model for business really calls for having governance in place, mainly a modified systems development lifecycle that includes extra AI-specific safeguards. The use of AI is different from all the other kinds of business applications and systems we have seen in the past, in that AI systems consume data differently. The way they consume data varies, and it presents a few challenges from a privacy perspective and an intellectual property perspective. So there’s the use of data: whether they can and should use it, whether it’s the right data, and whether it is sufficiently accurate, complete, and so on. The other part of AI that is novel compared to traditional business applications is that it’s not always deterministic. It is sometimes indeterminate, or somewhere in between the two. It can sometimes be a challenge to determine: why did my AI say this instead of that? Why did it decide this? Why did it say yes instead of no for something like a credit or loan application, or employee screening?
Greg:
But you can ask that, right? You can ask, well, how did you come up with this answer?
Peter:
Well, you can, and an LLM like ChatGPT or Gemini or Grok and all the rest is fairly good at that, in that they are what we call explainable. But there are some forms of AI models, like neural networks, that are not explainable. They really are a black box. So an organization that wants to use AI to improve some system or process needs to tread somewhat carefully to ensure they pick the right kind of model, one that gives them the level of explainability they need, especially if the AI is making consequential decisions, or at least recommending them. There are a lot of different areas in which organizations can do that, and we’re hearing that the results are pretty good. In radiology, for instance, it’s been shown that AI can be more precise in diagnosing certain medical conditions than a human radiologist can. So the POCs have been there, and we see the potential of AI being really great in certain areas. But again, for an AI that’s making decisions or recommendations or diagnoses and similar things, it’s got to be explainable. Back on the privacy part, sorry, Greg: the wild card here is that more and more privacy laws include this right to be forgotten. GDPR kind of led the charge on that, and it’s not going to end in Europe; other jurisdictions are going to do the same thing. But when you train an AI model on, say, your customer or employee data, then if a data subject comes up and says, I want you to forget about me, you can’t just tell the AI to unlearn that one record. There isn’t an unlearn function in AI models. In Borg terms, the data has been assimilated, and you cannot unassimilate it. It’s part of the collective.
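[Editor’s note: the explainability contrast described above can be made concrete with a toy sketch. This is a hypothetical loan-scoring example, not anything from the episode; the weights, feature names, and threshold are invented. A linear model is “explainable” in the sense that its score decomposes into one additive contribution per feature, which is exactly the property a deep neural network lacks.]

```python
# Minimal sketch of an "explainable" model: a linear score whose decision
# decomposes into per-feature contributions. All values are hypothetical.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}  # assumed weights
BIAS = -0.1
THRESHOLD = 0.0

def score(applicant: dict) -> float:
    """Linear score: bias plus a weighted sum of the applicant's features."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contributions; together with the bias they sum to the score."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 1.0}
s = score(applicant)
contributions = explain(applicant)
decision = "approve" if s > THRESHOLD else "decline"
# Here the high debt_ratio contribution (-0.63) is what drives the decline,
# which is the kind of answer a black-box model cannot give directly.
```

A neural network produces only the final score; there is no equivalent per-feature breakdown to hand a regulator or a declined applicant, which is why model choice matters when decisions are consequential.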
Greg:
Yes.
Peter:
So that’s one of the stickiest challenges. Now, I think the smart people in the world are going to figure this out, and they may eventually come up with a model where you can unlearn a single record or groups of records. Because if you can’t do that, the only other alternative is to retrain the model without the records of the people who say, take me out of your system.
Greg:
But you lose everything then from before, right?
Peter:
Well, yeah, yeah.
Greg:
And so every time somebody wants to have their record removed, you got to retrain the system?
Peter:
Well, it depends. You know, it depends on what the laws say. Now, maybe what you can do is say, okay, we’ll unlearn you at the next training, which is like, you know, next month or next quarter. So maybe there’s some delay. I mean, there may be some different ways of doing this. Or the alternative might be to train the system with anonymized or pseudonymized data so that when they want to be forgotten, well, their PII isn’t identifiable in the AI system to begin with. So that takes you way back to the design phase where you’ve got to really figure out, okay, if I’m going to be subject to a law today or in the future that says I need to forget about certain people, then maybe I need to figure out how to not get the PII, the identifiable PII for people into the AI system to begin with.
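[Editor’s note: a rough sketch of the pseudonymize-at-design-time idea Peter describes: replace direct identifiers with keyed tokens before data reaches the training pipeline, and keep the token-to-name mapping in a separate store. Honoring a right-to-be-forgotten request then means dropping the mapping entry rather than unlearning the model. All names, keys, and fields here are hypothetical, and this alone does not guarantee compliance with any particular law.]

```python
# Pseudonymization sketch: training data only ever sees a keyed HMAC token,
# never the direct identifier. The token-to-name mapping lives elsewhere.
import hmac
import hashlib

SECRET_KEY = b"rotate-me"  # hypothetical key; keep it outside the training pipeline

def pseudonymize(identifier: str) -> str:
    """Deterministic keyed token standing in for a direct identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

mapping = {}  # token -> real identifier, stored apart from training data

def prepare_record(record: dict) -> dict:
    """Strip PII before the record enters the training set."""
    token = pseudonymize(record["name"])
    mapping[token] = record["name"]
    return {"subject": token, "purchases": record["purchases"]}

def forget(name: str) -> None:
    """Right-to-be-forgotten request: drop the link, leaving an unlinkable token."""
    mapping.pop(pseudonymize(name), None)

training_row = prepare_record({"name": "Ada Lovelace", "purchases": 3})
forget("Ada Lovelace")
# The model was trained on the token only; after forget(), nothing ties it back.
```

The design choice is the point Peter makes: this only works if it is decided at the design phase, before any identifiable data has been assimilated into a model.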
Greg:
Well, and I think that’s important in order to make sure you’re complying with, and this is Schaffer’s Law of Information Entropy, the idea that it will always become more complex. You can’t go back to a simpler state; there’s more energy involved. And by the way, I just made that up right on the fly. So, everybody listening: information entropy. That’s my law right now, Schaffer’s Law of Information Entropy. So you’ve got to figure out beforehand what to do with the data so that you don’t have to worry about trying to unspool it later. Because I don’t put any faith in being able to do some sort of relearn. It’s out there. It’s done. It’s not like you can do a DoD wipe on a hard drive, because a hard drive is self-contained. It’s right there; you know it, you see it, you can feel it. With this stuff, you have no idea where it is. And I know that sounds a little bit paranoid, and maybe it’s partially because we’re paid to be paranoid, but I think it’s more realistic than paranoid. Instead of assuming that you can turn back time (thank you, Cher), you can’t. You just deal with where you’re at and build on the mess somehow, or make sure you know there’s going to be mess to begin with. So let’s just assume there’s going to be mess. How do we live in a world with that mess?
Peter:
Great question. And speaking of paranoid, so you got tinfoil under that hat, Greg, is that why you wear it?
Greg:
No, I don’t. I’m just having a bad hair day, but yeah.
Peter:
Sure, sure. I’m in a Faraday cage, so I’m okay.
Greg:
Oh, well then how is this coming through? Oh, you’re on a wired network then. Okay.
Peter:
Yeah, there you go. Okay. This is one of the big challenges. And I think organizations that have an effective data governance regime in place are going to fare much better when they implement AI systems. Having effective data governance means you know what data you have, you know where it is, you know how it’s used, you have documented justification for its use, you have retention schedules, data classification and protection schemes, and so forth. You’re really paying attention to your data. Organizations in that state, that are more mature in their data management and data governance, are going to do much better in AI, because they’re already going to have that other set of guardrails in place, the one that gives them confidence that any data they use to train the system is data they are using properly, with documented justifications and approvals. But as you know, most organizations aren’t there with data governance. In my years of consulting, I found few companies that had really good data governance in place. Not just the piece of paper, but the controls. And I came across an instance, this wasn’t mine, I read it in a LinkedIn post, where the basic gist was that the IT group was able to come up with about eighty percent of their asset list. And the question from them and from senior management was, isn’t eighty percent good enough? And it’s like, no, it really isn’t. You’ve got to know where everything is. Ninety-nine percent isn’t good enough. It only takes one unmanaged, unknown thing on your network. What was the casino? Was it MGM or Caesars that was hacked through the fish tank system? You only need one weak link to get inside a valuable organization.
Greg:
And I think that’s something that’s missing when people are thinking about this, and I don’t think AI can do it either.
That’s the ability to have imagination, to think beyond what the facts are, what the information proclaims. There is not, and I don’t think there ever will be, an AI that has that intuition. You were talking beforehand, when you were using the medical example, I think it was reading an X-ray or an oncology report or something like that. I put a lot more stock in someone who goes by their gut feeling, who’s been around for a while. It’s not tangible, they can’t put their finger on why they feel this way, but: I think, Greg, actually, you might want to check this out instead. I don’t think AI is ever going to get there. I can’t see how.
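[Editor’s note: the eighty-percent asset list mentioned a moment ago is easy to make concrete. Managed-asset coverage is just a set reconciliation between what network discovery actually sees and what the inventory claims; the hostnames below are invented for illustration.]

```python
# Toy reconciliation of a managed-asset inventory against discovery results.
# The one unmanaged device is exactly the "weak link" the conversation warns about.
inventory = {"web-01", "web-02", "db-01", "mail-01"}  # what IT says it manages
discovered = {"web-01", "web-02", "db-01", "mail-01",
              "aquarium-thermostat"}                   # what a network scan finds

unmanaged = discovered - inventory   # on the network but not in the inventory
stale = inventory - discovered       # in the inventory but no longer seen
coverage = len(inventory & discovered) / len(discovered)
# coverage is 0.8 here: an "eighty percent" asset list, with one unknown device.
```

Real asset discovery is far messier than a set difference, but the governance question it answers is the same: what is on the network that nobody is managing?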
Peter:
Right, because AI is just a mirror. It literally reflects whatever is put in. You don’t have that imagination coming out.
Greg:
But all this idea about governance, I sense, is one of your fifty books on AI, or do you have one coming out?
Peter:
One coming out. I don’t know the schedule yet, but I completed a training course on the AIGP certification. That’s AI Governance Professional, an IAPP certification. That organization is well known for its privacy certifications, CIPM and, I can’t remember the other one.
Greg:
What was the name of the AI-specific certification, the one that you just completed?
Peter:
AIGP. It was the first AI governance certification released to the market. I think there might be one or two others now; I think ISACA has an AI cert, and ISC2 may have one or be building one, and others are coming. And there are some different angles: there’s AI engineering, and then there’s AI governance, and I’ve been in security governance for twenty years, so that was really natural for me. So the training course is out; it’s on O’Reilly. And then I have a companion book, an AIGP study guide, to be published by Wiley. Just over the weekend I finished going through almost all the chapters on proofing, and there are just a couple of steps left: proofing the glossary and the study questions and so forth. So it’s almost over the finish line, but I don’t know when it will be available. Let’s see, it’s November; I’m hoping first or second week of January.
Greg:
So I’m looking forward to that. And that would not be the first book of yours that I’ve had. One of the ones that I’ve read and it helped me out a lot was about technical writing. So let’s just say that there’s someone out there listening to this podcast or watching the podcast that they’re like, I have a great idea. I want to do a study guide for AI. Oh wait, no, Peter’s already done that. I can’t do that, but I’ve got another great idea and I want to turn it into a book and they have no idea where to start. How can they go from an idea through a roadmap to a published technical work?
Peter:
Yeah, that’s a great question. As I became known as an author in the early two thousands, after I’d written two or three books, I started getting a lot of people being my friend, or I was their friend, because they wanted to be published. And so I mentored a lot of people, and I helped several people get published over the years. But I wanted to write a book that explained the whole process end to end, and I was literally so busy writing books that I didn’t have time to write that book. Three years ago, I finally did get it done. There it is: The Art of Writing Technical Books. It’s not a very big book, about a hundred and fifty pages. I keep it concise, and I describe the process from idea to publishing and promoting, even writing subsequent editions, and all the steps in between: looking for a publisher, negotiating your contract, getting an agent, writing your draft, what tooling you need, how copy editing and tech editing and proofing work. So it’s really a deep dive, but a concise deep dive, on what the whole publishing process is about. It was mysterious to me when I started writing in the late nineties. I wish I would have had this book so that I could have better understood what the process of writing a book is all about: what is required of us, and what publishers and other people take care of.
Greg:
I think an important point to make is that whether you’re going the traditional route of publishing or you’re self-publishing, you need to ensure that the work that you’re putting together is of high quality. I think that those that fall into the self-publishing world, they don’t get that. They’re just like, well, it’s so easy for me to put this out on Kindle or Amazon or whatever. I’ve seen some really good self-published works out there. I went that route for a variety of reasons. Probably the primary one is that I keep the rights to everything and I don’t have anybody controlling me and I’m a control freak. I sacrifice some for that because I think you can get better exposure, particularly if you go up the chain with more major publishing houses. So there are pros and cons. But the one thing that’s consistent is you have to have good work because crap will never sell. So how can someone be sure that their work is good? What do they need to do, like editing-wise, to hire someone or something like that?
Peter:
Good point. And when I wrote this book, I was going the self-publishing route, because the page count was low enough that traditional publishers didn’t want to pick it up; the economics and the audience just weren’t there for them. I mean, I’m not Stephen King or Bruce Schneier, right? Publishers are in business to make money, and they sign up authors who they think are going to make money for them. So what is needed? You need to know how to write yourself, but it’s important to hire a copy editor. And in terms of the time sequence: after you write your rough draft, you need to have one or more subject matter experts go through your manuscript to ensure you’re explaining things correctly and in ways that prospective readers can understand. Whatever the book is about, it needs to be understandable, so that the audience can read it and go, okay, I understand this, and I can do something with it. That’s the tech reviewer, or tech editor. Then there is the copy editor, who handles the punctuation, grammar, formatting, and voice (first person, second person, third person), all of those things a lot of us don’t think about too much, plus the overuse of trite words and the repetition of things. All the bad habits that almost every writer has, including...
Greg:
One of the things that I hate to see, I shouldn’t say hate to see, but it’s a big red flag in technical books. Sorry for interrupting, but it’s the way overuse of phrases like “in this ever-changing digital world” or “in this ever-changing threat environment.” It’s okay to use that occasionally, but I’ve seen some works where it’s just repetitive. Repeatedly repetitive; see what I did there? It turns into filler words. The best works aren’t based on the number of words, but on the concepts you’re communicating, right?
Peter:
Right. It’s sort of like, what’s the fat content? Although a little bit of marbling is nice.
Greg:
I like that. I like that. My next book will be well marbled.
Peter:
Yeah. So the upshot is, Greg, that if you’re going to self-publish, you need to line up one or more subject matter experts and a copy editor and a proofer. And you may have to hire other people like illustrators if that’s something you can’t do well on your own. You need someone to design your cover if that’s not something that you can do on your own. And then if you’re self-publishing, then you’ve got to find an outlet, you know, like Kindle or Book Baby. And there are other avenues. And, you know, we live in a great age where, you know, literally anybody can become published. And there are certainly people who have self-published and then their self-published book was so successful that a traditional publishing house said, hey, I want that and we’ll make you even more money if you’ll sign with us. So I can see that happening too. Now, in my case, I was all the way down to the finish line on self-publishing. And then at the last minute, Waterside Publishing picked it up. So I had literally done all the work, but they still did the final steps. And and they made me a nice offer because I had done all the work up front and I literally had a camera ready manuscript. So it was easy for them.
Greg:
Well, it’s a good book for those of you who are interested. Again, the name of it is The Art of Writing Technical Books, and you can find it on Amazon. Or people can go to your website, PeterHGregory.com. I think I have that right, correct?
Peter:
Yep.
Greg:
Yep. Very, very easy to find. And Peter, I hate to say this because there’s so much more I want to talk about, but we’re over on time, and we’re just going to have to do another one of these segments, because this is fascinating stuff. I can’t believe thirty-five minutes have already gone by. It’s been a pleasure talking with you, brother. I always love having you on. You always bring some great stuff to everybody.
Peter:
Well, my mission is to improve the world by helping people better understand how to protect their information assets. And so any way that I can do that is a win.
Greg:
You and me both. Appreciate it. Thank you so much for joining us today, Peter. Appreciate it.
Peter:
All right. Thank you, Greg. And everybody, stay secure.
Greg:
Bye.