Greg: Hi, I’m Greg Schaffer, and welcome to the Virtual CISO Moment. Today, I’ve got Scott Foote with me. He is a cybersecurity executive, a board advisor, and a longtime industry thought leader with more than thirty-five years of experience spanning information security, risk management, privacy, audit, and now artificial intelligence governance, which I’m sure we’re going to get into as well. He currently serves as a chief AI officer, chief information security officer, chief risk officer, and data protection officer for many clients, where his focus is helping their organizations bring order to chaos, translating complex technical and risk concepts into actionable board-level decisions. Scott, thank you so much for joining us today.
Scott: Thanks for having me, Greg.
Greg: So we’d love to hear your story. And I should note, from a timeline perspective, when I was reading this intro, as far as thirty-five years of experience spanning information security, risk management, and privacy, I could have been thinking about myself, because it was about thirty-five years ago, I think, that I started my career in this wonderful world. So I can imagine we probably have some experience that dovetails. But I would love to hear your story, because it goes all the way back to the Orange Book. Remember the Rainbow Series?
Scott: Yes! I don’t have any relics of that anymore, but I do remember.
Greg: I’ve got the whole set downstairs. The entire set. It’s probably worth something at this point in time.
Scott: Oh, I’ve been offered money for it. Maybe my wife will donate it to a museum at some point in time.
Greg: Yeah.
Scott: So yeah, probably like yourself, I started back in the eighties. I started as an operating systems engineer at Digital Equipment. For anybody who remembers it, Digital or DEC, I was working on the VMS operating system. Well, the VAX was the first mainframe I ever worked on.
Greg: There’s the first dovetail right there.
Scott: There you go. It was huge at that point in time. It was in every data center, and during my tenure it was creeping its way into the departments as departmental servers. But I supported a lot of the internals of the operating system. And periodically, I was also a fly guy, so I was troubleshooting. For the most part, it was performance in very large-scale systems back then, for huge Fortune X companies. But once, twice, three times, I got dragged into events that were actually security problems. One was an external event, the Morris worm, for example. One was an internal event caused by ourselves. So I got pulled into this, and of course we didn’t call it information security at that point in time. But I got pulled in to just start shredding through things, data structures running in memory; it was that bare-knuckles at that point in time. That whole experience brought me eventually to Oracle, where I worked supporting all the SQL tools, back when it was just the database and the tools were emergent. I supported the same clients, only now I was sitting above the operating system, working with databases and applications. They didn’t spend as much time in the security space, but I certainly was flying around the world looking at all these big data systems. We’ve always had big data, but it paled in comparison to what we have now.
Eventually, a group of us left Oracle to start a company called OpenVision. OpenVision was intended to compete with Computer Associates and its suite of integrated tools. We went and bought a whole bunch of companies and technologies and started to integrate them as a way of creating a startup. It was unprecedented, probably for good reason. We struggled with culture. We struggled with salespeople knowing what to sell. There were a lot of early-stage challenges, but eventually it took off and we took it public. At OpenVision, I had three of the four product divisions, and one of the biggest ones was security. The old Gears Zolot guys joined us to do consultancy. We did everything from tiger teaming, which is what we called red teaming back then, all the way through to vulnerability analysis. We had one of the leading vulnerability assessment tools. But even back then, selling security was difficult because the pain threshold wasn’t high enough. The risk simply wasn’t high enough. Where we did get engaged, we would go in, and in one particular event, the team broke into a huge financial services environment and showed the CFO his own comp plan in less than fourteen seconds. The pain still wasn’t high enough that we could effectively sell a suite of security tools.
So we continued to ramp that, and we wound up merging with a company called Veritas. I left right before two thousand. At Veritas, of course, we were selling backup as a storage product, but you and I both know backup is a security tool, right? It’s a responsive, or reconstructive, kind of control.
So after that, there was a series of startups, almost all of them in the security space. At one point, we were mapping the internet to sell the knowledge to folks that were looking for it. The company was called Quova, with a V, like quota, but with a V. The existing customers at that point in time were in two markets. One was the porn industry, which was trying to localize content to the browser. The other one was the gambling industry. And at that point in time, this was like two thousand five, you couldn’t serve a bet to someone who was on US soil. You could still bet, but you had to be somewhere outside of the fifty US states. And the best way to find out was to geolocate the browser. Well, when I met them, they were struggling. And I said, you guys have an anti-fraud solution here. You have all this knowledge. This is anti-fraud. So we turned the company around. I joined full-time, and with the president, who really ran all of the sales organization, he and I kind of reinvented the business as an anti-fraud capability, and it just took off.
So we got the attention of the federal government, because in order to map all of the intermediate points around the world, we were generating traffic, mostly ICMP. We had virtual machines and physical machines that we had leased all over the world to generate the traffic, but we generated too much traffic knocking on US government assets, to the point where they finally wanted to come figure out who the hell we were. So a certain three-letter agency came and knocked on the door and basically said, what the hell are you doing?
Greg: So just out of curiosity, when you did get that knock and you realized who was knocking, how did you feel?
Scott: It was a little intimidating at first, but I mean, I’ve been in this space. I had been cleared and dealt with that even back in my Oracle years. So it was not unexpected. But the first thing I wanted to do was authenticate these clowns, to see if they really were who they said they were. So we went through that dance, and eventually we sat down, and I knew that I had a story they wanted to hear.
Greg: And they were trying to authenticate you too at the same time.
Scott: That’s exactly what they were trying to do.
Greg: That must have been an interesting conversation.
Scott: That’s exactly what they were trying to do. But eventually they said, okay, who do you have that’s cleared? And only myself and the director of my analyst team had ever been cleared in the past. So we went through an intermediary, which happened to be the MITRE Corporation. MITRE stood between the two entities. They brought in government people, single names, false identities, those types of things. And we demonstrated what we could do with the knowledge we had built, by then, over ten years. We had ten years of IP intelligence. Today, there are a lot of people that have that, but at that point in time, there were only a few of us, really, with that kind of knowledge. And we can’t get into why they wanted it. But suffice it to say that when we sold the company, they said, you need to come work for us. And I said, there’s no way I’m going to work for the government. You couldn’t possibly pay me enough. I’m in the startup space. I had been an executive for ten-plus years at that point. You couldn’t possibly afford me. They said, look, consult through MITRE. I said, I’ll give you one year. One year turned into ten.
Yeah, and it started before there was any Cyber Command. Before the services had any cyber aspects, there were some cyber entities, but nobody had that term. So I got involved immediately. MITRE put me exactly where the government wanted me, and that just started to, let’s just say, snowball. And in ten years, I finally had to tap out and say, okay, there’s nothing more I can help you with. The problems all continue on a nation-state scale. I need to go back to industry. And frankly, there were a lot of startups that were calling, and the exits were turning out great. My friends at Warburg Pincus were saying, okay, what are you doing? You’ve been on the sidelines for ten years here. When I started, it was because the market had quiesced. There were no decent exits happening, no mergers, no acquisitions, nothing was going public. When I came back out, of course, the entire cyber market was lit up.
So after that, I started to roll something out, which I won’t bore you with, but it was a cyber risk intelligence platform. I could not get the funding, could not get money from the VC community. And eventually the old customer inside the government came back and said, yeah, you tapped out, but you’ve got to come back in. So Phenomenati, the consultancy that I run today, was created for one reason: so I could quickly go back and work on a project. I had no plans of being here ten years later and running it as a going concern, but we’re a multi-million-dollar consultancy at this point in time, because the private side, the commercial side, has taken off, and we do almost no government work at all now. So yeah, it’s been a fun ride.
Greg: And I can somewhat relate to that. I never wanted to be an entrepreneur. I never wanted to be a business owner. I just wanted to form an organization, an LLC, so that I could do some of this consulting on the side. And I never expected it to grow. It’s not multi-multi-million, but it’s a lot bigger in vCISO services than I thought it would be. And sometimes I feel like, you know, the Hotel California: I can check out, but I can never leave. It’s like I’m sucked in. I can’t retire. I just continue to work on this, because I’ve been doing it for, like I said in the beginning, around the same amount of time. I started in, I guess it would be eighty-nine, working as a student assistant in networking. And networking back then was more like serial connections. Now, we had Ethernet, but it was thinnet; I mean, it was barely coax running all over the place. Dark ages and all that, and I never expected to be here. But that kind of brings me to, you know, I get asked a lot of times, well, you’ve seen a lot, Greg, over your tenure, which is a nice way of saying that I’m old. But what has changed over time with regards to cybersecurity and cyber risk? And actually, I like it when it’s framed the other way around. Outside of the name, because we never called it cyber risk back in the early nineties, and outside of the technology, of course, I’d be interested to hear from your perspective: what fundamentally has not changed in cyber risk over the last thirty, thirty-five years?
Scott: Wishful thinking. As a broad statement, I would say wishful thinking. And it’s everything from, hey, you’ve got a risk there, and they want to diminish it by saying, yeah, but the likelihood is so low, right? The impact could be big, but the likelihood is so low. We’re a tiny company, or nobody knows us because we’re still in stealth mode, or whatever it is. But that wishful thinking always stems from: we think we see a solution, or we think we see productivity gains, things like that, with AI in particular, and we want to rush right into it. Look at what happened with Moltbook over the weekend. There’s a perfect example. Wouldn’t it be cool if we all took our Moltbots and connected them all into this social media environment, Moltbook? They rushed right into it. And we find out yesterday that there are absolutely no protections on the underlying data store that holds all of the API keys everyone’s using. Everything is world-readable and world-writable. And now we’ve got malware basically running rampant that has been set loose in what was obviously intended as a good thing: let’s experiment with this, let’s see the art of the possible. But they didn’t apply responsible engineering. That’s what hasn’t changed. They dove right in. They didn’t put in a sandbox. They didn’t do any of the initial testing, any framing in terms of controls. And now we’ve got a global storm on the loose.
Greg: Well, and I experienced that in a separate instance, somewhat related, somewhat not. I’ve really embraced the idea, and I’ve talked about this on the podcast before, of vibe coding to create solutions for a business problem, because prior to this I could not spare the time to learn how to code properly. One of the platforms I like to use is Bubble, along with some ancillary products. In one of those ancillary products, I was building a workflow using AI as a guide, and it created the workflow. I mean, the AI comes up and says, hey, I see what you’re trying to do. Do you want me to create a workflow for you? And I’m not going to say no to something like that. I’m like, hey, yeah, it’s great. It works. It’s fine. It’s perfect and all that. But this was to transfer some files, files I didn’t want exposed. And by default, this application dumped them into a fully exposed S3 bucket. And I’m thinking to myself, oh my gosh. I caught it because I’ve got a bit of a security mindset, at least I like to think so. But how many folks out there are going to be using these tools and rush the basic process? And the basic process really hasn’t changed in thirty-five years either. It seems like that can be a real danger going forward.
Scott: Oh, it is absolutely a danger. We have people with no training in engineering who are vibe coding. And to your point, there are assumptions made. If you’re not explicit in what you need from a platform, it’s not going to give you the locked-down S3 bucket. It’s not going to give you a hardened back end. It’s not going to give you the auditing capability. It’s not going to encrypt or minimize data, et cetera, unless you’re explicit. And I’m watching people whose intentions are good, and they’re launching things that are great productivity boosts within their job function or even within their company, but they don’t know what they don’t know. It just works, right?
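The default-open bucket both speakers describe can be caught with a simple configuration check before anything ships. The config shape below loosely mirrors S3's ACL grantees and Block Public Access settings, but the field names are illustrative assumptions for the sketch, not the boto3 API.

```python
# Illustrative check for the "fully exposed bucket" failure mode Greg describes.
# World-readable access comes from either a public ACL grant or a public-read
# policy, unless an account-level public-access block overrides both.

RISKY_GRANTEES = {"AllUsers", "AuthenticatedUsers"}  # world / any-account access

def bucket_is_exposed(config: dict) -> bool:
    """True if ACL or policy allows public reads and no block overrides them."""
    block = config.get("public_access_block", {})
    if block.get("ignore_public_acls") and block.get("restrict_public_buckets"):
        return False  # the block wins regardless of ACL or policy
    acl_public = any(g in RISKY_GRANTEES for g in config.get("acl_grantees", []))
    policy_public = config.get("policy_allows_public_read", False)
    return acl_public or policy_public

vibe_coded = {"acl_grantees": ["AllUsers"]}  # what the AI tool generated
locked_down = {"acl_grantees": [],
               "public_access_block": {"ignore_public_acls": True,
                                       "restrict_public_buckets": True}}
print(bucket_is_exposed(vibe_coded))   # True
print(bucket_is_exposed(locked_down))  # False
```

The point of the sketch is the review step itself: a generated workflow gets inspected against an explicit exposure rule rather than trusted because "it just works."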
Greg: Yes. And I’ve had that experience of dealing with folks, professionals in the field, who will just say, well, I don’t know, it just works. And whenever somebody who’s supposed to be a pro in their field comes to me and says, I don’t know, it just works, I really stop for a minute. I’m like, that doesn’t sound like the right answer here. You should understand. I mean, you can drive a car without being a mechanic, right? But you should understand the fundamentals of the internal combustion engine or the electric motor. You should understand the concept of brakes, how they work and how they affect your driving. You don’t necessarily have to be able to rebuild the brakes. And I don’t see that happening with anything in the world beyond some effort to put forth standards. Now, you worked with ISO 42001, right?
Scott: Yeah, I use it all the time. I use it as a baseline to gently introduce companies to just a framework for AI governance. I don’t start with, hey, let’s start layering on bureaucracy. I start with, let’s be practical about it. Just because you can doesn’t mean you should. That’s another mantra.
Greg: Well, I’ve got a friend who I’ve worked with for thirty years now who’s been on the sales side. And he will always say this when he’s talking to a client. He’ll say, it’s like a speedo on a fifty-five-year-old man, right? Just because you can doesn’t mean you should. And I laugh as well. But that’s his point. You don’t forget it. The next time they do something like the vibe coding, they have this painfully, mentally indelible image that forces them to go, wait a second. Just because I can doesn’t mean that I should. What could go wrong?
And the temptation to solve problems and to be productive is so huge. And unfortunately, because we now have the ability to quickly take the locks off, the guardrails off, or we don’t even think about putting them on, having businesses at least try to put those guardrails in place definitely has value. But I’m curious, why did you land on 42001 instead of, say, NIST or one of the other standards?
Scott: I can tell you why I did initially. It’s because, if I’m not mistaken, 42001 was the first AI governance framework that was published. There was a draft released to the community in twenty-one, a call for comments, I think, in twenty-two, and I think it came out in twenty-three. I’m losing track of my years. But I’ll tell you, I mean, I love NIST. I’ve actually supported NIST. I’ve contributed to and reviewed standards in my time at MITRE. But I support global organizations, and there is a bit of dismissiveness when NIST comes up: oh, that’s a US thing. They tend to lean into the ISO standards. And it’s also something you can now get a certification against. You can’t get certified against any of the NIST standards. There are no auditors out there that will do formal certifications. So 42001 has given me both the international acceptance and the ability to take a client through certification.
Greg: So, kind of hanging on that governance point: it took a long time for security to rise to the C-level, even if in name only, but at least that C has been there with the CISO for a few decades now. Whether most CISOs are actually true C-level executives, that’s a conversation for another day. But now we have the emergence of the chief AI officer role. How do you sell that to businesses, not necessarily as something they need, but as something they should at least consider? Because I would imagine they would just lump it in with the entire rest of the IT world and say, well, you know, IT takes care of that.
Scott: Yeah. There are many organizations that will simply defer it to IT or throw it on the CIO. And if your CIO has the background, that’s probably not a bad place to put it. But I’ll tell you, when I talk to people about it, sometimes it’s because they’re coming to me saying, we think we need a CAIO. And I stop them and say, hang on a second. What does success look like? Oh, what do you mean? I said, it sounds like you’re looking for a shepherd to bring order to the chaos. It doesn’t sound like you’re committed to yet another C-suite member. And in fact, I’m not convinced the CAIO role will ever get to the point where it is recognized as an independent function. I’m still watching people struggle with the chief data officer or chief analytics officer, trying to say, well, how is that outside the bounds you had already drawn in scope for the CIO or your CTO? So I’m not convinced it is a long-term function. But at least in the near term, most of these organizations simply want order from chaos. They’re looking for a shepherd, somebody who can speak tech, who understands what the hell Hugging Face is, what PyTorch is, but who can also speak business and say, wait a second, what’s the business function? This is not a deterministic problem, so it’s appropriate for a probabilistic kind of model. That over there is definitely a deterministic function, so what the hell are you doing bringing in a third-party AI platform? They’re looking for somebody who can organize that chaos.
Greg: And it’s not just the chaos with technology or the business processes, but there’s privacy concerns associated with this as well.
Scott: Oh, absolutely. Absolutely. The privacy implications are huge, whether you’re turning it loose internally or not. For example, I’ve had at least a dozen companies tell me that their very first adoption of an AI capability was to screen candidates, right? First and foremost, depending on which regions you’re working in around the world, you’re breaking your obligations in terms of fairness and transparency. But secondarily, you’re dumping private information about these people into a third-party platform. They’re not building their own HR functions internally; you’re dumping this privacy content, this is PII, into someone else’s system. You’re the controller now if you are the hiring entity. You’re the data controller. It’s on you to be responsible in how you manage that person’s identity, their PCI if there’s any payment information, their PHI if you’re onboarding and you’ve got healthcare and benefits. All of that is on you. What is your relationship like with those third-party platforms?
Greg: And this gets into so much. We talk about third-party risk management; we used to talk about that just from the security standpoint. And now we have to ask folks, well, do you use AI? And, well, what do you mean by that? Yeah, we use ChatGPT, or we use Copilot. Well, okay, that’s fine, but what else do you use it for? Well, I don’t know. And unfortunately, I think there are a lot of I-don’t-knows out there. You know, we talk about security: you can’t secure what you don’t know about. And it seems like AI is following along that same path.
Scott: Oh, there’s a lot of shadow IT. We call it shadow AI, but it’s just shadow IT like it’s always been. And you’re pointing at one of the significant risks. I was in a publicly traded environment with thousands of staff, and there was a belief amongst the C-suite that no one was really using AI. Well, it turned out that a large percentage of their workforce actually was using AI on a regular basis. Now, they weren’t being irresponsible. They weren’t dumping intellectual property or anything like that. But what they had no knowledge of is the fact that all those AI platforms were farming their private information. And we began to get inbound requests saying, hey, not for nothing, but you’ve got like three hundred and eighty-five people in your company all using freemium or personal versions of our AI platform. Don’t you think maybe you want an enterprise capability? And when I raised that in the C-suite, they said, how does the vendor know that? The vendor is farming all of those users and their PII.
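The discovery Scott describes, a vendor knowing your headcount of freemium users before you do, is usually replicated internally by scanning egress-proxy logs for traffic to known AI SaaS domains. The domain list and the log line format below are illustrative assumptions, not any specific proxy's schema.

```python
# Sketch of a shadow-AI inventory: map each AI SaaS domain to the internal
# users observed contacting it, from simple egress-proxy log lines.

AI_SAAS_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai",
                   "gemini.google.com"}

def shadow_ai_users(log_lines: list[str]) -> dict[str, set[str]]:
    """Assumed line format: '<timestamp> <user> <destination-host>'."""
    hits: dict[str, set[str]] = {}
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, host = parts[1], parts[2]
        if host in AI_SAAS_DOMAINS:
            hits.setdefault(host, set()).add(user)
    return hits

logs = [
    "2025-06-01T09:14 alice chatgpt.com",
    "2025-06-01T09:15 bob claude.ai",
    "2025-06-01T10:02 alice chatgpt.com",
    "2025-06-01T10:30 carol intranet.example.com",
]
report = shadow_ai_users(logs)
print(report)  # e.g. maps 'chatgpt.com' to {'alice'}, 'claude.ai' to {'bob'}
```

The output is the starting point for exactly the conversation in this exchange: who is using what, on which tier, and whether an enterprise agreement should replace the freemium accounts.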
Greg: Well, that’s actually an interesting sales technique, I suppose.
Scott: It is, but they’re also farming the behaviors. When you start using, say, ChatGPT, and pick a platform, they’re all doing this right now, but when you have the freemium version, you’re the product, and they’re farming your behavior. What are the questions you’re asking? What are the threads in your discussions? Are you building projects? What are you actually focusing on? Are there pockets? I’m working in privacy over here. I’m writing songs over here in my personal world. But in my professional world, maybe I’m writing code, or maybe I’m working on marketing content for a release that might move a market. I have real-world examples, and obviously we won’t get into them, but people in public entities were writing marketing content for a new product launch coming up in three months, and had that become public knowledge, it could have moved markets.
Greg: Wow.
Scott: So you get into these, you know, privacy is one part of it, obviously confidentiality is another big part of it. But you get into the fact that the AI platforms, it’s not just the benefit, but it’s what’s in it for them if you’re not paying them.
Greg: Right. Everybody’s a product. We talked just briefly before we started recording; you had mentioned my sweatshirt and asked if I was in the Air Force. And I’m like, oh, yeah. I got this hoodie off of Facebook because it was presented to me as marketing, because, well, I’m part of several military-related, Air Force-related Facebook groups. And Facebook is free, but actually it isn’t, because I fell for the marketing. And I’m happily fine with falling for the marketing in that regard. But at least keep an awareness of the whole ecosystem out there and how we’re being, I wouldn’t say manipulated, but maybe guided into behavioral changes.
And all of this has always stressed me out. I never wanted to be in security, period, because, I mean, I liked IT. IT was very binary, ones and zeros: either it works or it doesn’t work. You know, networking: either it connects or it doesn’t connect. And then somehow along the line I got sucked in, and I could never get out of security. It’s a very stressful field, and that makes it exciting, but you also have to have a way to positively deal with the stress and decompress. What’s one of the things you do, Scott, to deal with the stress and decompress?
Scott: It’s been a variety of things over time. Martial arts was something I was heavily into, really, from the time I was a teenager. Unfortunately, I can’t do much of that at this point because two years ago, I tore one of my Achilles tendons.
Greg: Oh my.
Scott: And I had to relearn just to walk. And at my age, the doctors are saying, hey, do you really need to risk the surgery? So I haven’t had the surgery yet, but I’m beginning to think maybe I will. That’ll get me back out there. I’m not much of a runner, but the whole family runs. Everyone in the house. I have four grown kids; they’re all long-distance runners now. They don’t run competitively, but they do run. And even my wife runs half marathons. So maybe I’ll get back into that. But music was a big deal for a long time, guitar in particular. I’m noticing the guitar in the background for you. And I’ve also picked up something just in the last few months. Prior to the holidays, I said to my wife, I want to get, like, a Chinese guitar. And she said, what the hell is that? I said, I think it’s called a guqin. It basically looks like a slide guitar, and you play it essentially like a guitar, but it’s got a very, very unique sound that you’ve heard in all types of Chinese martial arts movies over the course of time. So I’ve picked that up, and I’ve been trying to teach myself first to tune it and then maybe play it.
Greg: Well, and you mentioned the guitar in the back. I am really not a musician. But many, many years ago, I was curious. I wanted to learn about chords and music and how it’s constructed, because there’s so much mathematics behind it. So now it’s just something I do if I feel like plucking away on something as a stress reliever, which is why it’s back there. But I don’t want to give the impression that I have any sort of meaningful talent with it. What plans have you got coming down the road, outside of learning to play the Chinese guitar? And I’m sorry, I can’t remember what it’s called again.
Scott: I think that’s the way you say it. So it’s travel in general. I mean, I’m semi-retired, right? What I do is really more of a hobby. My wife, if I complain about it, my wife shuts me down and says, I don’t want to hear it. You could have retired years ago. You do this because this is your hobby.
So I mentor about a dozen people of various backgrounds, mostly ascending into executive-level positions, but a few are tier-one, tier-two types of analysts. And I’ve got to tell you, one of the things I love to do the most is investigations. It’s when you’re chasing a rabbit down a rabbit hole. I’ve spent a lot of time with nation-state actors, and frankly, they’re a lot easier to investigate, right? You know what their modus operandi is. You can get to a point where you’ve got the rabbit by the tail, but you can’t do anything with it. But I have volunteered my time to probably half a dozen different organizations over the last three, four years to do large-scale investigations when they simply couldn’t figure out what the hell was going on. And for me, it’s a chess game, a massive chess game. I really enjoy doing it. And to date, I’ve always caught the rabbit by the tail.
Greg: Fascinating stuff. Scott, I really appreciate you taking the time to talk to us today. Great conversation. We could go on and on, particularly about the AI rabbit hole, and certainly that’s something to watch as time goes on, how it develops and matures. And once it matures, I’m sure there’ll be something else coming down the road. But I really appreciate you taking the time to join us today.
Scott: Thanks for having me, Greg.
Greg: And everybody, stay secure. Bye.