Greg Schaffer:
Hi, I’m Greg Schaffer and welcome to The Virtual CISO Moment. Walter Haydock joins us today. He is the founder of StackAware, which helps AI-powered companies measure and manage cybersecurity, privacy, and compliance risk. Walter, thank you so much for joining us today.

Walter Haydock:
Greg, thank you for having me on.

Greg Schaffer:
We’d love to hear your story, as we usually do in our first segment. If you could start with the how and the why you got started in the technology arena, just bring us through your career and then to how and why you founded StackAware.

Walter Haydock:
So, from my time in the Marine Corps as an intelligence and reconnaissance officer, and then working on Capitol Hill as a staff member for a House committee, I thought I understood dynamic environments — and in some cases even chaos — but moving into the private sector showed me that there can be an entirely new level.

I founded StackAware because I’d seen organizations wasting time, money, and resources on checkbox exercises to manage their risk that didn’t really accomplish the mission. I knew there was a better way to do it. So, I decided to start partnering with companies to help them effectively manage risk in a way that enables the business.

Greg Schaffer:
One of the things I always like to ask folks who came from a military background is — what’s one thing that comes to mind that you brought from the military that has helped you in technology and security?

Walter Haydock:
Decision-making is by far the most important skill I developed in the military. One thing I was taught as a young officer was to make decisions when you have seventy percent certainty. If you wait longer than that, you’ll lose opportunities. If you make decisions before that point, you’re basically guessing. So there’s a happy medium — enough reasonable certainty to move forward, but not waiting for a perfect picture.

Greg Schaffer:
That’s so hard for us in technology because we think in binary — it’s zero or one, yes or no. When I was in networking, it was very binary: either you’re connected or you’re not. Then I started understanding risk management and that gray area — that sometimes you have to balance other factors.

It reminds me of a story from years ago — we were running a line to a separate building, and the standard was to encase the conduit in concrete. The CFO said no, because nobody was going to dig in that area. That was one of those lessons — sometimes it’s not binary; it’s about acceptable risk. Would you agree?

Walter Haydock:
Absolutely. You’re never going to have a perfect scenario. I made a joke the other day on LinkedIn that sometimes executives, when asked about risk appetite, say things like, “We have a lot of tolerance for good things but not a lot of tolerance for bad things,” which isn’t incredibly helpful for people who need to make decisions at the edge of the organization.

Greg Schaffer:
Yeah, and that’s one of our goals — to educate folks in the C-suite on what risk tolerance actually means. Some do a great job; some could do better.

Let’s pivot to StackAware and AI in general. You’re very active on LinkedIn — and thank you for sharing your knowledge on AI governance. I think you were one of the first who mentioned ISO 42001 when it came out a couple of years ago. I bought the standard right away.

Talk a little about StackAware — what you do, how you do it, and why.

Walter Haydock:
StackAware helps AI-powered companies measure and manage their risk through implementation of the ISO 42001 standard. Just like any compliance framework, there are a range of ways to meet requirements. We use ISO 42001 to build an effective, actionable risk management program that also provides the benefits of compliance certification for customer trust.

Even in the U.S., in some jurisdictions, the government is recognizing ISO 42001 as potentially giving safe harbor from regulatory action under certain circumstances.

Greg Schaffer:
I know there are other initiatives out there — NIST’s AI Risk Management Framework, the EU AI Act. How do those align with ISO 42001, and why did your company choose that framework?

Walter Haydock:
The three primary frameworks — the EU AI Act, the NIST AI RMF, and ISO 42001 — each represent a different category of compliance framework.

  • The EU AI Act is binding law; organizations in scope have no choice about following it.
  • The NIST AI RMF is voluntary (outside certain federal contexts); you can benchmark against it, but you can't certify to it.
  • ISO 42001 is a certifiable management system standard; you can be audited against it and obtain certification.

Greg Schaffer:
So just like ISO 27001, ISO 42001 is certifiable?

Walter Haydock:
Exactly. StackAware itself has been audited and certified since 2024.

Greg Schaffer:
That’s something I hadn’t realized — that auditors now certify against 42001. How does it integrate with ISO 27001?

Walter Haydock:
Very well. They share a similar structure. The main difference is subject matter — ISO 27001 focuses on the information security management system, while ISO 42001 focuses on the AI management system. Organizations can operate both together seamlessly through an integrated management system.

Greg Schaffer:
If a company contacts you and you mention an “AI management system,” what does that mean in plain terms?

Walter Haydock:
An AI management system is a set of policies, procedures, and controls through which you manage the use of AI in your organization — including its risks and impacts.

Greg Schaffer:
Do you typically see larger organizations pursuing this, or SMBs too?

Walter Haydock:
A range. Smaller organizations like StackAware can have broad scopes covering most of the company. Large ones like Amazon have achieved certification, but with narrow scopes — for example, four specific AWS products. The biggest difference is scope, not company size.

Greg Schaffer:
You mentioned Amazon — and I saw your post about Amazon and Samsung getting burned by not assessing AI risks properly. What happened there, and what can we learn from it?

Walter Haydock:
According to press reports, Amazon engineers used ChatGPT to process internal architecture and source code while ChatGPT’s default training mode was enabled. OpenAI then used that data to improve its models. Later, other Amazon engineers noticed ChatGPT responses that seemed to include proprietary knowledge — implying their data had been absorbed. An Amazon attorney then warned employees not to process confidential info in ChatGPT.

Simply opting out of default training probably would have prevented that. They also could have avoided data retention and confidentiality issues by using enterprise agreements.

Greg Schaffer:
Right — and in the paid tiers of ChatGPT, there’s an opt-out option. But it’s buried and unclear. Is training on by default?

Walter Haydock:
For Business and Enterprise plans, training is disabled and cannot be enabled. For the Free, Plus, and Pro plans, training is enabled by default, though users can opt out.

Greg Schaffer:
That’s what I thought — on the Plus plan, around twenty dollars a month, you must manually turn it off, and it’s not obvious. Any thoughts on fixing that?

Walter Haydock:
For most businesses, having default training on user content is an unacceptable risk. I advise all clients to restrict ChatGPT use unless training is disabled.

Of course, the providers have strong incentives to collect training data. Anthropic, for instance, recently changed its default to train on user content unless you opt out. I expect others to follow.

Greg Schaffer:
Exactly — there’s no such thing as “free.” If you’re not paying for the product, you are the product. I don’t think the general public realizes that what they input may be reused by the model. How do we fix that?

Walter Haydock:
There’s always a trade-off. I understand the business model. But users must understand those trade-offs. That’s why I share information publicly — to help build awareness.

Ideally, companies should require opt-in rather than opt-out for training. But they’ll make business decisions based on market reactions.

Greg Schaffer:
Maybe clearer disclosures would help — though they’re probably buried in the EULA, which nobody reads. It’s similar to past technology shifts — every new tech brings a learning curve, and society must learn the boundaries.

We didn’t initially focus much on public education around phishing either, but now we do. Maybe AI needs a similar push for awareness. Do you see that happening?

Walter Haydock:
Yes — AI literacy should be a public policy initiative, including education around security, privacy, and compliance. The U.S. government’s AI Action Plan mentions AI literacy and public training, which I support.

Greg Schaffer:
Is that through CISA or another agency?

Walter Haydock:
Primarily workforce training — upskilling people who might otherwise be displaced by technology.

Greg Schaffer:
Got it. Well, all of this AI talk is stressful — trying to understand the boundaries. Security in general is stressful. You’re an entrepreneur too, and that adds more pressure. I always like to ask — how do you decompress?

Walter Haydock:
I’m lucky to live on a ski mountain in New Hampshire, so I get out on the slopes whenever I can — and I’m teaching my daughter to ski. She’s in her fourth season now, and it’s wonderful.

Greg Schaffer:
What about summer — do you mountain bike down the slopes?

Walter Haydock:
You can, but I don’t. My body’s starting to complain with age. My wife’s a big hiker, and I can’t keep up with her! But I swim — lots of beautiful lakes in New England during summer.

Greg Schaffer:
I’ve embraced mountain biking this year, though I’m not at the “confidence — or stupidity — level” for big downhill jumps! I prefer keeping bones intact.

So, what’s ahead for StackAware — and for you?

Walter Haydock:
The future of StackAware is to become an AI governance and risk management platform. We already have a software product that clients use today. The goal is to take what we’ve learned from our services work and turn it into a more automated, widely usable solution.

Greg Schaffer:
Excellent. And your clients range from small to large?

Walter Haydock:
Yes, a mix — with a specialty in healthcare and life sciences. We’ve worked with organizations using AI to deliver patient care and improve provider efficiency, and that’s where we’ve seen the most success.

Greg Schaffer:
If a company wants to contact you, what’s the best way?

Walter Haydock:
Visit stackaware.com — you can book a call directly from the site.

Greg Schaffer:
StackAware.com, everyone — if you’re interested, reach out and learn how to strengthen your AI governance.

Walter, it’s been an absolute pleasure having you on today. I’ve learned a lot — some of it I liked, a little of it I didn’t — but that’s part of the field! Things change every year, and that’s what keeps it exciting. Thank you so much for your time today.

Walter Haydock:
Thank you, Greg. Appreciate it.

Greg Schaffer:
And everybody — stay secure.