Hey, I’m Greg Schaffer. Welcome to a very special, and very long, edition of The Virtual CISO Moment.
Earlier today, I had the pleasure of talking with students at Minnesota State University about cybersecurity. They’re taking a cybersecurity class — I think it’s at both the undergraduate and graduate levels — and one of the resources they’re using for the class is my book Information Security for Small and Mid-Sized Businesses.
As part of this, I was asked — and I agreed — to participate in a Q&A session on information security and cybersecurity with a focus on small businesses, career paths, and my own story. It was a very interesting and engaging conversation. I think you’ll enjoy it.
Just sit back — and hopefully you’re not on a treadmill for this one, because it’s going to be about two hours long. Pause it when you want to and then come back to it.
The main reason I wanted to do this is that I know we may end up having some discussions here that are helpful not only to the students in that class but to other folks out there as well.
I don’t have a set structure for this — really, this is meant to be a question-and-answer session. But I’ll start with a little bit of background about myself first, outside of what you’ll see in the book.
I’ve been doing IT and cybersecurity and information security for about thirty-six years now. When I started, we didn’t even have twisted-pair Ethernet. We barely had Ethernet at all — it was on thin coaxial cables, which was a real pain to troubleshoot back in the day. We also had serial connections, and it was difficult just to get machines to talk to each other.
Think about that — today, you buy a laptop, it automatically connects to wireless, and it’s all there. But at the beginning, during the early days of LAN and WAN connectivity, computers didn’t even come with network interfaces. They had serial interfaces for modems and such, but not Ethernet.
You had to install a card — a network interface card — and you had to choose settings manually, like memory and interrupt addresses. Sometimes you’d get it right, sometimes you wouldn’t. Just getting one computer to connect to a local area network was a big deal. That’s how the world was when I started.
From there, security became more and more of a big deal. The first major internet worm appeared back in 1988 — the Morris worm. It was created as an experiment and ended up being a mistake. It was a graduate student at Cornell — and he’s like, “Oops.”
Back then, the internet was really only used for research purposes. It wasn’t connected to the entire world like it is today, and it certainly wasn’t something that was considered essential for business.
It’s been really cool to see how it’s grown over the years.
For me personally, I was in networking — that was my primary role: getting computers to talk to each other. But about twenty-five, twenty-six years ago, security started to take over more of my time.
Roughly six years after that, I realized I was spending more time trying to stop computers from talking to each other than getting them to communicate. That’s when I realized — I’m actually more of a security person than a networking person.
And since then, I’ve dived fully into the field.
One of the most important lessons to learn in security is that there’s no such thing as perfect security.
There is such a thing as perfect networking — that’s what I grew up with — but you can’t create a perfectly secure environment. It’s all about trade-offs.
For those who want to be successful in information security — and if you’ve read my book, you know why I make a distinction between “information security” and “cybersecurity,” though we can dig into that later — the most important thing to understand is risk.
Even if you’re doing something specific, like being a SOC analyst and looking for threats, you’ll go further if you understand the why behind the what of what you’re doing. And that “why” is always related to the business you’re trying to protect — what that business does, and how its information flows.
The reason I wrote my book — and the reason I do what I do now — is because over the past ten years or so, I’ve been helping small and mid-sized businesses with their information security.
I realized that I had all these decades of experience in my head, and that most small and mid-sized businesses don’t have access to that experience. They can’t afford a full-time security officer — it’s too expensive. And startups, especially, have such thin margins.
I saw a need to help them — and to do it right.
But there are also a lot of charlatans out there, people selling snake oil in cybersecurity. And it’s gotten worse over the years. So I wrote the book as a small effort to provide those businesses with an understanding of what information security really means.
It’s not just having a good firewall or antivirus. It’s not about fancy EDR or MDR technology. It’s about process, mindset, culture — and again, risk management.
I’m pleased that practitioners are using the book as well — and even more so that students like you are reading it in your classes.
I wanted to write it in a way that wasn’t overly technical, so that the concepts would be easy to digest. And if you wanted to go further, I included references and links.
Hopefully that’s been helpful.
So that’s my preamble. I’ll stop there and open the floor for questions.
Student:
I was wondering what your thoughts were about AI, and whether there are any security risks with it — since it’s getting more advanced every day.
Greg Schaffer:
(Laughs) So you start with the heavy question right out of the gate!
Yes — there are definitely risks associated with AI, particularly generative AI.
Everybody’s using it now, for all kinds of purposes.
Just today, for example, I used it twice — once to help me with a little side project I’ve been developing, and once to help me write the short disclaimer you heard at the beginning of this recording. That part actually came from ChatGPT.
It’s become a very useful tool — and I use that word “tool” intentionally.
Because AI itself isn’t inherently risky. It’s not the tool that’s risky — it’s how the tool is used.
There’s no such thing as a risky tool. The risk comes from misuse.
It’s like a wrench. A wrench is meant to tighten or loosen bolts. If you use it to hammer something, you’ll probably break the wrench — or the thing you’re hammering — or even hurt yourself.
Not saying I’ve ever done that… maybe once or twice.
When it comes to AI — especially generative AI — the same principle applies.
There are many kinds of AI: the consumer type, like ChatGPT; AI embedded into applications; and AI embedded into business processes. I’m not an AI expert, but I am a risk expert.
So, what are the risks?
I like to compare AI risks today — in 2025 — to where we were with social media risks in 2010.
Back then, social media exploded in popularity for business use. People were sharing things they shouldn’t have, and organizations had no real policies for managing that risk.
AI is similar — we now have a tool that can do things at scale that we’ve never been able to do before.
So, as always, we start with risk management: identify the risks, then address them.
Probably the biggest risk is unintentional disclosure — releasing information you didn’t intend to.
AI tools learn from the data you feed them. For example, ChatGPT’s business plan lets you configure whether or not your inputs are used for training. If you’re on a free or default plan, what you type may be used to train the model.
That means you could accidentally expose confidential or proprietary data.
Another major risk is hallucination.
AI sometimes invents information — and it sounds very confident when it does.
There was a case with Deloitte — ironically, a firm that consults on AI usage — where they published a report containing fake citations and quotes generated by AI.
That’s a massive reputational risk.
And finally, I think there’s a societal risk: loss of critical thinking.
AI can get you to a starting point, but it shouldn’t replace your reasoning.
Here’s a funny example — and one where I failed.
I asked ChatGPT how to convert the tires on my 25-year-old mountain bike from tubed to tubeless. It made it sound ridiculously easy.
If you’ve ever worked with tubeless tires — even on bikes designed for it — you know it’s not easy. After a frustrating weekend, I gave up.
I trusted the AI too much. And I learned from that.
So the key lesson is: use AI to help you think, not do your thinking for you.
Student:
Yeah, that makes sense.
Student:
You talked about how procedures and risk management are more important than technical aspects. Can you give some examples of good and bad procedures you’ve seen?
Greg Schaffer:
Great question.
When I emphasize procedures, I’m really talking about the foundation of all security — understanding how things are done.
Whether we’re talking about information security — protecting information — or cybersecurity — the technical subset of that — our goal is the same: protect information.
But here’s the key point: you can’t protect what you don’t know about.
That’s why understanding processes is so important.
If you don’t understand how data is created, used, and stored in a business, you can’t protect it properly.
And procedures help bridge that gap.
You can have the best technical controls in the world, but if your processes are weak, they’ll fail.
Here’s a simple example: password sharing.
Let’s say I give you my login credentials so you can access one OneDrive folder you need. You don’t have your own account, so I share mine.
Right there — we’ve broken a fundamental rule.
Now imagine I also have multi-factor authentication. You try to log in, and I text you the code when it appears on my phone.
Technically, we have strong security controls — but we’ve completely undermined them through bad procedure.
People do this because they’re trying to get their work done.
That’s one of the core truths in security: if you make security too hard, people will find another way to do their job.
So, the key to good procedure is balance — enabling the business to operate while minimizing risk.
Another major example: disaster recovery and incident response.
Those sound technical — but they’re mostly process-driven.
If you don’t have good procedures in place, and you don’t practice them, you’ll fail when a real incident occurs.
Incidents happen to everyone. The difference between chaos and control is preparation and process.
Student:
Yeah, that makes sense. Thank you.
Instructor:
Remember, you have the author of your book here — so this is your chance to ask questions, especially about your career interests.
Greg Schaffer:
(Laughs) No pressure, right?
Instructor:
Some of you want to go into cybersecurity, some into database programming. Ask questions that pertain to your interests.
Greg Schaffer:
No one? No questions yet?
All right — I can pontificate for a bit if needed.
Let’s talk about the career field a bit, since that’s often the biggest question for students.
When I asked earlier what domains or sectors of cybersecurity people in this class wanted to pursue, the answers were pretty diverse — and that’s a good thing.
Cybersecurity will continue to be one of the fastest-growing fields out there, in my opinion, simply because we keep finding new ways to create information. And the more information we create, the more we have to protect.
I’m not trying to sound like the old guy who says, “I walked uphill to school five miles in the snow both ways,” but it is amazing to see how much this field has evolved.
What was absolutely nonexistent 35 years ago is now such a huge, complex field.
When someone tells me they’re a “cybersecurity expert,” I usually smile and say, “No, you’re not.”
There’s no such thing. Nobody can know everything in cybersecurity. And even if you did, it’ll change by next week.
So I caution people not to label themselves as experts in everything — it can sound a bit egotistical, and it misses the point.
The field is too broad.
That’s why I prefer to call myself an expert generalist.
I’m an expert in being a virtual CISO, because that’s my specialization. I’m also an expert at helping small and mid-sized businesses secure themselves.
That doesn’t mean I know everything — but I know how to connect the dots across disciplines.
Now, let’s talk about how the cybersecurity job market has shifted.
There was a huge boom — a rush — into the field, followed by a saturation period. We’re now in a kind of trough, though I think it’s getting better.
So, how do you differentiate yourself in a crowded market?
The first and most important thing: understand your “why.”
Your “why” should be about wanting to help.
If your main motivation is financial, that’s fine — but cybersecurity can be a tough grind if that’s your only reason.
This work is hard.
We’re constantly trying to achieve a goal that’s literally impossible — perfect security.
That’s where burnout often comes from.
But if your “why” is to help — to solve problems, to protect systems, to make a difference — that motivation will sustain you.
For example, I know people who love network analysis — they speak Wireshark in their sleep.
That’s passion.
When you find something you love like that — investigating, digging deep, solving puzzles — you’ll succeed.
You have to have that investigative mindset.
You also have to have an understanding of how the business works.
When I started out, I was a student assistant working on the campus network before Ethernet over twisted pair even existed. My first packet analyzer was an early, clunky tool — but the fundamentals were the same as Wireshark today.
If you love that level of analysis, follow it.
You also have to be flexible.
Sometimes the best way into cybersecurity is through IT.
I hear people say, “I don’t want to do help desk.”
Well, I did help desk — and it was one of the best learning experiences of my life.
Help desk teaches you how a business really works. You learn how to communicate, how to solve problems, and how to think under pressure.
That experience shapes how you approach cybersecurity later.
So, if that’s your entry point — embrace it.
Instructor:
That’s a great point. I tell my students all the time — you have to start somewhere.
Greg Schaffer:
Exactly. And remember — you’re not going to go from zero to sixty in two seconds.
There are a lot of charlatans out there who claim they can get you six cybersecurity certifications in six weeks and land you a six-figure job.
That’s not reality.
Sure, some people get lucky — but most successful professionals have earned their way up over time.
They built experience, learned from mistakes, and evolved.
Sometimes, you’ll start in IT and transition into cyber. Other times, you’ll move laterally — maybe from a compliance or risk role into security.
I’ve seen both paths work.
Personally, I started in networking. Then I shifted into security. Then governance. Then risk management. Then eventually into the CISO and virtual CISO roles.
Your path will evolve — and that’s a good thing.
Follow your passion, but don’t be afraid to pivot.
The beautiful thing about this field is that it’s big enough to move around.
Student:
I think I want to do cloud security.
Greg Schaffer:
Okay — great. But what exactly do you mean by cloud security?
And the reason I ask is because “cloud security” can mean a lot of different things.
Let’s break it down — think of it as inside versus outside cloud security.
From the outside, you’re dealing with governance, risk, and compliance — vendor management.
That’s where you make sure that the SaaS provider your company is using has the right security controls in place.
For example, you might ask:
- Do they have a SOC 2 audit report?
- Are they compliant with the Cloud Security Alliance framework?
- What’s their data handling policy?
That’s all external cloud security.
Then there’s internal cloud security — managing the cloud environment you control, like AWS or Azure.
That’s where you deal with things like:
- Perimeter security (firewalls, identity, network controls)
- Access management
- DevSecOps or SecDevOps
- Virtual private network (VPN) configuration
- Data protection policies
It’s a big universe.
So my advice: start by figuring out which side excites you more — the GRC side (governance and risk management), or the technical side (engineering and architecture).
Then dig into that first.
And keep in mind — cloud security is constantly evolving.
When I was starting out, we didn’t even have “the cloud.”
We hosted everything in our own data centers, and the closest thing to a “cloud” was renting rack space from a hosting company.
Now, everything’s cloud-connected.
AWS, Azure, Google Cloud — they’ve changed the game.
And each has its own security model.
If you’re planning to go into that field, pick one major platform and learn it deeply.
Get hands-on experience with AWS IAM, S3 bucket policies, VPCs — or the Azure equivalents.
That’ll make you stand out.
Also, don’t stress about locking into one niche.
You can always shift.
When I started, I was a mechanical engineering major. I took a campus job in networking because it paid better than working at Burger King.
(Laughs) True story — I worked at three Burger Kings in my life.
That job in networking led to my entire career.
And now, decades later, I lead information security programs and mentor others doing the same.
So remember — where you start isn’t where you’ll finish.
Follow your curiosity, not a rigid path.
Instructor:
Joseph seems to be typing a question in the chat — we’ll wait a second for that.
Greg Schaffer:
Sure thing — while we wait, any other questions from the group?
Instructor:
Let’s hear some about career roles, responsibilities, or any challenges you’re thinking about.
Student:
I think a lot of people are scared when they start out — like, not feeling confident about getting a job. Do you have any advice for that?
Greg Schaffer:
That’s a really good question.
The fear of not being “good enough” is completely normal — and honestly, it never fully goes away.
Even people who’ve been doing this for decades sometimes feel imposter syndrome.
But here’s the thing: confidence issues are often not employee problems — they’re leadership problems.
Good leaders mentor and build up their team. Bad ones tear people down.
If your first manager doesn’t support you, that can damage your confidence — but that’s their failure, not yours.
Now, on a more personal level — how do you build confidence starting out?
First, recognize that cybersecurity is a constant learning field. Nobody knows everything.
So you’ll never reach a point where you know “enough.”
That’s okay.
Second, get comfortable making mistakes — and learning from them.
Everyone makes mistakes in this field. The key is to analyze what happened, fix it, and move on.
If you do that consistently, you’ll grow fast.
Third, find a mentor.
It doesn’t have to be someone you work with directly.
It could be a professional you connect with through LinkedIn, a professor, or someone who’s been in the field a few years longer than you.
Having people you can bounce ideas off of — that’s invaluable.
That’s one of the reasons I do my podcast, actually.
Every week I talk to someone in the information security world, and the first thing I ask is: “Tell me your path. How did you get started?”
You’d be amazed how many people took unconventional routes — law enforcement, accounting, programming, even marketing.
The diversity of backgrounds is what makes this field so strong.
So, in short:
- Don’t let imposter syndrome stop you.
- Keep learning.
- Build your network.
- Find mentors.
And remember — passion and persistence are more valuable than perfection.
You’ll do great if you keep that mindset.
Instructor:
One of the things we’ve been covering lately is risk — and how difficult it is to quantify risk.
Greg Schaffer:
That’s a great topic — and a difficult one.
Risk analysis in information security is inherently subjective.
You can make it qualitative — which is completely subjective — or quantitative — which still has a lot of subjectivity baked in.
Let’s start with qualitative risk.
That’s when you see “heat maps” with red, yellow, and green boxes — low, medium, high risk. Those are opinions. They’re educated opinions, but opinions nonetheless.
That’s why, when conducting risk assessments, I always emphasize consistency. You want one person or team using the same methodology each time, so the results are normalized.
Typically, in a proper risk management process, there are two key roles:
- The subject matter expert — who understands the system, processes, and business flow.
- The assessor — who interprets and measures the risk.
Sometimes they’re the same person, but not always.
Even then, you’re still forming an opinion.
So, let’s say you present your results to a board of directors. You show them a few red boxes on the heat map and say, “These are high risks.”
Someone on the board will ask, “What does that mean?”
If you can’t answer that question clearly — if all you can say is, “Well, bad things might happen” — then you’ve failed to communicate effectively.
That’s where quantitative risk analysis comes in.
The goal is to translate technical or procedural risk into business language — ideally, into dollars and cents.
Because that’s what executives and boards understand.
You might quantify risk in terms of potential financial loss, downtime, or operational impact.
For example, a vulnerability might have a 5% likelihood of being exploited and could cause $1 million in losses. That gives you an expected loss value you can compare to the cost of mitigating the risk.
If mitigation costs $10,000 and could reduce the risk by 90%, that’s an obvious decision.
That’s what quantification allows you to do — make business-informed risk decisions.
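The arithmetic behind that example can be sketched in a few lines. (The 5% likelihood, $1 million impact, $10,000 mitigation cost, and 90% risk reduction are just the illustrative figures from above, not real data.)

```python
# Expected-loss calculation using the illustrative numbers above.
likelihood = 0.05          # estimated probability the vulnerability is exploited
impact = 1_000_000         # estimated loss if it is ($)

expected_loss = likelihood * impact            # $50,000

mitigation_cost = 10_000   # cost of the control ($)
risk_reduction = 0.90      # control reduces the risk by 90%

residual_loss = expected_loss * (1 - risk_reduction)           # about $5,000
net_benefit = expected_loss - residual_loss - mitigation_cost  # about $35,000

print(expected_loss, residual_loss, net_benefit)
```

Spending $10,000 to shave roughly $45,000 off an expected loss is exactly the kind of comparison a board can act on.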
My favorite framework for this is called FAIR — Factor Analysis of Information Risk.
It’s an open model designed to quantify information risk using probabilities and historical data.
It takes inputs from data sources like the Verizon Data Breach Investigations Report (which, by the way, is a fantastic annual resource) and combines them with your business context.
FAIR lets you produce a realistic range of possible outcomes — not exact predictions, but evidence-based estimates.
Then you can answer the “so what?” question when executives ask.
That’s the key — risk quantification bridges the gap between security metrics and business decision-making.
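The “range of possible outcomes” idea behind FAIR can be sketched with a toy Monte Carlo simulation. To be clear, the event frequencies and loss amounts below are invented for illustration — a real FAIR analysis derives calibrated ranges from sources like the DBIR and structured expert estimation:

```python
import random

random.seed(42)  # deterministic for this illustration

def simulate_year():
    """One simulated year: a random number of loss events, each with a random cost.

    The 0-3 events/year and $50k-$500k per-event figures are hypothetical inputs.
    """
    events = random.randint(0, 3)
    return sum(random.uniform(50_000, 500_000) for _ in range(events))

# Run many simulated years and look at the distribution, not a single number.
losses = sorted(simulate_year() for _ in range(10_000))

median = losses[len(losses) // 2]
p90 = losses[int(len(losses) * 0.90)]
print(f"median annual loss ~ ${median:,.0f}, 90th percentile ~ ${p90:,.0f}")
```

The output is a range — “in 9 years out of 10 we’d expect losses below X” — which is a far more defensible answer to the board’s “what does that mean?” than a red box on a heat map.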
Instructor:
That’s excellent. Thank you.
Student (in chat):
For anyone who wants to start a career in cybersecurity — in my case, I started with a bachelor’s in networking communications and I’m pursuing a master’s in IT. Should I learn as much as I can about all areas of cybersecurity — threats, vulnerabilities, prevention, detection — or should I specialize early, like focusing on network or cloud security?
Greg Schaffer:
That’s a great question — and it’s one that doesn’t have a one-size-fits-all answer.
It depends on you.
If you already know what truly excites you — say, network security or cloud security — then yes, specialize. Dive deep into that area.
But — and this is important — don’t do it at the exclusion of everything else.
Even when I was a network engineer, I was a better network engineer because I understood system administration, database management, and business processes.
Everything connects.
Here’s what I mean:
When I understood how databases communicated — which ports they used, like TCP 1433 for SQL — that made me a better network engineer.
When I understood how servers were configured, I could troubleshoot performance issues more effectively.
When I understood business workflows, I could prioritize security controls based on business impact.
All of that made me more valuable — and made my work more effective.
So even if you specialize, always stay aware of the broader picture.
If you’re not yet sure which direction to go — stay general for a while.
Becoming a generalist makes you more marketable early in your career.
Then, as you gain experience and discover what you enjoy most, you can go deeper.
That’s usually when you’ll find your niche — and your “superpower.”
And don’t worry about locking yourself in.
Cybersecurity is incredibly dynamic.
New specialties emerge all the time — threat hunting, red teaming, OT security, application security, GRC, privacy engineering — you name it.
Being flexible and curious is one of the best career strategies you can have.
Instructor:
Exactly. I tell my students — start broad, then narrow. You have to know the fundamentals before you can specialize.
Greg Schaffer:
I couldn’t agree more.
You can’t secure what you don’t understand.
Learn how networks operate, how systems communicate, how users interact with technology.
Then layer security on top of that understanding.
You’ll not only be a better technologist — you’ll be a better communicator and leader.
Student:
I have a question — how does the impact of a data breach differ between small businesses and large corporations?
Greg Schaffer:
That’s an excellent question. And just so you know, every question so far has been a good one!
On the surface, there’s no difference — a data breach is a business-impacting event in both cases.
But when you dig deeper, the difference lies in how each organization can respond and recover.
Large corporations typically have the advantage of resources.
They have dedicated incident response teams, larger budgets, and well-established procedures.
They can triage, investigate, and recover faster — sometimes before the public even notices.
Small and mid-sized businesses (SMBs), on the other hand, usually don’t have that luxury.
Many of them outsource their IT entirely, and security might only be a small part of that contract.
They may not have an incident response plan, let alone a team.
In fact, many SMBs don’t even have cybersecurity insurance — or if they do, they don’t understand how to use it effectively.
That’s why breaches can be catastrophic for smaller organizations.
The average recovery cost of a breach for a large enterprise might be millions — but it’s often survivable.
For a small business, even a fraction of that can be fatal.
This is why I’m so passionate about helping small and mid-sized organizations — they’re the soft underbelly of our economy.
They often believe they’re not targets because they’re “too small.”
That’s a dangerous myth.
Attackers don’t always target; they often opportunistically scan.
If you’re vulnerable, they’ll exploit you — whether you’re a two-person startup or a Fortune 500.
Let’s take an example you may remember: the Target breach from over a decade ago.
The attackers didn’t start with Target. They breached a small HVAC vendor in Pennsylvania that had a remote connection into Target’s network.
That small vendor’s weak security posture became the entry point for a major retail breach.
It’s a perfect example of how small business vulnerabilities can ripple up into global consequences.
So, to summarize:
- Large businesses usually have the resources to detect, contain, and recover faster.
- Small businesses often don’t — and may not even know they’ve been breached.
- Both are at risk, but the impact is usually far greater for small businesses.
And that’s why improving security for SMBs helps everyone — it strengthens the entire supply chain.
Instructor:
That’s an important lesson. Third-party risk affects everyone, large or small.
Greg Schaffer:
Absolutely. And that’s something the federal government has recognized.
Take CMMC — the Cybersecurity Maturity Model Certification — for example.
It was developed to strengthen security across the entire defense industrial base, including subcontractors.
It’s about securing the entire supply chain — not just the big primes, but the small businesses that support them.
That’s where the real risk lies.
Instructor:
Last week we had a discussion about end-user device protection — things like laptops and desktops — and some of the risks they introduce. Would you share your thoughts on how endpoint devices contribute to overall threat exposure if they’re not properly hardened?
Greg Schaffer:
Absolutely — and that’s a great question.
When we talk about layered protection — “defense in depth” — we’re referring to security at every level of data handling: where information is accessed, stored, processed, and transmitted.
And the endpoint — the user’s device — is a critical layer.
Why? Because endpoints have something most other components don’t: humans.
Every other device in your infrastructure — firewalls, switches, servers — follows predictable logic. Humans don’t.
We click things. We get tired. We multitask. We make mistakes.
That’s why endpoint protection is so important.
Back in the early days, endpoint security meant antivirus.
If you could stop malicious code from running, you were safe.
Then came personal firewalls, like ZoneAlarm — that was revolutionary at the time.
Eventually, operating systems started building them in, and endpoint firewalls became standard.
Now, we’ve evolved to EDR (Endpoint Detection and Response) and XDR (Extended Detection and Response). These systems don’t just block known threats — they monitor for behaviors that look suspicious.
They can even isolate an infected machine automatically before malware spreads.
Let me share a recent real-world example.
One of my clients — a small business — was hit with ransomware last month.
They didn’t realize it immediately. The user had downloaded a tool they found online to do a task they weren’t authorized to do.
They ran it, thinking it was legitimate. It wasn’t.
The ransomware started encrypting files.
Several controls failed — the firewall didn’t catch it, and the user’s training didn’t prevent the click.
But one control worked perfectly: their EDR.
It detected abnormal file operations, cut network access to that machine, and stopped the infection cold.
Without that EDR, they would’ve lost everything.
So, when we talk about endpoint protection, it’s not just one tool. It’s a layered strategy.
You need:
- A strong configuration baseline (hardening).
- Patch management to keep software updated.
- Application whitelisting or control.
- Behavioral detection through EDR.
- And, most importantly, user awareness training.
Because the human factor will always be the weakest link.
Instructor:
That’s an excellent example. So what about lateral movement — what can organizations do to prevent an attacker from spreading inside a network once they’re in?
Greg Schaffer:
That’s another great question — and one that takes us right into network architecture.
Preventing lateral movement is all about segmentation.
When I was learning networking decades ago, we used to say, “Keep local traffic local.”
That originally referred to performance, not security — but it’s equally valid for both.
If you design your network so that devices can only communicate with what they need to, you’ve already limited the blast radius of a breach.
For example, let’s say you have three VLANs:
- One for users,
- One for servers,
- One for financial systems.
If an attacker compromises a workstation on the user VLAN, segmentation can stop them from reaching sensitive servers directly.
Each segment becomes its own security zone.
That’s the foundation of modern zero-trust architecture — don’t automatically trust anything just because it’s inside the network.
You verify and control access everywhere.
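One way to picture that zero-trust stance is as an explicit allow-list of which zones may talk to which, with everything else denied by default. The zone names and flows below are invented for illustration, loosely following the three-VLAN example above:

```python
# Hypothetical zone-to-zone policy: only listed flows are permitted;
# anything not listed is denied by default (the zero-trust stance).
ALLOWED_FLOWS = {
    ("users", "servers"),       # workstations may reach app servers
    ("servers", "financial"),   # only servers may reach financial systems
}

def is_allowed(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A compromised user workstation cannot reach financial systems directly.
print(is_allowed("users", "financial"))   # False
print(is_allowed("users", "servers"))     # True
```

The actual enforcement would live in firewall rules, VLAN ACLs, or identity-aware proxies, but the logic is the same: deny by default, allow by exception.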
Lateral movement also relates to privilege management.
If every user has admin rights, you’ve already lost.
Attackers thrive on overprovisioned accounts.
By enforcing least privilege and strong identity management, you reduce what they can do once they’re in.
Add multi-factor authentication (MFA) and conditional access policies, and you further slow their progress.
Here’s another example from the financial sector.
In banking, compliance frameworks like PCI DSS require segmentation of systems that handle payment card data.
If you isolate that environment — those cardholder data systems — you shrink the “audit scope.”
That means fewer systems to secure, fewer systems to test, and less room for lateral movement.
One bank I worked with had a dedicated VLAN for ATMs and teller machines, completely separate from the rest of their network.
Even if a workstation in accounting were compromised, the attacker couldn’t reach those payment systems.
That’s how you minimize risk.
So, to summarize:
- Segment your networks.
- Apply least privilege.
- Use identity-based controls.
- Implement EDR to detect movement.
- And continuously monitor everything.
That’s how you make it hard for attackers to move sideways once they’re in.

Instructor:
That’s a great practical explanation. We’ve actually been using Packet Tracer in class recently to design VLANs — so this ties in perfectly.
Greg Schaffer:
That’s fantastic — I’m really glad you’re teaching those fundamentals.
Because, truth be told, not much has changed at the conceptual level.
The OSI model, for example, is still one of the best ways to explain data communications.
We may have new tools and automation, but the core principles — segmentation, isolation, traffic control — are timeless.
And by the way, fun fact: people often say the OSI model has seven layers, but there’s actually an eighth one we joke about — politics.
(Laughter in the room.)
Instructor:
That’s absolutely true!
Student:
I’m interested in database management but also starting to get into security. What’s your perspective on the future of that field? What should someone learn to get started?
Greg Schaffer:
That’s a great focus area — and very relevant.
Now, I’m not a database specialist myself, but I can speak to it from the security perspective.
The most important thing with database security is access control.
A lot of breaches happen because database permissions are too broad.
Developers or admins sometimes have full access long after they no longer need it.
If an attacker gets in, they inherit those privileges — and suddenly, they can see or export everything.
That’s one of the biggest oversights I see.
Databases often contain the most sensitive information in an organization — customer data, financial records, proprietary information.
They should be among the most protected assets.
Good design practices are also critical.
I still remember struggling through database normalization during my master’s program — one-to-many relationships, primary keys, table joins — all that stuff.
It wasn’t fun, but it’s vital.
Why? Because poor design can lead to security holes — like SQL injection vulnerabilities.
And nowadays, there are even automated tools that can scan your database schema for vulnerabilities, just like code scanners for software.
Use them.
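The SQL injection point can be made concrete with a minimal sketch. The table and the malicious input below are hypothetical, and the example uses SQLite purely for illustration — the same principle applies to any database driver:

```python
import sqlite3

# Minimal sketch of why parameterized queries block SQL injection.
# The table, data, and attack string are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

malicious = "x' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'"
).fetchall()

# Safe: the driver treats the input purely as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # []           -- injection neutralized
```

The difference is that the `?` placeholder keeps user input out of the query's structure entirely, which is why parameterized queries are the standard defense.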
Finally, think about the connection between databases and identity management.
One of the biggest vulnerabilities in business today isn’t technical — it’s process-driven.
Many organizations fail to properly offboard users or update access when roles change.
It’s shockingly common.
I’ve worked with clients where someone left the company over a year ago, but their account was still active — with full access to databases.
That’s a huge risk.
So, for anyone going into database management or development, here’s what I recommend:
- Learn database fundamentals deeply (normalization, queries, design).
- Understand secure coding and how to prevent SQL injection.
- Learn access management and identity governance.
- And practice monitoring and auditing — always know who’s connecting to your database and what they’re doing.
Do those things consistently, and you’ll be way ahead of most professionals in that space.
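The auditing point — always knowing who still has access — can be sketched as a simple stale-account review. The account records and thresholds here are invented for illustration; a real review would pull from a directory service such as Active Directory:

```python
from datetime import date, timedelta

# Hypothetical account records: (username, last_login, still_employed)
accounts = [
    ("jsmith", date(2025, 9, 1), True),
    ("contractor7", date(2024, 3, 15), False),  # left over a year ago
    ("dbadmin2", date(2025, 1, 2), True),       # active employee, long idle
]

def flag_stale(accounts, today, max_idle_days=90):
    """Flag accounts that are orphaned (owner gone) or long idle."""
    cutoff = today - timedelta(days=max_idle_days)
    return [
        name for name, last_login, employed in accounts
        if not employed or last_login < cutoff
    ]

print(flag_stale(accounts, today=date(2025, 10, 1)))
# ['contractor7', 'dbadmin2']
```

Even this trivial check would catch the year-old active account described above — the hard part in practice is running the review consistently, not the logic itself.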
Instructor:
Exactly. I teach a database class with a security component, and I emphasize the same thing — connections, networks, and access all matter, not just SQL.
Greg Schaffer:
That’s spot on.
And it’s not just about keeping intruders out — it’s about managing legitimate access responsibly.
People change roles, contractors come and go — processes have to adapt.
If you don’t have good policies for that, you’ll end up with privilege creep, orphaned accounts, and eventually a breach.
That’s why governance is just as important as technology.
Student:
I wanted to ask about social engineering — how it’s evolved over the years — and if you’ve seen any notable examples or changes.
Greg Schaffer:
Ah, social engineering — one of my favorite topics.
It’s also one of the hardest problems to solve because, at its core, it’s about human behavior.
Back in the early days, we used to tell people to look for poor grammar or spelling mistakes in phishing emails. Those were the classic giveaways.
But that doesn’t work anymore.
With the rise of AI, phishing emails are now perfectly written — no typos, no awkward phrasing, and often perfectly mimicking legitimate styles of communication.
Attackers are using AI to craft messages that look exactly like they came from your boss.
And now, we’re seeing deepfakes entering the mix.
There was a case last year — I think in Hong Kong — where a company’s CFO was on a video call with what he thought were several other executives.
They discussed wiring $25 million for what seemed like a legitimate transaction.
The problem? Every single other person in that meeting — including the CEO — was fake. AI-generated video and voices.
Only the CFO was real.
It was an incredibly sophisticated deepfake attack.
So how do you defend against that?
It’s tough. Technology can help, but ultimately it comes back to process, awareness, and verification.
Let me give you a more lighthearted example — a physical social engineering story that went wrong (for the tester).
I used to work at a bank, and we hired a firm to perform physical social engineering tests.
Their job was to try to gain unauthorized access to branch offices and see if they could get to sensitive areas.
The rule in these engagements is simple: if you’re challenged by staff, you must immediately stop the ruse and produce a “get out of jail free” letter — basically authorization from the bank’s risk office confirming it’s a test.
Well, in one instance, the tester was challenged by a teller who asked, “Who are you?”
The tester said, “You’re right — I’m not really here to fix your printer,” and then reached behind his back… to grab the authorization letter.
Of course, the staff thought he was reaching for a weapon and panicked.
Needless to say, we never worked with that firm again — and that tester probably found a new career path shortly after!
But jokes aside, that story illustrates how dangerous — and unpredictable — social engineering can be.
It comes in all forms: phishing, pretexting, baiting, tailgating, and now AI-generated deception.
There’s no single defense.
The best approach is layered — combining technology, awareness, and culture.
And above all, encouraging verification before action.
Trust, but verify — and when in doubt, verify again.
Student:
Is there any kind of software or machine learning that can detect social engineering attempts, like phishing or deepfakes?
Greg Schaffer:
Not effectively — at least not across the board.
The problem is that social engineering relies on context and emotion.
Technology can detect patterns — but it can’t fully understand human persuasion.
Machine learning can help with email filtering or anomaly detection, sure — but attackers are always evolving faster.
There’s no permanent technical solution to the human condition.
The best we can do is reduce the likelihood of success through education, awareness, and strong processes.
That’s why continuous awareness training — done right — is so critical.
Student (in chat):
If most attacks happen because of human mistakes, how can we train end users better? How often should we do it, and should the training be different for people who handle sensitive data?
Greg Schaffer:
Excellent question — and yes, 100% yes: training should be role-based.
Not everyone faces the same risks.
For example, someone in HR deals with personal information — they need to know privacy risks and phishing tactics.
Finance handles money — they need to recognize wire fraud and business email compromise.
IT staff need to understand technical hygiene, configuration management, and incident reporting.
So training should be tailored to the risk of the role.
And it shouldn’t just be a “once-a-year PowerPoint.”
That’s not effective.
Security awareness should be continuous — short, relevant, and frequent.
Quarterly micro-trainings are far more effective than a single long session.
You can also incorporate simulations — phishing tests, scenario walkthroughs, tabletop exercises — anything that reinforces behavior through action.
But more important than the format is the mindset.
If you tell users “Don’t click this” or “Don’t do that” without explaining why, they won’t internalize it.
We need to teach risk-based thinking.
Instead of saying “Never use public Wi-Fi,” explain why the risk is often overstated — and how to evaluate it rationally.
Here’s an example:
There’s a lot of fear-mongering around something called “juice jacking” — the idea that if you plug your phone into a public USB charging port, someone could steal your data.
Technically possible? Sure.
Actually ever happened? Not really.
There’s never been a verified real-world case outside of controlled demonstrations.
So when security professionals say “Never plug into a public USB port,” what they’re really showing is a misunderstanding of risk management.
Let me explain it this way:
Risk = Likelihood × Impact.
Yes, the impact of your phone being compromised is high.
But the likelihood of it happening from an airport charging station? Virtually zero.
So the overall risk is negligible.
Now, if your phone is dead and you need it for navigation or safety, that’s a real, immediate risk.
The rational decision is to charge it — maybe use a data blocker if you’re worried, but don’t let fear guide you blindly.
That’s risk-based thinking.
And that’s the kind of mindset we need to cultivate in users.
Teach them to evaluate situations instead of memorizing rules.
It’s like teaching someone to fish versus giving them a fish.
If you teach users to think critically, they’ll handle novel threats — not just the ones they’ve seen in training.
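The likelihood-times-impact idea can be sketched in a few lines. The scenarios and 1-to-5 scores below are illustrative judgment calls, not measured data — the point is the comparison, not the numbers:

```python
# Toy risk scoring on a 1-5 scale, illustrating risk = likelihood x impact.
# Scenario names and scores are illustrative assumptions only.

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

scenarios = {
    "juice jacking at an airport kiosk": (1, 4),    # very unlikely, high impact
    "phishing click by an untrained user": (4, 4),  # likely and high impact
    "dead phone when you need navigation": (3, 3),  # the competing everyday risk
}

# Rank from highest to lowest risk.
for name, (likelihood, impact) in sorted(
    scenarios.items(), key=lambda kv: -risk_score(*kv[1])
):
    print(f"{risk_score(likelihood, impact):>2}  {name}")
```

Ranked this way, the dead phone outscores the charging kiosk — which is exactly the rational trade-off described above.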
Instructor:
So it’s almost like risk classification — evaluating and ranking risks appropriately.
Greg Schaffer:
Exactly.
It’s like moving from a “waterfall” approach to an “agile” one — constantly assessing risk in real time.
For example, right now, while we’ve been talking for over an hour, I realized my throat was getting dry.
I assessed that as a risk — “Will this affect my ability to communicate clearly?”
I decided the impact was moderate, the likelihood was high, so I mitigated it by grabbing an energy drink.
(Laughter from the class.)
That’s real-time risk management!
Instructor:
(Laughs) Fair enough. I think that illustrates the concept perfectly.
We’ve got about ten minutes left. Any final questions?
Student:
What about securing data in transit versus at rest — like with backups or offsite storage?
Greg Schaffer:
Another great question — and a perfect one to close on.
Data security boils down to three key concepts:
- At rest — stored data.
- In transit — data being transmitted.
- In use — data actively being processed.
For data at rest, encryption is your primary control.
It protects against theft of storage media, backups, and unauthorized access.
For data in transit, encryption again — through HTTPS, VPNs, TLS — ensures that even if data is intercepted, it’s unreadable.
And as hardware and software have improved, the performance impact of encryption has become minimal — so there’s really no excuse not to use it.
One thing I’d love to see more of in the future is micro-level encryption — securing data at the packet or field level, independent of where it’s stored or transmitted.
That’s not mainstream yet, but it’s coming.
Until then, defense in depth still applies: encrypt at every layer, manage keys securely, and always classify your data.
Data classification — public, private, confidential, restricted — determines how much protection it needs.
That’s one of the most important policies an organization can have.
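A classification policy is ultimately a mapping from labels to minimum required controls, which can be sketched directly. The labels follow the four tiers mentioned above, but the specific controls assigned to each tier are illustrative assumptions — real policies vary by organization:

```python
# Sketch: mapping data classification tiers to minimum required controls.
# The control sets per tier are illustrative, not a prescriptive standard.

CONTROLS = {
    "public":       {"integrity checks"},
    "private":      {"integrity checks", "access control"},
    "confidential": {"integrity checks", "access control",
                     "encryption at rest", "encryption in transit"},
    "restricted":   {"integrity checks", "access control",
                     "encryption at rest", "encryption in transit",
                     "key management", "audit logging"},
}

def required_controls(classification: str) -> set:
    """Look up the minimum controls for a given classification label."""
    return CONTROLS[classification]

print(sorted(required_controls("confidential")))
```

Note that each tier is a superset of the one below it — higher sensitivity never means fewer controls, which is a useful sanity check on any real policy.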
Instructor:
That’s fantastic advice. Any final thoughts you’d like to share?
Greg Schaffer:
Just this — information security isn’t really about technology. It’s about people, processes, and purpose.
Technology changes every week. But principles — risk management, communication, ethics, curiosity — those never go out of style.
If you focus on understanding why you’re securing something — not just how — you’ll always have a career in this field.
And please, stay curious. Keep learning. Build your network. And when you’re in a position to do so, give back.
That’s how we move the industry forward.
Instructor:
That’s a great way to close. Everyone, let’s thank Mr. Schaffer for spending time with us today.
Students (in unison):
Thank you!
Greg Schaffer:
Thank you — I really appreciate the opportunity.
One of my biggest passions now is helping the next generation of professionals grow.
If anyone wants to reach out, feel free to connect with me on LinkedIn — just let me know you’re from this class so I don’t mistake you for a sales message!
Thanks again, everyone — and good luck in your cybersecurity journeys.