I just completed recording an interview for The Virtual CISO Moment podcast where, as part of the “top cyber threats to small and midsized businesses” segment, we discussed the potential threats posed by generative AI tools such as ChatGPT. The truth is, as my guest noted, generative AI is a tool, and tools can be used for good and not-so-good purposes. But do you need to explicitly document processes to mitigate this risk?
At vCISO Services, LLC our prime concern is the security posture of our clients’ business, and compliance follows. There is no regulatory or framework requirement (yet) for standalone generative AI policies. But, as a common saying in information security goes, compliance doesn’t equal security, and its corollary is that you should take actions to improve security because they do just that, not for compliance purposes. On that basis alone, the argument of “we don’t have to do it” (because no outside pressure requires it) falls apart.
One may argue that generative AI is just another location where confidential information can be stored and processed, and is therefore likely covered under existing policy such as the Acceptable Use policy or Information Classification and Handling policy (or equivalent). There is validity in this position: the more targeted policies an organization writes, the more likely gaps will form in the overall policy structure. For that reason, policies are constructed at a high level, with details left to procedures and standards.
But there is precedent to think otherwise. Most organizations have (or should have) a standalone Social Media policy. Social Media use is a blend of information security, marketing, and communications concerns. Because of that, assigning its content to just one of those areas’ policies may conflict with the other areas’ responsibilities. In other words, Social Media use could fall under the Acceptable Use policy, but not all aspects (e.g., when, how, and what to post to social media) apply to all users, since only a select few should have the access and authority to post to company social media assets.
The alternative of not having a standalone Social Media policy isn’t a great option either, chiefly due to awareness. This is where the analogy for a separate generative AI policy makes sense. Just as social media was once new and rapidly gaining popularity many years ago, so is generative AI today. Staff may not be aware of the dangers associated with improper generative AI use. Take the example of Samsung earlier this year: the company allowed its software engineers to use ChatGPT to solve coding issues, and in the process the workers submitted confidential information to the tool. It is possible they simply did not realize the risk of doing so.
Thus, this is a problem not just of policy but also of awareness. While your policy suite should explicitly direct how generative AI can and cannot be used, staff need adequate training on those policies and procedures so they have the awareness to recognize when such tools may introduce risk.
Does your business need assistance evaluating and enhancing its information security policies? A virtual CISO (vCISO) from vCISO Services, LLC can help your SMB, either as a component of Subscription Virtual CISO Services or through Standalone Virtual CISO Services. Contact us at info@vcisoservices.com to get started.
(Generative AI was not used in any part of the creation of this blog post.)
Photo by Gerard Siderius on Unsplash