
Preventing Insider Threats: Secure AI Use in the Workplace

Written by Integrity Staff | August 6, 2024 at 2:13 PM

According to one recent study, about 74% of companies say they feel "moderately" or "extremely" vulnerable to cyber breaches.

It's a constant source of anxiety for just about everyone.

The Internet is moving too quickly and new threats are emerging every day.

At this point, calling it difficult to stay even one step ahead of those who wish to do you harm is likely a bit of an understatement.

Things become exponentially more complicated when you consider the fact that, from 2018 to 2020, insider attacks grew by an incredible 47%.

That's right - not every cyberattack is executed by someone halfway around the world who has been working tirelessly for weeks to try to break into your system.

More often than not, these attacks start with the people you work with (and trust) daily.

Not only do insider threats impact over 34% of businesses globally each year, but even trusted business partners account for 15 to 25% of all insider threat incidents according to one recent study.

This is especially concerning when you're talking about areas like healthcare or personal finance, where the value of information that can be stolen (and thus re-sold) is exponentially higher than it is in other fields.

All this is to say that the situation is already complicated.

It's about to become increasingly so once artificial intelligence tools like Microsoft Copilot become ubiquitous in the workplace.

So how do you prevent insider threats and secure AI use in the office, allowing you to enjoy all the benefits of these modern-day tools with as few of the potential downsides as possible?

By keeping a few important things in mind along the way. 

 

Understanding What an AI Insider Threat Is

Within the context of artificial intelligence and workplace environments in particular, it's important to know exactly what is defined as an "insider threat" so that you understand precisely what it is that you're up against.

There are two definitions - but most people are only familiar with, and concerned about, one of them.

The first is that of the rogue actor.

Perhaps an employee has just been fired and, for some time, they still have access to their various accounts.

The threat would be if that person were to abuse that access, stealing business secrets and taking them to a competitor for personal gain.

In reality, what you should be focused on are those "accidental" insider threats - employees who become vulnerabilities simply because they're not up-to-date on all the various cybersecurity threats that exist on the Internet.

Take phishing emails, for example.

Here, an employee would get an email that appears to be legitimate and from a sender they know.

It might redirect them to log into a website that, again, looks completely real.

Only the email sender isn't who they say they are, the site is illegitimate, and that employee is in the process of having their credentials stolen.

Once that username and password have been handed over, someone with malicious intentions has access to your computer systems, and you aren't even aware that a problem exists quite yet.

Artificial intelligence and related technologies introduce a unique layer of risk into this picture because of how powerful they are from the outset.

When AI is deployed across your entire infrastructure, it may have access to all data contained on a server even if individual users don't.

So depending on how permissions are set up and someone's role within an organization, they may have access to certain types of accounts and not others by design.

That AI system, however, would likely have access to all that information. At that point, the user doesn't need access - they just need to make the AI tool think they do so that they can see whatever records they want.

The same concept holds true on a larger scale when it comes to compromising your system from the outside.

Now, a hacker doesn't have to gain access to your servers to steal information.

They just need to compromise your AI tools and have those digital assistants retrieve valuable information for them.
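To make that risk concrete, here is a minimal sketch (in Python, with entirely hypothetical record names, roles, and functions - not any particular product's API) of the guardrail that closes this gap: before an assistant retrieves anything on a user's behalf, the retrieval layer re-checks the requesting user's own entitlements instead of relying on the assistant's broad service-account access.

```python
# Minimal sketch (hypothetical names): the AI assistant's service account can
# read every record, so the retrieval layer re-checks the *requesting user's*
# permissions before any document reaches the model or the response.

RECORD_ACL = {
    "patient-1042": {"dr_lee", "dr_patel"},   # who may read each record
    "billing-2024": {"finance_team"},
}

def retrieve_for_ai(requesting_user: str, record_id: str, store: dict) -> str:
    """Return a record for the AI prompt only if the human asking is entitled to it."""
    allowed = RECORD_ACL.get(record_id, set())
    if requesting_user not in allowed:
        # Deny by default: the assistant's broad access never substitutes
        # for the end user's own entitlement.
        raise PermissionError(f"{requesting_user} may not access {record_id}")
    return store[record_id]

store = {"patient-1042": "clinical notes", "billing-2024": "invoices"}
print(retrieve_for_ai("dr_lee", "patient-1042", store))    # permitted
# retrieve_for_ai("front_desk_01", "patient-1042", store)  # raises PermissionError
```

The design choice that matters here is the default deny: even though the assistant can technically read everything, nothing flows into a prompt or a response unless the person asking could have opened that record themselves.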

The most immediate impact of this type of insider threat has to do with data security - there's no telling what type of information could potentially be compromised if an AI system is breached.

For specific industries like clinical healthcare, this also instantly creates issues when it comes to compliance.

HIPAA violations, for example, come with penalties that range from a minimum of $100 per violation, if the individual was unaware they were in violation, to a maximum fine of $25,000 per year.

One could make the argument that an insider threat taking advantage of a poorly deployed AI tool is actually "willful neglect," which increases the fine to a maximum of $100,000 per year.

Similarly, with community banks and other financial institutions in particular, you'll also be dealing with the types of trust issues with your clients that will be difficult, if not impossible, to overcome.

Once your systems have been compromised, it will be very hard to convince people that you can safely deploy something as powerful as artificial intelligence and keep their data away from prying eyes once again.

 

Tools and Techniques to Detect and Prevent Insider Threats

Thankfully, there are many ways that AI and machine learning can be used for behavioral analytics, allowing systems to automatically detect anomalies that may indicate an insider threat.

To continue using the example of a clinical healthcare organization, naturally only certain types of employees would have access to a patient's full file.

There are others - including some members of the "front of house" administrative staff - who absolutely shouldn't have access because they don't need it to do their jobs.

Now, let's say that one of those people used an AI tool to essentially gain access.

They've bypassed one security measure in a clever way.

Thankfully, behavioral analytics can still catch this: someone at that permission level accessing that data deviates from the established definition of "normal," flagging an insider threat that someone should review and pay attention to.
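What that looks like in practice can be surprisingly simple. The sketch below (with illustrative role names and baselines, not any specific product's detection logic) compares each access event against a per-role baseline and flags anything outside it for human review.

```python
# Toy sketch of role-based anomaly detection over access events.
# Role names, record types, and baselines are illustrative assumptions.

ROLE_BASELINE = {
    "front_desk": {"scheduling", "contact_info"},
    "physician": {"scheduling", "contact_info", "clinical_notes", "lab_results"},
}

def flag_anomalies(events):
    """Return events where a role accessed a record type outside its normal baseline."""
    flagged = []
    for event in events:
        normal = ROLE_BASELINE.get(event["role"], set())
        if event["record_type"] not in normal:
            flagged.append(event)
    return flagged

events = [
    {"user": "fd_007", "role": "front_desk", "record_type": "scheduling"},
    {"user": "fd_007", "role": "front_desk", "record_type": "clinical_notes"},  # unusual
]

for e in flag_anomalies(events):
    print(f"review: {e['user']} ({e['role']}) accessed {e['record_type']}")
```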

It really is that simple sometimes for an insider threat to play out, which is why it's so essential to have comprehensive logging of all AI interactions.

These logs can be audited for suspicious activities to help shed valuable light on who accessed what information, when, and why.

If nothing else, should an insider threat be successful, these logs can be a helpful tool to highlight exactly what happened - all so that you can prevent it from happening again in the future.
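One way to approach that logging - sketched below with assumed field names, not a Copilot interface - is to treat every AI interaction as a structured, append-only audit record: who asked, in what role, what they asked for, and which resources the assistant touched.

```python
# Hedged sketch of structured audit logging for AI interactions.
# Field names and the log destination are assumptions for illustration.

import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("ai_interactions.log"))

def log_ai_interaction(user: str, role: str, prompt: str, resources: list[str]) -> None:
    """Record who asked the assistant for what, when, and which data it touched."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "prompt": prompt,
        "resources_accessed": resources,
    }
    audit.info(json.dumps(entry))

log_ai_interaction(
    user="fd_007",
    role="front_desk",
    prompt="Show me the lab results for patient 1042",
    resources=["patient-1042/lab_results"],
)
```

Records like these answer the "who, what, when, and why" questions during an audit, and they give you a timeline to reconstruct if an incident does slip through.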

 

Role-Based Access Control with Copilot

Microsoft Copilot thankfully includes several innovative features that help prevent it from enabling issues like those outlined above, with role-based access control (RBAC) chief among them.

By design, role-based access control limits access to sensitive data based on a user's roles and responsibilities within an organization.

If only a doctor in a healthcare facility should have access to certain types of personal patient information, only doctors will have it.

If only account managers should have access to customer financial data within a financial institution, only account managers will have it.
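At its core, RBAC is just a mapping from roles to permissions with a default-deny check in front of the data. The short sketch below (role and permission names are illustrative, not drawn from Copilot itself) shows the idea.

```python
# Minimal role-based access control (RBAC) sketch: permissions hang off roles,
# and access is granted only if one of the user's roles carries the permission.
# All role and permission names here are illustrative.

ROLE_PERMISSIONS = {
    "doctor": {"read:patient_record", "read:lab_results"},
    "account_manager": {"read:customer_financials"},
    "front_desk": {"read:scheduling"},
}

def can_access(user_roles: set[str], permission: str) -> bool:
    """Grant access only if some assigned role carries the permission; deny otherwise."""
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in user_roles)

print(can_access({"doctor"}, "read:patient_record"))      # True
print(can_access({"front_desk"}, "read:patient_record"))  # False - denied by default
```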

Customizing permissions for different roles is a process that will vary depending on the organization you're talking about.

This is true even within the same industry.

A small, private healthcare practice that has three employees will need to approach things differently than a larger hospital with a few hundred people working there.

Generally speaking, ask yourself the following question: "Does someone need access to this information to do their job effectively?"

If the answer is "no," they should not have access to it - end of story.

You should also consider this within the context of industry-specific compliance.

There will be certain laws in place that say that people who don't meet specific criteria should not have access to certain parts of the system. 

 

Educating Employees on Secure AI Practices

As is true with any new technology that your organization is embracing, regular training will be needed to help make sure that employees not only understand the full extent of what AI can do, but how to stay safe while using it.

To return to the example of phishing emails outlined above, know that this is something commonly focused on during most cybersecurity training sessions.

If you don't want someone to fall for a fake email, you need to make sure they know exactly what one might look like.

They need to be trained on the tell-tale signs and what to do if they do receive something they suspect is a phishing email.

Artificial intelligence safety training is no different; it just happens on a much larger scale.

You also need to make sure that employees understand the risks associated with the misuse of these AI-based tools.

If you want people to take this all seriously, they need to understand the full extent of the consequences you will all collectively face if something goes wrong.

Many average employees might not realize just how quickly HIPAA violations add up.

For the best results, employees should also follow certain best practices when using AI tools to minimize risk.

These include, but are certainly not limited to, things like never entering sensitive or regulated data into a prompt, double-checking AI-generated output before acting on it, and reporting anything that looks suspicious right away.

Stopping Insider Threats, One Step at a Time

In the end, it's important not to hear a term like "AI insider threat" and immediately call to mind images of science fiction stories and rogue AIs like Skynet.

More often than not, some type of data breach will originate with someone already inside your organization.

It's just that they won't always be intentional in the way you assume.

As of 2022, the majority of these insider threats were caused by user negligence.

AI insider threats are naturally more severe because of how powerful these tools are and because of the sheer volume of data they have access to.

That's why it's so important to not only continually monitor the way these systems are being used, but to train your users on all the best practices they need to stay safe.

This is one of those situations where there is no such thing as a "small problem" any longer. 

If you'd like to learn more about the steps you can take to prevent insider threats and secure AI use in the workplace, or if you have any additional questions about related technologies like Copilot security that you'd like to discuss with someone in a bit more detail, please don't delay - contact us today.