Scammers won’t hold back, why should we?

Simulated phish have unique traits no sender can hide from…

I'll kick off by saying I have no problem with phishing simulations. Relevance and plausibility establish trust, so logically the plausibility of email threats can be lessened by making people aware of them - so why not send a few in the wild? They're also a great way to extend learning, and to reinforce the ways in which someone can spot a threat. I once heard it called a ‘fire drill’, and I liked that; seen from the perspective of an employee, I think that description makes sense.

A lot of the arguments for using phish like these - the infamous pay rise phish being the prime example - focus on the fact that it's what scammers do, so to get an Org battle ready, we should simulate the whole spectrum of their pretexts. There is credible logic to that, and keeping things pragmatic and real (when simulating a threat) is a strong enough concept in itself to always justify ‘tough’ phish. But as with anything that requires justification, there are more perspectives to consider - why isn't it universally agreed that this is what should be done?

So before we all hit send, let’s look at just how close we really are to ‘doing things as the scammers do’, because it’s a fascinating topic — and well worth exploring.

What do scammers do?

They want clicks, for the most part. Click through to their login page, click and open their attachment; their goal is clicks. I’ve written before about how we give scammers far too much credit for being skilled at manipulating our amygdala — that’s what might be happening biologically, but it’s also just a by-product of them simply using things we/they are most likely to click on. 

I mention this because we shake our heads at the depths scammers sink to - and they definitely do at times - but we cloud the intent and the skill scammers actually have by applying that to everything they do. The majority are blissfully unaware of the emotional reactions they trigger, and certainly don't have the empathy to explore them the way security professionals can. Playing devil's advocate for a second: could we even be seen as less empathetic than the scammers, because we are so much more aware of the personal impact, and have shared values with those we're phishing?

As second nature, scammers abuse trust - the trust we have in people, and in brands. When a scammer sends a phishing campaign, they often aren't targeting people in any specific or personal way; it's just a batch of email addresses whose turn it is to get hit. It could be a mixture of roles, Orgs, industries, and time zones - it depends on how well they've processed the leads.

Their main concern after hitting send is how many inboxes they will land in. That battle is always there for them; if they don't win it, they definitely won't be getting their precious clicks.

As a (generalised) process and approach, it's fairly devoid of human focus. Humans are just convenient for them because they click things. Which isn't an earth-shattering revelation: have you tried using a computer or phone without clicking anything? Clicks are the de facto behaviour of being online - we navigate, explore, and make decisions by clicking.

How we copy what scammers do

So the scammer's objective was clicks. Hmm. Is that the objective of the phishing sim? It isn't the primary objective, but it is a sub-objective - or why present a test that gets a pass or a fail (0/1) based on a click? Sure, the reason for collecting the click is different - to gauge the human risk profile, to provide a ‘scammed’ experience, and to educate off the back of it - but all of that needs clicks. Without clicks, it's just a weird newsletter.
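
(As an aside, and purely as an illustration: here's a minimal sketch of what that 0/1 measurement boils down to. The names and fields are made up for this post, not any real sim platform's API - every recipient resolves to a clicked/didn't-click flag, and the headline ‘human risk’ number is just the proportion of ones.)

```python
# Hypothetical sketch of the 0/1 scoring a phishing sim boils down to.
# Field and function names are invented for illustration only.
from dataclasses import dataclass


@dataclass
class SimResult:
    recipient: str
    clicked: bool  # the pass/fail (0/1) outcome the campaign records


def click_rate(results: list[SimResult]) -> float:
    """Share of recipients who clicked - the headline 'human risk' number."""
    if not results:
        return 0.0
    return sum(r.clicked for r in results) / len(results)


results = [
    SimResult("a@example.com", True),
    SimResult("b@example.com", False),
    SimResult("c@example.com", False),
]
print(f"Click rate: {click_rate(results):.0%}")  # -> Click rate: 33%
```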

And that’s where things get complicated, because everything outside the simulated phish landing in an inbox, and it being clicked on, is so different to a real phish.

Firstly, we know we are landing in pretty much every inbox. There are no technical hurdles to overcome and dilute the attack. Our ‘attack’ will be successful, and we will 100% get clicks.

And that's quite a responsibility to have. We know without question that someone who's a part of our organisation's success and ‘family’ will believe our pretext is real. So if we give a pay rise, that person has a pay rise, until they discover otherwise.

If someone has clicked, they’ve reacted emotionally to it. Things would be a lot simpler if we could assign words like ‘pleased’ or ‘happy’ to describe the journey every clicker will take, but that’s not digging deep enough. The real emotional journey (we’ve tricked out of them) is about why they are pleased or happy. Are they happy through relief? Are they pleased because they can now help a family member? And that might be the reaction of a tiny minority, but those that do react like that are more likely to be facing personal pressures the pretext story is solving. They may only feel that relief for a few seconds - but to them, it’s real, and they’re biologically reacting to it. 

And I guess that's why the pay rise phish is sent out: to show how scammers abuse a trusted identity to deliver a pretext that causes a notable emotional reaction. And on paper, it's a very good email to highlight just that - I can see how, given the task of simulating attacks, it would immediately jump out as a likely click-winner - but there are some fundamental dynamics that are unique to simulating this, and similar, phish.

Out of the phishing sim, into the fire

Phish themed around financial and Covid-related benefits have had patches of negative media coverage over recent years, and Organisations have had to publicly apologise for sending them. These types of phish seem to suffer from the same hidden ailment: the abuse of trust is just too close to home.

Scammers absolutely abuse the trust we have in others, and a simulated pay rise phish abuses trust too. But crucially, it abuses the trust the employee has in the Organisation they work for - the Org sending the sim is using the trust between themselves and the employee to make the employee believe the pretext is real. Once an employee knows it was sent internally, it is no longer a phish a scammer might send; it's an email their organisation has sent to try and trick them.

The pay rise email also has another problem to contend with: by its very nature, it doesn't apply to those running the company - they award the pay rises, and arrange the Covid assistance. They aren't the audience for emails like these, and it leaves the door wide open for them to be the target of any upset it might cause. By excluding those who can genuinely award the benefit used as bait, it throws petrol on the flames of having used their own brand to add validity to the phish. And I'm lumping the Org, its execs, and the campaign sender together, because from an employee's perspective, and the media's, I'm not sure how clearly they'll separate them if they're looking to give a face to whoever was responsible for their reaction.

And then other factors can be pulled into the mix. Have shareholders had a large payout recently? Has there been a pay freeze, or a reduction in staff? Have specific members of staff asked for, and been denied, a pay rise? An angered person will naturally pull all the reasons their anger is justified towards them, and we can't anticipate how that will unfold for every individual. And that's proven now: it has unfolded to the extent that someone was pissed off enough to post it online, and the media picked up on it.

And that's an interesting development. The court of public opinion will never, at any stage, side with an organisation that awards ‘fake pay rises’ - I think we can probably all agree on that. There is just no security justification, no matter how sound it is or how well it's put across, that will stand a chance of making the headline. If two thirds of businesses don't even do any training, it makes sense that the public won't side with something they see as unnecessary.

And whilst it’s not a huge character in the news pantomime, the misjudged phish is now a character nonetheless, and that could be abused. Anyone sent a phish of that ilk can screen-grab it, and post it online. And that could be a worry for the well-meaning button-clicking sender, because some questions asked internally could be awkward. Was this a phish we’ve seen in an attack? Do you have stats on the frequency in the wild? Have they been trained on this pretext before now? What are we doing from a technical POV to prevent this phish, if it’s such a risk? Did you consult legal and HR? Did you think this was a good idea after the lay-offs? 

The chances of it being weaponised by disgruntled employees are tiny - easily small enough to disregard, unless the frequency of stories in the media increases. But to circle back a little: it was, we must assume, spikes of anger that ultimately propelled provocative phish into the wider world.

So what if nothing bad happens after sending it - there’s no indication of anger - does that mean no one has any ill-feeling towards it? If there’s some mild unhappiness, is that still okay? Do employees buy into the need to be tricked to the extent it won’t alter how they feel about a security programme, even if it isn’t always pleasant?

…And I have no experience in that area, so I'm not going to comment!

I hope you found this blog interesting. I'm not trying to push opinions on people; I just wanted to talk about the parts of the whole ‘provocative phish’ discussion I find interesting :)
