The annual release of the Verizon Data Breach Investigations Report (DBIR) has been one of my most anticipated “news” releases of the year since 2016. It’s a great opportunity to take both a macro and a micro view of how cybersecurity is impacting our social and economic activities, in a format that lends itself well to answering nuanced questions.
This year’s report largely continues trends from previous years, and the purpose of this opinion piece is not to tell you how bad phishing and social engineering are — we all know that. Rather, we take it as an opportunity to peek into the future (Hari Seldon style) and ask ourselves how PhishDeck, as a product and a company, needs to evolve to accurately reflect it.
Let’s get the summary out of the way. All quotes and figures mentioned in this piece are taken from the 2021 Verizon DBIR.
- ~35% of successful breaches originate from a social engineering attack;
- Business email compromise losses ranged from $250 to $984,955, with 95% of losses falling between those lower and upper bounds ($30,000 being the median);
- COVID-19 resulted in an increased spread of ~12.5% and ~6% for phishing and ransomware respectively (we’ll get to the correlation in a bit).
Alarming numbers, but nothing we were not already aware of. What I am interested in are the emerging trends in the attack variants that follow an initial phishing email (e.g. credential theft, malware, financial fraud), followed by trends in our ability to respond to them both technically (SOAR) and socially (security awareness, culture, processes).
This year, and specifically the last couple of weeks, has seen a tremendous increase in ransomware attacks. The breach-to-payout ratio is still not favourable (for attackers, that is), and I suspect this is due to increased awareness and legal risk surrounding negotiations with ransomware groups such as DarkSide, making it less likely for victims to pay up and, even when they do, for the attackers to successfully launder the funds.
The success behind Business Email Compromise (BEC) may be attributed to the psychological effect social engineering has on a victim, causing them to take action without having time to think or seek guidance, fueled by urgency (“We need to sign this NDA by today”) or abuse of implicit trust (“Hey this is Jennifer, your CEO…"). I feel the report describes this succinctly.
> Psychological compromise of a person, which alters their behavior into taking an action or breaching confidentiality.
We see that 85% of phishing attacks target credentials as a data asset. What we also see is an uptick in malware utilising social engineering as its delivery vehicle (note the uptick on the green line below).
> A lot of Social Engineering breaches steal Credentials and once you have them, what better thing to do than to put those stolen creds to good use, which falls under Hacking. On the other hand, that Phishing email may have also been dropping Malware, which tends to be a Trojan or Backdoor of some type (Figure 74), a trap just waiting to be sprung.
So far at PhishDeck we’ve focused on credential phishing; however, we have always anticipated supporting phishing simulations that leverage attachments or other file-based vectors. The question then becomes: what type of attack simulation would provide the most value in this regard (e.g. C2, trojan, ransomware)?
Credential harvesting remains by far the most common attack scenario and is where we need to keep our focus for the time being.
As you can see in Figure 77, no organizations experienced consistent malware by email.
That being said, email malware remains a key simulation scenario worth testing, so the answer is when rather than if. As a phishing simulation product, it may suffice to have a simple canary dropper, where any form of execution fires an event notifying the user that the Target opened the phishing simulation email, clicked the link (or downloaded the attachment) and executed the malware. Anything more than that may feel too invasive for the scope of our product, which aims to provide simulation that is safe both for the user and for their Targets.
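To make the canary-dropper idea concrete, here is a minimal sketch of what such a payload could look like: it does nothing besides report its own execution back to the simulation server and exit. The callback endpoint, event names and token format below are illustrative assumptions, not PhishDeck’s actual API.

```python
# Minimal "canary dropper" sketch: a harmless payload whose only action is
# to phone home that it was executed. Endpoint and field names are hypothetical.
import json
import urllib.request

CALLBACK_URL = "https://simulation.example.com/events"  # hypothetical endpoint


def build_canary_event(simulation_token: str) -> dict:
    """Build the event payload fired when the Target executes the file."""
    return {
        "event": "payload_executed",
        "token": simulation_token,  # ties the event back to a campaign Target
    }


def fire_canary(simulation_token: str) -> None:
    """POST the execution event; the dropper then exits with no side effects."""
    data = json.dumps(build_canary_event(simulation_token)).encode()
    req = urllib.request.Request(
        CALLBACK_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=5)
```

Because execution itself is the only signal we care about, the payload never needs to touch the filesystem or registry, which keeps the simulation safe for the Target’s machine.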
Before moving onto the response trends, a brief comment on p. 51.
In Figure 76, you can see the click rate could be anywhere from almost none to expecting over half of respondents to click. Additionally, real phishing may be even more compelling than simulations.
With PhishDeck this narrative (and these numbers) changes significantly, primarily for two reasons.
- As a product, there is great emphasis on low-volume, high-quality templates (pretexts). It is pointless to have thousands of subpar templates available; it only makes it exponentially more difficult for an administrator to create realistic campaigns. You can do anything, but you can’t do everything;
- Our phishing simulation is based on realtime phishing rather than static phishing landing pages. I’ll save you the first-in-market vendor spiel, but if you are keen on the technical underpinnings of how this works, check out Phinn.
There are some areas into which I would be really interested to see Verizon DBIR extend their research when it comes to social engineering. It would be great to get a better understanding of the following.
- The types of domains that are being configured to host phishing pages. Are these mostly generic (e.g. login-here.com) or specific typosquatted domains (e.g. gooogle.com)?
- Outside of the business context, it would be interesting to understand how social engineering attacks differ for individuals such as independent journalists who are being specifically targeted.
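On the first question, the generic-versus-typosquat distinction is easy to operationalise. A rough sketch, assuming a small list of brands you care about: flag a domain as a likely typosquat if its label sits within a small edit distance of a known brand, otherwise classify it as generic. The brand list and threshold here are illustrative.

```python
# Classify phishing domains as "typosquat of <brand>" or "generic"
# using plain Levenshtein edit distance (illustrative threshold of 2).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def classify(domain: str, brands: list[str], threshold: int = 2) -> str:
    """Label a domain by comparing its first label against known brands."""
    name = domain.split(".")[0]
    for brand in brands:
        # distance 0 is the real brand itself, so only 1..threshold counts
        if 0 < edit_distance(name, brand) <= threshold:
            return f"typosquat of {brand}"
    return "generic"


brands = ["google", "microsoft", "paypal"]
# classify("gooogle.com", brands)    -> "typosquat of google"
# classify("login-here.com", brands) -> "generic"
```

Real typosquat detection also has to account for homoglyphs and keyboard-adjacency swaps, but even this crude split would make the DBIR data point far more actionable.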
I will start this segment with a personal comment on security awareness. Depending on how it is approached, running phishing simulations can be much more than a compliance checkbox. I like to think of phishing simulations as fire drills: they not only allow you to assert that your processes around reporting and response are working, but also let you measure your progress over time as your organization changes.
The value lies in continuously placing employees in a culture and mindset that encourages them to question the emails and documents they receive and, when in doubt, gives them a well-established set of communication channels (i.e. Slack, email, on-call) to reach out to the security team. This year’s report further reinforces this ethos.
> Huang and Pearlson’s cybersecurity culture model suggests that cyber secure behaviors are driven by the values, attitudes, and beliefs of an organization, which are visible at the leadership, group, and individual levels. Influencing how employees prioritize, interpret, learn about, and practice cybersecurity allows managers a way to create a cybersecurity culture within the organization.
> This means that when employees are falling for the bait, they don’t realize they’ve been hooked. Either that, or they don’t have an easy way to raise a red flag and let someone know they might have become a victim. The former is difficult to address, but the latter is simple and should be implemented—something as basic as a well-publicized email of firstname.lastname@example.org (which, of course, is monitored) can give you a heads-up that something is amiss.
Correlate this with the exponential increase of BEC in security incidents and you start to see the problem. Failing to foster a culture where employees feel comfortable, and are rewarded, for reporting phishing emails (whether or not they were compromised) places your organization at a significant disadvantage.
The good news is that most security teams seem to understand the value of security awareness training as a control for phishing and BEC. Across three implementation groups, we see 96% of users leveraging CIS Control 17 (Implement a Security Awareness and Training Program) and 74% leveraging CIS Control 19 (Incident Response and Management).
In terms of detection, it turns out that stolen credentials lead to the fastest compromises while being among the hardest to detect immediately. The fastest discoveries are attributed to incidents where an error is readily apparent (e.g. the website is down). The good news here is that detection and response times are drastically improving, from “Months or more” to “Days or less”.
One area where we are not seeing enough push is web applications implementing some form of risk-based authentication, where heuristics such as GeoIP location are used to trigger strong forms of MFA (e.g. TOTP, U2F/FIDO2) and to immediately notify users of the attempt, or where authentication is delegated to common IdPs such as (but not limited to) Google Workspace, Okta and Azure AD.
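The core of risk-based authentication is a simple scoring decision. A hedged sketch of the idea, with illustrative signals and thresholds (the user store, risk weights and factor names below are assumptions, not any vendor’s API): compare the attempt’s GeoIP country and device against what was last seen for the user, and step up to a stronger factor as the signals diverge.

```python
# Toy risk-based authentication decision: pick a second factor based on
# how unfamiliar the login attempt looks. Signals and weights are illustrative.
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    user: str
    country: str        # derived from a GeoIP lookup upstream
    known_device: bool  # e.g. a device cookie previously issued to this user


# Stand-in for a user profile store keyed by username.
LAST_SEEN_COUNTRY = {"alice": "MT"}


def required_factor(attempt: LoginAttempt) -> str:
    """Return which second factor to demand for this attempt."""
    risk = 0
    if attempt.country != LAST_SEEN_COUNTRY.get(attempt.user):
        risk += 2  # an unfamiliar country is a strong signal
    if not attempt.known_device:
        risk += 1  # a new device is a weaker signal on its own
    if risk >= 2:
        return "fido2"  # strong, phishing-resistant factor
    if risk == 1:
        return "totp"
    return "none"
```

A familiar country and device sail through, a new device alone triggers TOTP, and an unfamiliar country forces a phishing-resistant factor; a real implementation would also fire the user notification mentioned above.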
There are some interesting takeaways here for us.
- With a post-pandemic workforce and the drive for hybrid or remote-first approaches, social engineering will see another surge unfold over the next few years;
- Remote-first work will bring a number of day-to-day communication tools that place heavy emphasis on sharing and human interaction (e.g. Slack Connect). We will increasingly see these used as attack vectors, beyond traditional email and phishing;
- As an industry we are still struggling to develop a culture that encourages our employees to report phishing emails — instead relying on external parties. This perhaps stems from the blame-culture historically associated with phishing simulation tests.
I glossed over a number of interesting data points that I highly recommend going over. For example, according to the DBIR, in EMEA AppSec is still a larger problem than social engineering. Why is that? When reading segments of the report, take your time to really picture the narrative that the data points are trying to explain. I find it extremely insightful and rewarding.