Back in my day...
I often joke with my kids about how old I'm getting. I was born in the 1900s but my memory is still sharp. I remember a time when MFA was considered complicated, expensive, difficult to roll out, and only realistic for the larger enterprises.
- You needed hardware tokens
- You needed on-premises infrastructure
- You needed dedicated engineers
- You needed complex rollout plans and LOTS of user education
It was something mature security teams did once they had everything else in place.
I recall trying to test a legacy MFA provider and being told that my company was too small for their solution and that it would cost too much.
Now MFA is table stakes.
You can enable MFA in minutes. Employees already use mobile devices. You can easily enroll remote team members. Providers are cloud native. What was once an advanced control became a day-1 control.
Like MFA, deception is often seen as advanced - something you layer on top once you have perfected logging, alerting, and response.
That mindset is outdated.
Legacy Assumptions
The common belief is that deception is mainly for mature security teams.
You need:
- Dedicated detection engineers
- Highly tuned SIEM rules and detections
- Hardware distribution and management
- Deep understanding of attacker behavior
In other words, deception is positioned as a heavy lift that requires a long commitment.
This assumption sounds reasonable. Deception involves realism. It involves placement. It requires thinking like an attacker.
But so did MFA.
There was a time when networks were highly segmented. Remote work was an exception. BYOD was extremely rare. Cloud adoption was low.
But the environment was ready to evolve.
Where It Breaks Down
The reason MFA became accessible was not just the need for better security. MFA was not a new concept or market. The legacy providers had been around for a while.
So what changed?
The environments changed: the users became more involved, the technology landscape shifted, the attack surface expanded.
- Identity shifted to the cloud
- Remote work was no longer reserved for the occasional IT person
- Everyone was getting a mobile device (back then it was a BlackBerry, if you remember them!)
- Hardware tokens were no longer the primary MFA option
The surrounding ecosystem reduced the friction. Honestly, Duo timed the MFA market perfectly, pairing stronger security with user trust for a whole new set of companies and use cases.
I see a similar shift happening for deception within today's ecosystem.
- Rapid advancements and adoption of AI & LLMs
- Global adoption of cloud providers, often across multiple platforms
- Containerisation has standardised packaging and deployment but increased system complexity
- An expanding supply chain attack surface: deeper dependency chains, third-party risk, and CI actions
- Infrastructure as code adoption within DevOps
We are no longer manually deploying servers, setting up network gear, or sprinkling static canary files across environments. We are working in environments that are dynamic, highly distributed, and spanning many facets of today's attack surface.
The traditional barriers to deception were rooted in older infrastructure models. They assumed static networks, manual processes, and high operational overhead.
That world has faded.
A Better Framing
Instead of asking whether your team is "mature enough" for deception, the better question is this: are you already using a cloud environment?
If the answer is yes, you are closer than you think.
Deception should not be positioned as a final layer. It should be part of an Assume Breach mindset from early on.
- Assume an attacker will get credentials.
- Assume a workload will be exposed.
- Assume lateral movement will happen.
- Assume AI and LLMs will expand your risk.
Deception fits naturally into that model because it focuses on intent. It triggers when an attacker interacts with something they should never touch.
That signal is often cleaner than another log source. It is simpler than building and maintaining hundreds of behavioral rules.
And it does not require a large SOC team to get value.
Practical Implications
If we treat deception like we once treated MFA, we will keep it artificially gated.
Late adoption limits the trust you can place in key networks and environments. I saw this with MFA: teams without it struggled to adopt a Zero Trust framework because they lacked a layered security approach. Even today, breaches still happen where identities are missing MFA.
The same is true of deception: late or limited adoption leads to weaker trust and longer detection times within those key environments.
Instead, teams should:
- Stop thinking of deception as an advanced optimization.
- Start treating it as a foundational Assume Breach control.
- Leverage modern cloud and IaC to deploy and manage security canaries.
- Use automation and AI to reduce operational overhead.
- Design deception early alongside threat detection, not after it.
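The IaC point above can be sketched in a few lines. This is a toy reconcile loop in the spirit of `terraform apply`, not a real provider integration; every resource name and type here is an illustrative assumption. Canaries are declared as data, and automation converges the environment toward that desired state idempotently.

```python
# Declarative, IaC-style canary management: canaries are data, and a
# reconcile loop makes the environment match the desired state.
# All resource names and types below are illustrative assumptions.
DESIRED_CANARIES = [
    {"type": "s3_bucket", "name": "backups-archive-decoy"},
    {"type": "iam_key", "name": "svc-legacy-export-decoy"},
]


def reconcile(desired: list[dict], deployed: set[str]) -> list[str]:
    """Return the create actions needed to converge on the desired state."""
    return [
        f"create {canary['type']} {canary['name']}"
        for canary in desired
        if canary["name"] not in deployed
    ]


# First run deploys everything that is missing.
deployed: set[str] = set()
actions = reconcile(DESIRED_CANARIES, deployed)
print(actions)

# After applying, a second run is a no-op - the loop is idempotent,
# which is what keeps the ongoing maintenance cost near zero.
deployed.update(c["name"] for c in DESIRED_CANARIES)
assert reconcile(DESIRED_CANARIES, deployed) == []
```

In practice this role is played by Terraform, CloudFormation, or similar tooling; the sketch only shows why declaring canaries alongside the rest of your infrastructure removes the manual deployment burden.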
Deception provides high-fidelity, low-volume signal with far less tuning than complex detection rules.
It can level the playing field rather than widen the gap.
The Tracebit Perspective
At its core, this is about signals.
MFA evolved because the ecosystem reduced friction and made strong identity controls accessible to everyone. If a user got an unexpected MFA 'Push' while sitting on their couch at home, they could hit "deny" on the mobile app and give the security team a high-fidelity signal to investigate.
Deception is at a similar inflection point.
Cloud native infrastructure, containers, infrastructure as code, and AI have removed many of the operational hurdles that once made deception feel complex.
Modern deception programs can be low maintenance, highly adaptive, and widely deployed across the attack surface. A 'set-and-forget' approach to deception doesn't mean it's outdated or stale.
The question is no longer whether deception is technically possible for teams of all sizes.
The question now is whether we continue to treat it as optional or embrace it as a foundational part of modern security design.
Conclusion
MFA was once considered complex and reserved for the larger enterprises or government agencies. Now it is a standard part of any security program and foundational to security frameworks like Zero Trust and NIST.
I feel like deception is on the same path.
If we lower the barriers, simplify deployment, reduce ongoing maintenance, create dynamic and realistic decoys, and align with cloud native practices, then teams of all sizes can implement deception early in their programs.
Not as the final layer, but early in their defense strategy.
Not just against external threat actors, but against internal risk, too.
Because the earlier you design for attacker behavior, the stronger your defenses become.
If you are curious to learn more about how Tracebit has lowered the barrier to deception, book a demo.

