Paraphrased Videos 2

The document features an interview with Alexey Novikov, Director of the Security Expert Center at Positive Technologies, discussing his background in cybersecurity and digital forensics. He emphasizes the importance of understanding the motivations behind cyberattacks and the role of forensics in investigating incidents. The conversation also highlights the evolution of cybersecurity practices and the need for proactive measures to prevent breaches.

Hi everyone! I’m Lex, and you’re watching "These Beards."

Today we have an interview on a fascinating topic: cybersecurity and digital forensics — or just “forensics,” if I’m not mistaken.
Our guest today is Alexey Novikov, Director of the Security Expert Center at Positive Technologies.

Before we dive into the main topic — which, by the way, isn’t widely covered in the media or even on YouTube, which
caught my attention — I want to point out that in Belarus it’s nearly impossible to find anyone willing to speak openly
about forensics due to NDAs and legal constraints. So I was thrilled when Positive Technologies agreed.

But before getting into forensics, I always start by exploring my guest’s background — it helps everyone understand
where their expertise comes from.

So, Alexey, you grew up in Rostov-on-Don. Let’s start there. Tell us what that was like — where did you live, when did
you move, what was your first job?

Hi! Thanks for having me — it’s great to be here.


So, the story really begins in high school. I got bored with the local school and decided to transfer to a more
technically focused one. That led me to a lyceum affiliated with the Don State Technical University in Rostov. I did
grades 10 and 11 there, with actual university professors teaching us — so it felt like a pre-college experience. You’d
already get to explore departments, fields of study, and how university life works.

This was around 2001, just before Russia introduced the unified state exam (EGE). So our graduation exams also
doubled as entrance exams to the university.

I looked around at other options in the city — law, business, medicine — but none of them appealed to me. I always
liked math and physics.
At the lyceum, we had programming classes, and by then I had access to a PC — we used Turbo Pascal.

When I checked the university’s list of programs, one name really stood out: “Computer Security.”

You're making it sound like you made a very conscious choice — most students just follow their parents’ lead.

Sure, my parents gave suggestions, but I was pretty clear on what I didn’t want.
And yeah, the program really was my own choice.
It turned out to be relatively new — only in its second year — and if I got in, I’d be part of the third cohort.
No other university in the city offered anything similar.
I talked to older students from the lyceum already enrolled in that program — they explained how it worked, and it
sounded great.

The full course lasted five and a half years because the curriculum was still being shaped. I graduated with a degree in
mathematics, but specifically focused on computer security.

In Russian universities, the field of InfoSec spans seven specialties. One of them is “mathematician-cryptographer,”
which involves designing encryption algorithms.
Mine was more general: applied math related to programming and security — not as advanced as a full-blown math
department, but focused on areas like statistics, probability, algebra, and modeling.

There are also specialties that lean more toward engineering — like physical surveillance countermeasures, including
acoustic and electromagnetic shielding.
Yes, like the old spy movies! There’s actually a program in Russia (BGTU) that trains people to detect and block such
threats.

Back to my department — it was officially “Software for Computing Systems and Automation.” Originally, it trained
programmers.
Then, the department head added the InfoSec program, seeing its rising demand.

That gave us a unique education: programming-heavy, mathematically grounded, and security-focused.


We studied a wide array of programming languages and got deep into Unix and C.
One of our most memorable tasks was writing a server in pure C — including memory management.
Our instructor was brilliant: he would test our code using standard clients. If the file didn’t transfer properly, it meant
segmentation fault — fix your memory bug.
Then, in another class, we’d analyze vulnerabilities and learn how those same bugs could lead to remote code
execution.

All of that gave us a really solid foundation. But did you feel that your university actually prepared you for the job?

Depends on the day! Sometimes I think yes, sometimes no.


Some courses were incredibly useful — especially networking and Linux.
Also, the programming focus gave me hands-on knowledge, which later helped me understand how vulnerabilities
arise and how attackers exploit them.

I think everyone here already has a good idea of what cybersecurity is — we’ve had four episodes on it. And one key
takeaway is that programmers tend to fall for cyber traps less often.
That’s why learning to code is so valuable, even if it’s not your main goal.

Speaking of which, if you're wondering which language to start with — based on interviews on this channel, Python is
a solid choice.
Python content has been consistently in the top 5 here, and it's beginner-friendly.
It’s one of the most in-demand languages, and Python devs earn well — even junior backend developers in Russia
start at 80,000 rubles/month.

There’s a ton of material online, but structured programs are often more effective.
That’s why I want to mention that GeekBrains is currently enrolling for their Python developer course.
It’ll teach you how real development teams work, introduce front-end and back-end, and you’ll build six portfolio
projects.
They also include bonuses like English language courses.
And you'll get a state-recognized certificate upon completion.

There’s a big discount on the course right now — check the link in the description.
Start your journey into tech, and may the spirit of cybersecurity be with you!

[Music plays]

So, when did you start working in the field?

In the final years of university, some older classmates invited me to work at a company maintaining Rostov.ru — a
local web portal.
I joined as a C# developer, building Windows forms so editors and journalists could upload content to the site.

I did that for two years. Eventually I started working on the database — Postgres — and got into DB design and
security.
I managed to juggle work and studies since most lectures were in the evenings — and, let’s be honest, by the final
years, not many people attend class regularly.

That job helped me realize development wasn’t really for me.


So I started exploring other options — and ended up in a company focused on InfoSec, specifically on detecting and
preventing cyberattacks.
That’s how I began working in my actual field of study.

So it was a deliberate shift?

Yes, very much so. I was looking for something related to my degree.
The company was government-owned, and I joined their Security Operations Center (SOC) — although back in 2007,
the term wasn’t widely used yet.

I spent my time monitoring attack attempts — which ones failed, which succeeded, what the intruders did, what they
might try next, and how to kick them out.
I stayed there for about ten years and gained a huge amount of experience.
Did they pay you decently?

Yeah, for a government job it was solid. Over those 10 years, I worked on attack detection, risk assessment, and
incident response.
To detect attacks, you have to understand how hackers think — how they get in, how they move, and how to block
them.
You need to know the infrastructure inside out: how emails are handled, who opens what, what third-party
integrations exist, etc.

When an incident happens — say, money disappears from an account — the company needs to figure out fast:
Was it internal fraud? Was it malware? A phishing scam? A malicious insider?
Each scenario requires a completely different response.

That case — disappearing funds — we’ll revisit it later. It’s very illustrative.

So after a decade in public sector, you switched to Positive Technologies?

Exactly. I’d built solid experience in incident response and complex attack investigations.
At that point, I was ready for a new challenge — and joined Positive Tech to help develop their expert center.

To respond to an incident at a facility and investigate, the approach depends on the hacking group's motives. If
they’re after data, their behavior follows one pattern; if they’re chasing money, it’s a different pattern. Some hackers
breach companies without even realizing what they’ve accessed. Their monetization might come later—say,
deploying ransomware across as many computers as possible. A recent example: a few years ago, hackers would
infiltrate large companies, find 5,000 computers, and think, “I’ll run 5,000 crypto miners and collect cryptocurrency.”
They didn’t care about stealing from accounts or extorting the company, unlike today’s ransomware tactics. These
schemes evolve in various ways.

There’s a dedicated team tracking these groups, analyzing what they exploit and how they operate. Multiple units are
involved: first, operational monitoring through Security Operations Centers (SOCs), where we detect incidents early
and respond alongside clients. Other teams, as a vendor and cybersecurity software provider, investigate these
groups, track their methods, and share findings with our developers. This allows our products to automatically flag
threats, alerting clients: “This group is targeting you, aiming for this specific attack.”

It sounds comprehensive, but we’re not fully there yet. One area we’re developing is threat intelligence—gathering
data from open sources. To protect a company, you need to see it through a hacker’s eyes. For example, people often
register corporate emails on external platforms like online stores or delivery services. They’re lazy, reusing the same
password as their corporate accounts. If that store gets hacked and its database leaks—whether on the dark web or
public forums—hackers collect these login-password pairs. They then test these credentials across a company’s
services, scanning for vulnerabilities like exposed SSH or VPN endpoints. Shockingly, it often works, leading to a
breach.
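As a rough illustration of the leaked-credential problem described above, here is a minimal Python sketch that checks whether corporate addresses appear in a leaked dump. The file name, dump format, and domain are assumptions made for the example, not anything from the interview.

```python
# Minimal sketch: flag corporate accounts present in a leaked credential dump.
# "leak_dump.txt" (lines of "email:password") and the domain are hypothetical.

def load_pairs(path):
    """Read email:password pairs, one per line, ignoring malformed lines."""
    pairs = {}
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            if ":" in line:
                email, password = line.strip().split(":", 1)
                pairs[email.lower()] = password
    return pairs

def exposed_accounts(leak_path, corporate_domain):
    """Return leaked addresses that belong to the corporate domain."""
    leaked = load_pairs(leak_path)
    return sorted(e for e in leaked if e.endswith("@" + corporate_domain))

for email in exposed_accounts("leak_dump.txt", "example.com"):
    print("force a password reset and check for reuse:", email)
```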

When investigating such incidents, the first question clients ask is: “How did they get in?” Identifying the entry vector
is critical, like finding “patient zero” in an epidemic, so we can patch that vulnerability. Unfortunately, breaches often
exploit simple oversights.

This leads to forensics. The term, from English, translates to криминалистика (criminalistics) in Russian. In
cybersecurity, forensics involves analyzing technical artifacts to understand what happened. Narrowly, it’s examining
a single machine to answer specific questions: What did the attacker do? When? What evidence confirms this? For
instance, if a machine is suspected of being compromised, we analyze its image to trace the attacker’s actions—
downloads, changes, and so on. This is highly specialized.
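A very small slice of that narrow, single-machine forensics can be sketched like this: hash files from a mounted copy of the disk image and compare them against known indicator-of-compromise hashes. The mount path and the IOC value below are placeholders, not real data.

```python
# Minimal sketch: compare file hashes from an evidence copy against known IOCs.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder; real indicator hashes come from threat intel feeds
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(mount_point: str):
    """Walk a read-only copy of the image and report matches against the IOC set."""
    for path in Path(mount_point).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print("IOC match:", path)

# scan("/mnt/evidence_copy")   # hypothetical mount point of the image copy
```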

Broadly, forensics extends beyond one machine to the entire infrastructure: Where did the attacker come from?
Where did they go? What were their goals? This goes beyond “classic” forensics, though I’d argue there’s no strict
“classic” definition. Different specialists analyze the same artifacts but apply findings differently. For example,
forensic experts working for law enforcement or courts answer precise questions about evidence, like whether a hard
drive contains traces of malicious code compilation, its date, or its hash. Their work is technical, not about pointing
fingers or solving the crime single-handedly.
Quality forensic work requires teamwork. Technical analysis (what happened in the infrastructure) is distinct from
human-related aspects (who did it), which often fall to law enforcement. Initially, I thought forensics covered
everything, but it’s more about incident response. An incident—say, stolen funds—requires a pipeline: preparation,
monitoring, response, and investigation. Companies vary in maturity. Some are clueless, discovering a breach only
when funds vanish. Others, like online stores or banks, know cybersecurity directly impacts revenue. A DDoS attack
on a store or a data leak at a bank can cripple business. Even a factory’s production halt due to ransomware affects
the bottom line.

Mature companies monitor proactively, integrating security and IT teams to prevent incidents. But most have faced
incidents—some unnoticed because they didn’t disrupt business. For example, a flower delivery service might
tolerate a crypto miner on their server if it doesn’t hog resources. Is that an incident? From a security perspective,
yes, since it’s unintended behavior. But for their business, it’s not critical.

We distinguish between cybersecurity incidents and IT incidents. A failing hard drive causing website downtime
might seem like a security issue, but it’s often just hardware failure—an IT problem. A cybersecurity incident involves
vulnerable data or systems compromised by external (or internal) actors. Per standards like GOST, a computer
incident is any disruption of normal system function, often due to malicious actions.

Preparation is key: understand your infrastructure, patch obvious vulnerabilities, deploy protections, and monitor
actively. Different businesses prioritize different incidents. A factory might shrug off ransomware on a receptionist’s
computer if it can’t reach critical systems, rating it low-priority. But a document management service hit by
ransomware on its core server faces a high-priority crisis. Preparation means identifying critical incidents and
protecting key assets—like computers handling sensitive data or customer-facing services. From there, you build
layered defenses, monitor for threats, and develop response plans to isolate and mitigate incidents quickly, ensuring
attackers can’t spread further.
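The prioritization logic described above can be reduced to a toy scoring rule. The asset names and weights below are invented purely to illustrate the idea, not taken from the interview.

```python
# Toy priority rule: how critical is the affected asset, and can the attacker
# reach critical systems from it? Values are illustrative only.
ASSET_CRITICALITY = {
    "reception-pc": 1,      # ransomware here is a nuisance
    "doc-mgmt-core": 5,     # ransomware here stops the business
}

def incident_priority(asset, can_reach_critical):
    score = ASSET_CRITICALITY.get(asset, 2) + (3 if can_reach_critical else 0)
    return "high" if score >= 5 else "medium" if score >= 3 else "low"

print(incident_priority("reception-pc", False))   # low
print(incident_priority("doc-mgmt-core", True))   # high
```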

When a company is under attack, either during the attack itself or right after it, it has to respond somehow to contain
the chaos. Naturally, we often have to investigate in parallel, figuring out what exactly happened and how it unfolded
in the environment we are working in, analyzing artifacts and putting protections in place. There’s another approach,
though—a service we also provide called compromise
assessment. Essentially, a company might say, “We’re worried about a cyber incident; we think we have critical data,
and it might already be stolen. Come check it out.” Our team, the forensic analysts, digs into the infrastructure,
examining what’s there, identifying vulnerabilities, checking security tools—where they’re deployed, how they’re
configured, or if they’re even present. They analyze artifacts across systems to determine if there’s been a
compromise.

We had a case where we investigated an organization and confirmed it was compromised by a group targeting
confidential data theft. The company asked how long it had been going on. We scanned all 1,000 computers,
identified the attacker’s behavioral patterns, and found the oldest machine—one that hadn’t had its OS reinstalled in
ages, carrying the earliest traces. The answer? Eight years. Why would attackers linger in a network for so long? On
one hand, it’s almost a compliment to their stealth—they’re continuously extracting data without being noticed. But
if it went undetected for eight years, maybe it wasn’t a priority. This probably ties to the evolution of cybersecurity,
prioritization, and so on. Still, many incidents persist for a long time.

As I mentioned initially on Forbes.kz, you need to dive deep, understand who’s attacking, and with what intent.
There’s threat intelligence for that. I’d say there’s a group, let’s call them Group X, focused on stealing specific types
of data. They target companies, siphon that data continuously—it’s like a poisoned well. You come in, find evidence
of compromise, but how does this differ from threat intelligence or riding along on an IT audit? Threat intelligence
studies attack groups, their goals, and their tools. You learn Group X steals data using certain methods. Then, the
compromise assessment team comes in, checks for traces of that group’s tools, malware, backdoors, or similar
behaviors. It’s tough to walk into a company and answer, “Are we compromised?” without any leads. It’s like being
told, “Go somewhere, find something.” Threat intelligence gives you focus—you know the company, maybe it’s an
industrial firm producing high-tech goods with patents and know-how. Those are the targets. You list five groups that
might be after that data, note their tools, and search for those traces in the infrastructure.
How’s this different from forensics? It’s similar, but forensics might involve analyzing a hard drive to answer specific
questions, like in a legal context. Here, you’re applying the same technical skills to answer business questions. The
skill set is roughly the same, but the scope and goals differ. Forensic analysts often focus narrowly, like on a single
drive, while compromise assessments require broader insight.

To clarify, the team providing this service is called the compromise assessment team. They’re like forensic analysts on
steroids, working with threat intelligence and sometimes overlapping with penetration testing. Speaking of which, we
do offer pen testing. It’s part of what we discussed—offensive and defensive security. Pen testing can be used at any
stage. For preparation, you might give testers three days to find the ten easiest ways to break in. You take those
cases, fix them, and move on. Later, after building defenses, you might say, “Now steal my data without getting
caught.” This turns into cyber exercises, like at our events, where defenders actively counter pen testers, who try to
execute risks despite protections. It’s a way to test how robust your defenses are. No explosions, but maybe some
smoke or a Ferris wheel derailment—pretty cool stuff.

Back to compromise assessment. It’s not just for when an incident has already happened. Some clients realize
something’s wrong but lack the expertise to investigate. They call us to help respond and conduct a technical
investigation—figuring out what happened, how, why, and how to prevent it. The same team handles both
compromise assessments and incident response, including forensic analysts. The difference? In assessments, you’re
checking if there’s a compromise without confirmed evidence. In incident response, the client knows there’s an
issue—clear traces—and you unravel the story from there. The input data differs, but the process is similar.

From a business perspective, it’s tricky to know what to thank us for. In assessments, if we say, “You’re clean, no
compromise,” it’s hard to appreciate the value. In incident response, we might say, “Here are the traces we found,
and your systems are secure now.” It’s clearer, but still abstract. When an incident hits, clients often panic, saying,
“Everything’s bad, help!” How closely do we work with them? It depends. Some things, like direct hard drive access,
they might refuse. That’s tough because investigations require trust and collaboration. We set requirements upfront:
full infrastructure access is non-negotiable. Without it, we can’t guarantee results. We often go on-site, especially for
serious incidents where clients fear active attackers. They might even unplug the internet, leaving the office offline,
so we analyze systems in place.

Beyond access, what else? Understanding the infrastructure is key. You ask for a network map. Good case: they show
you a detailed diagram. Bad case: they pull an A4 sheet from the printer, scribble a vague sketch, or worse, hand you
an outdated map. Then you dig into how the infrastructure and business processes work, verifying if the incident is
truly cyber-related or just paranoia. Once you find the entry point, you trace it across the infrastructure. The process
depends on the incident type, affected business processes, and the client’s goals. Some want normalcy by Monday,
ensuring no attacker steals funds or data. Others care less about recovery and more about how the breach
happened, who’s to blame, and how to strengthen defenses.

Sometimes, you find the entry point but can’t fully evict the attacker. Attackers vary—some are sophisticated, others
use automated tools like botnets brute-forcing credentials. These botnets create “normal” internet noise—SSH
brute-forcing starts within 30 minutes of exposing a port. If an incident stems from this, it’s often due to oversight,
like an employee leaving a port open over a weekend and forgetting to secure it. Weeks later, attackers exploit it. If
it’s a botnet, it might just mine crypto or spread malware. Worse, some sell access on the dark web, advertising,
“We’ve got entry to this company.” Each scenario requires tailored responses, balancing technical investigation with
business needs.
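The background brute-force noise mentioned above is easy to see in ordinary logs. Below is a minimal sketch, assuming an OpenSSH-style auth log and an arbitrary threshold; both are illustrative choices.

```python
# Count failed SSH logins per source IP and flag noisy addresses.
import re
from collections import Counter

FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def noisy_sources(log_path="/var/log/auth.log", threshold=20):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

for ip, n in sorted(noisy_sources().items(), key=lambda kv: -kv[1]):
    print(f"{ip}: {n} failed SSH attempts")
```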

Let me interrupt for a second to liven things up: even the darknet has its own vibe. Much of it was de-anonymized
ages ago, yet the trade keeps going, and it’s all so damn complicated. Technically it’s possible to trace attackers, and
there are ways to tackle the complexity, but legal issues pop up, especially with cross-border internet jurisdiction. You
take down a server in Germany, and a user seems to come from the US, but in reality they’re chilling in the Maldives, a
citizen of who-knows-where.

On the flip side, there are folks who launch attacks blindly, fishing for luck. Then there are those who go straight for
the jugular, targeting specific sectors, companies, or countries with precision. These groups, sometimes called APT22
or other names, have varied motives. Some chase cash, hitting the financial sector to siphon funds. Others steal data,
and then there are ransomware crews who attack to extort via encryption and blackmail. The question comes up:
why pay these guys? What’s the guarantee they won’t hit again? That’s a sharp question. There’s no real guarantee,
just their word. While you’re negotiating, you’re also working on data recovery. In solid cases, a forensics and
incident response team steps in, analyzing what went wrong in the infrastructure and helping patch the holes.
Am I right that negotiations are part of the standard incident response process? It varies. Usually, the infrastructure owner
handles talks, often with the incident response team. I’m curious about the outcomes from these response groups—
whether they chase the incident or do an assessment. What’s the deliverable? A patch for the vulnerability, a profile
of the attacker, maybe more? Oddly enough, it’s a paper report. Seriously, not even a USB drive—just paper. But
that’s how it is. We set goals: first, figure out how the breach happened. That’s critical. Then, we give
recommendations—technical, organizational, whatever—to prevent repeats. Here’s your top five to-do list to avoid
this mess again.

We also map where the attacker went and what data they accessed. That’s huge—knowing exactly what they got. We
back it up with technical details to confirm our findings. Sometimes, it’s precise enough to point to specific culprits,
but we don’t name names on record. Do we help catch attackers? Our job is technical investigations. Finding a
specific person is for law enforcement. The data we collect can help them, but that’s not our gig.

When an incident hits, some companies hush it up, keeping the report confidential and quietly fixing things. Others,
especially in finance, need to report to authorities for cyber-risk insurance. They file a claim, attach our technical
report, and follow the process. Is it legal to slap a “secret” label on these reports, especially if it’s an international
crime group with terrorist ties? Each organization decides that internally. Should some info be public? Sure,
sometimes. If a hacker group targets a sector—like medical firms—and one company hides an attack, others can’t
prepare. That’s a bad trend. Info-sharing in the security community is spotty. In Russia, for example, FinCERT, tied to
the Central Bank, collects incident data from banks and shares sanitized details with others. It works, but only in a
few sectors.

Do companies ever go to law enforcement first, then regret it? Rarely. We’re usually the first call, and we advise
whether involving authorities makes sense. Sometimes, there’s not enough evidence to bother. I saw some of your
talks—cool stuff on honeypots and network discovery. You mentioned a slick solution, SAM. What’s that? It’s about
prepping infrastructure to spot incidents. You need sensors everywhere to track what’s happening. Network
discovery tools analyze traffic. A company sets it up, routes all traffic through it, and it sniffs out anomalies. Hackers
can’t hack via telepathy—there’s always a trace in the traffic, even if they hide it with encryption or tunnels.
Monitoring traffic helps investigations, but it’s not everything.

Actions also leave logs on servers or VMs. Companies should deploy SIEM systems to collect and analyze logs,
spotting odd behavior—like an admin logging in at 2 a.m. to a CEO’s machine. It’s basic but effective. You can
automate alerts to security teams or IT to catch incidents early. You also need tools to analyze files circulating in the
network, as many attacks start via email attachments with malicious code exploiting vulnerabilities. Microsoft
patches these monthly, but hackers jump on new exploits fast. Sandbox solutions and endpoint protection are must-
haves to catch these.
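The 2 a.m. admin login example is exactly the kind of rule a SIEM correlates. Here is a minimal sketch with simplified, made-up event fields and account names; real SIEMs normalize events from many log sources.

```python
# Flag interactive logins by privileged accounts outside business hours.
from datetime import datetime

PRIVILEGED = {"admin", "domain-admin"}          # hypothetical account names

def is_suspicious(event):
    ts = datetime.fromisoformat(event["timestamp"])
    after_hours = ts.hour < 7 or ts.hour >= 22
    return event["user"] in PRIVILEGED and after_hours

event = {"timestamp": "2021-05-20T02:13:00", "user": "admin", "host": "ceo-laptop"}
if is_suspicious(event):
    print(f"alert: {event['user']} logged into {event['host']} after hours")
```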

All these systems need people. No matter how fancy the AI, humans interpret the alerts. Take an antivirus: it flags
and deletes a virus daily. Fine, but if it’s the same virus every day, something’s wrong. Someone needs to dig in. That
person needs skills. How do you ensure your team doesn’t leak info? First, our data isn’t that juicy to outsiders.
Second, our folks choose the good side deliberately. They see enough cases to know crime doesn’t pay. Forensics
and incident response teams work to prevent incidents, not add to them.

Does paperwork bog you down? Nah, it’s minimal. The report’s the main thing—often split into a short, high-level
summary for non-techies and a deep technical dive for specialists. We’ve got processes, lawyers, and contract teams
to handle the rest. What’s your team’s go-to toolset? Any special Linux distros for forensics? There’s standard stuff
like Kali, preloaded with tools, or network topology scanners. But it’s less about a universal kit and more about
methodology. Some use paid, high-quality software; others lean on custom setups based on their expertise or even
hobbyist tinkering.

Even if the tool itself is gone,
even if you uninstalled it long ago,
the fact that you ran it
will stay with you.
Where can you discuss this with someone?
There are chats and communities for this,
Russian-speaking ones, naturally.
There’s a forum dedicated to forensics—
easy to find, we’ll leave the link in the description.
I’m there myself, and there’s also a specialized
conference for forensic experts,
probably held in Russia. It’s called, I think,
Forensic. Some guys from my team go there to speak.
We’ll drop that link too.
Do you hang out in Telegram chats?
There’s a Telegram chat, but I’m not a big fan.
People throw in all sorts of stuff there—
life updates, honestly, I don’t have time for that.

On to the random question section,


ones that didn’t make it into the main talk.
Let’s wrap up with these.
I’ve asked this before:
Why pay hackers for blackmail?
I’ve answered this—don’t pay hackers for blackmail.
But why not? People do pay.
Sure, people pay, but I genuinely believe
that’s completely the wrong move.
Don’t pay extortionists, just like you wouldn’t pay terrorists.
Take a recent example, like the disk encryption case—
healthcare organizations, some big ones, I think three.
I thought you’d bring up the U.S. pipeline.
That healthcare case came later, after the pipeline incident.
People were freaking out about cryptocurrency wallets.
Okay, the oil case—encrypted, causing chaos for regular folks.
There was a situation with vaccines,
not sure if it was for or against COVID.
Anyway, take the oil example.
Why not pay? It’s their business, right?
What do you do?
By paying, you encourage criminals to keep doing it.
It fuels their economy.
Maybe we should focus on improving systems
so these incidents don’t happen again—
endure today to win tomorrow.
Paying gives criminals a huge incentive to keep going.
As long as their business model works,
these incidents will keep happening.
Once it stops working, they’ll move on.
A good example is speeding tickets.
With ordinary cameras, you slow down for the camera
and speed right back up everywhere else,
but cameras that measure average speed over a distance
discipline drivers instantly.

Now, the dumbest hacker incident I’ve investigated?


From an execution standpoint, back when I handled incidents,
the dumbest ones were when hackers didn’t understand their tools.
There was a case where a hacker used a brute-force tool for RDP,
uploading it to the compromised machine itself.
The problem? That node was exposed.
When we checked the logs during the incident,
we could see which other companies they’d targeted.
A bad move.
The opposite bad case?
When a company’s own security team causes the incident.
It’s awful when the reason is the security folks themselves.
Like when a security guy opens an email—
say, an employee forwards a weird email from accounting,
asking, “Hey, check this out.”
The security guy clicks the document on their machine,
gets infected, and it spreads from there.
That’s just dumb on the other side.

The most elegant incident?


Probably the most technically complex one.
For example, targeted attacks by specific groups.
These groups zero in on a particular company,
using custom tools built just for that target.
When you investigate, you see the tools and realize
they’re one-of-a-kind, made for this company alone.
The tools have pre-set data—network files, machine details, top execs.
You know it’s a bespoke job, a custom hack tool for that company.
It’s always tough, makes you sweat during the investigation,
proving to yourself you’ve found everything,
outsmarted the hacker, and mapped their entire structure.

Ever feel the urge, during an investigation,


to go all-in on the hacker?
Like, block them, hack their system, take them down?
No, that’s illegal.
The thrill of outsmarting them exists,
but if you go after their infrastructure,
you’re breaking the law.
You become the criminal.
There were cases like that—guys called to an incident response,
they open the server console, start analyzing,
see two strange sessions in the process list.
On the other side, the hacker’s in the console too,
watching the same process list.
They realize they’re being watched and vanish.
That’s the thrill of the chase.

Does investigating incidents put you in danger?


Could a group you’re investigating come after you—
show up at your house, mess with your computers?
There was a case earlier this year where a group,
Lazarus, deliberately attacked security researchers.
They targeted folks doing infosec research.
The danger is real—you have to be obsessive about your own security.
You can’t be the security guy who opens a shady file
and infects yourself or the company.
You have to stay vigilant to avoid becoming the source of an incident.

Funny thing—the Lazarus group shares its name with a Pascal IDE.
How much does a forensic expert earn?
For a beginner, say a student who’s worked a couple of years,
not super ambitious, the range starts at around 150,000 rubles a month and up.
Pretty decent, comparable to programmer salaries.

Got hobbies outside work?


I collect coins—Belarusian ones, but I’ve narrowed it down.
I focus on precious metal coins, starting from 1905.
Russian Empire and onward.
My favorite, most valuable coin?
Oddly, it’s outside my usual focus.
In 2005, they picked the best Sherlock Holmes film adaptations.
The Soviet version was among the top.
To celebrate, New Zealand’s mint issued a commemorative silver coin set—
four coins, one side with the British queen, the other with colored scenes from the film.
It’s in a cool box shaped like a film clapperboard.
I love it, even though technically they’re legal-tender coins.
The mintage was 8,001.
I found it online, tracked it down through friends of friends of friends—
musicians on a cruise ship who stopped in New Zealand.
They bought it at the mint, sent it to me with the receipt.

Would you buy an NFT coin?


Probably not. I don’t see the value or history in it.
What about a coin with a cool ownership story?
Maybe, but it’s not my thing.
I like coins I can hold, like my enameled ones or
a 1900s British gold coin used by U.S. pilots in the Gulf War.
It had value in the desert—traded for goods, saved lives.
That’s a real story, but I don’t chase ownership tales.

If there were no electricity, in a steampunk world,


what would you do?
Something with metal—blacksmithing, making knives.
I love knives.
Maybe a private detective?
Nah, too people-focused.
Without tech, I’d be a craftsman, not a sleuth.

Last question: what’s the speed of light?


Roughly 3 x 10^8 meters per second.
Distance to the sun, if light takes 8 minutes to reach Earth?
8 minutes times the speed of light.
Nailed it.
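For the record, the arithmetic behind that answer:

```python
c = 3e8            # speed of light, m/s (rounded)
t = 8 * 60         # light travel time from the Sun, seconds (rounded)
print(c * t)       # 1.44e11 m, i.e. roughly 150 million km
```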

That’s all my questions—thanks for sharing your expertise!


We’ve got a contest tied to the 10th Positive Hack Days conference,
happening now with a partner event.
The prize? A backpack for the 10th anniversary of Positive Hack Days.
To win, comment with the wildest phone scam someone tried on you,
and when you realized it was a scam.
Use the hashtag #contest in the pinned comment.
Check the links for details.

Shoutout to Positive Technologies—reach out to them via email.


We did this with Denis in past episodes.
If you haven’t subscribed to this channel, do it now!
Follow the links for Telegram, Instagram, and more.
Our guest was Alexey, Director of the Expert Security Center at Positive Technologies.
You’re awesome, thanks!
That’s it—see you all later!

[Music]

Oh, we didn’t touch on one topic—


incidents targeting individuals, not just companies.
In Russia and Belarus, there’s a huge trend:
scam calls pretending to be bank security,
claiming you made a transaction, etc.
Can those be investigated?
It’s a state-level issue, but users need to be savvy.
You have to know when you’re being scammed.
Incidents will drop when people, from childhood,
understand that tech isn’t always safe.
My kids, for example—
as soon as they can walk, I teach them:
cross the road only at the zebra crossing,
only when the light is green.
Tech should be the same.
Give a kid a basic phone?
Explain that not just grandma, mom, or dad might call—
strangers could too.
With smartphones, teach which sites are safe, which aren’t.
Photos? Anything you post online or send via Telegram
is public. It’s no longer yours.
It’ll resurface eventually.
People need to live in harmony with digital tech.
Older generations struggle, but for kids,
it’s second nature.
My kid figured out YouTube cartoons on an iPad at 7.
Tech needs to be intuitive,
but kids must learn the security basics from the start.

Look, if the audience falls asleep, I’ll just give a lecture. I mean, if no one’s interested, I’ll mumble through something,
show about 20 slides, and we’ll all go home. But if you’re actually curious about something, I’d love for you to ask
questions. I want us to have a discussion today about what grabs your attention. Cryptography is a vast field—you
could talk about it forever. I manage to teach it for a whole semester, two hours a week. Here, we’ve only got two
hours total, so let’s focus on topics that matter to you. How about this: everyone comes up with one word related to
cryptography that they’re curious about. You’re experienced folks—what do you want to dive into again? Practical
knowledge, maybe? Like decrypting something? That’s good. I can’t sum it up in one word, but I’d love to explore the
line between security and paranoia.

Alright, asymmetric algorithms. We’ll definitely touch on those—they’re essential. Lattice-based cryptography? Oh,
wow, that’s post-quantum stuff. I know it exists, and I’ll explain why it matters. Mathematical details? I’ll point you to
the right resources. State security, huh? Digital signatures, maybe? Sounds like a medical condition sometimes! What
mathematical methods do you use in cryptography? Quantum cryptography, too? Julia’s all about quantum crypto, as
usual. I won’t spill too much yet. Okay, we’ve got a list of topics—I’ve noted them. As we go through the slides, I’ll
linger on the ones you mentioned. If new questions pop up, ask them. In fact, I’m begging you to ask, because I didn’t
prepare enough slides, and your questions will help me fill the time.

Plus, there are some treats in this cup. I haven’t checked them closely, but I’ll hand them out for good questions. And
for great answers, too! If I ask something interesting and you give a sharp response, I’ve got bigger treats in my bag—
let’s see how it goes. So, what’s the boring part of our lecture about? A bit of cryptography history and its
mathematical foundations. What’s on the agenda? A touch of history. You can split cryptography’s history into many
periods, but I like two main ones: classical and modern. Classical cryptography, as textbooks describe it, is everything
before the 20th century. Cryptography as a science really kicked off mid-20th century with Shannon’s work. Before
that, it was the stuff you read about in novels. Speaking of, any famous books about cryptography? Time for candy!
The Gold Bug—grab a chocolate. The Dancing Men—another chocolate.

That’s from Conan Doyle’s Sherlock Holmes stories. They once said the greatest cryptographic feat belonged to the
world’s most famous, albeit fictional, detective. What was classical cryptography about? Creating a way to write text
so others couldn’t read it without the key. Simple tricks, like swapping each letter for another in the alphabet, made
text unreadable to the untrained eye. Other methods included turning text into pictures or vice versa, like in ancient
Egypt, or writing letters so they overlapped, as used in old Rus’. But these methods were easy to crack. Frequency
analysis was a known technique—count how often a letter or symbol appears in a text. The most frequent one likely
matches the most common letter in the language. In Russian, that’s “О,” followed by “Е,” then consonants like “Р,”
“С,” “Т.” By the 9th century, Arab scholars wrote treatises on breaking such ciphers. Later, Europe rediscovered this,
with Leon Battista Alberti, the father of European cryptography.
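The frequency analysis described above fits in a few lines of Python. The ciphertext here is just an English pangram shifted by three, used as a stand-in example.

```python
# Count letter frequencies in a ciphertext; the most common symbols usually
# map to the most common letters of the underlying language ('e' in English,
# 'о' in Russian).
from collections import Counter

ciphertext = "WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ"   # Caesar shift of 3
letters = [ch for ch in ciphertext.lower() if ch.isalpha()]
for ch, n in Counter(letters).most_common(5):
    print(f"{ch}: {n / len(letters):.0%}")
```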

So, they moved to more complex systems, replacing one letter with multiple ones. The most common letter, like “О,”
might be swapped with two or three letters randomly. But even that wasn’t secure, as Mary Stuart learned the hard
way—she lost her head over bad encryption. What happened there? I won’t ask too much, but anyone recall why
Mary Stuart was executed? It’s a humanities question, I know. She was the one who, when told the people had no
bread, said, “Let them eat cake.” No, wait, that’s not her. She plotted against the English crown. At the time, England
was ruled by Queen Elizabeth I, and Mary Stuart, a royal relative, was a threat to the throne. You couldn’t just execute
a royal—monarchs have “blue blood,” not like common folk. Killing a monarch sets a bad precedent. So, they locked her in a tower. She stayed
there, harmless, until a conspiracy emerged.

Mary exchanged encrypted letters with plotters, and in one, she approved the queen’s assassination. That letter got her
executed. She wasn’t foolish, just bad at cryptography. Her encrypted letter was cracked right in front of the queen’s
council, and that was that. Hope that never happens to you! These “technologies” were from when cryptography was
still an art, not a science. That lasted until the 20th century. Around the 1940s, Claude Shannon’s work gave
cryptography a scientific foundation. His first paper was on securing communication channels. People read it and
thought, “Sounds cool, but what?” So, he wrote another, explaining the math, which kicked off information theory.
From there, things took off.

It was precisely because of this that the Enigma was, in fact, successfully broken. So, what’s happening with modern
ciphers? Modern ciphers, since the days of DES, rely on some fascinating principles. To crack them, you essentially
need to try every possible key combination. It’s a fairly straightforward process. You take a key, attempt to decrypt
the data with it. If you get readable text, great, you’ve cracked it. If not, too bad, it didn’t work. The critical point here
is that there’s no simpler way. Without attempting decryption, you can’t tell if a key is correct or not. If your
encryption system adheres to this principle, it’s secure. All that’s left for a robust symmetric encryption system to do
is ensure two things: first, it must be impossible to determine if a key is correct without trying to decrypt; second, the
key must be sufficiently long.
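A toy cipher makes the point concrete: a Caesar-style shift has only 26 keys, so exhaustive search with a crude readability check finishes instantly, which is exactly what a sufficiently long key is meant to rule out. The plaintext below is invented for the example.

```python
# Exhaustive key search over a toy 26-key cipher.
import string

def shift(text, key):
    return "".join(
        chr((ord(ch) - 65 + key) % 26 + 65) if ch in string.ascii_uppercase else ch
        for ch in text
    )

ciphertext = shift("THE SECRET MEETING IS AT NOON", 3)   # "WKH VHFUHW ..."
for key in range(26):
    candidate = shift(ciphertext, -key)
    if " THE " in f" {candidate} ":           # crude "does this look readable" test
        print(f"key={key}: {candidate}")      # key=3 recovers the plaintext
```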

Can I ask a question? Sure. How does the system know the text is correct during key enumeration? That’s an
excellent question! For great questions like that, I hope these are pencils—two crossed ones. Pencils, right? Or
maybe a pen? Pencils, pen, yeah. It’s a great question because if the text comes from a perfectly random source, it’s
impossible to tell. If the plaintext is completely random and one version can’t be distinguished from another, there’s
no way to verify its correctness. But there are a few issues with this. The first problem is that we need to check
whether a legitimate user entered the correct key. If they input the wrong key, we need to inform them somehow.

So, we insert a short flag at the start of the encrypted file. This flag shouldn’t give attackers a significant advantage
but should filter out 99% of incorrect keys. What about a key hash? Could we run the key through an algorithm, get a
hash, and compare them? No, hold on. We don’t have a key hash. Could we use one? A key hash is essentially the
key run through a one-way transformation. We take the entered key, run it through the same transformation, and
compare the results. That’s a solid idea, but there’s a catch. The system’s security would then depend on two algorithms: the
encryption’s strength and the hash function’s strength. If the hash function is less secure than the encryption, it’s
easier to crack the hash and then decrypt.
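The "short flag" idea can be sketched with a deliberately insecure toy cipher, where XOR stands in for a real algorithm: prepend a small known marker before encrypting, so a legitimate user's wrong password is rejected immediately while revealing almost nothing useful to an attacker. The marker value and keys are made up.

```python
# Toy illustration of a verification flag prepended to the plaintext.
MAGIC = b"OK42"                       # the short flag; a made-up value

def xor(key, data):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def toy_encrypt(key, plaintext):
    return xor(key, MAGIC + plaintext)

def toy_decrypt(key, blob):
    data = xor(key, blob)
    if not data.startswith(MAGIC):
        return None                   # wrong key: filtered out cheaply
    return data[len(MAGIC):]

blob = toy_encrypt(b"correct horse", b"meet at dawn")
print(toy_decrypt(b"wrong guess", blob))      # None
print(toy_decrypt(b"correct horse", blob))    # b'meet at dawn'
```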

What if we obscure which hash function is used? That brings us to another concept, dating back to the 19th century.
A French military figure, Auguste Kerckhoffs, published a book called The Application of Cryptography in Military
Affairs. Cryptography wasn’t even a formal science yet, but he laid out six rules for using it correctly. One, now known
as Kerckhoffs’ principle, states that a system’s security shouldn’t rely on the secrecy of its algorithms. We must
assume the attacker knows everything except the key. For example, if you encrypt a text once, you might figure out
which cipher was used and select a matching decryption algorithm. But if you encrypt the same text again with the
same cipher, you can’t determine the number of iterations without decrypting. That’s not quite how it works, though.
Encryption involves an algorithm with two inputs: the text and the key. Without the key, you can neither encrypt nor
decrypt.

Even if an attacker knows the algorithm—back then, everyone used DES; now, half the world uses AES, and in our
corner of the globe, it’s GOST, Kuznyechik, or Magma, depending on the year—the algorithms are public, but the key
remains secret. So, back to the question: how do we know if the decrypted text is correct? If we can’t tell, we have a
perfectly secure cipher. But this security stems not from the cipher’s properties but from the plaintext’s nature,
which prevents distinguishing one from another. As cryptography evolved, standards emerged, and governments
became active in standardizing it.

This was what I call classical cryptography, or more accurately, symmetric cryptography. Its defining feature is that the
same key is used for both encryption and decryption. These are the simplest algorithms mathematically, relying on
substitution and permutation networks, ideologically outlined in the 1950s. But they have a major flaw: key
distribution. How do you securely share the key? To establish a secure channel using symmetric encryption, you need
a pre-existing secure channel to transmit the key. It’s like schoolkids passing notes with a symmetric cipher but
needing to hide the key in a tree hollow first. If they have that hollow, they could skip encryption and just leave notes
there. This issue, known as the key distribution problem, is a major limitation.

In the 1970s, a new field emerged: public-key cryptography, or asymmetric encryption. Here, one key encrypts, and a
different key decrypts. The encryption key is shared publicly—posted online, printed on business cards, included in
email signatures. Anyone can encrypt a message with it, but only the holder of the private key can decrypt it. How
does this work? Imagine a massive phonebook with 2^64 entries. To encrypt, you take the first letter of your text, say
“P” from “Privet,” open the “P” section, pick a random surname, and note its phone number as part of the cipher. For
the next letter, “R,” you do the same. Anyone with the phonebook can encrypt. But to decrypt, you’d need to search
the entire book for each number, an infeasible task. The intended recipient, however, has a private key: a phonebook
sorted by phone numbers, allowing quick lookup of the corresponding surname’s first letter.
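The phonebook analogy can even be acted out in code. This toy directory is tiny and utterly insecure; it only shows the asymmetry described above: anyone holding the book sorted by name can encrypt, while only the book sorted by number makes decryption quick.

```python
# Toy "phonebook" public-key scheme from the analogy above.
import random

random.seed(1)
numbers = random.sample(range(100000, 999999), 26 * 100)
public_book = {}    # letter -> phone numbers (the book sorted "by name")
private_book = {}   # phone number -> letter  (the book sorted "by number")
for i, num in enumerate(numbers):
    letter = chr(ord("a") + i % 26)
    public_book.setdefault(letter, []).append(num)
    private_book[num] = letter

def encrypt(word):
    return [random.choice(public_book[ch]) for ch in word]

def decrypt(cipher):
    return "".join(private_book[num] for num in cipher)

c = encrypt("privet")
print(c)            # six phone numbers
print(decrypt(c))   # 'privet'
```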

This example illustrates public-key cryptography’s principles, though such a phonebook doesn’t exist. It also shows
that no public-key system is perfectly secure. You could theoretically sort the phonebook by numbers, but it would
take an astronomical amount of time—perhaps longer than the universe’s existence. Unlike symmetric systems,
there’s no issue with distinguishing correct from incorrect text here. I’ll explain why later.

Symmetric and asymmetric encryption form two of the three pillars of modern security. The third? Not the
transmission channel or storage. It’s the human factor—competent execution and intelligent use of cryptographic
tools. Unfortunately, this third pillar is often the weakest. Symmetric encryption is typically divided into three
categories: block ciphers, which encrypt text with a single key for transmission; hash functions, which produce a
short, irreversible output from input text (crucial for password hashing); and stream ciphers, similar to block ciphers
but optimized for real-time communication, like voice over cellular networks. Stream ciphers, like RC4, are fast and
implementable on simple hardware, though sometimes less secure.
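The password-hashing role of hash functions mentioned above looks roughly like this in practice; the iteration count here is only illustrative, and real systems should follow current guidance.

```python
# Store a salted, deliberately slow hash instead of the password itself.
import hashlib, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify(password, salt, digest):
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000) == digest

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))   # True
print(verify("wrong guess", salt, digest))                     # False
```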

Now, about hacking a VKontakte account—is it possible, and how can you protect yourself? First, hacking someone’s
account is illegal. If your VKontakte messages were leaked, start by logging out from your phone (Settings > Log Out)
and logging back in. I’ll explain why this helps. As for Wi-Fi vulnerabilities, my students once set up a fake “MIPT Free
Wi-Fi” hotspot in the university library, harvesting about 20 passwords from VKontakte, Mail.ru, Google, and Rambler
accounts. Such attacks exploit open networks, so always verify Wi-Fi sources.

The most challenging and intriguing aspect of this project was, frankly, not leaking passwords. The toughest part was
restricting access to the legitimate Wi-Fi access point, essentially overwhelming it. In other words... wait, what was
that term? I didn’t catch it. Anyway, I usually call it something like DoS, or more precisely, a Denial of
Service attack, an attack that disrupts service. The hardest task for the students was not about manipulating people;
it was about blocking the legal Wi-Fi access point so that everyone craving free internet would connect to the “MIPT
Freebie” hotspot. From there, recalling our “dead whale” incident, when people logged into VKontakte, they ignored
all security warnings. You see, software developers are clever folks. They know most users aren’t exactly geniuses. If
you try to log in through a hotspot that attempts to capture your VKontakte, Google, or Mail.ru password, you’ll get a
glaring warning that the connection isn’t secure—it’s likely being monitored, and you’re doing something wrong. But
here’s the kicker: it worked. The problem is people’s obsession with free stuff is so strong that these security alerts
don’t stop them.

There are two great anecdotes about this. First, a company was hired to audit the security of a major financial
organization. The very next day, the auditors handed the director a printout of his entire financial plan for the next
year—a huge stack of papers. His jaw dropped. “Where did you get this?” they replied, “From the dumpster.” While
accountants were drafting documents, making notes—sometimes more critical than the final reports—they’d label
them “drafts” and toss them in the trash. Those ended up in the dump. If you work in a serious organization, check
what gets thrown out. The second anecdote is even better, tied to this love for freebies. An audit company
announced, “Tomorrow, we’ll conduct a security audit of your internal network. Prepare—we’ll infect you with a
virus.” They gave a full heads-up to the internal security team. The next day, at the designated time, the security
team monitored servers and traffic entry points, watching closely. An attack happened, but they couldn’t pinpoint its
source. Turns out, scenario one: employees walking to work spot a random USB drive on the pavement. Why not pick
it up? It’s worth, what, 100 or 500 rubles? A small win. Curiosity takes over—they plug it into their work computer to
see what’s on it. From there, it’s the classic spread of viruses or Trojan horses. The second, even more brilliant
scenario: employees are greeted by two attractive women in dresses who say, “Hi, we’re conducting a sociological
survey on IT company security. Complete it, and you’ll get a free USB drive.” The questions were subtle but revealing:
“What’s the minimum password length allowed in your company? Does it bother you to change passwords
regularly?” Bit by bit, they gathered the company’s entire password policy on paper. Employees, thrilled with their
free USB, plug it into their computers, and the same virus-spreading scenario unfolds. Human stupidity, greed for
freebies, and curiosity are the biggest security pitfalls.

I’m not sure of the exact percentage, but I’d bet most internet infections come from two types of websites: gaming
sites and those offering free videos—guess which kind. Back to symmetric encryption, unless someone’s come up
with a new question. Symmetric encryption relies on two things: a strong algorithm and a secret key. A strong algorithm gives an attacker no shortcut: the only way to tell whether a guessed key is correct is to actually try decrypting with it and see whether the result makes sense.
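
As a tiny illustration of that point (a made-up toy cipher, nothing like a real one), the attacker can do no better than trying keys one by one and checking whether the output looks meaningful.

    from itertools import product

    def toy_encrypt(key: bytes, msg: bytes) -> bytes:
        # Toy cipher for illustration only: XOR with a repeating 2-byte key
        return bytes(b ^ key[i % 2] for i, b in enumerate(msg))

    ciphertext = toy_encrypt(bytes([0x3A, 0x7F]), b"meet me at noon")

    # Exhaustive search: the only way to know a key is right is to try it
    for k in product(range(256), repeat=2):
        candidate = toy_encrypt(bytes(k), ciphertext)
        if candidate == b"meet me at noon":   # in real attacks: "does it look like valid text?"
            print("key found:", bytes(k).hex())
            break

With a real cipher the keyspace is 2^128 or larger, which is where the "lifespan of the universe" estimates come from.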

What was the goal of the sociological study? Oh, you mean the one in the lecture hall? To pass the course? No, not
quite. What did we want besides passwords? Honestly, I didn’t need those passwords—I’ve got enough accounts.
The students gained skills in executing DoS attacks. I regret not warning the provider beforehand; I still kick myself
for that. We should’ve informed them about the study, or else the Criminal Code would’ve come for us. Before I
posted those password photos on Facebook, they didn’t notice a thing. The goals were: first, students got hands-on
practice; second, they saw firsthand how unreliable Wi-Fi networks are and how crucial it is to heed security
warnings and connect to trusted networks.
There’s also a scenario where no warning appears. If you control the hotspot, you can redirect VKontakte traffic to
your own page. You don’t even need to infiltrate the system—just collect traffic as it passes through and decrypt it
later. No warnings there. Remember my talk about the universe’s lifespan? About 90% of passwords can be cracked
quickly, especially on GPUs—they’re brutal. This faith in GPUs is probably less hyped than quantum computers, but
still. Quantum computers are far off, and I don’t buy into them. I know folks working on quantum computing who
keep asking, “What about quantum computers?” I’m skeptical. GPUs do optimize hash function cracking, but there’s
a catch.
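
To get a feel for what GPU crackers actually do, here is a slow CPU sketch of the same idea, with a made-up wordlist; the hash below is the widely known SHA-256 of the word "password". The loop is simply: hash a guess, compare against the leaked hash, repeat as fast as the hardware allows.

    import hashlib

    # One leaked, unsalted SHA-256 hash; this particular value is sha256("password")
    leaked = {"5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8"}

    wordlist = ["123456", "qwerty", "letmein", "password", "dragon"]

    for guess in wordlist:
        digest = hashlib.sha256(guess.encode()).hexdigest()
        if digest in leaked:
            print("cracked:", guess)

A GPU runs exactly this comparison, only billions of times per second over far larger wordlists, which is why short dictionary passwords fall almost instantly.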

When you log into VKontakte, you enter your login and password, which are sent to the site as plain text—the site
sees them as they are. But this happens over a secure channel, Transport Layer Security (TLS), or HTTPS (HTTP with
an “S”). You’ll see a padlock in your browser. This channel encrypts all data between you and the server and verifies
that the site is really VKontakte. You can’t just swap VKontakte for a fake page on your Wi-Fi hotspot. When you visit
VKontakte, your computer remembers it used a secure channel last time and won’t connect otherwise. If it’s the real
VKontakte, the server sends a public key certificate with encryption parameters, signed by a trusted certification
authority. That certificate is part of a chain, leading to a root certification authority. You can’t fake a signature
without the private key. There aren’t many root authorities—say, 100—and their certificates are embedded in your
browser, whether on Android, iPhone, or desktop (Firefox, Chrome, etc.). These lists update regularly, removing
outdated certificates. If a certificate’s chain doesn’t trace back to a trusted root, the browser flags it as unsafe and
warns you. What do users do? They click through anyway. Modern browsers now require multiple clicks to bypass
this, trying to protect users from themselves.
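
If you want to see that verification yourself, a few lines of standard-library Python will do it; vk.com is used here just as an example host.

    import socket, ssl

    host = "vk.com"  # example host; any HTTPS site works
    ctx = ssl.create_default_context()   # loads the bundled list of trusted root authorities

    with socket.create_connection((host, 443), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            cert = tls.getpeercert()     # already verified against the trusted roots by now
            print(tls.version())         # e.g. TLSv1.3
            print(cert["issuer"])        # who signed the server's certificate
            print(cert["notAfter"])      # when it expires

If the certificate chain does not lead back to a trusted root, wrap_socket raises ssl.SSLCertVerificationError instead of connecting: the command-line equivalent of the browser warning.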

There’s an old joke: Linux developers believe humans are smarter than computers; Windows developers know the
truth. Browsers try to block unsafe paths, but if users accept the dubious certificate, the attacker shows a fake
VKontakte page, grabs the login and password, and might display a “technical difficulties” message to avoid
suspicion. They could even proxy real VKontakte pages through their server, reading all traffic—a “man-in-the-
middle” attack. So, when browsing, pay attention to warnings. If you’re not a tech expert and see one, close the tab
and try later. Programmers will fix their errors. Yandex once had a bug where 0.2% of browsers saw this warning—
they fixed it in three days.

Now, about messengers like Telegram or WhatsApp, specifically Telegram’s “secret chat,” where only two people
communicate with end-to-end encryption. You mentioned two encryption types: public-key and secret-key, where the secret key has to be securely shared between the parties. How does Telegram handle this? They use a protocol to establish a secure
channel, ensuring only the intended recipients can decrypt messages, typically via a key exchange like Diffie-Hellman,
which allows secure key sharing over an insecure channel.
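
The idea behind Diffie-Hellman fits in a few lines. This sketch uses a toy prime for readability; real protocols use standardized 2048-bit groups or elliptic curves, but the structure is the same.

    import secrets

    p = 2**127 - 1   # a Mersenne prime; fine for a demo, far too small for real use
    g = 3

    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent, never sent anywhere
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent, never sent anywhere

    A = pow(g, a, p)                   # Alice -> Bob over the open (insecure) channel
    B = pow(g, b, p)                   # Bob -> Alice over the open (insecure) channel

    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob  # both ends derive the same session key material

An eavesdropper sees p, g, A and B, but recovering a or b from them is the discrete logarithm problem, which is what the whole construction leans on.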

To address this problem, special protocols for distributing keys are employed. Essentially, it's assumed that each party already has a secure relationship with a trusted third party. Let me explain: when we use symmetric encryption—could you speak a bit louder? I'm already using the microphone, I think. Maybe bring it closer? Alright, closer microphone. Good. So, with symmetric encryption, there's a third party involved. Each participant shares a key with this third party, which acts as a trusted center.
Using this trusted center—I can explain the details during the break—we generate a session key. By leveraging the
key shared between Alice and the trusted center, and between Bob and the trusted center, we create a unique
session key for Alice and Bob to communicate.
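
As a rough sketch of that trusted-center idea (using the third-party "cryptography" package purely for convenience; real protocols such as Kerberos add timestamps, tickets and authenticators on top), the center mints a session key and hands it to each party wrapped under that party's long-term key.

    from cryptography.fernet import Fernet   # pip install cryptography

    # Long-term keys each party shares with the trusted center, set up in advance
    alice_ltk = Fernet.generate_key()
    bob_ltk = Fernet.generate_key()

    # The trusted center mints a fresh session key for this conversation...
    session_key = Fernet.generate_key()

    # ...and hands it out wrapped under each party's long-term key
    for_alice = Fernet(alice_ltk).encrypt(session_key)
    for_bob = Fernet(bob_ltk).encrypt(session_key)      # roughly the "ticket" in Kerberos

    # Each side unwraps it with its own long-term key and ends up with the same secret
    assert Fernet(alice_ltk).decrypt(for_alice) == Fernet(bob_ltk).decrypt(for_bob)

    # From here Alice and Bob talk directly, encrypting with the shared session key
    message_to_bob = Fernet(session_key).encrypt(b"hi Bob, the trusted center vouched for me")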

This session key is used in various secure environments. It can be set to refresh regularly, expire, and be replaced
with a new key. This system can scale to include any number of participants. In large information systems, there
might not be just one trusted server but a central authorization server, multiple servers generating session keys, and
thousands of users accessing these services. I’ve been asked to take a break. How long? Five or ten minutes? Five
minutes, so everyone can step out and breathe. If anyone needs to.

Typically, at the start of my seminars—not lectures, but seminars with problem-solving—I explain what information
security is and how it works. Information always has value, which varies depending on perspective. It might be
priceless to you, but to an adversary—whether a business rival or a malicious actor—it has a specific worth. This is
the maximum they’re willing to pay for it or the price they’d sell it for. There’s also the cost of breaching an
information system. The goal of information security, and all this cryptography business, is to ensure the cost of
breaking into a system far exceeds what an attacker is willing to spend to access your data. The uniqueness of
modern encryption algorithms lies in this: the electricity cost of cracking something like DES or GOST far outweighs
the value of the encrypted information.

This is especially true for everyday platforms like VKontakte or Odnoklassniki. These systems are fairly secure by
design. I can’t say how much it would cost to hack them, as I wouldn’t attempt it myself. I could suggest simple
methods, like spoofing a Wi-Fi network, replacing your home access point, or placing a camera above your keyboard.
But only someone with cheap access—like someone living in your home—could pull that off. If you don’t trust those
people, it’s a bit late to worry about security. Someone mentioned sitting in the yard? You could do that too, I
suppose. They said there’s Wi-Fi out there, right? Of course, there is, even without it. Be ready.

If your VKontakte account only holds information valuable to you and your loved ones, and it gets hacked, the
question isn’t how it happened but who did it. If you work for a large company, you need to assess the value of your
information, the cost of protecting it, the cost of a potential breach, and the losses from implementing security
measures. The most secure computer is what? A broken one? Even a broken computer isn't really secure; its contents can still interest an attacker. A truly secure computer is one disconnected from all networks, including power,
locked in a safe with the key lost. But you can’t use it. Security is always a balance—ensuring the system is usable yet
protected enough that the cost of hacking exceeds the information’s value to an attacker. Estimating this is tough, but
large companies calculate it down to the ruble.

Let’s pivot to WhatsApp. Why doesn’t WhatsApp allow web access without the phone being on? I think it’s about
syncing messages between the phone and other devices. But there’s a theory they track users via phones. Can you
comment? They could access location data, GPS, and more. First, if you didn’t grant WhatsApp access on your iPhone
or Android, it can’t access that data. But don’t you automatically grant access when installing? No, apps now
explicitly request permissions for camera, location, or microphone. Some apps get precise location access, others
only city-level. If you didn’t explicitly allow it, the app doesn’t get it. But I granted access because permissions are
auto-accepted. Are they tracking users? They’d be foolish not to. Why? Marketing. Information is money. Your online
behavior—sites visited, emails read—is processed to sell you to advertisers. The internet knows your gender, age,
hobbies, and job, even if you never shared it. Big Data analyzes your behavior against known groups, deducing details
like you’re a man, 89, and possibly pregnant.

There’s a story about a U.S. family receiving a catalog with pregnancy offers for their 17-year-old daughter. The father
called, outraged, only to learn Big Data flagged her as pregnant based on purchases like pickles and milk, matching a
pregnant demographic. The retailer apologized, sent a discount, but a week later, the father called back, admitting he
didn’t know everything about his family. Big Data knows you through behavior analysis. Every website banner you
see? Your data was sent to advertisers first. They bid to show you ads based on your profile—male, solvent, two kids,
car, no mortgage. The highest bidder wins, and the site earns. This data is valuable, not to spy agencies, but to
advertisers. For privacy, use ad blockers or browsers like Tor, which reset your identity each session, making tracking
harder.

So, can WhatsApp collect your data? Yes, they can and will. But requiring both phone and computer for tracking
seems inefficient. As a programmer, I wouldn’t build a tracking system for the 1% who use WhatsApp on both devices
when 99% use phones. Those phones, with the app, already process your chats. Who says Telegram or WhatsApp
doesn’t read your messages? Their creators? It’s about trust. Do you believe Pavel Durov when he says, “I won’t read
your chats”? Look into his honest eyes. I’m not saying they do, but these apps rely on trust: trust your phone has no
bugs, the app has no bugs, no one forced a bug into the app, and your chat partner won’t leak your messages. Plus,
operators—well, we’re somewhat protected from them, at least until 2018.

Speaking of, who voted for United Russia? Fair Russia? LDPR? All these parties backed the Yarovaya laws. Meanwhile, services like VKontakte, Odnoklassniki, WhatsApp, and Telegram talk to you over secure TLS connections, and those connections rely on a small secret embedded in your phone.

These are root certificates. Well, more precisely, certificates from trusted root certification authorities. Yes, they also
have their own certificates. The thing is, asymmetric encryption works because, firstly, it lets us encrypt data with a
public key in a way that no one else can decrypt it, and secondly, it supports digital signatures. Whoever holds the
private key can sign a document, and anyone can verify that signature. Essentially, a certificate is a public key signed
by someone else. So, a service's certificate, like for VKontakte, Telegram, or Odnoklassniki, is a public key signed with the private key of a trusted root authority. Anyone with access to the root authority's public key—which is built into
every phone and browser—can verify that the service’s certificate is legitimate, confirming that the server is
genuinely what it claims to be.
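
Stripped of the X.509 formalities, "a certificate is a public key signed by someone else" looks roughly like this sketch (again the "cryptography" package, with Ed25519 keys standing in for whatever the real chain uses):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives import serialization

    # The root authority's key pair; its public half ships inside every browser and phone
    root_private = Ed25519PrivateKey.generate()
    root_public = root_private.public_key()

    # The service's own key pair (think of a social network's server key)
    service_private = Ed25519PrivateKey.generate()
    service_public_bytes = service_private.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # "Issuing a certificate": the root signs the service's public key with its PRIVATE key
    signature = root_private.sign(service_public_bytes)

    # Any client holding only the root's PUBLIC key can check the signature;
    # verify() raises InvalidSignature if the key or the signature was tampered with
    root_public.verify(signature, service_public_bytes)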

But forging this is tough. To impersonate Odnoklassniki or VKontakte, you’d need to fake the digital signature of the
root certification authority. I’m not saying it’s impossible—there are ways. In fact, it happened in a Middle Eastern
country where a government forged a root authority’s certificate to intercept Gmail communications. This was
quickly spotted, and within a couple of days, that certificate was revoked from all trusted root authority lists. If
something like this happens, it’s usually detected fast. But the key point is, it’s unlikely to happen without serious
effort—think soldering irons.

Now, back to SORM (Russia’s surveillance system). SORM monitors communication channels at our ISPs. If you look at
the boxes installed at providers, they capture all the traffic we send online. But here’s the catch: about 30% of
traffic—our VKontakte messages, WhatsApp chats, Telegram, and photos on social media—is encrypted. The
remaining 70%? I suspect most of it is torrents or maybe unencrypted VKontakte videos, since encrypting video is
resource-intensive.

So, those ISP boxes? They’re useless for encrypted traffic. But nothing stops someone from, say, visiting Mail.ru’s
offices at Leningradsky Prospekt 42 and asking them to install a monitoring box on their servers. I don’t know if those
boxes are there—I haven’t seen them. They might be. So, if you want to protect yourself from, let’s say, your
government, use services based in another country. As they say, if you’re not worried about the FBI reading your
chats, use Facebook. If the FBI scares you, stick to VKontakte. It’s your choice.

What’s coming in 2018? A new law requires ISPs to provide all user traffic, including messages, in decrypted form. It’s
written in the law. Reminds me of a joke about an owl and mice. The mice come to the owl, saying, “Wise owl,
everyone picks on us. What should we do?” The owl says, “Become hedgehogs.” The mice are thrilled: “Great idea!
We’ll be spiky, untouchable, the coolest animals in the forest!” But then they ask, “How do we become hedgehogs?”
The owl replies, “That’s tactics. I handle strategy.” Similarly, the law demands decrypted messages by 2018, but it
doesn’t say how. Right now, ISPs don’t have the tech to do it.

The only feasible way is something like what Kazakhstan did: place a server between you and services like Facebook,
VKontakte, or Google, pretending to be them. To use the internet, you’d have to install a new “trusted” root
certificate from the provider. It’s framed as “for your security,” but really, it lets the provider read all your messages.
This might happen in 2018. There are two scenarios: one, they’ll just ban VPNs, asking, “Why do you need them?
What are you hiding?” Two, it’ll be like China, where Twitter is banned, yet 5 million people still use it. China ensures
99% of people are "protected" from certain information, the way you'd shield kids. The remaining 1%? They'll hide, but using a VPN
could become a crime. Not blocked technically, just punishable. That’s the pessimistic outlook for now.
