"Deepfake from Sobyanin": Hackers cloned regional heads to deceive bank clients

Cybercriminals have adopted artificial intelligence (AI) technologies and begun making massive use of the likenesses of well-known officials and businessmen to extort money from Russians. NI explains how the scheme works.
A new type of fraud is rapidly spreading online: deepfakes, images of famous people brought to life with AI technology.
Cases have already been reported in which digital images with the appearance and voice of the heads of Russian regions, famous artists and successful entrepreneurs persuade Russians to “transfer money to a safe account” or invest their savings in a “super-profitable business”.
"Over the past week, a new wave of fake videos featuring heads of Russian regions has appeared on social networks. Fraudsters have created fake Telegram channels of governors of Vologda, Smolensk, Samara and other regions, where they have been spreading disinformation," the Russian Ministry of Internal Affairs' Cyber Police Department reported.
The Ministry of Digital Development clarified that the first deepfake with Moscow Mayor Sergei Sobyanin appeared on the Internet a year and a half ago.
Alexey Primak, founder of the Institute of Financial and Investment Technologies, recalls that to achieve their nefarious goals, fraudsters previously “only” sent a text message or email from a “close” person who was supposedly in trouble and needed money. Now, however, their technical capabilities have grown significantly.
“Now scammers produce audio, synthesize voices and create video messages,” the expert warns.
The arsenal of AI technologies available to fraudsters is rapidly expanding. Photo: Image by Midjourney
Today, digital "cloning" of other people's images using AI technologies has become widespread. The cyber police clarified that the dissemination of information allegedly from officials may pursue various goals, ranging from phishing to psychological influence aimed at destabilizing the situation.
Alexey Filatov, head of the Department of Personology and Behavioral Analysis at the Academy of Social Technologies, psychiatrist, psychologist and lie detection expert, told NI that mass incidents of fake audio messages sent on behalf of the heads of municipal enterprises, universities, schools, hospitals and other organizations had already been recorded in the country.
“Audio fakes inspire a higher degree of trust because they are more engaging and harder to verify for authenticity,” the expert explained.
He pointed out that first, attackers collect samples of a person’s voice (for example, from public speeches, interviews, audio recordings), and then, using neural networks, synthesize fake phrases that imitate the person’s timbre, intonation, and manner of speech.
"Modern models allow creating such samples in a matter of minutes, and in combination with deepfake video generation, scammers can even imitate video calls. Today, there are at least a dozen open sites and specialized programs on the Internet that allow almost anyone with average computer skills to do this," Alexey Filatov clarified.
According to Oleg Abelev, Ph.D. in Economics and head of the analytical department at the investment company Rikom-Trust, financial market participants can deploy protection against deepfakes.
“Today this is software that improves client identification by requiring additional information to complete a transaction; multi-factor identification, where the security system checks not only the code but also the device from which the request for personal data is made, the IP address and the location; and anti-fraud systems trained on a client's typical operations.
If something unusual happens, the operation is checked against a database of known fraudulent activity, and if there is a match, it is automatically blocked until the account owner provides the necessary confirmation,” the expert said.
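To make the logic Abelev describes more concrete, below is a minimal, purely hypothetical Python sketch of such a check: an operation is compared with the client's typical profile (device, location, amount) and with a base of known fraud patterns, and anything unusual is held or blocked until the owner confirms it. The field names, thresholds and rules here are illustrative assumptions, not a description of any real bank's system.

```python
# A minimal, purely illustrative sketch of the anti-fraud logic described above.
# Field names, thresholds and "known fraud patterns" are hypothetical, not any real bank's system.
from dataclasses import dataclass


@dataclass
class Transaction:
    client_id: str
    amount: float
    device_id: str
    ip_country: str


# Toy profile of a client's "typical operations" (in reality learned from transaction history).
TYPICAL_PROFILE = {
    "client-001": {"devices": {"dev-A"}, "countries": {"RU"}, "max_amount": 50_000},
}

# Toy database of previously observed fraudulent patterns.
KNOWN_FRAUD_PATTERNS = {("unknown-device", "foreign-ip", "large-amount")}


def check_transaction(tx: Transaction) -> str:
    profile = TYPICAL_PROFILE.get(tx.client_id)
    if profile is None:
        return "hold: no profile on record, manual confirmation required"

    # Describe how this operation deviates from the client's usual behavior.
    flags = (
        "known-device" if tx.device_id in profile["devices"] else "unknown-device",
        "home-ip" if tx.ip_country in profile["countries"] else "foreign-ip",
        "normal-amount" if tx.amount <= profile["max_amount"] else "large-amount",
    )

    if flags in KNOWN_FRAUD_PATTERNS:
        return "blocked: matches a known fraud pattern, awaiting owner confirmation"
    if "unknown-device" in flags or "foreign-ip" in flags:
        return "hold: additional factor required (code plus device check)"
    return "approved"


if __name__ == "__main__":
    # Unusual device, foreign IP and an atypically large amount -> blocked pending confirmation.
    print(check_transaction(Transaction("client-001", 900_000, "dev-X", "NL")))
```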
He is confident that once software capable of recognizing deepfakes appears, it will be fair to say that AI is guarding the interests and funds of clients of banks and other financial companies.
For now, however, user protection technologies lag behind the pace at which fraudsters refine their tools.
"Development companies are already testing counterfeit detection algorithms by analyzing micro-oscillations of the voice, speech synthesis anomalies, and artifacts in visual rendering. Some banks and messengers are developing authentication mechanisms based on additional factors, including a request to pronounce random words, analysis of biometric characteristics, and verification of audio file metadata. However, there is no mass and reliable solution yet," warned expert Alexey Filatov .
According to the Ministry of Internal Affairs, in response to user complaints about the proliferating "clones" of officials, the administration of the Telegram messenger has already identified and marked suspicious channels with deepfakes as unreliable. Some of them have been completely deleted. However, fraudsters seeking to get to the wallets of Russians are not sitting idly by: they are creating more and more fake images that imitate the appearance and voice of the original with detailed accuracy.
"We recommend that users trust only verified sources. It would not be a bad idea to check the inclusion of the channel in the list of personal pages of social network users, the audience size of each of which is more than ten thousand users," the cyber police advised.
Psychologist Alexey Filatov added that in any situation the main thing is to remain vigilant and cool-headed.
“If you suddenly get a call from the ‘governor’, a law enforcement officer, a voice message from a manager or a ‘bank employee’ asking you to do something urgently, you need to be wary: it’s all a scam,” the expert is convinced.
Experts advise not to panic when dealing with scammers. Photo: Image by Midjourney
To avoid becoming a victim of criminals, the specialist recommends erring on the side of caution.
"Use additional verification channels: call back official contacts, ask questions that the fake may not know the answers to. Do not trust video calls blindly, especially if the image quality is strange and the facial expressions look artificial," advised Alexey Filatov .
Expert Alexey Primak recommends keeping your composure in any situation and not letting panic break through your defenses.
"Let's talk about investment fraud separately. Thanks to deepfakes, videos of famous people appear on the Internet, for example, Pavel Durov , Oleg Tinkov* and other famous personalities. And if suddenly a famous person advertises, say, a new crypto trading terminal, gullible people lose their vigilance. The goal of all this is to deceive investors.
Remember: an advertising video from a famous person, especially if the subject of the advertisement is not related to him or her and is not his or her business, is not a reason to invest. If the investment result looks too sweet and tempting, and the profitability is too good to be true, it is usually 100% a scam,” warns Alexey Primak.
The voice of Russian actress Alena Andronova, recorded for a voiceover for a banking app, was stolen by porn site creators. Photo: Championship
It should be noted that in the West, deepfake technology has long been widespread and entirely legal. It was initially created, among other things, to protect the health and lives of film actors performing complex stunts. However, the practice of using deepfakes then spread so widely that digital clones began to displace living people, which led to actors' strikes.
Russia also had its share of artistic scandals.
Actress Alena Andronova said she had filed a defamation lawsuit against Tinkoff Bank. She accused the credit institution of stealing samples of her voice, which, against her will, "migrated" from the bank's audio services to voiceovers for porn site ads.
Andronova demanded 6 million rubles from the bank in court. She also announced her intention to join forces with voice actors and the Union of Russian Announcers to seek legislative protection of voices. Andronova stated that a citizen's voice should be protected on a par with their image, and that outsiders should be prohibited from using a voice recording for speech synthesis if this was not explicitly indicated among the purposes of the recording.
*Added to the register of foreign agents by the Russian Ministry of Justice.
newizv.ru