This article is part of Poland Unpacked, weekly intelligence for decision-makers
In October 2024 we reported on the case of Dr Michał Sutkowski. The well-known physician spent more than a year fighting – without success – criminals who used his likeness to create fake advertisements for products such as a diabetes drug. New ads kept appearing, primarily on Facebook, the social-media platform owned by the American group Meta. Today Dr Sutkowski’s story no longer shocks: the number of victims now runs into the thousands.
Politicians, doctors, journalists, businesspeople and influencers regularly complain that their faces are being used to promote products or services that can fairly be described as scams.
“On Instagram, the only solution turned out to be paying for profile verification. Only then did the problem disappear. It sounds like a classic mafia practice – a protection fee,” says Jacek Kłosiński, a social-media expert, describing his own experience.
A protection fee: victims of fraud
Jacek Kłosiński is a well-known trainer and a popular figure in the influencer sphere. He has 20,000 followers on Facebook and nearly 17,000 on Instagram. He, too, has become a victim of scammers.
“Looking at my own situation, as well as that of other creator friends, I can see that impersonation on social media is an ever-growing problem. For years it has not been addressed by the platforms. I am not a celebrity, but my profiles are followed by tens of thousands of people, which made me a target for fraudsters. The scheme is simple. A criminal sets up a profile that looks like mine. Then, posing as me, they message my followers – usually with offers to invest in cryptocurrencies. Exactly the same mechanism has been operating for years on the profiles of many other creators,” explains Mr. Kłosiński.
He complains that the response from social-media platforms is woefully inadequate.
“Despite numerous reports from my followers and from me, the platforms do absolutely nothing about it. They only take action when you pay for it. When I stopped paying as a test, the problem came back almost immediately,” he adds.
His story is only the tip of the iceberg. One institution trying to bring the problem under control is NASK.
Explainer
NASK
NASK (Naukowa i Akademicka Sieć Komputerowa) is Poland’s Research and Academic Computer Network, but it is much more than its academic-sounding name suggests. NASK matters because it manages the .pl domain registry, runs CERT Polska (Poland’s cybersecurity incident-response team, which protects the country’s digital infrastructure), provides internet infrastructure, and offers public services, including security-awareness initiatives and tools for checking whether your email address has appeared in a data breach.
The scale is growing; NASK steps in
The scale of the phenomenon continues to grow. Unfortunately, despite the declarations made by social-media platforms, Poland still faces a serious problem.
“The scale and pace at which scams are appearing – including campaigns that exploit the images of public figures and well-known individuals – still clearly exceed the real effectiveness of platforms’ mechanisms for detecting and swiftly removing such content. The pace of this undesirable activity is extremely high: in just one month we identified more than 12,000 cases of scams and duplicated materials using the images of well-known people,” explains Ewelina Bartuzi-Trokielewicz, head of the Department of Audiovisual Analysis and Biometric Systems at NASK.
The problem is serious. As both NASK representatives and independent experts we spoke to complain, Meta’s actions are, at best, inadequate.
Facebook, Instagram and thousands of ignored reports
In Poland, several institutions deal with the use of online platforms for fraudulent activity. The main one is NASK – specifically its Disinformation Analysis Center (Ośrodek Analizy Dezinformacji, OAD). One of its tasks is to combat manipulated or harmful advertisements published on social-media platforms.
“The Disinformation Analysis Center at NASK has contact points designated by Meta, through which ad hoc communication is conducted. In our day-to-day handling of disinformation incidents and advertisements containing deepfakes, we use a dedicated tool – a reporting platform made available by the service for registering materials and content that violate the platform’s rules or Polish law. In cases that require priority treatment, we also contact the designated contact point directly each time,” explains Dr Agnieszka Lipińska, head of NASK’s Disinformation Analysis Center.
Here are the figures for 2025. OAD submitted exactly 9,094 reports to Meta.
“Of these, 4,369 (48%) were deemed valid by Meta. As a result, 3,607 (40%) were removed and 762 (8%) were subjected to moderation. The remaining 4,725 reports (52%), which we assessed as violating the platform’s rules or Polish law, were rejected by Meta’s moderation. By ‘rejected’ we mean reports that received a negative response in the system or elicited no response from the platform at all,” says Izabela Jarka, head of the Rapid Response and Disinformation Detection Team.
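For readers who want to check the arithmetic, here is a short Python snippet reproducing the breakdown quoted above (the figures are NASK’s; the grouping and variable names are ours):

```python
# Reproduces the OAD/Meta report breakdown for 2025 quoted above.
# Figures are NASK's; the grouping and names are our own.
TOTAL_REPORTS = 9_094  # reports submitted by OAD to Meta in 2025

outcomes = {
    "removed": 3_607,              # deemed valid and taken down
    "moderated": 762,              # deemed valid, subjected to moderation
    "rejected_or_ignored": 4_725,  # negative response, or no response at all
}

# The three outcomes account for every report...
assert sum(outcomes.values()) == TOTAL_REPORTS

for label, count in outcomes.items():
    print(f"{label:>20}: {count:>5} ({count / TOTAL_REPORTS:.0%})")

# ...and "deemed valid" (48%) is simply removed + moderated.
valid = outcomes["removed"] + outcomes["moderated"]
print(f"{'deemed valid':>20}: {valid:>5} ({valid / TOTAL_REPORTS:.0%})")
```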
NASK also provided our newsroom with detailed statistics broken down by quarter.
Meta disputes the claims
We asked Meta why most reports concerning disinformation or manipulation submitted by teams from Polish public institutions are either dismissed or ignored. In response, Meta’s press office (the company asked that no specific individuals be quoted) said it could not agree with such an assertion. The statement stressed that reports from public institutions are treated seriously and reviewed in line with applicable procedures.
We also asked whether Meta places its own internal rules above interpretations of the law by Polish officials or public-institution staff, and where it draws the line in determining what constitutes a breach of its rules when reports are filed by such actors. The company did not answer these questions directly, but instead described its internal process. Its cornerstone, Meta said, is an assessment of whether the reported content complies with its “Community Standards”. If content violates those standards, it is removed. If it does not, but may potentially breach local law, Meta says it conducts a legal review.
Problems extend to other areas
The operation of the Disinformation Analysis Center and the reporting procedures for irregularities within Meta’s systems are only one set of tools available to NASK.
“Other teams within NASK also report content to Meta. Over the course of a year, as part of Dyżurnet – the Internet Illegal Content Response Unit (focused in particular on child sexual exploitation) – around 100 reports were submitted to Meta. The reported material was available on Facebook and Instagram,” adds Dr Lipińska.
Unfortunately, as NASK representatives point out, the vast majority of these reports were classified as content that did not violate Meta’s rules or Community Standards.
“By contrast, as an example of a positive response from the platform, it is worth mentioning the case of so-called ‘shaming patrols’. After reports were filed, these profiles were removed from Instagram very efficiently,” Dr Lipińska notes.
Explainer
Shaming Patrol (Pol. Szon Patrol)
Szon Patrol was a disturbing social-media trend that emerged in Poland in mid-2025.
These shaming patrols involved groups of teenage boys (and sometimes mixed groups) who roamed public spaces in yellow vests, filming and photographing young women and girls they deemed to be dressed or behaving “provocatively.” The term “szon” is Polish slang for a prostitute, popularized by filmmaker Patryk Vega’s movies.
These self-appointed “morality police” created social-media profiles (often named “szon.patrol_[city name]”) where they posted photos, videos, and links to the social-media accounts of their targets, exposing them to public shaming and harassment. What started on TikTok quickly spread to Instagram and Facebook.
In addition, in December 2025 alone, CERT Polska – operating within NASK – added more than 16,000 domains to its Warning List. These domains advertised fake investment platforms while impersonating well-known news websites.
“In many cases, these were websites promoted via social-media platforms,” the NASK representative emphasizes.
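The Warning List itself is public, so checking a suspicious domain against it can be automated. Below is a minimal Python sketch; the plain-text endpoint is the one CERT Polska has used to publish the list (verify the current address on cert.pl before relying on it), and the sample hostname is hypothetical:

```python
# Sketch: check a hostname against CERT Polska's public Warning List.
import urllib.request

WARNING_LIST_URL = "https://hole.cert.pl/domains/domains.txt"

def load_warning_list() -> set[str]:
    """Download the list: one flagged domain per line."""
    with urllib.request.urlopen(WARNING_LIST_URL, timeout=30) as resp:
        lines = resp.read().decode("utf-8").splitlines()
    return {line.strip().lower() for line in lines if line.strip()}

def is_flagged(hostname: str, flagged: set[str]) -> bool:
    # Match the hostname and every parent domain, so that
    # "promo.fake-broker.pl" is caught when "fake-broker.pl" is listed.
    parts = hostname.lower().strip(".").split(".")
    return any(".".join(parts[i:]) in flagged for i in range(len(parts)))

if __name__ == "__main__":
    flagged = load_warning_list()
    print(f"{len(flagged)} domains currently on the list")
    print(is_flagged("promo.fake-broker.pl", flagged))  # hypothetical name
```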
Our own newsroom has encountered a similar situation. In January this year, a video appeared online in which Sebastian Kulczyk, seemingly speaking in XYZ’s studio, encouraged viewers to invest via his “own” investment platform. It was, of course, an outright forgery. After we reported the ad directly to Meta’s team, it was removed quickly. The official route based on the “report post” function, however, ended with the automated system deeming the advertisement compliant with Meta’s standards.
Explainer
Sebastian Kulczyk
Sebastian Kulczyk is one of Poland’s most prominent businessmen and investors, though he operates somewhat in the shadow of his late father’s legendary status. Sebastian is the son of Jan Kulczyk, who was Poland’s wealthiest person and most famous entrepreneur until his death in 2015. Jan built a business empire spanning telecommunications, energy, insurance, and breweries during Poland’s post-communist transformation.
After his father’s death, Sebastian and his sister Dominika inherited the family fortune. Sebastian has taken a different path from his father, focusing on investments and venture capital and adopting a more international orientation.
The problem is becoming serious
NASK’s conclusions are echoed by experts in the social-media ecosystem.
“The problem is clearly growing and evolving. Scams are becoming increasingly sophisticated thanks to the use of AI – for example, video deepfakes and voice cloning. This drastically lowers the barrier to entry for fraudsters and increases the credibility of fake offers. There is a lot of it already, and there will be much more,” says Wojtek Kardyś, a social-media expert and co-founder of the Digital Republic Foundation (Fundacja Rzeczypospolita Cyfrowa).
“This is no longer just a problem; it is a genuine digital pandemic. We are seeing an almost avalanche-like increase in the number of threats,” notes Dagmara Pakulska, a social-media expert and trainer, and a lecturer at AGH University of Science and Technology.
In 2024, for example, the number of cybersecurity incidents in Poland exceeded 100,000 – almost 30% more than a year earlier. As Ms. Pakulska points out, however, numbers are only part of the story.
“In my view, the worst change is qualitative. Not long ago, scams on social media were associated mainly with the ‘Nigerian prince’ and broken Polish resulting from clumsy use of Google Translate. Today we are living in the era of generative artificial intelligence, and digital fraudsters have reached a Hollywood level. The most dangerous trend is deepfakes and videos using the images of well-known figures [such as Rafał Brzoska or Omenaa Mensah – ed.], in which their voices and facial expressions are used to promote the ‘investment of a lifetime’. This can lull even true digital natives into a false sense of security,” the expert adds.
Explainer
Rafał Brzoska and Omenaa Mensah
Rafał Brzoska and Omenaa Mensah are Poland’s most high-profile power couple.
Rafał Brzoska is the founder and CEO of InPost, the company behind those ubiquitous yellow and white parcel lockers (Paczkomaty) you see everywhere in Poland (and beyond). If you’ve ever picked up an online order from one of those automated lockers instead of waiting for a delivery person, you’ve used his invention.
Omenaa Mensah is a Polish philanthropist and art collector who has built a massive social-media following. She has become one of Poland’s most recognizable media personalities and recently brought GQ to Poland.
NASK and CERT versus Meta: “We see no downward trend”
In December 2024, CERT Polska published a statement calling for several changes across Meta-owned platforms, arguing that these measures would improve user safety in Poland. Among the proposals were the hiring of Polish-speaking moderators, the blocking of accounts that repeatedly publish false content, and the introduction of a more effective mechanism for removing harmful material. A few months later, in March 2025, CERT Polska reported that the vast majority of these demands had not been implemented.
Since then, little has changed.
“Users continue to report fake investment platforms and other scams promoted through the advertising mechanisms of social-media platforms. Ads that unlawfully use the images of doctors, politicians, popular celebrities or journalists remain widespread. We do not observe a downward trend in such content – despite Meta’s declarations about deploying, among other things, artificial-intelligence mechanisms to better detect so-called celeb-bait,” says Karol Bojke, an expert at CERT Polska, which operates within NASK.
The lack of moderation is, in Wojtek Kardyś’s view, one of the social-media platforms’ gravest sins.
“I believe there is no real moderation at all. What else can you call what big tech offers? Take Facebook: in Poland it has 26m users and just 65 moderators – for the entire country. X is no better, with two. That is why responses to user reports are so slow. On top of that, big tech’s business model is built on advertising reach. It leads platforms to allow paid, unchecked campaigns that are blatant scams,” Mr. Kardyś adds.
Meta and its response to the problem
We asked Meta to respond to a series of questions concerning its cooperation with NASK, as well as its actions to combat scams and breaches of Community Standards.
According to the reply sent by Meta’s press office, the company “remains in ongoing contact with NASK and CERT teams. This is a long-term partnership aimed at developing efforts to combat scams and to increase online safety.” Its practical dimension, Meta says, is the “exchange of information” via a dedicated channel – the same channel through which NASK submits information to Meta about problematic advertisements.
Meta’s representatives say they are determined to tackle fraud on their platforms and insist they do not tolerate deceptive content. Based on the data provided to us, over the past 15 months the number of user reports concerning scam advertisements has fallen by more than 50%. Since the beginning of 2025, more than 134m advertisements intended to defraud users have been removed. In the first half of 2025, the number of user reports related to so-called celeb-bait scam ads is said to have fallen by 22%.
These figures, however, are global. The company did not respond to our question about data specific to Poland.
How scams operate
Interestingly, NASK has concrete observations that could help improve standards in the Polish online space. For example, content exploiting the images of well-known individuals – as a reminder, NASK identified more than 12,000 such cases in a single month – follows a very specific pattern.
“These materials are published cyclically, rotated within the advertising ecosystem, then removed after a few hours. Shortly afterward, they reappear, often in a modified form, and the most extensive campaigns are replicated in thousands of instances,” explains Ewelina Bartuzi-Trokielewicz from NASK.
According to the expert, this mechanism indicates the systemic nature of the problem.
“Despite platforms implementing certain moderation tools and response procedures, current mechanisms are not adapted to the speed, scale, or creativity of the methods employed by those behind these scams. Moderation remains largely reactive rather than preventive,” adds Ms. Bartuzi-Trokielewicz.
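This cycle of takedown and modified re-upload also shows why simple exact-match filters are easy to evade. The sketch below is purely our own illustration, not a description of NASK’s or Meta’s actual tooling: two lightly reworded versions of the same scam pitch hash to completely different values, while a crude shingle-overlap measure still exposes the shared template.

```python
# Our illustration of why modified re-uploads evade exact-match takedowns.
import hashlib

def shingles(text: str, n: int = 3) -> set[str]:
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

original = "Invest with me today and double your savings in just 30 days guaranteed returns"
reupload = "Invest with me now and double your savings in just 30 days guaranteed profits"

# Swapping two words changes the exact hash completely,
# so a hash-based blocklist never fires on the re-upload...
print(hashlib.sha256(original.encode()).hexdigest()[:12])
print(hashlib.sha256(reupload.encode()).hexdigest()[:12])

# ...while the overlap of 3-word shingles still exposes the shared template.
print(f"similarity: {jaccard(shingles(original), shingles(reupload)):.2f}")
```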
Meta disputes this assessment. According to a statement from the company’s press office, Meta “does not act solely reactively.” The company says it invests in new technologies and collaborates with experts and partners from other firms to ensure, as it puts it, that users feel safe.
Social media’s actions are largely superficial
NASK representatives note that in recent months there have been some observable improvements in Meta’s handling of criminal activity on its platforms: responses to reports have become somewhat faster, and enforcement of the rules has occasionally been intensified in selected regions.
“However, it is difficult to consider this a lasting, systemic breakthrough. Moreover, as investigations such as Reuters’ have shown, these measures are often driven by the desire to reduce regulatory pressure rather than genuinely tighten the advertising system on a global level. Instead of implementing global changes, platforms have intensified enforcement only where there was external pressure. At the same time, they try to make scam ads invisible in public search results. Removing or hiding such ads creates the impression that VLOPs [Very Large Online Platforms] are performing better than they actually are,” explains Ewelina Bartuzi-Trokielewicz.
According to NASK representatives, effective preventive measures are still lacking. In addition, there are no transparent statistics on the number of fake advertisements or on content removal, making it difficult to objectively assess the effectiveness of platform changes.
As with the previous points, Meta disagrees, disputing the assertion that the visibility of posts or ads in searches is restricted. Its press office maintains that content intended to defraud users is removed.
Meta’s press office also insists that statistical data is made available – citing responses to our questions as evidence. The company also publishes transparency reports accessible on its websites.
Platforms are losing the fight against scams – and businesses pay the price
Dagmara Pakulska says that combating scams is a never-ending game of cat and mouse. Unfortunately, in this game, platforms are always two steps behind the fraudsters.
“The actions of giants like Meta are often purely reactive and far from sufficient. There is also a paradox that frustrates every marketer in the country. Legitimate companies frequently lose their advertising accounts over trivial errors in graphics or the use of a prohibited word. At the same time, blatant scams exploiting the images of well-known individuals or organizations remain as sponsored ads for weeks, deceiving users,” notes Ms. Pakulska.
Jacek Kłosiński argues that social networks may have no real incentive to solve the problem – because it benefits them.
“This entire situation generates profits for them by forcing people to pay for ‘protection’. Until platforms face legal consequences or are compelled to act from above, there is little reason to expect they will do anything on their own. This black market for scammers, con artists, and fraudsters simply brings platforms benefits that are hard for them to give up,” Mr. Kłosiński explains.
Expert's perspective
Platforms know a lot about us
Although social-media platforms now have increasingly advanced moderation tools, their effectiveness against scammers remains limited. Security procedures often cannot keep pace with the speed at which criminals modify and test new methods. In Poland, the phenomenon has already reached mass scale: about 36% of respondents reported being victims of a cyberattack or fraud in the past year. The most frequently cited examples are phishing and impersonation.
This shows that the issue is not merely gaps in moderation, but a broader, systemic problem. The more data is concentrated in a single ecosystem, the more severe the consequences of each incident – whether it involves a data leak, misuse of information, or delayed response by the platform itself.
Our research at Incogni indicates that Meta is the most frequently penalized company in the social-media sector worldwide for privacy violations. Facebook alone has repeatedly faced sanctions for GDPR breaches in Europe and numerous proceedings in the United States. This illustrates the immense challenge of managing data security in such large, centralized structures.
Within a single system, vast collections of sensitive information are processed. The same mechanisms that enable precise advertising targeting are also exploited by scammers to personalize attacks – making them alarmingly similar to genuine communications.
Experts call for a clear direction of change
Wojtek Kardyś emphasizes the need for very concrete measures, primarily focused on tightening advertiser verification processes before ads are allowed to run.
“To make this possible, big tech companies need to be pressed harder through regulation. Closer cooperation between platforms and law enforcement is essential, as is systematic support for digital education so that users can independently identify social-engineering tactics. In Poland, we are still waiting for the implementation of the DSA [Digital Services Act],” the expert adds.
Dagmara Pakulska shares a similar view.
“Moderation relies on automated systems that do not understand cultural context or the specifics of the Polish market. Worse, and let’s be honest, platforms profit from these ads. Until a scam is reported – often hundreds of times by users themselves – money flows straight into big tech’s coffers. While platforms claim they do not intentionally support scams, in practice their business model, based on automated ad sales, means revenue is generated until a campaign is reported and taken down. The fact that in 2024 the president of UODO (Poland’s personal-data protection watchdog) and the courts had to issue a precedent-setting decision to physically prevent Meta from displaying the images of Rafał Brzoska and Omenaa Mensah is an act of desperation and proof that platform self-regulation simply does not work,” Ms. Pakulska explains.
She also calls for an end to anonymous advertisers on social media.
“Platforms should finally take responsibility for whom they sell reach to. Especially for ads related to financial services, extremely rigorous verification should be required. Currently, setting up a ‘dummy’ advertising account is unfortunately far too easy,” adds Ms. Pakulska.
Key Takeaways
- The scale of scams using the images of well-known individuals on Meta platforms in Poland has reached an alarming level, and the company’s countermeasures are widely considered insufficient. In 2025, NASK’s Disinformation Analysis Center submitted more than 9,000 reports to Meta concerning harmful advertisements. More than half of these were either rejected or left unanswered. Experts warn that Facebook and Instagram moderation systems cannot keep pace with the volume and speed of scams, which increasingly exploit AI technologies, including deepfakes and voice cloning.
- Meta disputes the accusations, claiming it acts in accordance with procedures and invests in user safety, yet the lack of transparent data for Poland makes it difficult to assess the effectiveness of these measures. The company reports that globally millions of ads have been removed and that user reports are declining, but it provides no figures specific to Poland. Meta also declined to clarify the hierarchy between local law and its own platform rules, raising questions about the company’s priorities in the context of national realities.
- Experts and institutions call for stricter regulation and more effective advertiser oversight. Proposals include rigorous verification of financial-service ads, the hiring of Polish-speaking moderators, and greater transparency from platforms. Analysts highlight a structural problem: as long as fraudulent ads go unreported, they generate revenue, and the current model – based on automation and limited moderation – encourages abuse.
We wrote about this because we considered it important and newsworthy. In the interest of full transparency, we note that the RiO fund, owned by Omenaa Mensah and Rafał Brzoska, CEO and shareholder of InPost, is an investor in XYZ.
