
Bots, Fake Accounts and Manipulation: What You Need to Know


In France and Switzerland, municipal elections are approaching. But whatever the date and the country, AI techniques are being refined day by day to take control of public debate. Anything goes when it comes to creating buzz and swaying opinions: machines, automated scripts, networks of fake accounts… bots that like, comment, express outrage and provoke. This is what is known as automated social engineering.

And this phenomenon, now structural, applies equally in the economic world and on a global scale.


A deliberately vague but massive scale

There is no single, indisputable figure for the proportion of bots and fake accounts on social media. This lack of clarity is no accident.

Platforms refuse to allow independent audits and use their own vague definitions of what constitutes a “fake account”. In academic terms, this is referred to as organised epistemic uncertainty.

However, some statistical geniuses have managed to come up with a few estimates.


On Twitter/X, 9 to 15% of active accounts are believed to be fake, with much higher peaks – up to 50% – during political or media events and campaigns.

Cross-analyses estimate that on average 10 to 15% of social media accounts are fake or automated, with significant variations depending on the platform and time period.

During targeted political campaigns, up to 20% of visible interactions (likes, retweets, comments) may come from inauthentic or coordinated accounts.

In certain documented electoral contexts (Asia, Eastern Europe, Latin America), investigations have shown that 30 to 40% of accounts amplifying certain political narratives were fake or manipulated, often via coordinated networks. The recent elections in Moldova are a case in point.

Important: The problem is not the number of accounts, but rather their ability to saturate the information space.


Who uses these bots to their advantage – their goals and their methods

Bots are used by very different players with distinct objectives, but relying on the same technical mechanisms.

Four main categories of players

  • States and parastatal players
  • Political organisations (parties, among others)
  • Marketing, soft power & ‘influence-for-hire’ agencies
  • Cybercriminals & fraudsters

Below is a summary of their goals and methods, accompanied by recent examples.

1 – States and parastatal players

Goals and Methods


Goal: influencing opinion without official endorsement, amplifying a narrative, polarising, delegitimising institutions, sowing doubt about facts.

Methods
– Networks of fake media/accounts to ‘provide a source’
– Coordinated multi-platform amplification (Twitter, Meta, YouTube, TikTok)
– Ecosystem tactics: copies of news sites, fake testimonials, deepfakes, account relays

Recent examples


“Doppelganger / Portal Kombat” operations by pro-Russian networks (since 2022) – disinformation campaigns via cloned websites and fake accounts targeting Ukraine and the EU.

Paris 2024 Olympic Games – disinformation campaigns – Russia.

Electoral interference campaign – US (2024) via Russian proxies and amplifiers disseminating polarising, propagandistic AI content.

Electoral interference campaign – Germany (Dec. 2025) via bots and fake accounts spreading deepfakes and disinformation, amplified by targeted cyberattacks. Origin: Russian secret services and Storm-1516, a pro-Russia group.

2 – Political Organisations

Goals and Methods


Goal: create an illusion of support or an impression of massive rejection in order to influence voters, the media, opponents, and sometimes public decision-making.

Methods
– Astroturfing: simulating ‘citizen’ mobilisation
– ‘Comment brigades’: groups of mixed accounts (humans, bots or hybrids) flooding the comments sections of elected officials/local authorities
– Local micro-targeting: sensitive topics (security, urban planning, schools, migrants, taxes)

Recent and recurrent examples


Influence operations on Facebook and Instagram using fake accounts and pages posing as local citizens or media outlets to disseminate politically biased content (e.g. during the elections in Moldova) – documented in Meta’s CIB (coordinated inauthentic behaviour) reports.

Influence operations on YouTube, monitored by Google: ‘independent’ channels broadcast political content presented as neutral or citizen-driven, but their political orientation is aligned with state interests, often channelled through soft-power agencies. Google’s Threat Analysis Group (TAG) removes them.

3 – Marketing, soft power & ‘influence-for-hire’ agencies

Goals and Methods


Goal: Selling or buying visibility – followers, likes, comments, artificial trends – then converting them into reputation, pressure, contracts, and sometimes political influence.

Methods
– Black/grey market for account creation (SMS verification, account farms)
– Multi-platform ‘boost’ networks
– Outsourcing to agencies that know how to ‘push’ a topic

Current examples


Fake news market with fake ‘verified’ accounts in Russia, the US and the UK. Underground economy estimated at over $1 billion per year. Customers: governments, businesses, criminals. (Cambridge Online Trust and Safety Index) euronews

Influence campaign on Meta to manipulate public debate, particularly around the Israel-Gaza war and other political issues (2024). Operated from Israel via STOIC, a company specialising in political marketing and business intelligence, which used over 500 fake or compromised accounts. Network dismantled by Meta.

OpenAI: influence campaigns using its tools (including by players linked to Israel). ChatGPT was used by actors from Russia, China, Iran and Israel to generate AI content at scale (images, automated responses) to influence political debates – genuine industrialisation. OpenAI deleted the accounts and reported the facts. TIME.

4 – Cybercriminals and Fraudsters

Goals and Methods


Goal: money, obviously – through fraud, phishing, sextortion, pig-butchering scams, account hacking and data theft. Everything is industrialised via bots.

Methods
– Fake account farms for outreach + credibility
– Automated messaging/comments and social engineering at scale
– Messaging bots (WhatsApp/Telegram) to run scripts, payments, reminders

Recent examples


Sextortion — Meta deletes 63,000 Instagram accounts linked to extortion networks (coordinated operation, widespread use of fake accounts). Engadget, 2024

Scam centres — WhatsApp removes 6.8 million accounts linked to criminal networks (January–June 2025); Meta describes industrialised multi-platform fraud. AP News

Phishing ‘as-a-service’ via Telegram bots: cyber investigations reveal ‘marketplaces’ and bots facilitating phishing automation (kits, crypto payments, orchestration). Sekoia.io Blog, Jan. 2025


Elected officials and business leaders equally exposed

Political and institutional leaders

Among all elected officials, the most vulnerable are mayors and local authority leaders. Unlike their national counterparts, who are surrounded by professional teams and a protective ecosystem (monitoring, alerts, legal services, etc.), local elected officials are left to fend for themselves.

In most towns:

  • No dedicated communications officer.
  • One versatile person to manage the website, municipal newsletter and social media.
  • No structured monitoring.
  • No expertise in bots, AI, disinformation, etc.
  • No response procedure in the event of a coordinated attack.

The result is sporadic communication with citizens via social media, where any controversy can quickly take a personal turn.

Some towns, even small ones, are more targeted than others because they:

  • are home to a well-known or wealthy personality
  • are associated with a sensitive project (urban planning, security, hospitality, environment)
  • become, through their image, geography or history, a symbol that can be exploited in a national narrative

And for malicious players, all these municipalities are a goldmine. A few dozen or hundred well-coordinated accounts are enough to create the illusion of widespread discontent, a local scandal or a loss of political legitimacy.

In short, the local level has become:
– an ideal testing ground for narratives before they are rolled out more widely
– a laboratory for influence strategies
– a low-cost entry point for manipulating public opinion

    Four specific risks for mayors and local authority leaders


    For them, the consequences are immediate and sometimes dramatic:

    Media snowball effect – digital buzz can be enough to trigger an article or regional coverage.

    Political decisions made under artificial pressure – abandoning a project, modifying a communication, hardening or softening a position

    Personal destabilisation – targeted attacks, hostile comments, feelings of isolation

    Loss of internal trust – municipal officials, municipal council, local partners

    And what is most worrying is that these dynamics are often invisible to elected officials themselves, who have neither the tools nor the reflexes to distinguish between genuine mobilisation and automated orchestration.

    Three very recent concrete cases


    Disinformation campaign conducted via dozens of fake websites impersonating local French-language news outlets (Sud-Ouest, LyonMag, l’Est Républicain), whose names were hijacked by a pro-Kremlin network (February–June 2025). Goal: to pollute the local information environment with pro-Putin propaganda, five months before municipal elections. Reporters sans Frontières, Nov. 2025

    Fraud and data theft in Montreux (VD) via a fake profile impersonating VMCV, the local public transport operator. The perpetrator was a cybercriminal. June 2025. See the town’s Facebook page.

    Fake accounts, identity theft and cyberbullying (Sept. 2025) in the municipality of Rochefort-du-Gard (Gard), in a pre-election context. A fake account took on the identity of an association supporting the outgoing mayor to create confusion, with cyberbullying in the background. Le Dauphiné

    Business Leaders

    Leaders of businesses, economic institutions and strategic organisations are now prime targets, with the same vulnerability as elected officials: confusion between signal and reality.

    Like politicians, business leaders are exposed to popularity indicators — likes, comments, shares, trends — which they often interpret as authentic signals from the market, public opinion or their stakeholders. Yet these indicators can be manipulated easily, at low cost and on a large scale.

    This manipulation is not insignificant. In 2025, attacks directly controlled by bots already accounted for 12% of recorded third-party fraud cases*. While bots are not the primary attack vector (identity theft, account hacking and phishing remain more common), they are nevertheless a cross-cutting accelerator that enables the industrialisation of attacks. The result is a deeply polluted information environment in which artificial signals can be mistaken for economic realities. *Source: Sumsub report, 2025-2026

    Compared with elected officials, business leaders — particularly those running SMEs with 100 to 2,000 employees — are even less aware of information risk. They vastly underestimate it for three main reasons:

    • they see themselves as ‘non-political’ actors;
    • they confuse digital marketing with information warfare;
    • the subject is too often relegated to the communications department or community manager, when they have one.

    This is a strategic error. In an environment saturated with artificial signals, a leader’s ability to distinguish noise from reality becomes a governance issue, not just a communication issue.

    Five specific risks for business leaders


    1. Strategic decisions skewed by an artificially amplified debate, with bot dynamics, coordinated comments and falsified content that can:
    – create the illusion of customer, societal or regulatory expectations
    – push for hasty repositioning
    – influence investment, communication or market-withdrawal decisions
    Key risk: making decisions based on a fabricated perception rather than reality.

    2. Damage to the organisation’s reputation and image through coordinated comment campaigns, accompanied by fake content, which may:
    – create a perception of crisis where there is none
    – sow doubt among partners, investors or customers
    – force the company to respond to artificial controversy
    Key risk: being subject to the agenda of invisible actors.

    3. Manipulation of leaders themselves by fake news spread on the internet, where leaders are exposed to:
    – biased narratives repeated in the media or on social networks
    – artificial ‘buzz’ signals
    – content presented as citizen-generated, expert-generated or neutral, increasingly generated by AI
    These signals can influence their public stance, strategic discourse and internal priorities. They are all the more effective because they circulate in environments perceived as legitimate or professional. Key risk: adjusting one’s strategy to simulated opinion.

    4. Identity theft and authority manipulation (CEO impersonation), attacks that directly exploit the identity of executives via fake accounts, automated emails or messages, cloned voices, or AI-generated content.
    Threefold objective: trigger financial decisions, bypass internal processes and exploit hierarchical authority.
    Key risk: the manager’s authority becomes a target for attack.

    5. Financial and competitive manipulation, particularly in competitive contexts, tenders or fundraising, via bots used to:
    – amplify or stifle legitimate alerts
    – influence the perceived value of a company
    – attack a competitor through persistent negative narratives and distorted content
    Key risk: artificial distortion of competition and the market.

        Four concrete cases


        #BoycottZara on social media (2024–2025) – A study by digital reputation specialists mentions that during a controversy surrounding a marketing campaign by Zara (a brand owned by Spanish group Inditex), a significant portion of negative interactions were generated by inauthentic or automated profiles, contributing to a boycott hashtag that gained widespread traction. This amplification created a distorted public perception of rejection of the campaign, impacting the company’s reputation, including in Europe (Italy, France, Spain, etc.). Cyabra

        Mass attacks on reputation through fake reviews and coordinated comments (2024) in the United Kingdom. Several British SMEs (restaurants, B2B services, professional firms) were targeted by waves of synchronised negative reviews on Google, Trustpilot and social media, without any real trigger event. Consequences: loss of credibility, forced responses and unjustified internal questioning.

        Fake ‘economic’ media networks in France/Europe (2023–2025) – Investigations by EU DisinfoLab and Viginum have uncovered networks of cloned economic news sites that use the codes of the professional press to disseminate biased content on competitiveness, taxation, industry and international relations (fake analyst articles, reposted by LinkedIn accounts and ‘experts’ and circulated among executives and investors). Executives are not targeted directly, but their mindset is.

        Sophisticated but unsuccessful attempt at CEO impersonation – Mark Read, CEO of WPP (UK). A fake WhatsApp account and a cloned voice in a Teams meeting were used to try to initiate a financial transaction. The Guardian, May 2024 – The authority of the leader becomes the attacker’s weapon.


        So, faced with these risks, what can be done?

        Whether you are a business leader or an elected official, there is no magic bullet for eliminating information manipulation. However, there are at least five governance, vigilance and cultural levers that can be activated quickly, even with small teams.

        • Learn to distinguish between signals and noise again, in other words: don’t react impulsively and analyse the situation by asking the right questions (is it widespread or just noisy? What are the impacts? Who is speaking? From what profile? etc.). Speed does not mean rushing.
        • Take the issue beyond mere communication. This issue must be addressed at the decision-making level with the support of those who publish. Fake news influences decisions, affects the legitimacy of leaders or organisations, and can trigger artificial political or economic crises.
        • Identify in advance the blind spots specific to your territory or activity. Vulnerabilities are never generic. They depend on the local context at a given moment. Without a contextualised diagnosis, both internally and externally, you are acting blindly.
        • Integrate information risk into governance. Like cyber risk a few years ago, information risk must become a topic for executive committees and part of crisis scenarios, so as not to confuse artificial pressure with democratic or economic reality.
        • Move from a culture of reaction to a culture of resilience. The most robust organisations, both public and private, detect issues earlier and coordinate better. Resilience is a collective skill that can be developed with individual talents in a supportive organisational framework.

        At Oz’n’gO, we assist business leaders and public officials in identifying their information blind spots, critically interpreting digital signals, and integrating these risks into their governance.

        How to spot a bot (without being a technical expert)

        • Recent profiles, with little or no personalisation.
        • Generic photos or photos from image banks/AI.
        • Vague bios, political slogans, lack of real background information.
        • Regular posts, including at night.
        • Short, repetitive, emotional comments, often out of context.
        • Lack of real interaction (no replies, no discussion).
        • Groups of accounts always commenting together.
        • Sudden and massive amplification of a specific message.
        • Engagement disproportionate to the actual visibility of the account.

        NB: tools like Botometer or Bot Sentinel can help, but they cannot replace human analysis and they do produce false positives.
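
        To make the checklist above more concrete, here is a minimal illustrative sketch in Python that turns a few of these signals into a rough ‘suspicion score’. All field names, weights and thresholds (account age, night-posting ratio, duplicate comments, and so on) are assumptions chosen for illustration; this is not a validated detection model and, like the tools mentioned above, it only flags accounts that deserve a closer human look.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AccountSnapshot:
    """A handful of observable signals for one account (all names are illustrative)."""
    created_at: datetime            # account creation date (timezone-aware)
    has_profile_photo: bool         # False if no photo; stock/AI photos need a separate judgement
    bio_length: int                 # number of characters in the bio
    followers: int                  # follower count
    avg_likes_per_post: float       # average engagement received per post
    night_post_ratio: float         # share of posts published between 01:00 and 05:00
    duplicate_comment_ratio: float  # share of comments identical to the account's other comments
    reply_ratio: float              # share of posts where the account actually answers other users


def suspicion_score(acc: AccountSnapshot, now: datetime | None = None) -> float:
    """Return a 0..1 heuristic score: the fraction of checklist signals that fire."""
    now = now or datetime.now(timezone.utc)
    signals = [
        # Recent profile, little or no personalisation
        (now - acc.created_at).days < 90,
        not acc.has_profile_photo or acc.bio_length < 20,
        # Regular posting, including at night
        acc.night_post_ratio > 0.30,
        # Short, repetitive comments and lack of real interaction
        acc.duplicate_comment_ratio > 0.50,
        acc.reply_ratio < 0.05,
        # Engagement disproportionate to the account's actual visibility
        acc.followers < 50 and acc.avg_likes_per_post > 100,
    ]
    return sum(signals) / len(signals)


if __name__ == "__main__":
    # Example: a recently created account with no bio, few followers but huge engagement.
    suspect = AccountSnapshot(
        created_at=datetime(2025, 9, 1, tzinfo=timezone.utc),
        has_profile_photo=False,
        bio_length=5,
        followers=12,
        avg_likes_per_post=240.0,
        night_post_ratio=0.45,
        duplicate_comment_ratio=0.70,
        reply_ratio=0.0,
    )
    # Anything above ~0.5 deserves a closer human look; nothing more than that.
    print(f"suspicion score: {suspicion_score(suspect):.2f}")
```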
