© 2025 austin and contributors

The Discord Dilemma: Unmasking Scam Bots and the Arms Race for Platform Security

I. Introduction: The Double-Edged Sword of Digital Communities

Discord has emerged as a dominant platform for online communication, fostering vibrant communities around diverse interests, from gaming and education to business networking and social interaction.1 With over 300 million users worldwide, its open and accessible nature is a key strength.1 However, this widespread adoption and ease of connectivity have also rendered it an attractive target for cybercriminals deploying an array of scam bots designed to exploit unsuspecting users.1 These malicious automated accounts engage in activities ranging from phishing and malware distribution to financial fraud and identity theft, posing a significant threat to user safety and platform integrity.1

The challenge of combating these scam bots is substantial. Scammers are not static; they continuously evolve their tactics to circumvent security measures, leading to a persistent "cat-and-mouse" game between platform operators and malicious actors.6 This report delves into the multifaceted strategies Discord employs to counter scam bots, examines the common types of scams proliferating on the platform, and critically analyzes the sophisticated methods bot creators use to evade detection. Furthermore, it explores the role of third-party tools and community vigilance in this ongoing battle, assesses the scale of the problem through available data, and offers recommendations for enhancing the security of the Discord ecosystem for the platform, server administrators, and individual users. The dynamic interplay between offensive and defensive measures underscores the complexity of maintaining trust and safety in large-scale digital environments.

II. Discord's Anti-Scam Arsenal: Strategies and Technologies

Discord employs a multi-layered approach to combat spam and scam bots, combining automated systems, policy enforcement, and user-driven reporting mechanisms. The platform's commitment to safety is articulated through its Safety Library and regular transparency reports detailing enforcement actions.7

A. Official Platform-Level Defenses

Discord's primary defenses include proactive spam filters, rate limiting, and clear guidelines against malicious activities. The platform actively scans for suspicious links and files, warning users before they click or download.7 Official announcements are made only through designated channels, and Discord emphasizes that its staff will never ask for user passwords or account tokens via direct messages (DMs) or email.4 Users are encouraged to verify official communications by looking for specific badges on staff profiles.4

Key technological and policy measures include:

  1. Proactive Spam Filters: Discord utilizes automated spam filters to protect user experience and platform health.7 These filters target unsolicited messages, advertisements, and behaviors like "Join 4 Join" schemes, which, even if not unsolicited, can be flagged if they involve sending a large number of messages in a short period, straining services.7 A dedicated DM spam filter automatically sends messages suspected of containing spam into a separate inbox, with customizable filter levels for users.7

  2. Rate Limiting: To counter spammers who exploit features through bulk actions, Discord enforces rate limits on activities such as joining many servers or sending numerous friend requests simultaneously.7 Accounts exhibiting such behavior may face action.

  3. Account Verification and Server Security Levels: Though not always framed as a dedicated anti-bot measure, server verification levels let administrators require a verified email or phone number, or a minimum account age on Discord, before new members can post. This deters freshly created bot accounts.9

  4. User Reporting Systems: Discord heavily relies on its user base to report policy violations, including scams and malicious bots.4 Clear instructions are provided on how to report abusive behavior directly within the app.4 These reports are critical for identifying and acting against emerging threats.

  5. Content Moderation and Takedowns: Upon identifying scam activity, Discord takes actions such as banning users, shutting down servers, and, where appropriate, engaging with law enforcement authorities.4 This is guided by their Community Guidelines and specific policies like the Deceptive Practices Policy and Identity and Authenticity Policy.4

  6. Link Scanning and Warnings: The platform attempts to warn users about questionable links, although it stresses that this is not a substitute for user vigilance.7 When a link directs a user off Discord, a pop-up indicates the destination website.4

  7. Two-Factor Authentication (2FA): While a user-side protection, Discord strongly promotes 2FA to secure accounts, making it harder for scammers to take over accounts even if credentials are stolen.1
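The rate limiting described in point 2 is commonly implemented as a token bucket: each account gets a budget of actions that refills over time, and requests beyond the budget are rejected. The sketch below is illustrative only; Discord's actual limits and algorithm are not public, and the capacity and refill rate here are invented.

```python
import time

class TokenBucket:
    """Illustrative token-bucket limiter; capacity/rate values are invented."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity   # max burst size
        self.rate = rate           # tokens refilled per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over budget: caller should back off (HTTP 429-style)

# e.g. allow a burst of 5 friend requests, then roughly one every 10 seconds
bucket = TokenBucket(capacity=5, rate=0.1)
results = [bucket.allow() for _ in range(7)]  # [True]*5 + [False, False]
```

A real service would keep one bucket per account (and per action type), and reply with an HTTP 429 plus a Retry-After header when `allow()` fails.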

B. Enforcement Statistics and Transparency

Discord's Q1 2023 Transparency Report provides quantitative insights into its enforcement actions, illustrating the scale of its anti-scam efforts.8

| Metric Category | Accounts Disabled/Removed | Servers Removed | Key Trend / Proactive Rate | Number of Reports/Warnings |
|---|---|---|---|---|
| Spam | 10,693,885 | N/A | 71% decrease in spam accounts disabled vs. Q4 2022; 99% proactive disablement | 6.02M spam reports on 3.09M unique accounts |
| Deceptive Practices | 6,635 | 3,452 | 29% increase in accounts disabled; 43% increase in servers removed | N/A directly, but part of overall user reports |
| Overall Policy Violations (excluding spam) | 173,745 | 34,659 | 13% increase in accounts disabled; 70% proactive server removal overall | 117,042 non-spam user reports (15.5% led to action) |
| Warnings (Individual Accounts) | N/A | N/A | N/A | 17,931 (41% increase) |
| Warnings (Server Members) | N/A | N/A | N/A | 1,556,516 accounts warned (27% increase) |

Source: 8

The significant decrease in accounts disabled for spam (71%) is attributed by Discord to "less spam on Discord and improvements in our systems for detecting spam accounts upon registration, as well as quarantining suspected spam accounts without fully disabling them".8 This suggests a shift toward earlier intervention and different handling of suspected spam accounts, rather than necessarily a 71% reduction in attempted spam. The high proactive disablement rate for spam (99%) indicates the effectiveness of automated systems in catching these accounts before user reports. Conversely, the 29% increase in account actions and 43% increase in server removals for "Deceptive Practices" (which includes malware, token theft, and financial scams) suggests that more sophisticated or targeted malicious activities are either on the rise or being more effectively identified.8 The low appeal success rate (2% of appellants reinstated) suggests Discord is confident in its enforcement decisions.8

These measures collectively form Discord's frontline defense. However, the ingenuity of scam bot creators necessitates a constant evolution of these strategies.

III. The Rogues' Gallery: Common Scam Bot Archetypes on Discord

Scammers deploy a variety of bot-driven schemes on Discord, each with distinct methods and objectives, primarily aimed at extracting sensitive information, financial assets, or account access from users.1 Understanding these archetypes is crucial for users and administrators to recognize and mitigate threats.

| Scam Type | Modus Operandi & Objectives | Key Red Flags & Evasion Tactics | Sources |
|---|---|---|---|
| Phishing Scams (General) | Bots send DMs or server messages with malicious links leading to fake login pages (mimicking Discord, banks, etc.) or malware-infected sites. Goal: steal login credentials, personal data, or install malware. | Urgent calls to action, offers too good to be true, slight misspellings in URLs, pressure to act quickly. Bots may use shortened URLs or try to appear as official communications. | 1 |
| Fake Nitro Giveaways | Bots DM users or post in servers announcing "free Discord Nitro" (a premium subscription). Links lead to phishing sites to steal account/payment details or install malware. | Unsolicited offers for valuable items, links not from the official discord.gift domain, requests for login credentials to claim. Often use convincing graphics; bots may have names like "NITRO FREE#8342". | 1 |
| Malware Distribution Bots | Bots distribute malicious files disguised as game hacks, tools, or enticing content. Files can be RATs (Remote Access Trojans), spyware, or adware. Goal: gain control of the device, steal data, use the device in botnets. | Unsolicited file shares, promises of cheats/hacks, files flagged by antivirus/browser. Malware can be hosted on Discord's CDN or external sites. | 4 |
| Crypto & NFT Scams | Bots promote fake crypto investments, airdrops, or NFT mints, often in crypto-focused servers, promising high returns or exclusive access. Goal: steal cryptocurrency, NFTs, or wallet credentials. | Unrealistic profit promises, pressure to invest quickly, requests for private keys/seed phrases, links to unfamiliar exchanges/minting sites. Bots may impersonate project staff. | 1 |
| Impersonation Scams (Staff/Friend/Support) | Bots or compromised accounts impersonate Discord staff, friends, or support agents from other services (e.g., Steam). Common tactics: "accidentally reported you, contact this 'admin'" or fake account-issue warnings. Goal: steal credentials, extort money, gain account access. | Requests for passwords/tokens (Discord staff never ask), threats to account standing, unusual language from a "friend", pressure to contact specific "support" accounts outside official channels. Look for official "SYSTEM" or staff badges. | 1 |
| Fake Game/Program Downloads | Bots offer links to download games, programs, or experimental code, often via DMs or in specialized servers. Goal: distribute malware, steal credentials via fake login prompts on download sites. | Offers from unknown sources, links to unofficial download sites, requests to disable antivirus. Files may be shared directly or via links/QR codes. | 2 |
| Account/Token Theft Bots | Bots use various methods (phishing, malware) specifically to steal Discord account authentication tokens. Goal: full account takeover for further malicious activities (spamming contacts, server raiding). | Any link or file that asks for a Discord login outside the official app/site, or unexpected requests to "re-verify" an account via a link. | 2 |
| "Graphic Designer" / Service Scams | Bots (or humans using bot-like scripts) contact users, especially streamers, offering services like graphic design or viewer bots, often as a pretext to phish credentials or sell fake services. | Unsolicited DMs offering services, generic compliments about content followed by a sales pitch, requests for a Discord username to "help". | 17 |

The prevalence of these scams, as evidenced by numerous user reports and official Discord statements, highlights the diverse attack vectors employed.1 For example, users have reported bots with names like "NITRO FREE#8342" or "Twitch#0081" sending unsolicited DMs with malicious links, sometimes leading to file downloads like "Free_Nitro.rar".13 The "accidental report" scam, where a user is pressured to contact a fake Discord staff member, is a common social engineering tactic that can lead to significant financial loss and account compromise.4 The effectiveness of these scams often hinges on creating a sense of urgency, offering enticing rewards, or exploiting user trust in familiar interfaces or supposed authority figures.1
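One of the red flags above, slight misspellings in URLs, can be checked mechanically. The sketch below flags domains within a small edit distance of official Discord domains; the allowlist and distance threshold are illustrative, not what Discord's link scanner actually uses.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # deletion, insertion, or substitution -- whichever is cheapest
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Illustrative allowlist; a real scanner would use a maintained list.
OFFICIAL = {"discord.com", "discord.gg", "discord.gift"}

def looks_like_typosquat(domain: str, max_dist: int = 2) -> bool:
    """Flag domains that are close to, but not exactly, an official domain."""
    domain = domain.lower()
    if domain in OFFICIAL:
        return False
    return any(edit_distance(domain, official) <= max_dist for official in OFFICIAL)
```

For example, `looks_like_typosquat("dlscord.gift")` returns True (one substitution away from discord.gift), while the genuine domains and unrelated domains return False.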

IV. The Evasion Playbook: How Scam Bot Creators Circumvent Discord's Defenses

Scam bot creators employ an increasingly sophisticated array of techniques to bypass Discord's security measures. This involves not only technical circumvention but also the exploitation of platform mechanics and human psychology.

A. Technical Bypass Techniques

  1. CAPTCHA Solving and Bypassing:

    CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) systems are a common hurdle for bots. However, scammers have developed multiple ways to overcome them.18 These include:

    • Optical Character Recognition (OCR): Advanced OCR software can interpret the distorted text in image-based CAPTCHAs with considerable accuracy.18

    • Machine Learning Algorithms: Bots can be equipped with machine learning models trained on vast datasets of CAPTCHA examples, enabling them to predict correct solutions for various CAPTCHA types, including image recognition challenges.18

    • Session Replay: Some bots mimic human behavior by replaying recorded interactions of real users successfully solving CAPTCHAs.18

    • AI-Powered Tools: Specialized AI tools are designed to solve CAPTCHAs by emulating human cognitive processes more closely than traditional methods.18 The AkiraBot framework, for instance, is noted for employing multiple CAPTCHA bypass mechanisms.19

    • Human Fraud Farms: In some cases, CAPTCHA solving is outsourced to individuals in low-wage countries who manually solve them in real-time for bots.

    The development and availability of these CAPTCHA-bypassing tools and services point to a specialized sub-market within the cybercrime economy. This commoditization lowers the technical barrier for creating effective scam bots, making advanced evasion techniques accessible even to less skilled actors. Consequently, platforms like Discord cannot depend solely on CAPTCHAs and must invest in more complex, multi-layered detection strategies, such as behavioral biometrics and advanced risk scoring, to identify automated activity. The fight is not just against individual bot creators but also against an ecosystem that supplies them with these evasion tools, demanding continuous innovation in detection technologies.
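The multi-layered risk scoring mentioned above can be pictured as combining several weak signals into one decision. The signal names, weights, and threshold below are invented for illustration; real anti-bot systems use far richer features and learned models rather than a fixed weighted sum.

```python
# Illustrative only: weights and threshold are invented for demonstration
# and are not Discord's actual model.
SIGNAL_WEIGHTS = {
    "new_account": 0.3,         # account younger than some cutoff
    "proxy_ip": 0.25,           # IP belongs to a known proxy/VPN range
    "high_message_rate": 0.25,  # messages per minute well above baseline
    "duplicate_content": 0.2,   # near-identical messages across channels
}

def risk_score(signals: dict) -> float:
    """Combine boolean signals into a score in [0, 1]."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def verdict(signals: dict, threshold: float = 0.5) -> str:
    """Challenge (e.g., CAPTCHA or quarantine) when the score crosses the bar."""
    return "challenge" if risk_score(signals) >= threshold else "allow"
```

A single weak signal (say, a new account) stays below the threshold, but a new account posting from a proxy IP crosses it, which is the "multiple weak signals beat one strong rule" idea behind layered detection.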

  2. Use of Proxies and VPNs:

    Discord actively tracks IP addresses to identify and block malicious activity.20 To counter this, scam bots extensively use proxies and VPNs:

    • IP Masking and Rotation: Bots route their traffic through proxy servers, masking their true IP addresses. They often employ rotating proxies that frequently change IP addresses, making it difficult for Discord to implement effective IP-based bans or rate limits.19 The AkiraBot, for example, utilizes the SmartProxy service for this purpose.19

    • Managing Multiple Accounts: When deploying a large number of bots, each bot can be assigned a unique IP address through a proxy. This prevents actions from one bot account from impacting others and helps avoid detection based on an unusual volume of activity from a single IP.20 Dedicated IPv6 proxies are often preferred for managing bot fleets due to the large pool of available addresses.20

    • Bypassing Geo-Restrictions and Bans: If a specific IP or region is blocked by Discord or a particular server, proxies allow bots to appear as if they are connecting from an unrestricted location, thereby bypassing these limitations.20

  3. Acquisition and Use of Aged/Verified Accounts:

    Newly created Discord accounts often face stricter scrutiny and limitations. To bypass this, scammers acquire and use "aged" accounts (those created some time ago) and/or accounts that have undergone some form of verification (e.g., email or phone verified).22

    • These accounts are perceived as more legitimate by Discord's anti-spam systems and are less likely to be immediately flagged or restricted.9

    • An underground market exists where such accounts are sold. For example, platforms like Xyliase Shop offer "Fully Verified Discord Token Accounts" (email and phone verified, long-aged) for as little as $0.12, "Email Verified Discord Token Accounts" for $0.06, and "Unclaimed Discord Accounts" for $0.30.23

    • The purported benefits for buyers include avoiding restrictions, enabling smoother messaging, and gaining easier access to multiple servers.22

    The existence of this illicit marketplace for aged and verified accounts demonstrates that scammers understand how platforms like Discord might assess account trustworthiness. Platforms likely use account age, verification status, and activity history as signals in their anti-spam and anti-bot systems, as implied by server verification level settings that can require accounts to be registered for a certain duration.9 Scammers recognize that new accounts used for spam are more easily detected. Therefore, acquiring accounts that have already passed these initial "probationary" periods or possess verification markers becomes a key evasion tactic. This fuels a demand for account farming, account theft, or direct purchasing from these underground marketplaces. Consequently, Discord's trust and safety mechanisms must evolve to detect suspicious behavior even from accounts that appear legitimate based on these static properties. This necessitates more sophisticated behavioral analytics and efforts to disrupt the illicit account market itself.
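Account age is one such signal that can be read directly from an account's ID: Discord IDs are "snowflakes" that embed a creation timestamp as milliseconds since the Discord epoch (the first millisecond of 2015), as documented in Discord's developer documentation. A minimal decoding sketch:

```python
from datetime import datetime, timezone

DISCORD_EPOCH_MS = 1_420_070_400_000  # 2015-01-01T00:00:00Z, per Discord's docs

def snowflake_created_at(snowflake: int) -> datetime:
    """Decode the creation time embedded in a Discord snowflake ID.

    Bits 22 and above of a snowflake hold milliseconds since the Discord epoch.
    """
    ms = (snowflake >> 22) + DISCORD_EPOCH_MS
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def account_age_days(snowflake: int, now: datetime) -> float:
    return (now - snowflake_created_at(snowflake)).total_seconds() / 86400

# The example ID from Discord's API docs decodes to 2016-04-30:
created = snowflake_created_at(175928847299117063)
```

Server verification levels that enforce a minimum account age can rely on exactly this embedded timestamp, which is also why purchased aged accounts defeat that particular check: the timestamp is genuinely old.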

  4. Simulating Human-like Interaction Patterns:

    To evade detection systems designed to flag robotic or unnatural behavior, bot creators increasingly focus on making their bots interact in ways that mimic human users:

    • AI-Generated Text: Large Language Models (LLMs) like those from OpenAI are used to generate unique, contextually relevant, and varied messages. This helps bypass spam filters that rely on detecting repetitive or identical text strings.19 AkiraBot is a notable example of a bot framework using OpenAI for this purpose.19

    • Scripted Interactions with Variations: Bots can simulate conversations by using pre-scripted Q&A exchanges between multiple accounts, with AI-generated variations in responses to make the chat appear more natural and less predictable.24

    • Randomized Behavior and Timing: Introducing elements of randomness in message content, posting times, and interaction patterns helps to break predictable bot-like sequences.25 This includes implementing cooldowns and specific timing logic between actions to mimic human pacing rather than machine-speed execution.24

    • Browser Automation: Frameworks such as Selenium WebDriver, Puppeteer, or Playwright are used to automate web browsers, allowing bots to simulate human navigation patterns, mouse movements, and keystrokes when interacting with Discord's web interface or related phishing sites.19 AkiraBot, for instance, uses Selenium WebDriver to intercept website loading processes and refresh tokens.19
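A defensive counterpart to this human-mimicry arms race is timing analysis: naive bots post at near-constant intervals, while human inter-message gaps vary widely. The heuristic below flags a low coefficient of variation in message gaps; the threshold is illustrative, and bots that randomize their timing as described above would evade it.

```python
from statistics import mean, stdev

def timing_suspicion(timestamps: list, cv_threshold: float = 0.1) -> bool:
    """Flag suspiciously regular posting cadence.

    Heuristic sketch only: computes the coefficient of variation (stdev/mean)
    of the gaps between consecutive message timestamps (in seconds) and flags
    near-metronomic senders. The threshold is invented for illustration.
    """
    if len(timestamps) < 5:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    if m <= 0:
        return True  # burst of simultaneous messages
    return stdev(gaps) / m < cv_threshold

bot_like = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]       # one message every 2 s, exactly
human_like = [0.0, 3.1, 4.0, 11.5, 12.2, 30.0]   # irregular gaps
```

In practice such a check would be one feature among many, since the cooldown-and-jitter tactics described above are designed precisely to push the variance back into the human range.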

B. Exploiting Platform Mechanics and Vulnerabilities

Beyond purely technical bypasses, scam bots also exploit specific features, inherent trust models, and sometimes vulnerabilities within the Discord platform itself.

  1. API Vulnerabilities and Bot Compromises:

    Vulnerabilities in Discord's API, even if reportedly addressed, can provide attackers with valuable intelligence. Josh Fraser, founder of Origin Protocol, highlighted an alleged Discord API leak that purportedly exposed private channel data including names, descriptions, member lists, and activity data.26 Such information could be invaluable for targeting specific communities or users.

    Furthermore, legitimate and widely used third-party bots can become attack vectors if compromised. The MEE6 bot, popular for server moderation, was reportedly compromised across several high-profile NFT servers, allowing attackers to post malicious links through a trusted channel.26 Similarly, the Ledger hardware wallet company's Discord server was breached when an attacker gained control of a moderator's account and used it to deploy a bot for phishing purposes, instructing users to verify their seed phrases on a fake site.27 These incidents underscore the risk posed by compromised privileged accounts or trusted third-party integrations.

  2. Webhook Abuse:

    Webhooks are a legitimate Discord feature allowing external applications to send messages into channels. However, attackers can abuse them. Malicious actors can create webhook URLs to send spam messages directly into servers or, more insidiously, use webhooks as a C2 (Command and Control) channel to exfiltrate stolen data from compromised user devices by syncing the malware with the webhook to send data back to an attacker-controlled Discord channel.12
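Because an exfiltration webhook must embed a URL with a documented, recognizable shape (https://discord.com/api/webhooks/&lt;id&gt;/&lt;token&gt;), defenders and data-loss-prevention tools can scan malware samples, configuration dumps, or outbound payloads for them. A minimal sketch:

```python
import re

# Discord webhook URLs follow a documented shape:
#   https://discord.com/api/webhooks/<numeric id>/<token>
# (discordapp.com is the legacy domain and still appears in the wild;
#  canary/ptb are Discord's test-build subdomains).
WEBHOOK_RE = re.compile(
    r"https://(?:canary\.|ptb\.)?discord(?:app)?\.com/api/webhooks/\d+/[\w-]+"
)

def find_webhook_urls(text: str) -> list:
    """Scan text for embedded Discord webhook URLs that could serve as an
    exfiltration or C2 channel."""
    return WEBHOOK_RE.findall(text)

sample = "post('https://discord.com/api/webhooks/1234567890/AbC-def_123', data)"
```

A match in an unexpected binary or outbound request is not proof of malice (webhooks are a legitimate feature), but it is a cheap, high-signal indicator worth alerting on.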

  3. Client-Side Malware and File Corruption:

    Some malware is specifically designed to target the Discord desktop client. By modifying core client files, this malware can steal user data, authentication tokens, or take control of the account.5 An example is the "Spidey Bot" malware, which corrupts JavaScript files within Discord's application modules, such as index.js in discord_modules and discord_desktop_core. A tell-tale sign of this specific infection is these files containing more than one line of code.12 Such attacks can be difficult for traditional antivirus software to detect if the malware gains the necessary permissions to modify application files, often tricking the user into granting them.
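The "more than one line of code" tell described above lends itself to a simple integrity check. The sketch below scans a Discord modules directory for index.js files with more than one non-blank line; the directory location varies by OS and client version, so the example path is illustrative only.

```python
from pathlib import Path

def suspicious_module_files(modules_dir: Path) -> list:
    """Flag index.js files in Discord's app modules containing more than one
    non-blank line -- the Spidey Bot tampering sign described above.

    Sketch only: the modules path differs per OS and Discord version, and a
    clean client update can legitimately change these files.
    """
    flagged = []
    for index_js in modules_dir.glob("*/index.js"):
        with open(index_js, encoding="utf-8", errors="replace") as f:
            lines = [ln for ln in f if ln.strip()]
        if len(lines) > 1:
            flagged.append(index_js)
    return flagged

# e.g. on Windows (illustrative path):
# suspicious_module_files(Path.home() / "AppData/Roaming/Discord/0.0.309/modules")
```

This is a last-line check, not prevention: by the time index.js has extra lines, the malware has already run once with file-write permissions.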

  4. Hijacking Expired Vanity URLs:

    Discord servers can use custom "vanity" URLs (e.g., discord.gg/YourServerName) if they achieve a certain boost level. If a server loses its boost status, this vanity URL can expire and become available for others to claim.6 Attackers actively monitor for valuable or high-traffic vanity URLs to expire. The moment such a link becomes free, they quickly register it for their own malicious server. Users who click on old, bookmarked, or publicly shared links to the original server are then unknowingly redirected to the scammer's server, where they can be targeted with phishing attempts or malware. This tactic was notably used in the "Inferno Drainer" phishing campaign.6

  5. Domain Rotation for Phishing Sites:

    To maintain the longevity of phishing campaigns and evade detection by security tools and blocklists, attackers frequently rotate the domain names used for their malicious websites.6 Even if one phishing site is identified and shut down, the attackers will have already prepared or deployed new domains to continue their operations. The Inferno Drainer campaign, for example, proactively rotated its phishing domains every few days.6

  6. Exploiting Social Trust within Servers:

    Bots may join servers and initially remain inactive or engage in seemingly harmless behavior to build a facade of legitimacy. This period of dormancy can help them evade immediate detection by moderators or automated systems that look for overtly spammy behavior upon joining. Once a degree of perceived normalcy is established, or during opportune moments, the bot can then unleash its spam or scam messages. Additionally, bots can automate participation in "Join 4 Join" schemes, where users agree to join each other's servers.7 While sometimes viewed as a growth tactic, Discord flags this behavior as potentially spammy due to the high volume of messages and server joins it can generate, which strains platform resources and can lead to account actions.7

  7. Sophisticated Bot Frameworks and Evasion Ecosystem:

    The methods used by scammers are not always isolated tricks but are often part of a broader, more organized approach. Sophisticated bot frameworks like AkiraBot exemplify this, integrating multiple evasion techniques into a single package.19 AkiraBot combines AI-powered message generation (using OpenAI to create unique spam messages tailored to target websites), multiple CAPTCHA bypass mechanisms, and the use of proxy services (like SmartProxy) for network evasion.19 This indicates a professionalization of scam operations.

    This points to an ecosystem of evasion where various tools and services support scam campaigns. This includes marketplaces selling aged or verified Discord accounts 22, proxy service providers 20, and potentially CAPTCHA-solving services.18 Incidents like the MEE6 bot compromise 26 or the hijacking of vanity URLs 6 demonstrate that attackers possess a keen understanding of Discord's specific ecosystem and how to exploit its features or trusted components.

    This systemic nature means Discord is contending not just with individual malicious actors, but with an organized underground economy that supplies the tools and resources for large-scale evasion. Defensive strategies must therefore also aim to disrupt this illicit supply chain, for instance, by working to identify and shut down marketplaces for compromised accounts or scamming tools.

The following table provides a comparative overview of Discord's defenses and the corresponding evasion tactics employed by scammers:

Table 2: Discord's Defense Mechanisms vs. Scammer Evasion Tactics – A Comparative Overview

| Discord's Defense Mechanism | Corresponding Scammer Evasion Tactic | Notes on Effectiveness/Challenge |
|---|---|---|
| IP-based Rate Limiting/Bans | Proxy networks (e.g., SmartProxy) / IP rotation 19 | Proxies make IP bans temporary and less effective against persistent actors. |
| CAPTCHA Challenges | CAPTCHA-solving services / AI bypasses (OCR, ML, session replay) 18 | A developed market exists for CAPTCHA bypass, challenging this defense. |
| Account Age/Verification Checks (Server-side) | Purchased aged/verified accounts 22 | Aged/verified accounts can bypass initial scrutiny filters for new accounts. |
| Spam Content Filters (Text-based) | AI-generated polymorphic messages (e.g., using OpenAI) 19, Zalgo text 29 | AI-generated, unique messages challenge signature-based spam detection. |
| Malware Scanning on Uploads (Discord CDN) | Hosting malware on external sites and linking; file obfuscation | External hosting shifts the detection burden; obfuscation can hide malicious payloads. |
| User Reporting System | Mass account creation for false positives; ignoring/evading reports; social engineering to discredit reporters | The report system can be overwhelmed or manipulated, though Discord acts on verified reports. |
| Vanity URL System | Hijacking expired vanity URLs 6 | Exploits a specific lifecycle feature of vanity URLs if server boost status lapses. |
| Official Bot Verification (Checkmark) | Impersonating verified bots; compromising legitimate verified bots (e.g., MEE6) 26 | Leverages user trust in verified status; compromised legitimate bots are potent attack vectors. |
| DM Spam Filters 7 | Human-like conversation simulation; AI-generated messages designed to appear non-spammy 19 | Sophisticated bots aim to craft messages that bypass content-based spam heuristics. |

This constant interplay necessitates continuous adaptation and innovation from both sides.

V. The Wider Defense Ecosystem: Third-Party Tools and Community Vigilance

While Discord implements platform-level defenses, a significant part of the anti-scam effort relies on third-party tools utilized by server administrators and the collective vigilance of the user community.

A. Role and Functionality of Moderation Bots

Many server administrators depend on third-party moderation bots to manage their communities effectively and combat spam, raids, and other disruptive behaviors.1 These bots offer a range of automated functionalities:

  • RaidProtect: This bot specializes in anti-spam and anti-raid capabilities. It distinguishes between "heavy spam" (messages containing invitation links, mass mentions, or numerous images, often used in raids) and "light spam" (frequently sent but less intrusive messages). RaidProtect can automatically kick or ban spammers and sends notifications to a designated log channel with details of detected actions.30 It offers three configurable security levels (High, Medium, Low) and allows administrators to specify ignored channels, roles, or individual users, providing flexibility in its application.30

  • MEE6: A popular multi-purpose bot, MEE6 includes a suite of auto-moderator tools for spam prevention. These tools can filter messages based on bad words, repeated text, excessive capitalization, overuse of emojis, and Zalgo text (chaotic, stacked text characters).29 Depending on the configuration, MEE6 can delete offending messages and issue warnings to users.1

  • Dyno: Another widely used moderation bot, Dyno provides similar functionalities to MEE6, including chat monitoring, spam deletion, and user sanctioning capabilities.1

  • General Moderation Bot Capabilities: Beyond specific brands, moderation bots commonly offer features like monitoring server chat for rule violations, automatically deleting spam messages, issuing warnings, muting, kicking, or banning users based on predefined rules or moderator commands. Many also support "leveling systems," where users gain experience points and levels through server engagement. Higher levels can unlock additional permissions, which serves as a mechanism to restrict the capabilities of new, potentially untrustworthy members and deter "drive-by" trolls or spammers.9 Some bots, like GearBot, also offer comprehensive logging of moderation actions, such as deleted comments and warnings, which is crucial for accountability and effective moderation.9
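The heavy/light distinction RaidProtect draws can be approximated with a few pattern checks over message content. The thresholds below are invented for illustration and are not RaidProtect's actual rules:

```python
import re

INVITE_RE = re.compile(r"discord\.gg/\w+")   # server invitation links
MENTION_RE = re.compile(r"<@!?\d+>")          # Discord user-mention markup

def classify_message(content: str, attachment_count: int = 0) -> str:
    """Rough sketch of a 'heavy' vs 'light' spam classification.

    Heavy: invite links, mass mentions, or many attached images (raid-style).
    Light: everything else, left to frequency-based checks elsewhere.
    Thresholds here are illustrative, not RaidProtect's actual configuration.
    """
    if INVITE_RE.search(content):
        return "heavy"
    if len(MENTION_RE.findall(content)) >= 5:
        return "heavy"
    if attachment_count >= 4:
        return "heavy"
    return "light"
```

A moderation bot would feed "heavy" hits straight to kick/ban logic and route "light" messages into a per-user rate counter, mirroring the two-tier response described above.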

While these third-party bots are indispensable for managing many communities, particularly large ones where manual moderation is unfeasible, they also introduce an additional layer of complexity and potential vulnerability. These bots often require extensive permissions within a server to perform their functions effectively; for instance, to delete messages, kick users, or manage roles.9 If a popular and widely trusted bot is compromised, as was reportedly the case with MEE6 in some instances 26, it can be turned into a powerful tool for attackers. Such a compromised bot can leverage its trusted status and broad permissions to disseminate malicious links or execute harmful actions across all servers it's a part of. The security of the bot itself—its underlying code, hosting environment, and API key management—becomes paramount. This implies that server administrators must exercise due diligence when selecting and configuring bots, granting only the permissions essential for their operation and keeping them updated. From Discord's perspective, the overall security of its platform is intrinsically linked to the security posture of these widely adopted third-party applications. This might suggest a need for more robust verification processes or security auditing for popular bots operating within the Discord ecosystem.

B. Specialized Anti-Scam Services

Beyond general moderation bots, specialized services are emerging to help users identify and avoid scams:

  • Bitdefender Scamio: This is a free, on-demand scam detection tool available as a Discord bot. It employs AI to analyze various forms of content submitted by users, including text messages, links, images/screenshots, QR codes, and even PDF files, to check for malicious intent.3 Scamio is designed to detect a range of threats such as phishing attempts, impersonation scams, and fraudulent giveaways. It provides users with real-time feedback and tips on how to identify scams and avoid risky online behavior. The service also maintains a history of interactions, allowing it to provide more relevant responses over time and enabling users to revisit previously analyzed scams and the associated recommendations.3 This tool empowers individual users to proactively vet suspicious content before they engage with it, adding another layer of defense.

C. The Importance of User Education and Community-Driven Reporting

A cornerstone of Discord's safety strategy is empowering users through education and relying on community vigilance:

  • User Vigilance: Discord's official safety advice consistently emphasizes the need for users to be cautious: "Be wary of suspicious links and files," "DON'T click on links that look suspicious," and "think before you click" are common refrains.7 Users are advised to scan unfamiliar links using external site checkers like Sucuri or VirusTotal.7

  • Administrator-Led Education: Server administrators are encouraged to educate their members about online safety.1 This can involve creating dedicated channels for security announcements, regularly sharing tips on how to spot common scams (like fake Nitro offers or impersonation attempts), and instructing members on the proper procedures for reporting suspicious activity within the server and to Discord's Trust & Safety team.1

  • Community Forums and Shared Knowledge: External community forums, such as the r/Scams subreddit, and discussions on Discord's own support pages serve as valuable platforms for users to share their experiences with scams, warn others about new or evolving tactics, and seek advice.10 This collective intelligence helps to quickly disseminate information about emerging threats.

  • In-App Reporting: Discord provides built-in mechanisms for users to report malicious content, accounts, and servers directly to its Trust & Safety team.4 The effectiveness of these systems depends on users actively identifying and reporting violations.

Despite the presence of advanced technical defenses and automated systems, a significant responsibility for scam detection and reporting ultimately rests with individual users and community moderators. Automated systems, while powerful, cannot catch every nuanced or novel scam, particularly those that rely heavily on social engineering rather than easily identifiable malicious code or links.1 Discord's active encouragement of user reporting underscores this reality.4 However, user awareness and vigilance levels vary considerably. While some users are adept at identifying suspicious behavior 10, others, especially those newer to the platform or less technically savvy, may be more susceptible to cleverly crafted social engineering tactics.10 Scammers deliberately try to bypass this "human firewall" by impersonating trusted entities (friends, server staff, official Discord accounts) or by creating a false sense of urgency to pressure victims into acting without careful consideration.1 Furthermore, in large or under-moderated communities, moderator burnout or insufficient resources can lead to delays in addressing reported issues, creating windows of opportunity for scammers.32

This highlights the paramount importance of continuous and effective user education. Platform providers like Discord, along with community leaders, must continually seek engaging and impactful ways to keep users informed about the evolving threat landscape. Nevertheless, an over-reliance on user vigilance can inadvertently lead to victim-blaming when scams succeed, as noted in an account where a victim blamed herself for falling for a scam.10 Therefore, while user education is critical, the development of technical solutions should also aim to reduce the cognitive load on users for identifying scams, making it easier for them to stay safe.

VI. The Scale of the Challenge: Statistical Insights and Ongoing Battles

The fight against scam bots on Discord is a continuous and large-scale operation, as reflected in the platform's enforcement data and the persistent evolution of malicious tactics.

A. Analysis of Discord's Enforcement Data (Q1 2023 Transparency Report)

Discord's Q1 2023 Transparency Report offers a glimpse into the volume of actions taken against malicious activities.8 Key figures include:

  • Spam Accounts: A staggering 10,693,885 accounts were disabled for spam-related offenses. Notably, this represented a 71% decrease compared to the 36,825,143 spam accounts disabled in Q4 2022. Discord attributes this significant reduction to "less spam on Discord and improvements in our systems for detecting spam accounts upon registration, as well as quarantining suspected spam accounts without fully disabling them, allowing users to regain access to compromised accounts." An impressive 99% of these spam accounts were proactively disabled before any user report was received. For platform manipulation issues not directly classified as spam, an additional 3,122 accounts and 1,277 servers were removed.

  • Deceptive Practices: This category, which encompasses malware distribution, sharing or selling game hacks, authentication token theft, and participation in identity, investment, or financial scams, saw 6,635 accounts disabled (a 29% increase from the previous quarter) and 3,452 servers removed (a 43% increase).

  • Overall Policy Violations (Excluding Spam): 173,745 accounts were disabled for non-spam policy violations, a 13% increase from the prior quarter. 34,659 servers were removed, a 4% increase, with 70% of these server removals being proactive.

  • Warnings Issued: Discord issued 17,931 warnings to individual accounts (a 41% increase), and a further 1,556,516 accounts were warned as server members (a 27% increase).

  • User Reports: The platform received 6,023,898 reports for spam concerning 3,090,654 unique accounts. For non-spam issues, 117,042 user reports were received, of which 18,200 (15.5%) identified Community Guidelines violations that led to enforcement action.

  • Appeals: Of the accounts disabled, 22% submitted appeals. From these, only 778 accounts (representing 2% of those who appealed) were reinstated.

  • Comparitech separately reported that Discord removed 70,000 accounts for various scam-related activities over the course of 2023.1
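
The headline percentages can be sanity-checked directly against the raw counts cited above:

```python
# Spam accounts disabled: Q4 2022 vs Q1 2023 (figures from Discord's Q1 2023 report).
q4_2022, q1_2023 = 36_825_143, 10_693_885
decrease = (q4_2022 - q1_2023) / q4_2022
print(f"Spam-account decrease: {decrease:.0%}")  # ~71%, matching the report

# Non-spam user reports that identified violations leading to enforcement action.
actioned, reports = 18_200, 117_042
print(f"Actionable report rate: {actioned / reports:.1%}")  # ~15.5%
```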

The 71% decrease in disabled spam accounts reported by Discord for Q1 2023 is a prominent statistic. While on the surface it might suggest a dramatic reduction in spam activity, Discord's own explanation points to a more nuanced situation. It suggests improvements in proactive detection at the point of registration and a shift in handling, such as "quarantining" suspected spam accounts rather than immediate, permanent disablement. This could mean that the attempted volume of spam might not have decreased as drastically, but rather that Discord's efficiency in identifying and neutralizing certain types of spam (perhaps less sophisticated bots using newly created accounts) has significantly improved. The simultaneous increase in disabled accounts for "Deceptive Practices" (up by 29%) could indicate that more complex scams, which go beyond simple unsolicited messages, are either becoming more prevalent or are being detected more effectively by the platform. The very high proactive disablement rate for spam (99%) is a strong positive indicator of the effectiveness of Discord's automated detection systems. The low success rate of appeals (2%) also suggests a high degree of confidence within Discord regarding the accuracy of its initial disablement decisions. These statistics collectively paint a picture of a shifting battleground where Discord is adapting its strategies, leading to successes against some forms of malicious activity while other, potentially more sophisticated, threats continue to pose a challenge.

B. The "Cat-and-Mouse" Nature of Platform Security

The interaction between Discord's defenses and scammers' evasion tactics is a classic example of a "cat-and-mouse" game. As the platform enhances its security measures, malicious actors adapt and develop new methods to circumvent them:

  • The continuous evolution of scammer tactics, such as the use of AI-generated messages to bypass spam filters 19, the hijacking of expired server vanity URLs to redirect unsuspecting users 6, and sophisticated social engineering schemes, necessitates ongoing adaptation and investment in new detection technologies by Discord and third-party tool developers.

  • When Discord improves its detection capabilities for newly created accounts (e.g., through stricter initial checks or faster flagging), scammers respond by shifting their focus to acquiring and using aged or verified accounts, which may appear more legitimate to automated systems.22

  • The development of advanced bot frameworks like AkiraBot, which features modular evasion techniques including multiple CAPTCHA bypass methods and integrated proxy services 19, demonstrates a trend towards the professionalization and increased sophistication of scam operations.

  • High-profile incidents, such as the Ledger Discord server hack (where a compromised moderator account was used to deploy a phishing bot) 27 and the Inferno Drainer campaign (which employed fake Collab.Land verification bots and hijacked vanity URLs) 6, highlight how quickly potent and damaging attacks can emerge, often exploiting a combination of technical vulnerabilities and social engineering.

C. Challenges in Eradicating Scam Bots

Completely eradicating scam bots from a platform as large and open as Discord presents numerous significant challenges:

  • Scale and Speed: The sheer volume of users, servers, and messages processed by Discord daily makes comprehensive real-time monitoring for malicious activity an immense task. Bots can be created, deployed, and scaled up rapidly, often outpacing manual review processes.

  • Anonymity and Evasion: The widespread availability and use of proxies, VPNs, and services selling temporary phone numbers or email addresses allow bot creators to operate with a degree of anonymity. The ease with which new accounts can be created (or illicitly purchased) makes it difficult to permanently ban malicious actors, as they can often quickly return with new identities.10 As one user on Reddit lamented, even if an account is banned, "they'll make 1,000 more accounts tomorrow".10

  • Sophistication of Attacks: Modern scam bots are increasingly sophisticated. They leverage AI for generating human-like text, employ advanced CAPTCHA-solving techniques, and exploit complex platform features or vulnerabilities. Detecting these advanced bots requires equally advanced and adaptive detection methods that go beyond simple signature-based or rule-based systems.6

  • The Human Element: Social engineering remains a highly effective attack vector that can bypass purely technical defenses. Scammers are adept at manipulating human psychology, creating urgency, exploiting trust, or preying on desires to trick users into divulging information or clicking malicious links.4

  • Cross-Platform Nature of Scams: Many scams are not confined solely to Discord. They may originate from, or have components on, other platforms or services. For example, phishing emails might direct users to fake Discord login pages to steal Nitro subscription credentials 11, or malicious links shared on Discord might lead to harmful websites hosted elsewhere. This makes holistic detection and prevention more complex.

  • Resource Imbalance: Individual scammers or small, agile groups can cause disproportionate disruption and harm on a large platform. The resources required by the platform to detect, investigate, and mitigate these threats can be substantial compared to the relatively low cost for attackers to launch campaigns.

These challenges underscore that combating scam bots is not a one-time fix but an ongoing commitment requiring continuous investment, innovation, and adaptation.

VII. Fortifying the Gates: Recommendations for Discord, Administrators, and Users

Addressing the pervasive threat of scam bots on Discord requires a concerted effort from the platform itself, server administrators who manage communities, and individual users. A multi-layered approach focusing on technological enhancements, diligent administration, and user empowerment is crucial.

A. For Discord (Platform Enhancements)

  1. Advanced Bot Detection: Continue to invest heavily in and deploy sophisticated AI and machine learning systems for behavioral analysis. These systems should aim to identify bots that mimic human interaction patterns or utilize aged/verified accounts, moving beyond reliance on simpler signatures or IP-based heuristics.

  2. Ecosystem Security Initiatives: Develop programs to vet, certify, or provide security guidelines for widely-used third-party bots. Offering more secure development frameworks or APIs for bot creators could also mitigate risks associated with bot compromises. Address reported API vulnerabilities, such as the one mentioned by Josh Fraser concerning private channel data 26, with greater transparency and robust solutions.

  3. Improved Verification Tiers and Trust Scoring: Explore more dynamic or risk-based account verification requirements for access to sensitive actions or larger communities. This could involve incorporating behavioral biometrics or more nuanced trust scores that evolve based on account activity, rather than static verification markers alone.18

  4. Proactive Disruption of Illicit Markets: Actively collaborate with cybersecurity firms, researchers, and law enforcement agencies to identify and disrupt online marketplaces and services that facilitate scam operations. This includes those selling compromised or purpose-created aged Discord accounts, scamming tools, and CAPTCHA-solving services.

  5. Enhanced Transparency on Evolving Threats: Supplement quarterly transparency reports with more frequent and detailed advisories for users and developers regarding newly identified scam tactics, exploited vulnerabilities, and emerging threats. This allows the community to adapt more quickly.

  6. Streamlined and Contextual Reporting for Complex Scams: Enhance the user reporting system to better capture the nuances of multi-stage social engineering scams or coordinated malicious activities that may not be evident from a single message or user profile. Allow for more context to be provided with reports.

B. For Server Administrators

  1. Strict Permission Management: Adhere to the principle of least privilege. Grant bots and human moderators only the minimum permissions necessary for their roles.9 Specifically, avoid granting administrator rights to most bots. Regularly audit permissions for all roles and bots.

  2. Implement Robust Server Verification Levels: For public-facing servers, set the verification level to "High" (members must also have been a member of the server for longer than 10 minutes before participating) or "Highest" (members must have a verified phone number on their Discord account) to create a barrier against throwaway bot accounts.9

  3. Utilize and Configure Quality Moderation Bots: Employ well-maintained and reputable moderation bots that offer strong anti-spam, anti-raid, and auto-moderation features (e.g., RaidProtect 30, MEE6 29, or specialized anti-raid bots like Beemo 9). Carefully configure these bots according to the server's specific needs and risk profile.

  4. Consistent Community Education: Regularly inform server members about common Discord scams, how to identify red flags (e.g., suspicious DMs, fake giveaways, impersonation attempts), and the correct procedures for securely reporting issues to server staff and Discord Trust & Safety.1 Consider creating dedicated channels for security announcements and tips.

  5. Vetted and Secure Moderation Team: Carefully vet all individuals before granting them moderation privileges. Ensure that all moderators enable Two-Factor Authentication (2FA) on their own Discord accounts to prevent compromise.9 Provide clear guidelines and training for moderators.

  6. Enable Comprehensive Channel Logging: Use moderation bots (e.g., GearBot 9) or server settings to maintain logs of important events, such as deleted messages, user warnings, kicks, and bans. These logs are invaluable for tracking issues, understanding incidents, and ensuring moderator accountability.

  7. Secure Vanity URLs: If the server utilizes a custom vanity URL, administrators should be mindful of maintaining the server's boost status to prevent the URL from expiring and potentially being hijacked by malicious actors.6

  8. Isolate New Members and Implement Leveling Systems: Utilize welcome channels or role-based leveling systems that restrict the permissions of new members until they have demonstrated genuine engagement over time. This can "fizzle out" auto-spam bots that join and immediately attempt to post malicious content, as their messages may be confined to restricted channels or their ability to send links/embeds may be limited.9
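
The least-privilege advice in point 1 can be made concrete. Discord expresses a bot's permissions as a bitfield, and an OAuth2 invite URL can request only the bits a moderation bot actually needs. The sketch below uses bit values as I understand them from Discord's permissions documentation; verify against the current API reference before relying on them:

```python
# Permission bit flags, per Discord's documented permission bitfield
# (values are assumptions from memory of the API docs -- double-check them).
KICK_MEMBERS    = 1 << 1
BAN_MEMBERS     = 1 << 2
ADMINISTRATOR   = 1 << 3   # deliberately NOT granted
VIEW_CHANNEL    = 1 << 10
SEND_MESSAGES   = 1 << 11
MANAGE_MESSAGES = 1 << 13  # needed to delete spam

# A moderation bot needs to read, respond, delete spam, and remove offenders -- nothing more.
MOD_BOT_PERMS = (VIEW_CHANNEL | SEND_MESSAGES | MANAGE_MESSAGES
                 | KICK_MEMBERS | BAN_MEMBERS)

def invite_url(client_id: str, permissions: int) -> str:
    """Build an OAuth2 bot-invite URL requesting only the given permissions."""
    return ("https://discord.com/api/oauth2/authorize"
            f"?client_id={client_id}&scope=bot&permissions={permissions}")

# Audit check: the Administrator bit is never requested.
assert MOD_BOT_PERMS & ADMINISTRATOR == 0
```

Inviting bots with a hand-assembled minimal bitmask like this, rather than accepting the broad defaults many bots request, limits the blast radius if the bot is ever compromised.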

C. For Users

  1. Enable Two-Factor Authentication (2FA): This is one of the most critical steps any user can take to protect their account. Even if a scammer obtains a user's password, 2FA prevents unauthorized login without access to the second factor (e.g., an authenticator app code or a physical security key).1

  2. Maintain Skepticism Towards Unsolicited Communications: Do not click on suspicious links, download unfamiliar files, or scan unknown QR codes, especially if they are received via unsolicited DMs or from users not personally known and trusted.1 If an offer seems too good to be true, it almost certainly is. Use link scanning services like Sucuri or VirusTotal for unfamiliar URLs.7

  3. Verify Identities and Official Communications: If a message purporting to be from a friend seems out of character or suspicious, contact that friend through an alternative communication channel (e.g., text message, another social platform) to verify its legitimacy.1 Be aware of how to recognize official Discord system messages: they will have a "SYSTEM" badge, often special text at the beginning of the DM, and the reply input will be blocked by a banner.4 Remember, Discord staff will never ask for passwords or account tokens.4

  4. Protect Personal and Financial Information: Never share sensitive information such as passwords, Discord account tokens, credit card details, or cryptocurrency wallet seed phrases/private keys with anyone on Discord, regardless of who they claim to be.2

  5. Adjust Privacy and Safety Settings: Take advantage of Discord's built-in privacy controls. Configure DM settings to filter messages from non-friends or even filter all DMs if desired ("Safe Direct Messaging").1 Adjust friend request settings to limit who can send requests (e.g., to "Friends of Friends" or "Server Members" only, or disable all incoming requests if preferred).7

  6. Use Reputable Antivirus Software and Security Tools: Install and maintain updated antivirus software on all devices used to access Discord. This can help detect and block malware distributed through the platform.12 Consider using additional security tools like NordVPN's Threat Protection Pro (which can block malicious sites and trackers) 12 or Bitdefender Scamio for on-demand scam checking within Discord.3

  7. Report Suspicious Activity Promptly: If a scam, malicious bot, or policy-violating behavior is encountered, report it immediately to Discord's Trust & Safety team using the in-app reporting features, and also inform the administrators/moderators of the server where the activity occurred.1 Provide as much detail as possible, including message links and user IDs.

  8. Regularly Prune Joined Servers: Periodically review the list of servers joined and leave any that are inactive, no longer relevant, or seem untrustworthy. This reduces the attack surface and potential exposure to threats originating from compromised or poorly moderated servers.15
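
The advice in point 2 to scan unfamiliar links with a service like VirusTotal can be scripted. As I understand VirusTotal's v3 API, a URL is identified by its unpadded URL-safe base64 encoding; the sketch below builds that lookup (the actual request requires a free API key, and the endpoint shape should be confirmed against VirusTotal's current documentation):

```python
import base64
import urllib.request

def vt_url_id(url: str) -> str:
    """VirusTotal v3 URL identifier: URL-safe base64 of the URL, padding stripped."""
    return base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")

def check_url(url: str, api_key: str) -> bytes:
    """Fetch the VirusTotal analysis report for a URL (requires a valid API key)."""
    req = urllib.request.Request(
        f"https://www.virustotal.com/api/v3/urls/{vt_url_id(url)}",
        headers={"x-apikey": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # JSON report; inspect data.attributes.last_analysis_stats
```

A lightweight habit is to run any link received in an unsolicited DM through a check like this (or the VirusTotal web form) before clicking it.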

By adopting these practices, all stakeholders can contribute to a safer and more secure Discord experience.

VIII. Conclusion: Navigating a Dynamic Threat Landscape

The challenge of combating scam bots on Discord is a complex and dynamic issue, deeply intertwined with the platform's open nature and vast user base. This analysis reveals an ongoing technological and strategic arms race. Discord deploys an array of defenses, including proactive spam filters, rate limiting, and user reporting systems, and its transparency reports indicate significant enforcement actions.7 However, scam bot creators are equally adaptive, employing sophisticated evasion tactics such as advanced CAPTCHA bypasses, proxy networks, the use of aged or illicitly acquired verified accounts, AI-generated polymorphic messaging, and the exploitation of platform mechanics like API vulnerabilities or webhook abuse.18

The fight is not merely against individual malicious actors but against an evolving ecosystem that includes developers of sophisticated bot frameworks like AkiraBot 19 and underground markets supplying tools and compromised assets. This elevates the complexity beyond simple spam filtering to tackling organized, professionalized fraudulent operations. The effectiveness of any single defense mechanism is often temporary, as adversaries quickly learn to circumvent it.

Therefore, maintaining a safer Discord environment necessitates a continuous, multi-layered, and collaborative approach. This involves persistent innovation in platform-level security by Discord, incorporating advanced behavioral analytics and AI to detect increasingly human-like bots and sophisticated social engineering campaigns. It also requires robust third-party moderation tools, though their own security and permission management are critical considerations.26 Diligent server administration, focusing on strict permissioning, member education, and the careful use of security bots, forms another vital layer of defense.9

Ultimately, user vigilance and education remain indispensable. While technological solutions aim to reduce the burden on users, an informed and cautious user base that can recognize common scam patterns, protect personal information, and utilize reporting mechanisms effectively acts as a crucial "human firewall".1 The statistics, while showing progress in areas like proactive spam detection 8, also underscore the persistent scale of deceptive practices, highlighting that the battle is far from over. The path forward lies in the synergistic combination of technological advancement, proactive and transparent policy enforcement, strong community governance, and an empowered, educated user community, all working in concert to mitigate the impact of scam bots and preserve the integrity of the digital spaces Discord provides.

Works cited


1. Discord Scams: How to Spot and Avoid Them - Comparitech, accessed May 23, 2025, https://www.comparitech.com/blog/information-security/discord-scams/
2. 90% Of Users Ignore These Discord Scams: Don't Be One Of Them! - VPN.com, accessed May 23, 2025, https://www.vpn.com/guide/discord-scams/
3. Stay Scam-Free With Bitdefender Scamio on Discord, accessed May 23, 2025, https://www.bitdefender.com/en-gb/blog/hotforsecurity/bitdefender-scamio-is-now-on-discord
4. Understanding and Avoiding Common Scams | Discord, accessed May 23, 2025, https://discord.com/safety/understanding-and-avoiding-common-scams
5. What Is Discord Malware? - Check Point Software, accessed May 23, 2025, https://www.checkpoint.com/cyber-hub/threat-prevention/what-is-malware/what-is-discord-malware/
6. Sophisticated Phishing Attack Abuses Discord & Attacked 30,000 ..., accessed May 23, 2025, https://cybersecuritynews.com/phishing-attack-abuses-discord/
7. Safety Library - Discord, accessed May 23, 2025, https://discord.com/safety-library
8. Discord Transparency Reports, accessed May 23, 2025, https://discord.com/safety-transparency-reports/2023-q1
9. The 10 Most Common Discord Security Risks and How to Avoid Them - Keywords Studios, accessed May 23, 2025, https://www.keywordsstudios.com/en/about-us/news-events/news/the-10-most-common-discord-security-risks-and-how-to-avoid-them/
10. Reporting a Discord scam - Reddit, accessed May 23, 2025, https://www.reddit.com/r/Scams/comments/1igbmlz/reporting_a_discord_scam/
11. Discord scams that can steal your data | NordVPN, accessed May 23, 2025, https://nordvpn.com/blog/discord-scams/
12. Discord malware: What is it and how can you remove it? - NordVPN, accessed May 23, 2025, https://nordvpn.com/blog/discord-malware/
13. Reporting a bot (scam?) - Discord Support, accessed May 23, 2025, https://support.discord.com/hc/en-us/community/posts/360055569852-Reporting-a-bot-scam?page=2
14. Are Discord Giveaway Bots Real Or Fake? (How To Boost Trust), accessed May 23, 2025, https://rafflepress.com/discord-giveaway-bots/
15. These 11 New Discord Scams Can (and Will) Steal Your Data - Aura, accessed May 23, 2025, https://www.aura.com/learn/discord-scams
16. We got another discord scam!!!! - Reddit, accessed May 23, 2025, https://www.reddit.com/r/Scams/comments/1jp5fw6/we_got_another_discord_scam/
17. Common Discord Scams? : r/SmallStreamers - Reddit, accessed May 23, 2025, https://www.reddit.com/r/SmallStreamers/comments/1j434ej/common_discord_scams/
18. How Bots Bypass Captcha and reCAPTCHA Security | Anura, accessed May 23, 2025, https://www.anura.io/blog/captcha-and-recaptcha-how-fraudsters-bypass-it
19. AkiraBot | AI-Powered Bot Bypasses CAPTCHAs, Spams Websites ..., accessed May 23, 2025, https://www.sentinelone.com/labs/akirabot-ai-powered-bot-bypasses-captchas-spams-websites-at-scale/
20. Discord Proxy - Scale Bots and Scrape Anonymously - RapidSeedbox, accessed May 23, 2025, https://www.rapidseedbox.com/discord-proxy
21. How To Use Proxies For Effective Bot Mitigation - Geonode, accessed May 23, 2025, https://geonode.com/blog/how-to-use-proxy-for-bot-mitigation
22. Top 4 Sites to Buy Verified Discord Accounts - Indiegogo, accessed May 23, 2025, https://www.indiegogo.com/projects/--3145594/coming_soon
23. Xyliase Shop: Your Ultimate Destination to Buy Discord Accounts, accessed May 23, 2025, https://avenacloud.com/blog/xyliase-shop-your-ultimate-destination-to-buy-discord-accounts-and-more/
24. Discord Chrome Automation Bot w/ Multi-Account AI Conversation Simulation - Upwork, accessed May 23, 2025, https://www.upwork.com/freelance-jobs/apply/Discord-Chrome-Automation-Bot-Multi-Account-Conversation-Simulation_~021925427069610806139/
25. Human behavior simulation. How? : r/LocalLLaMA - Reddit, accessed May 23, 2025, https://www.reddit.com/r/LocalLLaMA/comments/19dn8t7/human_behavior_simulation_how/
26. Scammers Target NFT Discord Channel | Threatpost, accessed May 23, 2025, https://threatpost.com/scammers-target-nft-discord-channel/179827/
27. Ledger Discord Hack: Users Warned of Phishing Scam - AInvest, accessed May 23, 2025, https://www.ainvest.com/news/ledger-discord-hack-users-warned-phishing-scam-2505/
28. Ledger Confirms Discord Breach, Users Targeted by Bot, accessed May 23, 2025, https://www.cryptotimes.io/2025/05/12/ledger-confirms-discord-breach-users-targeted-by-bot/
29. How to Stop Spam on a Discord Server - Auto Anti-Spam Bot Free - YouTube, accessed May 23, 2025, https://www.youtube.com/watch?v=5Q1GiQLMhl0&pp=0gcJCdgAo7VqN5tD
30. Anti-spam | RaidProtect, accessed May 23, 2025, https://docs.raidprotect.bot/en/features/anti-spam
31. Anti spam bot for discord using dyno bot automod - a how to discord video - YouTube, accessed May 23, 2025, https://m.youtube.com/watch?v=pVGJVgaKFV0&t=1s
32. Community Safety and Moderation | Discord, accessed May 23, 2025, https://discord.com/community-moderation-safety
33. One-Time Password (OTP) Bots: How They Work and How to Defend Against Them, accessed May 23, 2025, https://supertokens.com/blog/otp-bots