Bot Management


Bot Management Definition

Bot management is a strategic approach to filtering access to web applications by automated software. A successful bot management strategy can block unwanted or malicious bots, such as those used for cyberattacks, while allowing useful bots, such as Google crawlers. Bot management strategies are designed to detect and identify the source of bot activity, and determine its nature.

Bot management enhances website security and performance. Malicious bots that access assets can overload servers, deny access to legitimate users, and scrape content for credentials, proprietary assets, or system files. Attackers can use these items to spam content, plan cyberattacks, phish users, and execute bot attacks.

On the other hand, enterprise bot management systems that produce excessive false positives for bad bots can accidentally block search engine traffic, and cause the loss of conversions and revenue.

Bot management uses machine learning, security, and web development technologies to balance these concerns and block malicious bots while permitting legitimate activity. These technologies include user behavioral analytics (UBA), bot pattern databases, and web application firewalls (WAFs) that can intercept traffic and block malicious activity based on business rules or real-time analysis.
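To make the rule-based side of that stack concrete, the sketch below shows how a WAF-style filter might screen each incoming request against a few business rules before it reaches the application. This is a minimal sketch under stated assumptions: the Request fields, the example rules, and the blocked address range are illustrative, not any product's actual configuration.

```python
# Minimal sketch of rule-based request filtering, as a WAF-style layer might apply it.
# The Request fields and the example rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    ip: str
    user_agent: str
    path: str

# Business rules: each returns True if the request should be blocked.
RULES = [
    lambda r: r.user_agent == "",                    # browsers always send a User-Agent
    lambda r: "sqlmap" in r.user_agent.lower(),      # known attack-tool marker
    lambda r: r.path.startswith("/admin") and r.ip.startswith("203.0.113."),  # example blocked range for admin paths
]

def allow(request: Request) -> bool:
    """Return False if any business rule flags the request."""
    return not any(rule(request) for rule in RULES)

if __name__ == "__main__":
    print(allow(Request("198.51.100.7", "Mozilla/5.0", "/products")))   # True: passes all rules
    print(allow(Request("203.0.113.9", "sqlmap/1.7", "/admin/login")))  # False: blocked
```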

[Image: applications integrate with bot detection, which filters out unwanted bots while allowing useful bots through.]

Bot Management FAQs

What is Bot Management?

Put simply, bot management is the practice of understanding the activity and intent of each individual bot on the network, so that the bot manager can respond appropriately to incoming bot activity.

What is a bot manager?

A bot manager is any bot management software or product that is capable of blocking some bots and allowing others through, rather than merely blocking all non-human traffic.

Bot management software addresses two key challenges:

  • Differentiating between bot traffic and legitimate human traffic; and
  • Differentiating between good bots and bad bots based on their intent.


Some examples of good bots that can benefit or support a site include web crawlers like Googlebot and chatbots that respond to queries.
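Because bad bots often impersonate good ones, allowing a visitor that claims to be Googlebot usually requires verification. The sketch below shows the widely used reverse-then-forward DNS check; the sample IP address is an assumption for illustration, and the result depends on the DNS resolver available at runtime.

```python
# Sketch of verifying a self-identified Googlebot with a reverse/forward DNS check.
# Error handling is minimal; the sample IP is an assumption for illustration.
import socket

def is_verified_googlebot(ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # reverse DNS lookup
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward lookup must map back to the IP
        return ip in forward_ips
    except OSError:
        return False

if __name__ == "__main__":
    # 66.249.66.1 sits in a range Google has published for its crawlers; treat it as an example only.
    print(is_verified_googlebot("66.249.66.1"))
```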

Many bots aren’t inherently good or bad; their impact depends on context and use. Social bots promote products or ideas on social media, but they can also be a source of misinformation or automated threats.

Some bots are always bad. Malicious bots perpetrate credential stuffing, online fraud, and other offenses. Scalpers are malicious bots that use automated methods to buy up goods in bulk, especially items like event tickets or airline seats, which can later be resold at a profit. Scrapers steal website data by scraping and duplicating sites without permission.

Any basic bot management practice consists of two steps related to malicious bot traffic:

  • Effective detection
  • Appropriate mitigation


Many bot management solutions present users they suspect of being bots with a CAPTCHA to verify that they are human. However, especially as CAPTCHA farm services become more popular, traditional CAPTCHAs are no longer very effective against malicious bots.

To avoid both false negatives and false positives, your bot management solution must:

  • Mitigate sudden, dramatic behavioral changes using a real-time feedback loop
  • Identify anomalies and effectively adapt mitigation
  • Dynamically adjust in real-time to new patterns using iterative machine learning (ML)
  • Use both behavior-based and fingerprinting approaches to distinguish between bots and human users
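As a minimal illustration of the last point, the sketch below blends a header-based fingerprint with a simple behavioral signal into a single bot score. The selected headers, weights, and thresholds are assumptions chosen for clarity, not recommended values.

```python
# Sketch: combine a header fingerprint with a simple behavioral signal.
# The chosen headers, weights, and rate threshold are illustrative assumptions.
import hashlib

def fingerprint(headers: dict) -> str:
    """Hash a stable subset of request headers into a coarse client fingerprint."""
    keys = ("User-Agent", "Accept-Language", "Accept-Encoding")
    raw = "|".join(headers.get(k, "") for k in keys)
    return hashlib.sha256(raw.encode()).hexdigest()

def bot_score(headers: dict, requests_last_minute: int, known_bot_fingerprints: set) -> float:
    """Blend fingerprint reputation and request-rate behavior into a 0..1 score."""
    score = 0.0
    if fingerprint(headers) in known_bot_fingerprints:
        score += 0.6                       # fingerprint previously seen acting like a bot
    if requests_last_minute > 120:
        score += 0.4                       # far above a typical human browsing rate
    return min(score, 1.0)

if __name__ == "__main__":
    headers = {"User-Agent": "curl/8.4.0", "Accept-Encoding": "gzip"}
    known = {fingerprint(headers)}         # pretend this fingerprint was flagged before
    print(bot_score(headers, requests_last_minute=300, known_bot_fingerprints=known))  # 1.0
```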


How Does Bot Management Work?

Modern bot management techniques must both identify increasingly sophisticated attacker bots that skillfully emulate human users, and maintain day-to-day operations by distinguishing malicious bots from legitimate ones.

Currently, several approaches are used to detect and manage bots:

Static methods. Static bot management methods are passive: for example, parsing HTTP headers and other web request data with analysis tools to identify known malicious bots.
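A minimal sketch of such a static check appears below: it inspects request headers for known automation markers and an obvious inconsistency. The marker list and the consistency rule are illustrative assumptions.

```python
# Sketch of a passive (static) check on request headers.
# The marker list and the Chrome/Accept-Language rule are illustrative assumptions.
BAD_UA_MARKERS = ("python-requests", "scrapy", "nikto", "masscan")

def static_check(headers: dict) -> str:
    ua = headers.get("User-Agent", "").lower()
    if not ua:
        return "bot"                                   # browsers always send a User-Agent
    if any(marker in ua for marker in BAD_UA_MARKERS):
        return "bot"                                   # matches a known automation tool
    if "chrome" in ua and "Accept-Language" not in headers:
        return "suspicious"                            # a real Chrome browser normally sends Accept-Language
    return "unknown"

if __name__ == "__main__":
    print(static_check({"User-Agent": "python-requests/2.31"}))       # bot
    print(static_check({"User-Agent": "Mozilla/5.0 Chrome/120.0"}))   # suspicious
    print(static_check({"User-Agent": "Mozilla/5.0 Chrome/120.0",
                        "Accept-Language": "en-US"}))                 # unknown
```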

Challenge-based methods. These are tools that “challenge” or test website visitors to determine whether or not they are human, such as CAPTCHA verification. Sophisticated malicious bots and CAPTCHA farms can get past CAPTCHAs, so this approach is not foolproof.

Behavioral methods. Behavioral methods profile visitors and match their activity against known bot patterns. This approach classifies activity against several profiles, first distinguishing human users from bots and then good bots from bad bots.
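The sketch below illustrates the idea with two coarse behavioral features, timing regularity and path diversity. The features and thresholds are assumptions for illustration only; production systems profile far richer signals.

```python
# Sketch of behavioral profiling from a visitor's request log.
# The two features and their thresholds are illustrative assumptions.
from statistics import pstdev

def profile(timestamps: list[float], paths: list[str]) -> str:
    """Classify a session as 'likely bot' or 'likely human' from coarse behavior."""
    if len(timestamps) < 5:
        return "undetermined"
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Machine-generated traffic tends to have very regular inter-request gaps and to
    # walk many distinct URLs; human sessions are burstier and more repetitive.
    regular_timing = pstdev(gaps) < 0.05
    high_path_diversity = len(set(paths)) / len(paths) > 0.9
    return "likely bot" if (regular_timing and high_path_diversity) else "likely human"

if __name__ == "__main__":
    crawler = profile([0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
                      ["/a", "/b", "/c", "/d", "/e", "/f"])
    shopper = profile([0.0, 3.2, 4.1, 9.8, 10.2, 30.5],
                      ["/", "/cart", "/", "/cart", "/checkout", "/cart"])
    print(crawler, shopper)   # likely bot, likely human
```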

Proprietary methods. In many cases, a proprietary solution that deploys a range of proprietary interrogative techniques, algorithms, and formulas is the best way to deliver effective bot management and a superior user experience. Bot management and mitigation services typically identify bots and secure systems with automated tools, monitor mobile app and API traffic, and prevent API abuse by implementing rate-limiting, restricting bots across the entire landscape.
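Rate-limiting itself is a well-defined mechanism. One common way to implement it is a per-client token bucket, sketched below; the bucket capacity and refill rate are illustrative assumptions.

```python
# Sketch of per-client rate limiting with a token bucket, one way to cap API abuse.
# Bucket capacity and refill rate are illustrative assumptions.
import time

class TokenBucket:
    def __init__(self, capacity: int = 10, refill_per_sec: float = 1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False              # over the limit: reject, delay, or challenge the client

if __name__ == "__main__":
    buckets: dict[str, TokenBucket] = {}
    for _ in range(15):           # 15 rapid requests from the same client IP
        bucket = buckets.setdefault("198.51.100.7", TokenBucket())
        print(bucket.allow(), end=" ")   # roughly the first 10 pass, the rest are throttled
```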

Expect modern bot management solutions to support multiple bot detection techniques, including:

Bot signature files and profiles. Bot management platforms maintain active lists of known bots and their signatures, which solutions draw upon to identify anomalous bot activity and block it.
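A minimal sketch of signature matching appears below. The signature entries are placeholders; real platforms ship and continuously update curated signature feeds.

```python
# Sketch of matching a request's user agent against a small bot-signature list.
# The entries below are placeholders standing in for a curated signature feed.
import re

SIGNATURES = {
    "Googlebot": {"pattern": r"Googlebot/\d+\.\d+", "category": "good"},
    "Bingbot":   {"pattern": r"bingbot/\d+\.\d+",   "category": "good"},
    "Scrapy":    {"pattern": r"Scrapy/\d+",         "category": "bad"},
    "SQLMap":    {"pattern": r"sqlmap",             "category": "bad"},
}

def match_signature(user_agent: str) -> tuple[str, str] | None:
    """Return (bot_name, category) for the first matching signature, else None."""
    for name, sig in SIGNATURES.items():
        if re.search(sig["pattern"], user_agent, re.IGNORECASE):
            return name, sig["category"]
    return None

if __name__ == "__main__":
    print(match_signature("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # ('Googlebot', 'good')
    print(match_signature("Scrapy/2.11 (+https://scrapy.org)"))        # ('Scrapy', 'bad')
    print(match_signature("Mozilla/5.0 (Windows NT 10.0)"))            # None
```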

Transactions per second (TPS). Bot management solutions can detect bot activity using TPS thresholds: the solution sets a time interval and flags any incoming traffic that exceeds the allowed number of requests within that interval.
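The sketch below shows this fixed-interval counting in its simplest form; the interval length and request threshold are assumptions for illustration.

```python
# Sketch of TPS-style detection: count each client's requests inside a fixed
# interval and flag whoever exceeds the threshold. Interval and threshold are assumptions.
from collections import defaultdict

INTERVAL_SECONDS = 10
MAX_REQUESTS_PER_INTERVAL = 50

def flag_clients(request_log: list[tuple[float, str]]) -> set[str]:
    """request_log holds (timestamp, client_ip) pairs; return IPs over the limit."""
    counts = defaultdict(int)
    for ts, ip in request_log:
        window = int(ts // INTERVAL_SECONDS)   # bucket requests into fixed intervals
        counts[(window, ip)] += 1
    return {ip for (window, ip), n in counts.items() if n > MAX_REQUESTS_PER_INTERVAL}

if __name__ == "__main__":
    log = [(i * 0.1, "203.0.113.5") for i in range(80)]    # 80 requests in 8 seconds
    log += [(i * 2.0, "198.51.100.7") for i in range(5)]   # 5 requests over 10 seconds
    print(flag_clients(log))                               # {'203.0.113.5'}
```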

Malicious IP address blocking. An updated list of malicious IP addresses to block is essential to most bot management solutions.

IP reputation analysis. IP reputation analysis tells you where a bot comes from and whether its source is a risky domain associated with malicious bot activity or cyberattacks.
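The sketch below combines both IP-based checks: blocklisted ranges are dropped outright, while addresses with a risky reputation can be challenged or rate-limited. The ranges, reputation entries, and resulting actions are illustrative assumptions.

```python
# Sketch of checking a client IP against blocklisted ranges and a reputation table.
# The ranges, reputation entries, and actions below are illustrative assumptions.
import ipaddress

BLOCKED_NETWORKS = [ipaddress.ip_network(n) for n in ("203.0.113.0/24", "198.18.0.0/15")]
IP_REPUTATION = {"192.0.2.44": "botnet", "192.0.2.80": "scraper"}   # stand-in feed data

def assess_ip(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in BLOCKED_NETWORKS):
        return "block"                                  # listed range: drop outright
    if ip in IP_REPUTATION:
        return f"challenge ({IP_REPUTATION[ip]})"       # risky reputation: challenge or rate-limit
    return "allow"

if __name__ == "__main__":
    for ip in ("203.0.113.9", "192.0.2.44", "198.51.100.7"):
        print(ip, "->", assess_ip(ip))
```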

These bot detection techniques allow bot management tools to log and manage bot traffic in line with applicable policy.

Why is Bot Management Necessary?

It is critical for every organization to prioritize bot management as part of core security and operations processes. Some of the major risks bot management can help organizations avoid include:

Distributed denial-of-service (DDoS) attacks. DDoS attacks deploy networks of compromised devices or bots to overwhelm processing resources and bandwidth by spamming servers with requests. This can render applications, sites, or services unavailable.

Credential stuffing. Credential stuffing attacks see cybercriminals automatically try stolen or leaked credentials using bots until one is accepted. These brute force attacks enable account takeovers and often succeed because of reused credentials across accounts.

Credit card and gift card fraud. Attackers can use bots to launch brute force attacks that create counterfeit gift cards they exchange for cash. They can also use bots to test stolen credit card information with small purchases; if the purchases go through unnoticed, attackers can reuse the card details for larger fraud.

Intelligence harvesting. Attackers can scan or crawl sites, forums, and social media platforms with bots to find user information and confidential details for phishing attacks.

Web scraping. Web scraping attacks use bots to extract proprietary information, such as intellectual property, pricing data, product information, or other hidden files, from sites or storage. Sites for travel ticketing and online gaming are particularly vulnerable to web scraping.

Does VMware NSX Advanced Load Balancer Provide Bot Management?

Yes. The bot detection step is the first and most critical step of bot management on the VMware NSX Advanced Load Balancer platform. In this step, the request goes through various checks called decision components. Each decision component—itself a bot detector—provides some information to characterize the request.

For example, Vantage matches the client's IP address against the IP reputation database, which is updated by Pulse. If it finds a match, the VMware NSX Advanced Load Balancer marks the client as a bot with a high confidence level.

Next, in the IP location step, Vantage looks up the client IP in the network location database and matches the ISP and organization name against known search engines and cloud providers. It then marks the client as either a bot or undetermined and assigns a confidence level.

Next, in the User-Agent step, the system conducts a heuristic scan of the incoming user agent string, looking for SQL injection and other threats. Pulse populates the User-Agent database, against which the system checks the client and marks it as either a bot or a human.
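To show the general shape of such a pipeline, the sketch below runs a request through several decision components and keeps the highest-confidence verdict. It is not the VMware NSX Advanced Load Balancer implementation; the component logic, lookup data, and confidence values are assumptions for illustration.

```python
# Illustrative sketch of a decision-component pipeline: each component inspects the
# request and contributes a classification with a confidence level. This is NOT the
# VMware NSX Advanced Load Balancer implementation; data and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Verdict:
    component: str
    classification: str   # "bot", "human", or "undetermined"
    confidence: float     # 0.0 .. 1.0

BAD_IPS = {"203.0.113.9"}                            # stand-in for an IP reputation feed
SEARCH_ENGINE_ORGS = {"Example Search Engine Org"}   # stand-in for a network-location DB

# All components share one signature so they can be chained uniformly.
def ip_reputation(ip: str, user_agent: str, org: str) -> Verdict:
    if ip in BAD_IPS:
        return Verdict("ip_reputation", "bot", 0.9)
    return Verdict("ip_reputation", "undetermined", 0.2)

def ip_location(ip: str, user_agent: str, org: str) -> Verdict:
    if org in SEARCH_ENGINE_ORGS:
        return Verdict("ip_location", "bot", 0.7)    # known crawler infrastructure
    return Verdict("ip_location", "undetermined", 0.3)

def user_agent_check(ip: str, user_agent: str, org: str) -> Verdict:
    if "bot" in user_agent.lower() or "'" in user_agent:
        return Verdict("user_agent", "bot", 0.8)     # bot token or injection-like content
    return Verdict("user_agent", "human", 0.5)

COMPONENTS = (ip_reputation, ip_location, user_agent_check)

def classify(ip: str, user_agent: str, org: str) -> Verdict:
    """Run every decision component and keep the highest-confidence verdict."""
    verdicts = [component(ip, user_agent, org) for component in COMPONENTS]
    return max(verdicts, key=lambda v: v.confidence)

if __name__ == "__main__":
    print(classify("203.0.113.9", "Mozilla/5.0", "Example ISP"))       # flagged by IP reputation
    print(classify("198.51.100.7", "MyCrawlerBot/1.0", "Example ISP")) # flagged by user agent
```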

Learn more about bot detection, bot management, and bot configuration on Vantage here.

For more on the actual implementation of load balancing, security applications, and web application firewalls, check out our Application Delivery How-To Videos.