Bot detection: Identifying the good, the bad, and the ugly
“RoboCop,” “Ex Machina,” “I, Robot”: Hollywood loves to make movies about robots taking over the world, which, thankfully, is not yet a reality (though it is 2020). On the internet, however, bots have reached critical mass, representing an estimated 37.2% of all internet traffic, according to Imperva’s 2020 Bad Bot Report. Bots are software programs that perform routine tasks and execute commands automatically. Not all bots are created equal: some are good, some are bad, and some fall in between.
- Good bots: Good bots include search engines, which can help drive traffic to a website, as well as virtual assistants and chatbots, which provide a quick and efficient means of customer engagement.
- Bad bots: Bad actors can use bots for fraud or malicious disruption in many ways. In 2019, bad bots made up an estimated 24.1% of internet traffic, says the Imperva report. Bots are used for brute-force and credential stuffing attacks, card testing, and distributed denial-of-service (DDoS) attacks, to name a few.
- Questionable bots: There is a whole class of bots whose activity can be either good or bad, depending on a business’s goals. Scraper bots are a prime example. While these bots can aggregate content that is used to drive third-party sales channels (think of online travel agencies and airlines), scrapers are also a key tool in promotional abuse, a rising problem. Spider bots are another example. While businesses can use these to index their sites and improve search engine optimization, bad actors can also use them to find vulnerabilities.
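One practical line between a good spider and a bad one is whether it identifies itself and honors a site’s robots.txt rules. As a minimal sketch (the site rules and the `ExampleSpider` user-agent name are hypothetical), Python’s standard-library `urllib.robotparser` can check whether a well-behaved crawler is permitted to fetch a given path:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: the whole site is crawlable except /admin/.
# A good bot checks these before fetching; a bad bot simply ignores them.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin/",
    "Allow: /",
])

# A self-identifying crawler asks permission per URL.
public_ok = rp.can_fetch("ExampleSpider/1.0", "https://example.com/products")
admin_ok = rp.can_fetch("ExampleSpider/1.0", "https://example.com/admin/users")
```

Here `public_ok` is true and `admin_ok` is false; a crawler that ignores that answer (or spoofs a browser user agent) is the kind of questionable traffic the rest of this piece is about.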
Bad bots and questionable bots can have a number of adverse impacts on a business.
- Fraud losses: Fraud losses are an obvious impact of bad bot activities like credential stuffing and card testing. According to Kount’s 2020 Bot Landscape and Impact Report, 80% of eCommerce merchants said that increasingly sophisticated bot attacks are contributing to rising losses.
- Server capacity: Bot traffic creates server load, which can either slow site performance for good users or require a firm to invest in additional hardware to maintain desired response times.
- Distorted performance analytics: Bot volume can skew website performance analytics, which can make it difficult to understand the true performance of marketing campaigns.
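A common first pass at cleaning bot volume out of analytics is to drop hits whose user-agent string self-identifies as a crawler. The sketch below (sample log records and pattern list are illustrative, not exhaustive) shows the idea, and also its limit: it only removes *declared* bots, while bad bots spoof browser user agents and require the behavioral signals discussed later.

```python
import re

# Substrings commonly found in self-identifying crawler user agents.
# This is a heuristic allow/deny heuristic, not a complete bot list.
BOT_PATTERN = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

def is_declared_bot(user_agent):
    """Return True if the user agent announces itself as a crawler."""
    return bool(BOT_PATTERN.search(user_agent))

# Illustrative page hits: (user_agent, path).
hits = [
    ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/80.0", "/pricing"),
    ("Googlebot/2.1 (+http://www.google.com/bot.html)", "/pricing"),
    ("Mozilla/5.0 (compatible; AhrefsBot/6.1)", "/signup"),
]

# Keep only traffic that does not declare itself as a bot.
human_hits = [(ua, path) for ua, path in hits if not is_declared_bot(ua)]
```

Filtering the declared crawlers out before computing conversion or campaign metrics keeps obvious non-human volume from skewing the numbers, but it does nothing about bots that masquerade as browsers.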
Historically, bot detection focused on the perimeter, e.g., web application firewalls (WAFs). As bots have grown more sophisticated, they have dodged these defenses by deploying low-and-slow attacks that mimic human behavior, or by spreading attacks across multiple IP addresses and proxies to bypass velocity controls. Now, to protect themselves, businesses need a broader complement of identity data to detect and stop bad bots while letting good ones through. This includes consortium device data, behavioral biometrics, and behavioral analytics.
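To make the velocity-control weakness concrete, here is a minimal sketch of a per-IP sliding-window check (the window size and threshold are arbitrary illustrative values, not from the report). A single IP hammering an endpoint trips the limit quickly, but an attacker who spreads the same request volume across many IPs and proxies keeps every individual source under the threshold, which is exactly the bypass described above.

```python
import time
from collections import defaultdict, deque

# Hypothetical velocity control: flag any IP that makes more than
# MAX_REQUESTS requests within a WINDOW_SECONDS sliding window.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_recent = defaultdict(deque)  # ip -> timestamps of recent requests

def is_over_limit(ip, now=None):
    """Record a request from `ip` and report whether it exceeds the limit."""
    now = time.time() if now is None else now
    window = _recent[ip]
    window.append(now)
    # Discard timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS

# One IP sending 101 requests in a minute is flagged; the same 5,000
# requests spread across 50 IPs (100 each) sail through untouched.
single_ip_flagged = any(is_over_limit("203.0.113.7", now=0.0) for _ in range(101))
distributed_flagged = any(
    is_over_limit(f"10.0.0.{i}", now=0.0) for i in range(50) for _ in range(100)
)
```

Because per-source counting cannot see the distributed attack, the complementary signals mentioned above (consortium device data, behavioral biometrics, behavioral analytics) work across sources and sessions rather than per IP.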
With little in the way of deterrents or consequences, bot-based attacks will only get more sophisticated. As such, it’s imperative that any firm with a digital presence deploy equally sophisticated defenses.