Welcome to bot detection. It’s a cat-and-mouse game, an ever-changing battle where each side makes moves and countermoves. You can see this with the creation of captcha-less challenges.
But to say captchas are useless because bots can pass them is a bit like saying your antivirus is useless because certain malware and ransomware can bypass it.
But they are better than humans at solving them.
How are you measuring this? On my end, the metrics available to me show that the volume of bot requests passing captcha does not exceed that of humans. We continually review false positives and false negatives to make sure we aren’t impacting humans while still making it hard for bots.
https://duckduckgo.com/?q=captcha+ai+better&t=fpas&ia=web
Hi! I didn’t forget about your response. I sifted through the links to find the study in question. I imagine my response isn’t going to satisfy you but please hear me out. I’m open to hearing your rebuttals regarding this too.
The study is absolutely correct with what they studied and the results they found. My main issues are the scope and some of the methodologies.
I see that the “AI” they used was able to solve captchas better than humans. My main issue is that this is one tool. Daily, I work against dozens of different frameworks and services, some of which claim to leverage AI, and the ability to pass captcha varies with each one. There’s an inevitable back and forth as these tools learn how to bypass us and as we counter those changes. In the real world there isn’t a single tool that everyone uses as their bot, as was the case in the study, so the results don’t map cleanly onto practice.
I recognize that the sites they chose were the top 200 sites on the web. That said, there are newer, up-and-coming captcha services that weren’t tested. It’s also worth noting that the “captcha-less” approaches, like Turnstile, are still captchas; they just skip straight to proof of work and cut the human out of the loop altogether.
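To make the “proof of work without the human” idea concrete, here’s a minimal sketch of a hash-based proof-of-work challenge, the general technique these captcha-less systems build on. This is an illustration of the concept, not Turnstile’s actual protocol (which is proprietary and more involved): the server hands out a random challenge, the client burns CPU finding a nonce, and the server verifies it with a single cheap hash.

```python
import hashlib
import os

def make_challenge(difficulty_bits: int = 20) -> tuple[bytes, int]:
    """Server side: issue a random challenge and a difficulty level."""
    return os.urandom(16), difficulty_bits

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Client side: brute-force a nonce so that SHA-256(challenge || nonce)
    falls below the difficulty target (i.e. starts with enough zero bits)."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """Server side: one hash to check -- cheap to verify, costly to solve."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)
```

The asymmetry is the whole point: verification costs one hash while solving costs ~2^difficulty on average, which taxes large-scale bot farms without asking a human to click fire hydrants.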
We should absolutely take studies like this to heart and find better ways that don’t piss off humans. But the reality is that these tools are working to cut down on a vast amount of bot traffic that you don’t see. I understand if you’re not ok with that line of reasoning because I’m asking you to trust me, a random Internet stranger. I imagine each company can show you metrics regarding FP rates and how many bots are actually passing their captcha. Most do their best to cut down on the false positive rate.
I mean, it’s been a while since I worked in backend. But one of the basic tools was to limit requests per second per IP, so they can’t DDoS you. If a bot crawls a webpage you host with the intention to share, what’s the harm? And if one particular bot/crawler misbehaves, block it. And if you don’t intend to share, put it behind a VPN.
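The per-IP request limit mentioned above is commonly implemented as a token bucket. Here’s a minimal sketch (class and parameter names are my own, for illustration): each IP gets a bucket that refills at a steady rate and allows short bursts, and requests are rejected once the bucket is empty.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-IP token bucket: allows `rate` requests/second on average,
    with bursts of up to `burst` requests."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = defaultdict(lambda: burst)  # tokens remaining per IP
        self.last = defaultdict(time.monotonic)   # last refill time per IP

    def allow(self, ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[ip]
        self.last[ip] = now
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens[ip] = min(self.burst, self.tokens[ip] + elapsed * self.rate)
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False
```

In practice this lives at the load balancer or reverse proxy (e.g. nginx’s `limit_req`) rather than in application code, and it only blunts naive abuse: distributed bots rotating through residential IPs sail right past per-IP limits, which is part of why captcha-style challenges exist at all.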
Is that out of date?