
Re: RFC / Audit: Mojo Login Example

by sundialsvc4 (Abbot)
on Mar 23, 2020 at 19:14 UTC


in reply to RFC / Audit: Mojo Login Example

(Yes, a very excellent response, Haj ... thanks for sharing.)

Another strategy that I have used with good success is to fairly quickly kick users who have had too many unsuccessful login attempts over to an alternative login screen (same URL, different content) which requires a "captcha." (Although Mother Google is probably the easiest source of captchas, I doubt it really matters much.) It's fine to add some text explaining to the human user why you are doing this ... robots will never read it anyway. Of course, thanks to CPAN, the actual implementation requires very little work.

Having forced them over to this alternative login screen, I would make them successfully complete it two or three times before relenting and letting them go back to the old way.
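
A minimal Mojolicious::Lite sketch of that flow might look like the following. Here check_credentials and captcha_ok are hypothetical stand-ins for a real password check and CAPTCHA verification, the thresholds are arbitrary, and the two login templates are omitted:

    #!/usr/bin/env perl
    use Mojolicious::Lite -signatures;

    use constant MAX_FAILURES  => 3;   # switch to the CAPTCHA form after this many
    use constant REQUIRED_WINS => 2;   # CAPTCHA logins needed before relenting

    # Hypothetical stand-ins: wire these to your real password check and
    # CAPTCHA verification.
    sub check_credentials ($user, $pass) { return 0 }
    sub captcha_ok ($c)                  { return 0 }

    helper needs_captcha => sub ($c) {
        return ($c->session('failures') // 0) >= MAX_FAILURES;
    };

    any [qw(GET POST)] => '/login' => sub ($c) {
        my $template = $c->needs_captcha ? 'login_captcha' : 'login';
        return $c->render(template => $template) if $c->req->method eq 'GET';

        # While the CAPTCHA form is in force, reject early unless it was solved.
        return $c->render(template => $template, status => 403)
            if $c->needs_captcha && !captcha_ok($c);

        if (check_credentials($c->param('username'), $c->param('password'))) {
            if ($c->needs_captcha) {
                # Relent only after enough successful CAPTCHA logins.
                my $wins = ($c->session('captcha_wins') // 0) + 1;
                $c->session($wins >= REQUIRED_WINS
                    ? (failures => 0, captcha_wins => 0)
                    : (captcha_wins => $wins));
            }
            return $c->redirect_to('/');
        }

        $c->session(failures => ($c->session('failures') // 0) + 1);
        return $c->render(template => $template, status => 403);
    };

    app->start;

One caveat with counting failures in the session: a bot that discards its cookies resets the counter, so a real deployment would key the count on the username or source address instead.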

I frankly think that this will ultimately be more effective, and considerably easier to implement, than the strategy you are now contemplating. (I generally think of such measures as better reserved for denial-of-service attacks.)

I would also counsel making "captchas" a mandatory feature of your "sign up for an account" screens, if you allow arbitrary users to do so.   I have about 9,000(!) "junk" user-ids dating from before I did this.   (How they all managed to respond to the mandatory account-validation emails, I have no idea ...)

Re^2: RFC / Audit: Mojo Login Example
by jcb (Priest) on Mar 23, 2020 at 22:15 UTC

    The problem is that, if you do proper hash stretching on the server, the server must do a fairly expensive operation before rejecting an incorrect password. This means that brute-force password guessing is effectively a denial-of-service attack, and the best that you can do is throttle login attempts somehow.
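
    To see why each guess is expensive, here is a sketch of stretching with the CPAN module Crypt::Bcrypt (the module choice and cost factor are assumptions of this illustration, not anything from the example under review):

        use Crypt::Bcrypt qw(bcrypt bcrypt_check);
        use Crypt::URandom qw(urandom);

        # Cost 12 means 2**12 key-setup rounds, so verifying a single guess
        # deliberately burns a noticeable fraction of a second of CPU time.
        my $hash = bcrypt('correct horse', '2b', 12, urandom(16));  # 16-byte salt

        # The server pays the same price to *reject* a wrong password:
        my $ok = bcrypt_check('wrong guess', $hash);  # slow, and returns false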

    A simple CAPTCHA is a good option for this; asking for the solution to a simple math problem will confound most bot herders and allow you to prioritize actual users' requests ahead of a bot horde. This has to be site-wide, not per-user, however, and is probably best accompanied by an explanation that the server is under high load due to password-guessing attacks and that solving the CAPTCHA will get your request priority. Tarpit requests that lack a CAPTCHA solution until they time out, if you can.
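
    A minimal sketch of the simple-math variant, as Mojolicious helpers. For brevity the expected answer lives in the signed session cookie, which the client can read even though it cannot forge it, so a production version should keep the answer server-side:

        use Mojolicious::Lite -signatures;

        # Hand out a trivial arithmetic challenge and remember the answer.
        helper new_challenge => sub ($c) {
            my ($a, $b) = (1 + int rand 9, 1 + int rand 9);
            $c->session(captcha_answer => $a + $b);
            return "What is $a + $b?";
        };

        # Accept the solution exactly once, then forget it.
        helper captcha_ok => sub ($c) {
            my $expected = delete $c->session->{captcha_answer};
            my $given    = $c->param('captcha') // '';
            return defined $expected && $given =~ /\A[0-9]+\z/ && $given == $expected;
        };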

    A large botnet can produce a very diffuse attack, which somewhat reduces the effectiveness of filtering by IP address, and storing IP addresses raises privacy concerns. But if your users' accounts are linked to real-world identities anyway (for example, you are running a paid service), the privacy concerns are less severe, and you may want to store commonly-used IP addresses per user and give priority to logins originating from IP addresses or address blocks that a user has previously used. Associating processing priority with how many logins have been seen from the same address could demote password-guessing bots to "idle" priority, where attempts take perhaps minutes, while actual users see their logins complete in less than a second.
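
    A crude way to approximate that prioritization inside a single Mojolicious action, without a real scheduler, is to delay the expensive check for unrecognized addresses. Here known_ips_for is a hypothetical per-user lookup and the delay values are arbitrary:

        use Mojolicious::Lite -signatures;
        use Mojo::IOLoop;

        # Hypothetical lookup of addresses this user has logged in from before.
        sub known_ips_for ($user) { return () }

        post '/login' => sub ($c) {
            my $ip    = $c->tx->remote_address;
            my %known = map { $_ => 1 } known_ips_for($c->param('username') // '');

            # Familiar addresses get checked immediately; strangers wait.
            my $delay = $known{$ip} ? 0 : 10;

            $c->inactivity_timeout($delay + 30);  # keep the connection alive
            $c->render_later;
            Mojo::IOLoop->timer($delay => sub {
                # ... expensive password verification and response go here ...
                $c->render(text => 'Login failed', status => 403);
            });
        };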

      I don't mind CAPTCHAs as a way to slow down bots (or, more likely, to make them skip that particular target). On the other hand, in my opinion assigning different priorities isn't worth the effort, or is at least way outside the scope of this example. To find out whether a particular request is a login, the example code needs to go through the routing table in the application, so you've already passed any front ends which might be able to schedule requests according to some priority. Running another backend layer just for logins seems like over-engineering.

      In general, any proactive measures against bad bot behavior are an uphill struggle, even more so in an open-source environment. Bot developers are at an advantage: they see your code and can design their attack methods accordingly. It is because of this imbalance that I recommend security logging by the application, even in a simple example like this. The application can help to detect the attack pattern, or leave that job to security specialists, but only if it makes the data available. In particular, making the log entries machine-readable is something the application must take care of.
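
      As a sketch of what that could look like (not the actual code from the example; the field names are made up for illustration), each security event could be written as one JSON object per log line, which fail2ban-style tools can consume directly:

          use Mojolicious::Lite -signatures;
          use Mojo::JSON qw(encode_json);

          # One JSON object per line is trivial for downstream tools to parse.
          helper security_log => sub ($c, %event) {
              $c->app->log->info(encode_json({
                  ts => time,
                  ip => $c->tx->remote_address,
                  %event,
              }));
          };

          # After a failed login attempt:
          # $c->security_log(event => 'login_failed', user => $username);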

