If you’re going to run more than one screen session, a better option is to give each session a meaningful name that will help you remember what task is being handled in it. Using this approach, you would name each session when you start it by using a command like this: $ screen -S slow-build

Once you have multiple sessions running, reattaching to one then requires that you pick it from the list. In the commands below, we list the currently running sessions before reattaching one of them. Notice that initially both sessions are marked as being detached. Reattaching to the session then requires that you supply the assigned name. The process you left running should have continued processing while it was detached and you were doing some other work. If you ask about your screen sessions while using one of them, you should see that the session you’re currently reattached to is once again “attached.”

Most of you are probably getting here just from frustratedly googling “slow ssh login”. Those of you who got a little froggier and tried doing an ssh -vv to get lots of debug output saw things hanging at debug1: SSH2_MSG_SERVICE_ACCEPT received, likely for long enough that you assumed the entire process was hung and ctrl-C’d out. If you’re patient enough, the process will generally eventually continue after the debug1: SSH2_MSG_SERVICE_ACCEPT received line, but it may take 30 seconds. You might also have enabled debug logging on the server, and discovered that your hang occurs immediately after debug1: KEX done and before debug1: userauth-request for user in /var/log/auth.log.

I have solved this problem after hours of screeching head-desking probably ten times over the years. There are a few fixes for this, with the most common – DNS – tending to drown out the rest. Which is why I keep screeching in frustration every few years: I remember that the dreaded debug1: SSH2_MSG_SERVICE_ACCEPT received hang is something I’ve solved before, but I can only remember some of the fixes I’ve needed. Anyway, here are all the fixes I’ve needed to deploy over the years, collected in one convenient place where I can find them again.

The most common cause of slow SSH login authentications is DNS. To fix this one, go to the SSH server, edit /etc/ssh/sshd_config, and set UseDNS no. You’ll need to restart the service after changing sshd_config: /etc/init.d/ssh restart, systemctl restart ssh, etc as appropriate.

The next most common cause – which is devilishly difficult to find reference to online, and I hope this helps – is the never-to-be-sufficiently-damned avahi daemon. To fix this one, go to the SSH client, edit /etc/nsswitch.conf, and change this line:

hosts: files mdns4_minimal dns

so that the mdns4_minimal option is gone:

hosts: files dns

In theory maybe something might stop working without that mdns4_minimal option? But I haven’t got the foggiest notion what that might be, because nothing ever seems broken for me after disabling it. No services need restarting after making this change, which, again, must be made on the client. Maybe your slow logins only happen when SSHing to one particular server, even one particular server on your local network, even one particular server on your local network which has UseDNS no and which you don’t need any DNS resolution to connect to in the first place. But yeah, it can still be this avahi crap.

Optional PAM modules can really screw you here. This is another one that’s really, really difficult to find references to online. In my experience, you can’t get away with simply disabling PAM login in /etc/ssh/sshd_config – if you do, you won’t be able to log in at all. What you need to do is go to the SSH server, edit /etc/pam.d/common-session, and comment out the optional module that’s causing you grief. In the past, that was pam_ck_connector.so. More recently, in Ubuntu 16.04, the culprit that bit me hard was pam_systemd.so. Get in there and comment that bugger out.
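If you want to see exactly where your own login stalls, it helps to timestamp the ssh -vv output. A small sketch: yourserver is a placeholder (so the actual ssh invocation is shown only in a comment), and the filter is demonstrated against canned debug lines rather than a live connection.

```shell
# "yourserver" is a placeholder for the host with slow logins. The idea is to
# prefix every ssh -vv debug line with a wall-clock time so the long pause
# after "SSH2_MSG_SERVICE_ACCEPT received" stands out:
#
#   ssh -vv yourserver exit 2>&1 | ts_filter
#
# The filter itself, demonstrated here against canned debug lines:
ts_filter() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date +%H:%M:%S)" "$line"
  done
}

printf '%s\n' \
  'debug1: SSH2_MSG_SERVICE_ACCEPT received' \
  'debug1: Authentications that can continue: publickey,password' |
  ts_filter
```

A gap of tens of seconds between two adjacent timestamped lines tells you exactly which phase to blame.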
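A hedged sketch of the UseDNS change on the server. It deliberately operates on a throwaway copy with invented contents rather than the real /etc/ssh/sshd_config; on a real server you would edit that file as root and then restart sshd as described above. Assumes GNU sed.

```shell
# Demonstration on a temporary copy; on a real server edit /etc/ssh/sshd_config
# directly (as root) and then restart the ssh service.
cp_conf=$(mktemp)
printf '%s\n' 'Port 22' '#UseDNS yes' > "$cp_conf"   # stand-in config contents

# Replace any existing UseDNS line, commented out or not; append one if absent.
if grep -qiE '^#?UseDNS' "$cp_conf"; then
  sed -i -E 's/^#?UseDNS.*/UseDNS no/I' "$cp_conf"
else
  printf 'UseDNS no\n' >> "$cp_conf"
fi

grep '^UseDNS' "$cp_conf"   # → UseDNS no
```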
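The avahi fix can be sketched the same way, again against a throwaway copy rather than the client's real /etc/nsswitch.conf. The optional [NOTFOUND=return] handling is an assumption on my part, covering a common Ubuntu variant of the hosts line that the text above doesn't show.

```shell
# Demonstration on a temporary copy; the real file on the SSH *client* is
# /etc/nsswitch.conf. No service restart is needed after the real edit.
nss=$(mktemp)
printf 'hosts: files mdns4_minimal dns\n' > "$nss"   # stand-in contents

# Drop mdns4_minimal (and any [NOTFOUND=return] action tied to it, if present)
# from the hosts line, so lookups go straight from files to dns.
sed -i -E 's/^(hosts:.*) mdns4_minimal( \[NOTFOUND=return\])?/\1/' "$nss"

cat "$nss"   # → hosts: files dns
```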
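And a sketch of the PAM fix, once more on a throwaway copy with invented contents; the real file on the SSH server is /etc/pam.d/common-session. Back that file up and keep a root shell open while you experiment, since a broken PAM stack can lock you out entirely.

```shell
# Demonstration on a temporary copy; the real file is /etc/pam.d/common-session.
pam=$(mktemp)
printf '%s\n' \
  'session required pam_unix.so' \
  'session optional pam_systemd.so' > "$pam"   # stand-in contents

# Comment out the optional module that is stalling logins
# (pam_systemd.so here; historically pam_ck_connector.so).
sed -i 's/^session optional pam_systemd\.so/#&/' "$pam"

grep systemd "$pam"   # → #session optional pam_systemd.so
```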
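Circling back to the screen passage above, here is a rough sketch of the naming, listing, and reattaching workflow it describes. "slow-build" comes from the text; "log-watcher" is an invented second session, and -d -m is used only so the sketch can run non-interactively.

```shell
# At a real terminal you would just run `screen -S name`, do your work, and
# detach with Ctrl-a d; -d -m below starts each session already detached.
if command -v screen >/dev/null; then
  screen -d -m -S slow-build sleep 30    # first named session, starts detached
  screen -d -m -S log-watcher sleep 30   # second named session
  screen -ls || true                     # list sessions; both show (Detached)
  # Reattach with: screen -r slow-build  -- afterwards -ls shows it (Attached)
  screen -S slow-build -X quit           # tear the demo sessions down
  screen -S log-watcher -X quit
fi
```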