Rachel Kroll

The customer stuck due to a hurricane who needed ssh

One problem with working in a customer support environment is that you tend to lose track of just how many tasks you've completed. After a few hours, most of them get pretty fuzzy, and by the end of the week, only the most notable ones stand out. A month later, it's even worse than that. This is just how it goes when that much volume is passing by.

This is why I tried to take notes about a handful of them as they happened. After a certain point, the memories start losing "cohesion" (whatever) and then it might as well be "fiction inspired by real life".

A fair number of my posts are sourced from these notes. It's how I can still give some details all these years later without making them up.

Here's something that came in one night almost 20 years ago while working web hosting tech support.

A customer wrote in. They opened an "emergency: emergency" ticket, which is usually reserved for "OMFG my server is down please fix0r" type events. It actually had an HTML blink tag baked into the very string so it would blink in our browsers. It was hard to miss.

It was a Monday night. What they said, more or less: "Three things. I have a Tuesday deadline. I'm stuck in (some airport) because of the weather problems from Hurricane Jeanne in Atlanta. I can't connect to port 22 because the wireless in the airport seems to firewall it off."

"So, if not for that, I wouldn't call this 'emergency'. Also, I can't get to webmin to add another port myself. So, can you open up another sshd on port NNNN (since I know that gets through) so I can get to the machine?"

They ended this with a "Thank you" with a bunch of exclamation points and even a 1. (Whether they were trying to be KIBO or B1FF, I may never know.)

They opened this ticket at 7:40. About five minutes later, one of our frontline responders saw it in the queue (probably noticed the *blinking*), mentioned it out loud, and after a short discussion assigned it to one of the people on the floor.

At 7:50, we responded, stating that some iptables magic (shown in the ticket) had been done to let sshd answer on port NNNN in addition to port 22. There was also a note clarifying that the rule had been saved, so it would persist across reboots. The customer was then asked to try connecting and to let us know if it didn't work out.

Why iptables instead of a second ssh daemon? It was way faster, for one thing, and time was of the essence. It took exactly two commands: one to add the rule, and one to make it persistent, and then the customer was good to go. Standing up a second sshd instance on a separate port back in those days would have meant wrangling init scripts to make a second version that pointed at a slightly different config file. It would also have created something of a maintenance issue down the road, as that forked config would quickly be forgotten.
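For the curious: the exact commands aren't in my notes, but on the Red Hat-flavored boxes we ran back then, it would have looked something like this, with NNNN standing in for the customer's port. The nat-table REDIRECT rewrites the destination port before the filter rules ever see the packet, so the existing "allow port 22" rule keeps doing its job:

  # Reconstruction from memory, not the actual ticket contents:
  # take anything arriving on TCP port NNNN and hand it to the
  # sshd that's already listening on port 22.
  iptables -t nat -I PREROUTING -p tcp --dport NNNN -j REDIRECT --to-ports 22

  # Red Hat-era persistence: dump the live ruleset to
  # /etc/sysconfig/iptables so it comes back after a reboot.
  service iptables save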

Sure, someone could have run "sshd -p NNNN", but then they'd have to make sure it kept running, and if it got whacked somehow (reboot?), the customer would be screwed again with their deadline looming.

Also, in terms of cleanup, the customer could just flip the -I (insert) in the iptables command to -D (delete) and save it again to make it disappear for good later. Tidying up the second-sshd thing would have been more fiddly.
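Assuming the same reconstruction as above, that cleanup would have been just:

  # The same rule with -I flipped to -D, then save again so it
  # stays gone across reboots.
  iptables -t nat -D PREROUTING -p tcp --dport NNNN -j REDIRECT --to-ports 22
  service iptables save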

In any case, the customer came back a few minutes later, thanked us for the work, and promised to clean it up when they were clear of the problem. We didn't hear back, so things apparently worked out.

I hope they made their deadline.