SSH Service Running on Multiple Ports with Custom Rules in Linux


Initially, the SSH service can be made to listen on multiple ports by adding the following lines to /etc/ssh/sshd_config:

Port 22
Port 5522

In this setup, however, you cannot define different rules for the different ports.

One solution I found is to create a second SSH service that runs sshd on port 5522 with its own configuration file, running as a separate daemon.

To do so, follow the steps below:

  1. Create a copy of the SSH configuration file; here I named the copy sshd_config_custom:
cp /etc/ssh/sshd_config /etc/ssh/sshd_config_custom

  2. Similarly, create a copy of the service unit file too:
cp /lib/systemd/system/ssh.service /lib/systemd/system/sshd-custom.service

  3. Open /lib/systemd/system/sshd-custom.service in any editor you are comfortable with and change
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS

to

ExecStart=/usr/sbin/sshd -D $SSHD_OPTS -f /etc/ssh/sshd_config_custom

And

Alias=sshd.service

to

Alias=sshd-custom.service

Save and exit the file.


  4. Now add the line Port 5522 to /etc/ssh/sshd_config_custom (and remove Port 22 from it, so the custom daemon does not try to bind the port the main service already uses), and make any other required changes to this config file.

  5. Enable and start the custom service that we have created:

systemctl enable sshd-custom.service
systemctl start sshd-custom.service
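The five steps above can be condensed into a short shell session (a sketch: the Debian/Ubuntu paths and the sshd-custom/sshd_config_custom names follow the choices made above, and the sed patterns assume the stock layout of the unit file and config):

```shell
# Copy the config file and the systemd unit (Debian/Ubuntu paths)
cp /etc/ssh/sshd_config /etc/ssh/sshd_config_custom
cp /lib/systemd/system/ssh.service /lib/systemd/system/sshd-custom.service

# Point the new unit at the new config file and fix its alias
sed -i 's|^ExecStart=.*|ExecStart=/usr/sbin/sshd -D $SSHD_OPTS -f /etc/ssh/sshd_config_custom|' \
    /lib/systemd/system/sshd-custom.service
sed -i 's|^Alias=sshd.service|Alias=sshd-custom.service|' \
    /lib/systemd/system/sshd-custom.service

# Make the custom daemon listen on the custom port only
sed -i 's/^#\?Port .*/Port 5522/' /etc/ssh/sshd_config_custom

systemctl daemon-reload
systemctl enable --now sshd-custom.service
```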

Let me know if there are any other suggestions.

Is it possible to specify a different ssh port when using rsync?

Another option: on the host you run rsync from, set the port in the SSH config file, i.e.:

cat ~/.ssh/config
Host host
Port 2222

Then rsync over ssh will talk to port 2222:

rsync -rvz --progress --remove-sent-files ./dir user@host:/path
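If you would rather not edit ~/.ssh/config, rsync can also take the port on the command line through its -e option, which sets the remote shell it invokes:

```shell
# One-off equivalent: tell rsync to invoke ssh with the custom port
rsync -rvz --progress --remove-sent-files -e "ssh -p 2222" ./dir user@host:/path
```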

Setup git SSH server without opening up the management users

I just had to set this up for my shop:

  1. SSH port 22 is never (in a big-company environment) opened to the internet, or even to an enterprise load balancer/reverse proxy redirecting traffic: in my experience, it is not an allowed public ingress point
  2. A dedicated SSH port, with its own SSH daemon should be defined on that GitLab server, with:
    • only one dedicated account allowed (AllowUsers)
    • no interactive session possible (PasswordAuthentication no)
    • only a forced command used in ~git/.ssh/authorized_keys, calling gitlab_shell
  3. A load balancer/reverse proxy (typically an F5 acting as reverse proxy in big companies) would redirect the public port 22 from a DMZ to the private dedicated SSH port on your server
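As a sketch of the forced-command entry from step 2, a line in ~git/.ssh/authorized_keys would look roughly like this (the gitlab-shell path and key id are assumptions that vary by installation):

```shell
# Hypothetical authorized_keys entry: every connection with this key is
# forced through gitlab-shell, with no pty, forwarding, or free-form shell
command="/home/git/gitlab-shell/bin/gitlab-shell key-42",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza... git@gitlab
```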

Can two applications listen to the same port?

The answer differs depending on what OS is being considered. In general though:

For TCP, no. You can have only one application listening on a given IP address and port at a time. If you had two network cards, though, one application could listen on the first IP and a second application on the second IP using the same port number.

For UDP multicast, multiple applications can subscribe to the same port.

Edit: Since Linux kernel 3.9, multiple applications can listen on the same port if they all set the SO_REUSEPORT socket option. More information is available in this lwn.net article.
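The TCP behaviour, with and without SO_REUSEPORT, can be demonstrated with a small Python sketch (Linux only, kernel 3.9+):

```python
import errno
import socket

# Without SO_REUSEPORT: the second bind to the same TCP port fails.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))            # let the kernel pick a free port
port = a.getsockname()[1]
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
except OSError as e:
    print(e.errno == errno.EADDRINUSE)   # True
a.close(); b.close()

# With SO_REUSEPORT set on both sockets, the same bind succeeds.
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
c.bind(("127.0.0.1", 0))
port = c.getsockname()[1]
d = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
d.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
d.bind(("127.0.0.1", port))         # no error this time
print(True)
c.close(); d.close()
```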

Using Chef & Amazon with a Non-standard ssh port

As some of the commenters have said, you are being locked out because the instance is listening on port 22, but your security groups are only allowing TCP connections on port 999.
There is no way to connect to the instance in this state.

There are three solutions to this problem that I can see.

Solution 1: Make a new AMI

Create a new AMI which has sshd configured to listen on 999.
Once you have done so, create your EC2 servers using your custom AMI and everything should work.
There are fancy-pants ways of doing this using cloudinit to let you customize the port later, but it hardly seems worth the effort.
Just hardcode "999" instead of "22" into /etc/ssh/sshd_config

The obvious downside is that for any new AMI that you want to use, you'll have to bake a new base AMI that uses the desired port instead of 22.
Furthermore, this diverges away from the Cheffy philosophy of layering configuration on a base image.

Solution 2: Icky Icky Security Groups

You can hack your way around this by modifying your security groups every time you bring up a new server.
Simply add an exception that allows your machine to SSH into the box for the duration of your bootstrapping process, then remove this from the security group(s) that you are using once you are done.

The downside here is that security groups in EC2 are not dynamic, so you either have to create a new security group for each machine (ick!), or let this open port 22 for your workstation on all of your servers during the bootstrapping window (also ick!).
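Scripted with the AWS CLI, the temporary exception would look something like this (sg-12345 and 203.0.113.7 are placeholders for your security group and workstation IP):

```shell
# Open port 22 to the workstation for the bootstrap window only
aws ec2 authorize-security-group-ingress --group-id sg-12345 \
    --protocol tcp --port 22 --cidr 203.0.113.7/32

# ... bootstrap the node here ...

# Then close it again
aws ec2 revoke-security-group-ingress --group-id sg-12345 \
    --protocol tcp --port 22 --cidr 203.0.113.7/32
```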

Solution 3: Tunneling

The last option that I can think of is to allow connections on port 22 amongst your live servers.
You can then tunnel your connection through another server by connecting to it, opening an SSH tunnel (i.e. ssh -L ...), and doing the knife ec2 actions through the tunnel.

Alternatively, you can pass through one of the other servers manually on your way to the target, and not use knife to bootstrap the new node.

The downside here is that you need to trust your servers' security, since you'll have to do some agent forwarding or other shenanigans to successfully connect to the new node.

This is probably the solution that I would choose because it does not require new AMIs, doesn't open port 22 to the outside world, and requires a minimal amount of effort for a new team member to learn how to do.
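A minimal sketch of the tunnel (the bastion host, the node's private address, and the local port are placeholders):

```shell
# Forward local port 2222 through an already-reachable server to
# port 22 on the new node's private address, then connect via localhost
ssh -N -L 2222:10.0.0.42:22 user@bastion.example.com &
ssh -p 2222 user@localhost
```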

Problems changing linux SSH port on Microsoft Azure

For now, in the ARM deployment model, we can't use an NSG to NAT one port to another.
As a workaround, we can change the port setting in sshd_config; here are the steps:

1. SSH to this VM and edit sshd_config, changing port 22 to port 33320:

root@jasonvm:~# vi /etc/ssh/sshd_config
# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 33320

2. Restart the SSH service:

root@jasonvm:~# service ssh restart
root@jasonvm:~# netstat -ant
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:33320 0.0.0.0:* LISTEN
tcp 0 0 10.0.0.4:33188 52.240.48.24:443 TIME_WAIT
tcp 0 0 10.0.0.4:44470 168.63.129.16:80 TIME_WAIT
tcp 0 0 10.0.0.4:33320 114.224.98.58:58180 ESTABLISHED
tcp 0 0 10.0.0.4:33186 52.240.48.24:443 TIME_WAIT
tcp 0 0 10.0.0.4:22 114.224.98.58:58088 ESTABLISHED
tcp 0 0 10.0.0.4:33182 52.240.48.24:443 TIME_WAIT
tcp 0 0 10.0.0.4:44464 168.63.129.16:80 TIME_WAIT
tcp 0 0 10.0.0.4:33180 52.240.48.24:443 TIME_WAIT
tcp6 0 0 :::33320 :::* LISTEN

3. Add an inbound rule for port 33320 to the NSG.

After that is completed, we can use the new port and the public IP address to SSH to this VM:

ssh -p 33320 user@xxx.xxx.xxx.xxx

Unable to connect to any port other than 22 (ssh) AWS

As mentioned in the comments, port access from an external network was blocked via iptables running on the instance.

My approach to iptables/AWS access policies is to always use an "open" AWS access policy and lock everything down using iptables. This way, controlling access is closer to the instance, and doesn't overly complicate things by layering access policies over one another.

And to be honest, your mongod instance shouldn't even be in a DMZ (publicly accessible); you should set up iptables to allow access to the mongod instance only from internal networks. Chances are you have an "app server" that needs access to mongod, so make the app server publicly accessible, not the mongod instance.

Here is an example of iptables running on one of my "docstore" instances:

root@service-a-2.sn3.vpc3.example.com ~ # -> iptables -L -vn
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
1419 15M ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 /* 001 accept all to lo interface */
27M 4531M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* 005 accept related established rules */ state RELATED,ESTABLISHED
0 0 ACCEPT icmp -- eth0 * 10.3.0.0/16 0.0.0.0/0 /* 015 accept internal icmp */ state NEW
74 4440 ACCEPT tcp -- eth0 * 10.3.0.0/16 0.0.0.0/0 multiport dports 22 /* 777 accepts ssh */ state NEW
420K 25M ACCEPT tcp -- eth0 * 10.3.0.0/16 0.0.0.0/0 multiport dports 27017 /* 777 allow mongo access */ state NEW
462K 28M ACCEPT tcp -- eth0 * 10.3.0.0/16 0.0.0.0/0 multiport dports 3306 /* 777 allow mysql access */ state NEW
656 28976 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 /* 999 drop all INPUT */

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 27M packets, 9639M bytes)
pkts bytes target prot opt in out source destination

Notice that I am using CIDR (Classless Inter-Domain Routing) blocks to identify internal networks; this way you don't get bogged down specifying individual IP addresses.
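As a quick illustration of why one CIDR block replaces a long list of addresses, Python's standard ipaddress module can check membership in the 10.3.0.0/16 network used above:

```python
import ipaddress

internal = ipaddress.ip_network("10.3.0.0/16")

# One /16 rule covers every host from 10.3.0.0 to 10.3.255.255
print(ipaddress.ip_address("10.3.7.42") in internal)   # True
print(ipaddress.ip_address("10.4.0.1") in internal)    # False
print(internal.num_addresses)                          # 65536
```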


