Redirecting TCP Traffic to a Unix Domain Socket Under Linux

Turns out socat can be used to achieve this:

socat TCP-LISTEN:1234,reuseaddr,fork UNIX-CLIENT:/tmp/foo

And with a bit of added security:

socat TCP-LISTEN:1234,bind=127.0.0.1,reuseaddr,fork,su=nobody,range=127.0.0.0/8 UNIX-CLIENT:/tmp/foo

These examples have been tested and work as expected.

Does data passed across a unix domain socket cross the kernel boundary?

Of course Unix domain sockets go through the kernel: data written to one end is copied into kernel socket buffers and copied back out on the other end. The question rests on a misconception, though; you would not see a benefit from introducing another copy step via splice.

How do I determine whether open socket is TCP or unix domain socket?

The first member of the struct sockaddr filled in by getsockname is sa_family; just test it against the symbolic constants (AF_UNIX, AF_INET, and so on). A bug on OSX lets you assume the Unix domain when the returned address structure is zero bytes long; on other platforms and domains, simply check the returned structure.
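A minimal sketch of that check, assuming fd is an already-open socket (the helper name socket_kind is made up for illustration; the zero-length special case covers the OSX quirk mentioned above):

#include <stdio.h>
#include <sys/socket.h>

/* Hypothetical helper: report which domain an already-open socket fd belongs to. */
static const char *socket_kind(int fd)
{
    struct sockaddr_storage ss;
    socklen_t len = sizeof ss;

    if (getsockname(fd, (struct sockaddr *)&ss, &len) < 0)
        return "error";

    /* The OSX quirk: a zero-length result means a Unix domain socket. */
    if (len == 0)
        return "unix";

    switch (((struct sockaddr *)&ss)->sa_family) {
    case AF_UNIX:  return "unix";
    case AF_INET:
    case AF_INET6: return "inet";
    default:       return "other";
    }
}

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);   /* a Unix socket pair to test with */
    printf("%s\n", socket_kind(sv[0]));        /* prints "unix" */
    return 0;
}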

Identify program that connects to a Unix Domain Socket

Yes, this is possible on Linux, but it won't be very portable. It's achieved using what is called "ancillary data" with sendmsg / recvmsg.

  • Use SO_PASSCRED with setsockopt
  • Use SCM_CREDENTIALS and the struct ucred structure

This structure is defined in Linux:

struct ucred {
    pid_t pid;    /* process ID of the sending process */
    uid_t uid;    /* user ID of the sending process */
    gid_t gid;    /* group ID of the sending process */
};

Note that the sender has to fill these into the control data of its msghdr (msg_control), and the kernel will check that they are correct.
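A minimal receive-side sketch on Linux, assuming fd is an already-connected AF_UNIX socket; with SO_PASSCRED enabled the kernel attaches the peer's verified credentials, and most error handling is omitted here:

#define _GNU_SOURCE             /* for struct ucred and SCM_CREDENTIALS */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Receive one message on a connected AF_UNIX socket and print the sender's
 * credentials from the SCM_CREDENTIALS ancillary data. */
void read_peer_creds(int fd)
{
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_PASSCRED, &on, sizeof on);

    char data[256];
    char cbuf[CMSG_SPACE(sizeof(struct ucred))];
    struct iovec iov = { .iov_base = data, .iov_len = sizeof data };
    struct msghdr msg;
    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof cbuf;

    if (recvmsg(fd, &msg, 0) < 0)
        return;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_CREDENTIALS) {
            struct ucred cred;
            memcpy(&cred, CMSG_DATA(c), sizeof cred);
            printf("peer pid=%ld uid=%ld gid=%ld\n",
                   (long)cred.pid, (long)cred.uid, (long)cred.gid);
        }
    }
}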

The main portability hindrance is that this structure differs on other Unixes - for example on FreeBSD it's:

struct cmsgcred {
    pid_t cmcred_pid;                  /* PID of sending process */
    uid_t cmcred_uid;                  /* real UID of sending process */
    uid_t cmcred_euid;                 /* effective UID of sending process */
    gid_t cmcred_gid;                  /* real GID of sending process */
    short cmcred_ngroups;              /* number of groups */
    gid_t cmcred_groups[CMGROUP_MAX];  /* groups */
};

TCP loopback connection vs Unix Domain Socket performance

Yes, local interprocess communication over Unix domain sockets should be faster than communication over loopback TCP connections, because you avoid the TCP/IP protocol overhead (headers, checksums, acknowledgements) on the local path.

Thrift communication between .NET Core and C using linux domain sockets

Neither the Thrift .NET Core library nor the C# one has a UNIX domain socket transport yet. This is partly due to the lack of Unix domain socket support in .NET itself.

You could use netcat or socat to pipe a localhost TCP socket to the domain socket:
Redirecting TCP-traffic to a UNIX domain socket under Linux

Or you could add a Domain socket transport to Thrift and contribute it (which would be great!). You could essentially copy the Thrift TCP socket transport impl and then use the info here to create the domain socket bit (from Mono):
How to connect to a Unix Domain Socket in .NET Core in C#

Virtual TCP connections on Linux

I found the solution. The IP_TRANSPARENT socket option should allow this.
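Roughly, the option is set like any other socket option before bind; the address and port below are placeholders, and actually receiving traffic for a non-local address also requires CAP_NET_ADMIN plus the usual TPROXY/policy-routing setup:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int on = 1;

    /* Linux-specific; setting IP_TRANSPARENT needs CAP_NET_ADMIN (typically root). */
    if (setsockopt(fd, IPPROTO_IP, IP_TRANSPARENT, &on, sizeof on) < 0) {
        perror("setsockopt(IP_TRANSPARENT)");
        return 1;
    }

    /* Bind to an address that is not configured on any local interface. */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);                        /* placeholder port */
    inet_pton(AF_INET, "203.0.113.7", &addr.sin_addr);  /* placeholder address */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }
    listen(fd, 16);
    printf("listening on a non-local address\n");
    return 0;
}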

C++ UNIX Help - simple TCP server socket connection

Your problem is here:

else if(s.st_mode & S_IFREG)
{
    int fd = open(path, O_RDONLY);
    if(fd < 0) { perror("open"); exit(EXIT_FAILURE); }
    read(fd, buffer, strlen(buffer));   /* <-- change strlen(buffer) */
    strcat(buffer, "\n");
    if(write(connSock, buffer, strlen(buffer)) < 0)
    { perror("write"); exit(EXIT_FAILURE); }
    close(fd);
}

strlen(buffer) can be any value here, because buffer is merely declared as 1024 bytes and its contents are indeterminate. If that memory happens to be zero-filled, strlen(buffer) returns 0 since the very first character is already a null byte; read is then asked for zero bytes, so nothing is ever written into the buffer. Use the buffer size, and the byte count read actually returns, instead of strlen.
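A sketch of that fix, pulled out into a hypothetical helper (send_file is a made-up name; connSock, path and the 1024-byte buffer correspond to the variables in the original code):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Read the file into the buffer, capping the read at the buffer size and
 * using the byte count that read() actually returns. */
void send_file(int connSock, const char *path)
{
    char buffer[1024];

    int fd = open(path, O_RDONLY);
    if(fd < 0) { perror("open"); exit(EXIT_FAILURE); }

    ssize_t n = read(fd, buffer, sizeof(buffer) - 2);   /* room for "\n" and '\0' */
    if(n < 0) { perror("read"); exit(EXIT_FAILURE); }
    buffer[n] = '\0';
    strcat(buffer, "\n");

    if(write(connSock, buffer, strlen(buffer)) < 0)
    { perror("write"); exit(EXIT_FAILURE); }
    close(fd);
}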


