What Is the Effect of Setting a Linux Socket to High Priority?

What is the effect of setting a Linux socket to high priority?

Every Linux network interface has a so-called qdisc (queuing discipline) attached to it, and the answer to your questions depends on the qdisc in use. Some queuing disciplines, like pfifo and bfifo, have no concept of priority. If one of those is used, the answer is simple: there will be no prioritization.

However, with a prioritizing qdisc such as pfifo_fast (which is typically the default qdisc on Linux), the socket priority can have an effect.

This image describes what's going on in a pfifo_fast qdisc:

[Image: pfifo_fast queues]

We see that packets are placed in queues depending on their priority. When the time comes for the interface to send the next packet (frame actually, but let's not get into that), it will always choose to send the packet with the highest priority. That means that if multiple packets are waiting, those with the highest priority will be sent first. Note that this requires the interface to be congested - if the interface isn't congested and packets are sent as soon as they arrive from the OS, then there is no queuing and therefore no prioritization.
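
For completeness, here is a minimal sketch of how a socket's priority is usually set on Linux, via the SO_PRIORITY socket option described in socket(7); how pfifo_fast then maps that value to one of its bands is governed by the qdisc's priomap:

    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sockfd = socket(AF_INET, SOCK_STREAM, 0);
        if (sockfd < 0) {
            perror("socket");
            return 1;
        }

        /* Priorities 0-6 can be set by an unprivileged process; values
           above 6 require CAP_NET_ADMIN. pfifo_fast maps the value to
           one of its three bands through its priomap. */
        int prio = 6;
        if (setsockopt(sockfd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
            perror("setsockopt SO_PRIORITY");

        return 0;
    }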

Other qdiscs have different structures and policies. An SFQ qdisc, for example, hashes traffic into many per-flow queues and serves them round-robin, aiming for fairness rather than strict priority.

With that in mind, let's get back to your questions:

  1. Depending on the qdisc, yes, packets from socket_11 may be sent ahead of packets from other sockets. If pfifo_fast is used, and if socket_11 sends enough traffic to saturate the outbound network interface, then the packets from the other sockets might not even be sent at all. This is unlikely in practice, since it's usually hard to saturate a network interface before saturating some other resource, unless it's a wireless interface.

  2. The path that packets take from the machine's network interface to the socket is much faster than the network itself. And, as you recall, for prioritization to have any effect, there has to be congestion. In a typical scenario, packets that reached as far as your network interface have already passed the bottleneck of their journey across the network, so congestion is unlikely.

    You can of course use an ingress qdisc or other mechanisms to artificially create a bottleneck and prioritize incoming traffic. But why would you? That only makes sense if you're building a traffic shaper or a similar network device. Plus, since these qdiscs are a low-level mechanism that operates well below the higher-level sockets (even before bridging or routing), I doubt that the socket's priority could have any effect on it.

  3. Not that I'm aware of, but I'd be happy to learn. This kernel module comes close, but it doesn't seem to be able to show priority flags, just regular socket options.

When will a TCP network packet be fragmented at the application layer?

It will be split when it hits a network device with a lower MTU than the packet's size. Most Ethernet devices use an MTU of 1500, but it can often be smaller: 1492 if the Ethernet frames are carried over PPPoE (DSL), because of the extra encapsulation overhead, and even lower if another layer is added, as with Windows Internet Connection Sharing. Dial-up links commonly use an MTU of 576!
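
If you want to see which MTU actually applies toward a particular peer, Linux lets you read the kernel's current path-MTU estimate for a connected socket with the IP_MTU socket option. A minimal, Linux-specific sketch (8.8.8.8:53 is just an example peer; connect() on a UDP socket only records the address locally and sends nothing):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sockfd = socket(AF_INET, SOCK_DGRAM, 0);
        if (sockfd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(53);
        inet_pton(AF_INET, "8.8.8.8", &peer.sin_addr);  /* example peer address */

        /* IP_MTU is only valid on a connected socket. */
        if (connect(sockfd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            perror("connect");
            return 1;
        }

        int mtu = 0;
        socklen_t len = sizeof(mtu);
        if (getsockopt(sockfd, IPPROTO_IP, IP_MTU, &mtu, &len) < 0)
            perror("getsockopt IP_MTU");
        else
            printf("path MTU toward 8.8.8.8: %d\n", mtu);

        return 0;
    }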

In general, though, you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface of any TCP stack is concerned, it is a stream protocol and has no requirement to provide a 1:1 relationship with the physical packets sent or received (for example, most stacks will hold data until a certain period of time has expired, or until there is enough data to fill an IP packet for the given MTU).

As an example, if you send two "packets" (call your send function twice), the receiving program might only receive one "packet" (the receiving TCP stack might combine them). If you are implementing a message-type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism) so that the receiving side can split the TCP stream back into individual messages, whether a message arrives in two parts or several messages arrive as one chunk; a sketch of that approach follows below.
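
A minimal sketch of the header approach, assuming a 4-byte big-endian length prefix (the helper names are illustrative, not part of any standard API):

    #include <arpa/inet.h>    /* htonl / ntohl */
    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Read exactly len bytes, looping because recv() may return short reads. */
    static int recv_all(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0)
                return -1;          /* error or peer closed the connection */
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Send one message as a 4-byte big-endian length prefix plus payload.
       A robust version would loop here too, since send() may return a
       short count (e.g. when interrupted by a signal). */
    static int send_message(int fd, const void *msg, uint32_t len)
    {
        uint32_t netlen = htonl(len);
        if (send(fd, &netlen, sizeof(netlen), 0) != (ssize_t)sizeof(netlen))
            return -1;
        if (send(fd, msg, len, 0) != (ssize_t)len)
            return -1;
        return 0;
    }

    /* Receive one message into buf (at most bufsize bytes); returns its length. */
    static ssize_t recv_message(int fd, void *buf, size_t bufsize)
    {
        uint32_t netlen;
        if (recv_all(fd, &netlen, sizeof(netlen)) < 0)
            return -1;
        uint32_t len = ntohl(netlen);
        if (len > bufsize)
            return -1;              /* message too large for the caller's buffer */
        if (recv_all(fd, buf, len) < 0)
            return -1;
        return (ssize_t)len;
    }

The receiver then calls recv_message() in a loop and gets back exactly one application message per call, regardless of how the bytes were split into segments on the wire.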

How to set socket timeout in C when making multiple connections?

You can use the SO_RCVTIMEO and SO_SNDTIMEO socket options to set timeouts for any socket operations, like so:

    struct timeval timeout;
    timeout.tv_sec = 10;
    timeout.tv_usec = 0;

    if (setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &timeout, sizeof timeout) < 0)
        perror("setsockopt failed");

    if (setsockopt(sockfd, SOL_SOCKET, SO_SNDTIMEO, &timeout, sizeof timeout) < 0)
        perror("setsockopt failed");

Edit: from the setsockopt man page:

SO_SNDTIMEO is an option to set a timeout value for output operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for output operations to complete. If a send operation has blocked for this much time, it returns with a partial count or with the error EWOULDBLOCK if no data were sent. In the current implementation, this timer is restarted each time additional data are delivered to the protocol, implying that the limit applies to output portions ranging in size from the low-water mark to the high-water mark for output.

SO_RCVTIMEO is an option to set a timeout value for input operations. It accepts a struct timeval parameter with the number of seconds and microseconds used to limit waits for input operations to complete. In the current implementation, this timer is restarted each time additional data are received by the protocol, and thus the limit is in effect an inactivity timer. If a receive operation has been blocked for this much time without receiving additional data, it returns with a short count or with the error EWOULDBLOCK if no data were received. The struct timeval parameter must represent a positive time interval; otherwise, setsockopt() returns with the error EDOM.
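
When such a timeout expires, the blocked call returns -1 with errno set to EWOULDBLOCK (the same value as EAGAIN on Linux), so that case has to be distinguished from a real error. A small sketch of the check, assuming the timeout has already been set on sockfd as shown above:

    #include <errno.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Illustrative helper: recv() on a socket that has SO_RCVTIMEO set. */
    static ssize_t recv_with_timeout_check(int sockfd, void *buf, size_t len)
    {
        ssize_t n = recv(sockfd, buf, len, 0);
        if (n < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                fprintf(stderr, "recv timed out\n");  /* timeout, not a fatal error */
            else
                perror("recv");                       /* some other failure */
        }
        return n;
    }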

Socket accept - Too many open files

There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.

You can check the following:

cat /proc/sys/fs/file-max

That will give you the system-wide limit on open file descriptors.

On the shell level, this will tell you your personal limit:

ulimit -n

This can be changed in /etc/security/limits.conf - it's the nofile param.

However, if you're closing your sockets correctly, you shouldn't receive this error unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately, so I would verify that they are being handled properly.
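
If the application legitimately needs more descriptors, the per-process limit can also be inspected, and raised up to the hard limit, from inside the program with getrlimit()/setrlimit(); a minimal sketch:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
            perror("getrlimit");
            return 1;
        }
        printf("open-file limit: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

        /* Raise the soft limit to the hard limit; raising the hard limit
           itself requires privileges or a change in limits.conf. */
        rl.rlim_cur = rl.rlim_max;
        if (setrlimit(RLIMIT_NOFILE, &rl) < 0)
            perror("setrlimit");

        return 0;
    }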


