I am working on a project that involves developing an application using Java sockets. While reading up on the fundamentals and on the newly emerging IPv6 paradigm, I was prompted to ask the question below: what are the benefits of removing fragmentation from IPv6?

It is a common misunderstanding that there is no IPv6 fragmentation because the IPv6 header doesn't have the fragment-offset field that IPv4 does; however, that is not exactly accurate.
IPv6 doesn't allow routers to fragment packets; however, end-nodes may insert an IPv6 fragmentation header. As the RFC notes, one of the problems with fragmentation is that it tends to create security holes.
During the late 1990s there were several well-known attacks on Windows 95 that exploited overlapping IPv4 fragments; furthermore, in-line fragmentation of packets is risky to burn into internet router silicon due to the long list of issues that must be handled. One of the biggest issues is that overlapping fragments buffered in a router awaiting reassembly could potentially cause a security vulnerability on that device if they are mishandled.
The end result is that most router implementations push packets requiring fragmentation to software, and this doesn't scale at high speeds.
The other issue is that if you reassemble fragments, you must buffer them for a period of time until the rest are received. It is possible for someone to exploit this and send very large numbers of unfinished IP fragments, forcing the device in question to spend considerable resources waiting for an opportunity to reassemble them.
Intelligent implementations limit the number of outstanding fragments to prevent a denial of service from this; however, limiting outstanding fragments can also legitimately reduce the number of valid fragments that can be reassembled. In short, there are just too many hairy issues to allow a router to handle fragmentation. Commonly used firewalls use the algorithm specified in [RFC 1858] to weed out malicious packets that try to overwrite parts of the transport-layer header in order to bypass inbound connection checks; a minimal sketch of that check appears below.
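As a rough illustration of the kind of check [RFC 1858] describes, the sketch below drops initial fragments that are too small to carry the full transport header and drops fragments whose offset would let them overwrite it. The class name, method name and the 16-octet minimum are invented for the example; this is not any firewall's actual code.

    // Sketch of RFC 1858-style IPv4 fragment checks (hypothetical names).
    // Fragment offsets are in 8-octet units, exactly as carried in the header.
    final class FragmentFilter {
        // Minimum transport-header octets the first fragment must carry before
        // port/flag rules can be applied; example value only.
        private static final int MIN_TRANSPORT_OCTETS = 16;

        /** Returns true if the fragment should be dropped. */
        static boolean shouldDrop(int fragmentOffset, int transportOctetsPresent) {
            // "Tiny fragment" rule: an initial fragment too small to hold the
            // transport header cannot be checked, so it is rejected.
            if (fragmentOffset == 0 && transportOctetsPresent < MIN_TRANSPORT_OCTETS) {
                return true;
            }
            // "Overlapping fragment" rule: offset 1 (8 octets) lands inside the
            // transport header of the first fragment and could overwrite ports
            // or TCP flags, so it is rejected as well.
            if (fragmentOffset == 1) {
                return true;
            }
            return false;
        }

        public static void main(String[] args) {
            System.out.println(shouldDrop(0, 8));    // true: tiny first fragment
            System.out.println(shouldDrop(1, 512));  // true: could overwrite the header
            System.out.println(shouldDrop(64, 512)); // false: ordinary trailing fragment
        }
    }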
While this works well for IPv4 fragments, it will not work for IPv6 fragments, because the fragmentable part of the IPv6 packet can contain extension headers before the TCP header, making the check less effective. See the Teardrop attack article on Wikipedia for an example of the overlapping-fragment exploits mentioned above.

I don't have the "official" answer for you, but just based on reading how IPv6 handles datagrams that are too large, my guess would be that the benefit is reducing the load on routers. Fragmentation and reassembly incur overhead at the router.
IPv6 moves this burden to the end nodes and requires that they perform path MTU discovery to determine the maximum datagram size they can send. It stands to reason that the end nodes are better suited for the task because they have less data to process.
Effectively, the routers have enough on their plates; it makes sense to force the end nodes to deal with it and allow the routers to simply drop anything that exceeds their MTU threshold. That processor power can be dedicated to routing traffic.

IPv4 has a guaranteed minimum MTU of 576 bytes, while IPv6 raises the minimum to 1,280 bytes (with 1,500 bytes recommended); the difference is basically performance.
As most end-user LAN segments are 1,500 bytes, this reduces the network infrastructure overhead of storing fragmentation state on behalf of what are effectively legacy networks that require smaller sizes.
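Since the question mentions Java sockets: the local link MTU can be inspected from Java with java.net.NetworkInterface.getMTU(). Note that this reports only the interface MTU, not the path MTU that IPv6 path MTU discovery establishes. A small sketch:

    import java.net.NetworkInterface;
    import java.util.Collections;

    // Prints the MTU of each local interface that is up. On an Ethernet LAN this
    // is typically 1500; IPv6 requires every link to carry at least 1280.
    public class ShowMtu {
        public static void main(String[] args) throws Exception {
            for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                if (nif.isUp()) {
                    System.out.printf("%-10s MTU=%d%n", nif.getName(), nif.getMTU());
                }
            }
        }
    }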
For UDP, the IPv4 standards say little about how fragmented packets are reconstructed, which means every platform can handle it differently. IPv6 asserts that fragmentation and reassembly always occur in the IP stack, and fragments are not presented to applications.
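If an application does not want to depend on IP-layer fragmentation at all, it can keep its UDP payloads small enough to fit within the IPv6 minimum MTU. A minimal Java sketch, assuming the 1,280-byte IPv6 minimum, a 40-byte IPv6 header and an 8-byte UDP header; the loopback address and port 9999 are just placeholders:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Sends a UDP datagram sized to fit within the IPv6 minimum MTU, so the
    // IP stack never needs to insert a fragmentation header for it.
    public class SmallDatagram {
        // 1280 (IPv6 minimum MTU) - 40 (IPv6 header) - 8 (UDP header)
        static final int MAX_UNFRAGMENTED_PAYLOAD = 1280 - 40 - 8;

        public static void main(String[] args) throws Exception {
            byte[] payload = new byte[MAX_UNFRAGMENTED_PAYLOAD];
            try (DatagramSocket socket = new DatagramSocket()) {
                DatagramPacket packet = new DatagramPacket(
                        payload, payload.length,
                        InetAddress.getByName("::1"), 9999); // example destination
                socket.send(packet);
            }
        }
    }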
The choice of variable-sized packets allows applications to refine their behaviour. Jitter- and delay-sensitive applications, such as digitised voice, may prefer to use a stream of smaller packets to minimise jitter, while reliable bulk data transfer may choose a larger packet size to increase carriage efficiency. The nature of the medium may also have a bearing on this choice. If there is a high bit error rate (BER) probability, then reducing the packet size minimises the impact of sporadic errors within the data stream, which may increase throughput.
In designing a network protocol that is intended to operate over a wide variety of substrate carriage networks, the designers of IP could not rely on a single packet size for all transmissions, so the IPv4 header carries a Total Length field. This field is a 16-bit octet count, allowing an IP packet to be anywhere from the minimum size of 20 octets (an IP header without any payload) to a maximum of 65,535 octets. So IP itself supports a variable-size packet format. But which packet size should an implementation use?
But there is a complication here: different network technologies impose different maximum packet sizes. For example, consider a host connected to an FDDI network, which is in turn connected to an Ethernet network. The FDDI-connected host may elect to send a 4,000-octet packet, which will fit into an FDDI network, but the packet switch that is attempting to pass the packet into the Ethernet network, with its 1,500-octet limit, will be unable to do so because the packet is too large.
The solution adopted by IPv4 was the use of forward fragmentation. The basic approach is that any IP router that is unable to forward an IP packet into the next network because the packet is too large for this network may split the packet into a set of smaller IP fragments and forward each of these fragments.
The fragments continue along the network path as autonomous packets, and the addressed destination host is responsible for reassembling these fragments back into the original IP packet. The behaviour is managed by a 32-bit field in the IPv4 header, which is subdivided into three sub-fields (see Figure 1). The first sub-field is a 16-bit packet identifier, which allows fragments that share a common packet identifier value to be recognised as fragments of the same original packet.
The second sub-field is a 3-bit vector of flags. The first bit is unused. The second bit is the Don't Fragment (DF) flag: if this flag is set, the packet cannot be fragmented and must be discarded when it cannot be forwarded. The third bit is the More Fragments (MF) flag and is set on all fragments bar the final fragment. The third sub-field is a 13-bit fragment offset, which records where the fragment's payload sits within the original packet, counted in units of 8 octets.
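To make the layout concrete, here is a small Java sketch (the class and field names are just for illustration) that pulls the Total Length and the three fragmentation sub-fields out of a raw IPv4 header, using the standard byte offsets:

    // Decodes the length and fragmentation-related fields of a raw IPv4 header.
    final class Ipv4FragmentFields {
        final int totalLength;        // 16-bit octet count (bytes 2-3)
        final int identification;     // 16-bit packet identifier (bytes 4-5)
        final boolean dontFragment;   // DF flag
        final boolean moreFragments;  // MF flag
        final int fragmentOffset;     // 13-bit offset, in units of 8 octets

        Ipv4FragmentFields(byte[] header) {
            totalLength    = ((header[2] & 0xFF) << 8) | (header[3] & 0xFF);
            identification = ((header[4] & 0xFF) << 8) | (header[5] & 0xFF);
            int flagsAndOffset = ((header[6] & 0xFF) << 8) | (header[7] & 0xFF);
            dontFragment   = (flagsAndOffset & 0x4000) != 0; // second flag bit
            moreFragments  = (flagsAndOffset & 0x2000) != 0; // third flag bit
            fragmentOffset = flagsAndOffset & 0x1FFF;        // remaining 13 bits
        }
    }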
For example, a router attempting to pass a 1,300-octet IP packet into a network whose maximum packet size is 532 octets would need to split the IP packet into three parts. The first packet would have a fragmentation offset of 0 and the More Fragments bit set. The total length would be 532 octets: a 20-octet IP header and 512 octets of IP payload.
The second packet would have a fragmentation offset value of 64 (that is, 512 octets into the original payload), the More Fragments bit set, and again a total length of 532 octets carrying a 512-octet IP payload.
The third packet would have a fragmentation offset value of 128, the More Fragments bit clear, and a total length of 276 octets carrying the remaining 256 octets of IP payload (see Figure 2). One advantage of this approach is that fragmentation is invisible to the end hosts: the sending host is unaware that packet fragmentation is occurring, and all the IP fragment packets continue to head towards the original destination, where they are reassembled. Another advantage is that while the router performing the fragmentation has to expend resources to generate the packet fragments, the ensuing routers on the path to the destination incur no additional processing overhead, assuming that they do not need to further fragment these IP fragments.
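The arithmetic behind that example can be sketched in a few lines of Java; the 1,300-octet packet and 532-octet limit are the example figures used above, and the offsets are printed in the 8-octet units carried in the header:

    // Computes IPv4 fragment sizes and offsets for the worked example:
    // a 1,300-octet packet forwarded into a network with a 532-octet limit.
    public class FragmentArithmetic {
        public static void main(String[] args) {
            int headerLen = 20;          // basic IPv4 header
            int totalLength = 1300;      // original packet size
            int mtu = 532;               // next-hop maximum packet size
            int payloadRemaining = totalLength - headerLen;

            // Each fragment's payload must be a multiple of 8 octets (except the last).
            int maxPayloadPerFragment = ((mtu - headerLen) / 8) * 8; // 512 here

            int offsetUnits = 0;
            while (payloadRemaining > 0) {
                int payload = Math.min(maxPayloadPerFragment, payloadRemaining);
                boolean moreFragments = payloadRemaining > payload;
                System.out.printf("offset=%d (=%d octets) payload=%d total=%d MF=%b%n",
                        offsetUnits, offsetUnits * 8, payload, payload + headerLen, moreFragments);
                offsetUnits += payload / 8;
                payloadRemaining -= payload;
            }
        }
    }

Running this prints the three fragments described above: offsets 0, 64 and 128, with totals of 532, 532 and 276 octets.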
Fragments can be delivered in any order, so the fragments may be passed along parallel paths to the destination. A packet whose Don't Fragment bit is set cannot be treated this way: the router that is attempting to fragment such a packet is forced to discard it instead. This design allowed transport protocols to operate without consideration of the exact nature of the underlying transmission networks, and avoided additional protocol overhead in negotiating an optimal packet size for each transaction.
Large UDP packets could be transmitted and fragmented on the fly as required, without requiring any form of packet size discovery. This approach allowed IP to be used on a wide variety of substrate networks without extensive tailoring. TCP, on the other hand, has always attempted to avoid IP fragmentation, because fragmentation is inefficient under conditions of packet loss in a TCP environment.
Lost fragments can only be repaired by resending the entire packet, including all those fragments that were successfully transmitted in the first place. TCP performs data repair more efficiently if it limits its packet size to one that does not entail packet fragmentation. This form of fragmentation also posed vulnerabilities for hosts.
For example, an attacker could send a stream of fragments with close-to-maximal fragment offset values and random packet identifier values. If the receiving host believed that the fragments represented genuine incoming packets, then a credulous implementation might allocate a reassembly buffer for each received fragment, which opens the door to a memory buffer starvation attack. It is also possible, either through malicious attack or through poor network operation, that fragments may overlap or overrun, so the task of reassembly requires care and attention in the implementation.
Lost fragments represent a slightly more involved problem than lost packets. The receiver starts a packet reassembly timer upon the receipt of the first fragment and will continue to hold the reassembly state for the reassembly time; if the timer expires before all fragments have arrived, the partial state is discarded and the entire packet is lost.
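A defensive receiver therefore bounds both the number of pending reassemblies and how long each one is held. The following Java sketch illustrates the idea only; the entry cap, the 30-second timer and the class names are illustrative choices, not taken from any particular stack:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Holds partially reassembled packets, keyed by (source, destination,
    // protocol, identifier), with a cap on entries and a per-entry deadline.
    // Both limits exist to blunt the buffer-starvation attack described above.
    final class ReassemblyTable<K, V> {
        private static final int MAX_PENDING = 1024;        // illustrative cap
        private static final long TIMEOUT_MILLIS = 30_000;  // illustrative 30 s timer

        private static final class Entry<V> {
            final V buffer;
            final long deadline;
            Entry(V buffer, long deadline) { this.buffer = buffer; this.deadline = deadline; }
        }

        private final LinkedHashMap<K, Entry<V>> pending = new LinkedHashMap<>();

        /** Returns false if the table is full and the new reassembly is refused. */
        synchronized boolean startReassembly(K key, V buffer) {
            expireOldEntries();
            if (pending.size() >= MAX_PENDING) {
                return false; // refuse rather than grow without bound
            }
            pending.put(key, new Entry<>(buffer, System.currentTimeMillis() + TIMEOUT_MILLIS));
            return true;
        }

        /** Drops any reassembly whose timer has expired; the partial packet is lost. */
        synchronized void expireOldEntries() {
            long now = System.currentTimeMillis();
            Iterator<Map.Entry<K, Entry<V>>> it = pending.entrySet().iterator();
            while (it.hasNext()) {
                if (it.next().getValue().deadline <= now) {
                    it.remove();
                }
            }
        }
    }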
Because the packet identifier is only 16 bits, no more than 65,535 fragmented packets between a given pair of hosts can be distinguished within a reassembly interval. For higher delay, high capacity network paths, this limit of 65,535 packets in flight can be a potential performance bottleneck [RFC 4963]. Fragmentation also consumes router processing time, forcing the processing of over-sized packets off a highly optimised fast path and into a processor queue.
And then there is the middleware problem. Filters and firewalls perform their function by applying a set of policy rules to the packet stream, but these rules typically require the presence of the transport-layer header, which appears only in the first fragment. How can a firewall handle a fragment? One option is to pass all trailing fragments through without inspection, but this exposes the internal systems to potential attack [RFC 1858].
Another option is to have the firewall rebuild the original packet, apply the filter rules, and then refragment the packet and forward it on if it is accepted by the filter rules. However, by doing this, the firewall exposes itself to the same forms of memory starvation attack described above. NATs that use the transport-level port addresses as part of their binding tables have a similar problem with trailing fragments.
When it came time to think about the design of what was to become IPv6, the forward fragmentation approach was considered a liability. And while it was not possible to completely ditch IP packet fragmentation in IPv6, there was a strong desire to redefine its behaviour. One change was that routers are no longer permitted to fragment packets in flight: a router that cannot forward a packet because it is too large drops it and sends an ICMPv6 Packet Too Big (PTB) message back to the source, which must then fragment the packet itself using a fragmentation extension header. The other change was that the packet identifier size was doubled in IPv6, using a 32-bit packet identifier field. The hope was that these IPv6 changes would fix the problems seen with IPv4 and fragmentation.
This relies on the PTB messages actually reaching the sender; where ICMPv6 is filtered, attempts by the sender to time out and resend the large IPv6 packet will meet with the same fate, so this can lead to a wedged state. The TCP handshake completes, as none of the opening packets are large. However, the first response may be a large packet. If it is silently discarded because of the combination of fragmentation being required and ICMPv6 filtering, then neither the client nor the server is capable of repairing the situation. The connection hangs. There is also the case of tunnelling IP-in-IP.
Because IPv6 fragmentation can only be performed at the source, should the ICMPv6 message be sent to the tunnel ingress point or to the original source? If the tunnel ingress is used, then this assumes that the tunnel egress performs packet reassembly, which can burden the tunnel egress. UDP raises a similar question: it is an unreliable datagram delivery service, so a sender of a UDP packet is not expected to cache the packet and be prepared to resend it. As the original packet was UDP, the sender does not necessarily hold any connection state, so it is not clear how the MTU information reported by a PTB message should be retained, and how and when it should be used.
If the sender adds an entry to its local IPv6 forwarding table, then it is exposing itself to a potential resource starvation problem: a high-volume flow of synthetic PTB messages has the potential to bloat the local IPv6 forwarding table.
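A common mitigation is to bound the per-destination path MTU state and to reject implausible values, so that a flood of synthetic PTB messages cannot grow the table without limit. The Java sketch below is illustrative only; the class name, the entry cap and the LRU eviction policy are assumptions made for the example:

    import java.net.InetAddress;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Caches the path MTU learned from ICMPv6 Packet Too Big messages, with a
    // hard cap on entries so a flood of synthetic PTB messages cannot bloat it.
    final class PathMtuCache {
        private static final int IPV6_MINIMUM_MTU = 1280; // PTB values below this are bogus
        private static final int MAX_ENTRIES = 4096;       // illustrative cap

        private final Map<InetAddress, Integer> pathMtu =
                new LinkedHashMap<InetAddress, Integer>(16, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<InetAddress, Integer> eldest) {
                        return size() > MAX_ENTRIES; // evict least-recently-used entry
                    }
                };

        /** Record a Packet Too Big report; ignore values that violate the IPv6 minimum. */
        synchronized void onPacketTooBig(InetAddress destination, int reportedMtu) {
            if (reportedMtu < IPV6_MINIMUM_MTU) {
                return; // reject implausible reports
            }
            pathMtu.merge(destination, reportedMtu, Math::min);
        }

        /** MTU to use towards this destination, defaulting to the local link MTU. */
        synchronized int mtuFor(InetAddress destination, int linkMtu) {
            return pathMtu.getOrDefault(destination, linkMtu);
        }
    }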