Native IPv6 @ Home

Daniel Karrenberg — Sep 02, 2010 05:35 PM
Filed under: ipv6

A short story about ad hoc IPv6/IPv4 measurements.

During the holidays I did some ad-hoc tests on my home broadband connection from xs4all here in NL. As many of you know, they ran a big native IPv6 pilot, and IPv6 is now available as a production service. I left the FritzBox router running while the family was absent for almost 4 weeks. Connected to the router was a prototype of our small active probe. You can read about it here and see a photo of the set-up too.

Let me relate some of these "holiday" observations. Like other holiday activities, these are somewhat lighthearted; I do not claim that they are science or in any way representative. They are just some tests I ran while no one was home.

The Experiment

The router was connected by ADSL2 to the xs4all service. The probe was connected directly to the router and powered from the router's USB port. Every two minutes the probe did 10 ICMP(6) RTT (round-trip time) measurements, vulgo: pings, to k.root-servers.net, and recorded the results in flash memory. To give you an idea of the topology, here are traceroutes showing the paths as they appear today:

IPv4:

traceroute to k.root-servers.net (193.0.14.129), 
    64 hops max, 52 byte packets
 1  10.0.0.8  1.152 ms  0.857 ms  0.881 ms
 2  194.109.5.213  9.540 ms  9.426 ms  9.273 ms
 3  194.109.7.141  9.250 ms  9.125 ms  9.103 ms
 4  194.109.5.6  9.685 ms  10.508 ms  9.290 ms
 5  216.66.84.57  9.374 ms  9.318 ms  17.549 ms
 6  193.239.116.80  9.746 ms  9.570 ms
    195.69.144.240  9.985 ms
 7  193.0.14.129  9.815 ms  10.219 ms  9.282 ms

IPv6:

traceroute6 to k.root-servers.net (2001:7fd::1) 
    from 2001:980:3500:1:3615:9eff:fe0f:510c, 
    64 hops max, 12 byte packets

 1  2001:980:3500:1:224:feff:fe19:b61c  0.766 ms  1.113 ms  0.918 ms
 2  2001:888:0:4601::1  10.804 ms  9.770 ms  9.517 ms
 3  2001:888:0:4603::2  10.337 ms  10.239 ms  8.934 ms
 4  2001:888:2:2::1  15.707 ms  15.457 ms  10.422 ms
 5  2001:7f8:1::a502:5152:1  10.498 ms  10.023 ms  9.863 ms
 6  2001:7fd::1  10.293 ms  10.158 ms  9.991 ms
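The measurement loop itself is simple enough to sketch. The probe firmware is not shown here, so the `ping` invocation, output parsing and log format below are my own assumptions, not the probe's actual implementation:

```python
import re
import subprocess
import time

RTT_RE = re.compile(r"time[=<]([\d.]+)")  # matches "time=9.815 ms" in ping output

def mean_rtt(ping_output):
    """Mean of the RTTs (in ms) found in ping's text output, or None if no replies."""
    rtts = [float(x) for x in RTT_RE.findall(ping_output)]
    return sum(rtts) / len(rtts) if rtts else None

def measure_once(target, count=10, ipv6=False):
    """One batch of pings; returns (unix time, mean RTT in ms, packets lost)."""
    cmd = ["ping6" if ipv6 else "ping", "-c", str(count), target]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    lost = count - len(RTT_RE.findall(out))
    return time.time(), mean_rtt(out), lost

def probe_loop(target="k.root-servers.net", interval=120):
    """One dual-stack batch every two minutes, appended to a local log."""
    while True:
        for ipv6 in (False, True):
            ts, rtt, lost = measure_once(target, ipv6=ipv6)
            with open("rtt.log", "a") as log:  # stand-in for the probe's flash
                log.write(f"{ts:.0f} {'v6' if ipv6 else 'v4'} {rtt} {lost}\n")
        time.sleep(interval)
```

Counting missing replies per batch, as `measure_once` does, is also enough to produce the loss and outage figures discussed below.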

RTTs

Here are simple plots of the observed mean RTTs. These are means of 10 consecutive measurements taken every 2 minutes.

 

IPv4:

[Plot: mean IPv4 RTTs to k.root-servers.net over the measurement period]

No surprises in the IPv4 RTTs. They are very predictable with just a few outliers.
IPv6:

[Plot: mean IPv6 RTTs to k.root-servers.net over the measurement period]

Besides being about 2 ms larger, the IPv6 RTTs show a little more spread and similar outliers.
I do not dare to speculate about what causes the differences, as there are many components involved in the measurement. This may or may not be caused by the network; it may equally well be caused by different handling of IPv6 datagrams in the end systems. Just as an example: I observe an average RTT difference of about 1.15 ms between IPv4 and IPv6 when pinging the router from the prototype probe, and only 0.12 ms when doing the same from my MacBook with OS X 10.6.

Never mind the small structural differences in RTT. For practical purposes the IPv6 service certainly appears to be as good as the IPv4 service. We have come a long way from the 6bone and such.

 

Packet Loss & Outages

I observed very little packet loss: outside of the complete outages, only 181 of 444560 packets were lost, i.e. 0.04%. The losses that did occur come in all combinations: IPv4 only, IPv6 only, and both at the same time. Overall I cannot see a significant difference between IPv6 and IPv4 as far as packet loss is concerned.
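The loss figure is easy to check, and the sample count itself is a useful sanity check on the set-up (assuming, as described above, two address families and 10 pings every 2 minutes, running continuously):

```python
# Observed totals from the run
lost, sent = 181, 444560
loss_pct = 100 * lost / sent
print(f"loss: {loss_pct:.2f}%")          # 0.04%

# Sanity check: 2 address families x 10 pings, one batch every 2 minutes
batches_per_day = 24 * 60 // 2           # 720 batches per day
days = sent / (2 * 10 * batches_per_day)
print(f"covers about {days:.0f} days")   # roughly a month of measurement
```

The total is consistent with roughly a month of continuous dual-stack measurement, i.e. the probe kept running beyond the four weeks the family was away.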

There were only two outages. To my great surprise I found an outage of more than 100 minutes starting at 23:45 UTC on 3 August and that outage was on IPv4 while IPv6 kept working! Checking dnsmon for the time period does not reveal any server problem. So it looks like a genuine IPv4 network problem while IPv6 was just fine. A whole new level of redundancy!

I also found the opposite: a two hour outage of IPv6 while IPv4 kept running. It started at 20:13 UTC on 23 August. Marco Hogewoning tells me this was because they forgot to provision my connection in the production service when the pilot configuration was phased out. No complaints from this pilot user.

 

Conclusions

It is not really fair to generalise from such an ad-hoc experiment. But the small probe prototype has certainly proven its usefulness and its capability to take long baseline measurements, and I was not worried about the electricity bill at all. I am also very happy to see that broadband consumers where I live can obtain a reliable IPv6 service, and that this service sometimes appears to work even when IPv4 does not.

 



        

5 Comments

Russell Heilling says:
Sep 03, 2010 02:38 PM
Are you adjusting the payload size of the IPv4 vs. IPv6 echo requests?

The extra bytes of header could skew the RTT measurements by increasing the serialisation delay of the datagrams...

I doubt that this would account for the full 2ms difference though :)
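Russell's serialisation-delay point can be quantified with a back-of-the-envelope figure. Assuming equal ICMP payloads, the IPv6 base header is 20 bytes larger than the IPv4 header (40 vs. 20 bytes); the 1 Mbit/s upstream rate below is an assumed, not measured, ADSL2 value:

```python
# Extra serialisation delay from the larger IPv6 base header,
# assuming equal ICMP payloads and a 1 Mbit/s ADSL2 upstream
extra_bytes = 40 - 20            # IPv6 header (40 B) vs IPv4 header (20 B)
uplink_bps = 1_000_000           # assumed upstream rate
extra_ms = extra_bytes * 8 / uplink_bps * 1e3
print(f"{extra_ms:.2f} ms extra per upstream packet")  # 0.16 ms
```

Even allowing for the (faster) downstream direction as well, this stays well under 0.2 ms per round trip, so header size alone indeed cannot explain the ~2 ms gap.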
Marco Hogewoning says:
Sep 03, 2010 03:29 PM
The outage where you lost v4 and only had v6 connectivity is a known race condition in the set-up. XS4ALL uses static assignments for both IPv4 and IPv6, where a residential end-user gets one IPv4 address and a /48 of IPv6 space. The users connect using PPPoE, and this is where the problem lies.
Noise pulses on the DSL line can cause the modem to lose its connection and "retrain" the DSL line. This of course causes the PPP stack to drop its connection and re-establish as well. However, since the connection dropped, the remote end (our central BRAS router) never gets a terminate, so it holds on to the PPPoE session until it reaches the timeout threshold of LCP echo packets. At the same time the modem comes back online and establishes a new PPPoE session.

This is where the problem starts. The IPv4 address is tied to the PPP session itself, so it is still in use by the old session. On the new session we try to assign the same address; the router detects this as a duplicate and won't allow it. On normal IPv4-only lines this means the modem will not be able to establish a working NCP; it tries a few times, backs off and reconnects, and by that time the old session will be gone and the new session is allowed to take the IP again.
With IPv6 it's slightly different: the addresses aren't bound to the PPP link itself but routed over it (using DHCPv6-PD as signalling). The link itself uses fe80::/10 link-local addresses only.
So upon reconnection the modem tries to set up both IPCP and IPv6CP; the IPCP fails because the address is considered a duplicate. The IPv6CP is perfectly valid, as the link-local address is generated again, bound to a new virtual interface. DHCPv6 is fine with it as well, as long as it's the same box connecting.

In the end the modem/CPE ends up with a working IPv6 connection, and the big issue is that as far as the CPE is concerned this is a perfectly valid situation: the PPP session is open and there is at least one NCP open as well.

In fact this is by design: the PPP standard states that NCPs should be independent of each other. You can't tear down the IPv6CP simply because the IPCP failed. There might be a valid reason for only having v6: maybe you run dual sessions for v4 and v6, or maybe you just don't have v4 available.

Unfortunately, since v6 isn't that widespread, the user finds 'the internet to be broken'. In normal cases they will probably power-cycle the modem and clear the race; as you were on holiday you didn't notice, and this was only resolved when the line dropped again because of a thunderstorm or the neighbour starting his microwave oven :)

We've raised this issue with our vendors to see if we can resolve it, but as said: strictly speaking this is by design and the way it's supposed to work.
Robin Harmsen says:
Sep 04, 2010 11:48 AM
Do I understand correctly that power cycling (or resetting the DSL link, for that matter) is the only way to clear the race with current CPEs?
Marco Hogewoning says:
Sep 04, 2010 09:29 PM
Yes, as you have to trigger AAA again to get your address assigned.
dfk says:
Sep 13, 2010 02:11 PM
Interesting theory. According to the FritzBox logs, the DSL connection never went down in that time period; in fact it has not re-synchronised since July 13th. So my guess remains that there was no IPv4 connectivity between my home and k.root-servers.net for about 100 minutes starting August 3rd at 23:45 UTC, and that it was not my DSL connection. Unfortunately I did not run traceroutes in case of outages in this simple holiday experiment.


