This blog was originally published by IBC.
We are missing a critical tool for making the right investment and management decisions in networks, writes ASSIA Chief Strategy Officer Tuncay Cil.
For better or worse, broadband speed has been the de facto Quality of Experience (QoE) metric for broadband connectivity for the last decade. Network operators, data centers, network equipment and software vendors, and content and application providers all rely on some interpretation of broadband connectivity QoE metrics to make key investment, operations, and marketing decisions.
Moreover, national regulatory agencies have become more active in recent years in monitoring consumer QoE/speeds to drive national broadband initiatives and regulate carriers. BEREC, ARCEP, the FCC, and Ofcom all have initiatives under way, such as sourcing a reference system to build a standard speed test solution.
We still have a long way to go to achieve this objective, and the current tools are clearly not sufficient for the goals of the ecosystem.
There is no standard for performance benchmarking, and most tools are significantly misleading.
Speed test measurement goals, methodologies, and results differ significantly. Content providers, service providers, and regulators all see substantially different results depending on the measurement methodology, even for the same service in the same location with similarly large sample sets. For example, Akamai, Netflix, YouTube, the FCC, and many speed test vendors produce aggregate broadband speed analytics for a particular nation or location, and the results can vary by more than 200% at times.
Some of the variation can naturally be attributed to the motivations and goals of the measurement:
- Using average speeds to benchmark “speed as experienced by the end user” (e.g., do I get the speed that was advertised to me? How does my connection speed compare to others?).
- End-to-end vs. component-by-component testing (e.g., where is the bottleneck in delivery of the service: content servers, internet backbone, last-mile access network, home network, or consumer device?).
- A single connection vs. many (e.g., aggregated by location or another attribute of the connectivity profile).
Beyond different purposes, the testing method inaccuracies are also responsible for wildly varying speed test results:
- Bias from speed tests that end users initiate only at times of trouble.
- Using end-to-end testing as a proxy for home network speed.
- Inaccurate data sampling.
- Single vs. multiple server measurements.
- Single vs. multiple TCP stream measurements.
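To illustrate the last point: a single TCP stream is often limited by per-flow window and loss dynamics, so tests that open several parallel streams typically report higher speeds on the same line. A minimal sketch of the arithmetic involved (the function name and byte counts are illustrative, not from any particular test tool):

```python
def throughput_mbps(byte_counts, seconds):
    """Aggregate throughput in Mbit/s for one or more parallel streams.

    byte_counts: bytes transferred by each TCP stream during the window.
    seconds: length of the measurement window.
    """
    total_bits = sum(byte_counts) * 8
    return total_bits / seconds / 1e6

# Hypothetical 10-second window on a 100 Mbit/s access line.
single_stream = throughput_mbps([55_000_000], 10)      # one flow, window-limited
multi_stream = throughput_mbps([24_000_000] * 4, 10)   # four parallel flows

print(f"single stream: {single_stream:.1f} Mbps")  # 44.0 Mbps
print(f"four streams:  {multi_stream:.1f} Mbps")   # 76.8 Mbps
```

A methodology that reports only the single-stream figure and one that sums parallel streams can thus disagree substantially on the very same connection.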
Improving the situation
Providing full visibility into individual component performance while measuring end-to-end connectivity performance would address most gaps in current broadband speed testing tools. However, the design and deployment of such tooling must take into account regulatory rules, service providers' operational concerns, and the varying capabilities of network equipment vendors.
We believe there are five segments, with associated testing capabilities, that need to be refined and made available to internet ecosystem partners:
- Wi-Fi throughput via device measurement: this test is made from the consumer device to the AP (with or without an agent). The measurement method should allow single-ended Wi-Fi measurement so that no CPE software update is required where such a change to the CPE is not possible.
- Wi-Fi throughput via CPE measurement: this should also use a single-ended Wi-Fi measurement methodology, but through the CPE instead of the consumer device, so consumer devices need no additional software installation to enable testing. The measurement should also run automatically on a configurable schedule without end-user involvement; a software agent in the CPE can initiate this kind of testing to all devices connected to the CPE.
- End-to-end throughput test via the consumer device: provides upstream and downstream measurements as well as latency. The test should be performed to/from a test node placed on or off network, and can be initiated by the consumer device.
- Network throughput test from the CPE: initiated by the CPE agent to/from a test node placed on or off network. End-user traffic should not impact the test results. In addition, the testing software agent in the CPE should collect traffic data on a frequent basis, and can thereby derive maximum and average throughputs over pre-set intervals and for peak/off-peak times.
- Access sync/contracted rate: should be passively collected on access nodes in the service provider backend. Physical and upper-layer transmission impairments can also be detected by an Access Network Data Collector (ANDC).
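The counter-based derivation described for the CPE agent above can be sketched as follows. This is a minimal illustration, assuming the agent samples a cumulative WAN byte counter on a fixed schedule; the function name, sample values, and 60-second interval are all assumptions for the example:

```python
def interval_throughputs_mbps(counter_samples, interval_s):
    """Per-interval throughput (Mbit/s) from cumulative byte-counter samples.

    counter_samples: cumulative byte counts read at fixed intervals.
    interval_s: seconds between consecutive samples.
    """
    return [
        (curr - prev) * 8 / interval_s / 1e6
        for prev, curr in zip(counter_samples, counter_samples[1:])
    ]

# Hypothetical counter readings taken every 60 s by the CPE agent.
samples = [0, 150_000_000, 450_000_000, 460_000_000]
rates = interval_throughputs_mbps(samples, 60)

peak = max(rates)                    # busiest interval
average = sum(rates) / len(rates)    # mean over the whole period
print(f"peak: {peak:.1f} Mbps, average: {average:.2f} Mbps")
```

Keeping the raw per-interval series also lets the backend split the aggregates by peak vs. off-peak hours, as the article suggests, simply by bucketing intervals by time of day.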