hey hamster, what's the oversub ratio these days on ISPs?
surely they're not selling the same backhaul to 100 people and then getting upset when those people use that thing they thought they were paying for
Oversubscription rates have certainly gone up since the days of dialup, no question. If you think about it, though, that's not really surprising. It's much more of a mainstream product now, so while the usage habits of the more tech-savvy types out there probably haven't changed a huge amount, there's now a much larger 'mainstream' audience that isn't nearly as hard on its bandwidth as those power users are.
Realistically, what you need to look at is whether links are congesting, where they're congesting, and when. If there's no congestion going on, then it doesn't really matter how much edge bandwidth that link is theoretically serving, because it's keeping up. Maintaining some specific ratio isn't a conscious strategy though (and never has been), because the number is kinda meaningless as you move through the network.
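That "is it actually congesting?" check can be sketched in a few lines: sample utilisation off the link and compare the busy-period peak against capacity, ignoring the theoretical sum of edge bandwidth behind it. The sample numbers and the 90% threshold below are invented for illustration, not any real operational standard:

```python
# Sketch of a congestion check: compare the busiest utilisation
# sample against link capacity. All figures here are made up.

def peak_utilisation(samples_mbit: list[float], capacity_mbit: float) -> float:
    """Fraction of capacity used at the busiest sample."""
    return max(samples_mbit) / capacity_mbit

samples = [22.0, 31.5, 88.0, 54.0, 40.2]   # e.g. 5-minute counters, Mbit/s
capacity = 100.0                           # FastEthernet backhaul
u = peak_utilisation(samples, capacity)
if u > 0.9:                                # arbitrary illustrative threshold
    print("congesting at peak; add capacity")
else:
    print(f"peak at {u:.0%} of capacity; fine for now")
```

The point being that the decision input is measured traffic on the link, not a count of subscribers behind it.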
As an example: back in the dialup days, a Cisco AS5300 could serve up to 240 v.90 modems, for a total theoretical downstream throughput just short of 16Mbit/sec. They had FastEthernet connections to serve as backhaul, so 100Mbit there. Obviously, that link is only ever going to be massively undersubscribed.

In a larger deployment, you'd aggregate all those links together into a FastEthernet card in some larger router somewhere, and then have a long-range link via ATM, SDH (/SONET for you US types) or similar. You'd likely take those multiple 100Mbit interfaces and bring all the traffic back over an STM-1 @ 155Mbit. Looking at the overall traffic in there, how high it peaks and how much headroom you have left, you might still have room to expand and add more NASs into the mix, or you might have to add another backhaul link if it's congesting.

By the time you get to inter-carrier peering links, all thought of how many end users they're serving is entirely out the window, and you're just looking at raw traffic levels. Not only are the numbers out there changing too quickly, but there's generally going to be more than one path to take, so it's entirely possible that a whole slew of those customers aren't actually using that link anyway, because they're pulling traffic down from some other network, or even internally. Basically, it's way too messy to try to deal with it like that.
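For what it's worth, the arithmetic in that example works out roughly like this. The 64kbit figure is the DS0 channel rate (actual v.90 throughput tops out lower), and the six-NAS aggregation is an invented illustration, not a real deployment:

```python
# Rough oversubscription arithmetic for the dialup example above.
# Assumption: 240 channels at the 64 kbit/s DS0 rate; real v.90
# downstream tops out around 56 kbit/s.

def ratio(edge_kbit: float, backhaul_kbit: float) -> float:
    """Theoretical edge bandwidth divided by backhaul capacity."""
    return edge_kbit / backhaul_kbit

nas_edge = 240 * 64            # 15,360 kbit/s -- "just short of 16Mbit"
nas_backhaul = 100_000         # FastEthernet uplink
print(ratio(nas_edge, nas_backhaul))   # well under 1: massively undersubscribed

# Aggregate, say, six such NASs onto a single STM-1 (~155 Mbit/s):
agg_edge = 6 * nas_edge
stm1 = 155_000
print(ratio(agg_edge, stm1))   # still under 1: headroom left to grow
```

Same exercise at each tier; the ratio only tells you anything useful while the tier is small enough that the edge sum still means something.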
These days, broadband connections are always-on, and generally idle. Faster connections are much more bursty: outside of things like Steam, the days of starting a download and leaving it running at max speed for days at a time are largely behind us, so there's generally a big spike of data and then a lot of silence. If that spike winds up congesting something, it's only going to be for a very short time, and the user likely won't even notice. I'm not trying to suggest congestion isn't a problem or anything, but with that greater inconsistency in second-to-second data usage, it's a different kind of congestion to the good old "useless for hours at a time" typical of an oversubscribed link.

Also worth noting: while there's absolutely still a demographic that does make use of that "saturation downloading", pulling vast quantities of data from digital distribution platforms, from BitTorrent, etc., they're a tiny minority of all the connections out there now. Usage patterns have changed since those days, now that internet access has become much less of a niche product.
Come to think of it, 'contention ratios' were much more commonly used to describe the ratio of end users to available dial-in lines (in the hope of avoiding busy signals) than raw bandwidth figures. Not actually being able to get connected was a far more common grievance than not having enough bandwidth once you were on.
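The maths behind that users-vs-lines dimensioning is the classic Erlang B blocking formula, which gives the busy-signal probability for a given offered load and line count. A quick sketch, with entirely made-up subscriber numbers:

```python
# Erlang B blocking probability: the chance a caller hits a busy
# signal, given offered traffic (in erlangs) and available lines.
# Uses the standard recursive form to avoid huge factorials.

def erlang_b(offered_erlangs: float, lines: int) -> float:
    b = 1.0
    for m in range(1, lines + 1):
        b = (offered_erlangs * b) / (m + offered_erlangs * b)
    return b

# Made-up illustration: 1000 subscribers, each online ~3% of the
# busy hour, offer ~30 erlangs of traffic.
offered = 1000 * 0.03
for lines in (30, 36, 42):
    print(lines, round(erlang_b(offered, lines), 4))
```

Each extra line buys a smaller reduction in blocking, which is why dial ISPs quoted contention ratios like 10:1 rather than provisioning a line per customer.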
Point is, no-one's ever really sitting down counting the theoretical bandwidth at the edge, distribution and core layers, except at a very high level for ballpark design. That sort of analysis might be useful in reporting, but it's of little consequence to the actual operation of the network.