
Arbor’s new Net traffic report: “This is just the beginning…”

See this comprehensive new Web traffic study from Arbor Networks — “the largest study of global Internet traffic since the start of the commercial Internet.” 

Conclusion

  • Internet is at an inflection point
      • Transition from focus on connectivity to content
      • Old global Internet economic models are evolving
      • New entrants are reshaping definition / value of connectivity
  • New technologies are reshaping definition of network
      • “Web” / Desktop Applications, Cloud computing, CDN
  • Changes mean significant new commercial, security and engineering challenges
  • This is just the beginning…

These conclusions and the data Arbor tracked and reported largely followed our findings, projections, and predictions from two years ago:

And an update from this spring:

Also see our analysis from last winter highlighting the evolution of content delivery networks — what my colleague George Gilder dubbed “storewidth” back in 1999 — and which Arbor now says is the fastest growing source/transmitter of Net traffic.

Preparing to Pounce: D.C. angles for another industry

As you’ve no doubt heard, Washington D.C. is angling for a takeover of the . . . U.S. telecom industry?!

That’s right: broadband, routers, switches, data centers, software apps, Web video, mobile phones, the Internet. As if its agenda weren’t full enough, the government is preparing a dramatic centralization of authority over our healthiest, most dynamic, high-growth industry.

Two weeks ago, FCC chairman Julius Genachowski proposed new “net neutrality” regulations, which he will detail on October 22. Then on Friday, Yochai Benkler of Harvard’s Berkman Center published an FCC-commissioned report on international broadband comparisons. The voluminous survey serves up data from around the world on broadband penetration rates, speeds, and prices. But the real purpose of the report is to make a single point: foreign “open access” broadband regulation, good; American broadband competition, bad. These two tracks — “net neutrality” and “open access,” combined with a review of the U.S. wireless industry and other investigations — lead straight to an unprecedented government intrusion into America’s vibrant Internet industry.

Benkler and his team of investigators can be commended for the effort that went into what was no doubt a substantial undertaking. The report, however,

  • misses all kinds of important distinctions among national broadband markets, histories, and evolutions;
  • uses lots of suspect data;
  • underplays caveats and ignores some important statistical problems;
  • focuses too much on some metrics, not enough on others;
  • completely bungles America’s own broadband policy history; and
  • draws broad and overly certain policy conclusions about a still-young, dynamic, complex Internet ecosystem.

The gaping, jaw-dropping irony of the report was its failure even to mention the chief outcome of America’s previous open-access regime: the telecom/tech crash of 2000-02. We tried this before. And it didn’t work! The Great Telecom Crash of 2000-02 was to that industry what the Great Panic of 2008 was to the financial industry: a deeply painful and historic plunge. In the case of the Great Telecom Crash, U.S. tech and telecom companies lost some $3 trillion in market value and one million jobs. The harsh open-access policies (mandated network sharing, price controls) that Benkler lauds in his new report were a main culprit. But in Benkler’s 231-page report on open access policies, there is no mention of the Great Crash.

Growth Clusters

Ed Glaeser, William Kerr, and Giacomo Ponzetto have a new paper on “Clusters of Entrepreneurship.”

Employment growth is strongly predicted by smaller average establishment size, both across cities and across industries within cities, but there is little consensus on why this relationship exists. Traditional economic explanations emphasize factors that reduce entry costs or raise entrepreneurial returns, thereby increasing net returns and attracting entrepreneurs. A second class of theories hypothesizes that some places are endowed with a greater supply of entrepreneurship. Evidence on sales per worker does not support the higher returns for entrepreneurship rationale. Our evidence suggests that entrepreneurship is higher when fixed costs are lower and when there are more entrepreneurial people.

Did Cisco just blow $2.9 billion?

Cisco better hope wireless “net neutrality” does not happen. It just bought a company called Starent that helps wireless carriers manage the mobile exaflood.

See this partial description of Starent’s top product:

Intelligence at Work

Key to creating and delivering differentiated services—and meeting subscriber demand—is the ST40’s ability to recognize different traffic flows, which allows it to shape and manage bandwidth, while interacting with applications to a very fine degree. The system does this through its session intelligence that utilizes deep packet inspection (DPI) technology, service steering, and intelligent traffic control to dynamically monitor and control sessions on a per-subscriber/per-flow basis.

The ST40’s interaction with and understanding of key elements within the multimedia call—devices, applications, transport mechanisms, policies—assists in the service creation process by:

  • Providing a greater degree of information granularity and flexibility for billing, network planning, and usage trend analysis
  • Sharing information with external application servers that perform value-added processing
  • Exploiting user-specific attributes to launch unique applications on a per-subscriber basis
  • Extending mobility management information to non-mobility-aware applications
  • Enabling policy, charging, and Quality of Service (QoS) features
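To make the description concrete, here is a minimal sketch of the kind of per-subscriber, per-flow traffic policing Starent is describing. The service classes, rates, and port-based classifier are illustrative assumptions, not Starent’s actual implementation (real DPI systems classify flows by payload signatures, not a port table):

```python
import time
from collections import defaultdict

# Illustrative service classes and rates; hypothetical values, not the ST40's.
SERVICE_RATES_BPS = {"video": 2_000_000, "voip": 128_000, "default": 512_000}
PORT_TO_SERVICE = {554: "video", 5060: "voip"}   # toy stand-in for DPI signatures

class TokenBucket:
    """Simple token-bucket rate limiter: tokens accrue at rate_bps."""
    def __init__(self, rate_bps, burst_bytes=65536):
        self.rate = rate_bps / 8.0           # bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.stamp = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True                       # forward the packet
        return False                          # drop or queue it

buckets = defaultdict(dict)                   # subscriber -> flow -> bucket

def police(subscriber, dst_port, nbytes):
    """Classify a packet, then apply that flow's per-subscriber rate limit."""
    service = PORT_TO_SERVICE.get(dst_port, "default")
    flow = (dst_port, service)
    if flow not in buckets[subscriber]:
        buckets[subscriber][flow] = TokenBucket(SERVICE_RATES_BPS[service])
    return buckets[subscriber][flow].allow(nbytes)
```

The point of the sketch is simply that “per-subscriber/per-flow” control requires the network to inspect, classify, and differentially treat packets, exactly the behaviors at issue below.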

Traffic management. QoS. Deep Packet Inspection. Per-service billing. Special features and products. Many of these technologies and features could be outlawed or curtailed under net neutrality. And the whole booming wireless arena could suffer.

Quote of the Day

“The Americans have not had to deal with a true economic rival since the British more than half a century ago. America today is as unaccustomed to global economic competition as the British were at their apex. The U.S. often seems lumbering and ill-suited to the demands of economic rivalry.

“The only way to avoid Britain’s fate and meet the challenge of China is to reinvigorate economic life. This is a multiyear endeavor that must be done primarily through innovation, not legislation. America needs to retool its domestic economy to build on the global success of many U.S. companies. It must focus on inventing new products and generating new ideas, rather than defending the rusty industries of yesterday. Fights over health care and climate change are the cultural equivalent of fiddling while Rome burns.

“China thrives because it is hungry, dynamic, scared of failure and convinced that it should be a leading force in the world. That is why America thrived a century ago. Today, such hunger and dynamism seem less evident in American life than petulance that the world is not cooperating.

“The U.S. is in danger of assuming that because it has been a dominant nation on the world stage, it must continue to be so. That is a recipe for becoming Britain.”

— Zachary Karabell, The Wall Street Journal, October 13, 2009

Exa-cation: training the next generation for the exaflood

Google, IBM, and other big technology companies don’t think we’re ready for the exaflood.

It is a rare criticism of elite American university students that they do not think big enough. But that is exactly the complaint from some of the largest technology companies and the federal government.

At the heart of this criticism is data. Researchers and workers in fields as diverse as bio-technology, astronomy and computer science will soon find themselves overwhelmed with information. Better telescopes and genome sequencers are as much to blame for this data glut as are faster computers and bigger hard drives. . . .

Two years ago, I.B.M. and Google set out to change the mindset at universities by giving students broad access to some of the largest computers on the planet. The companies then outfitted the computers with software that Internet companies use to tackle their toughest data analysis jobs.

“It sounds like science fiction, but soon enough, you’ll hand a machine a strand of hair, and a DNA sequence will come out the other side,” said Jimmy Lin, an associate professor at the University of Maryland, during a technology conference held here last week.

The big question is whether the person on the other side of that machine will have the wherewithal to do something interesting with an almost limitless supply of genetic information.

At the moment, companies like I.B.M. and Google have their doubts.

For the most part, university students have used rather modest computing systems to support their studies. They are learning to collect and manipulate information on personal computers or what are known as clusters, where computer servers are cabled together to form a larger computer. But even these machines fail to churn through enough data to really challenge and train a young mind meant to ponder the mega-scale problems of tomorrow.

Correction: Exa-scale.

“If they imprint on these small systems, that becomes their frame of reference and what they’re always thinking about,” said Jim Spohrer, a director at I.B.M.’s Almaden Research Center.
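The data-analysis software referred to here was widely reported to be Hadoop, an open-source implementation of the MapReduce model. As a toy, single-machine sketch of that paradigm (a real deployment runs the map and reduce phases in parallel across a cluster):

```python
from collections import defaultdict

# Toy MapReduce-style word count. Frameworks like Hadoop distribute these
# two phases across many machines; the programming model is the same.

def map_phase(documents):
    """Emit (word, 1) pairs from every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Sum the counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the exaflood rises", "the data rises"]
print(reduce_phase(map_phase(docs)))
# -> {'the': 2, 'exaflood': 1, 'rises': 2, 'data': 1}
```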

GigaTube

YouTube says it now serves up well over a billion videos a day — far more than previously thought.

An Exa-Prize for “Masters of Light”

Holy Swedish silica/on. It’s an exa-prize!

Calling them “Masters of Light,” the Royal Swedish Academy awarded the 2009 Nobel Prize in Physics to Charles Kao, for discoveries central to the development of optical fiber, and to Willard Boyle and George Smith of Bell Labs, for the invention of the charge-coupled device (CCD) digital imager.

Perhaps more than any other two discoveries, these technologies are responsible for our current era of dramatically expanding cultural content and commercial opportunities across the Internet. I call this torrent of largely visual data gushing around the Web the “exaflood.” Exa means 10¹⁸, and today monthly Internet traffic in the U.S. tops two exabytes. For all of 2009, global Internet traffic should reach 100 exabytes, equal to the contents of around 5,000,000 Libraries of Congress. By 2015, the U.S. might transmit 1,000 exabytes, the equivalent of two Libraries of Congress every second for the entire year.
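A quick back-of-envelope check of those figures, assuming the roughly 20 terabytes per Library of Congress implied by the post’s own ratio:

```python
EXABYTE = 10**18                      # "exa" means 10^18
LOC_BYTES = 20 * 10**12               # ~20 TB per Library of Congress (assumed)

annual_2009 = 100 * EXABYTE
print(annual_2009 / LOC_BYTES)        # -> 5,000,000 Libraries of Congress

annual_2015 = 1000 * EXABYTE
seconds_per_year = 365 * 24 * 3600    # 31,536,000
print(annual_2015 / LOC_BYTES / seconds_per_year)
# -> ~1.6, i.e. roughly "two Libraries of Congress every second"
```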

Almost all this content is transmitted via fiber optics, where laser light pulsing billions of times a second carries information thousands of miles through astoundingly pure glass (silica). And much of this content is created using CCD imagers, the silicon microchips that turn photons into electrons in your digital cameras, camcorders, mobile phones, and medical devices. The basic science of the breakthroughs involves mastering the delicate but powerful reflective, refractive, and quantum photoelectric properties of both light and one of the world’s simplest and most abundant materials — sand. Also known in different forms as silica and silicon.

The innovations derived from Kao, Boyle, and Smith’s discoveries will continue cascading through global society for decades to come.

Neutrality for thee, but not for me

In Monday’s Wall Street Journal, I address the once-again raging topic of “net neutrality” regulation of the Web. On September 21, new FCC chair Julius Genachowski proposed more formal neutrality regulations. Then on September 25, AT&T accused Google of violating the very neutrality rules the search company has sought for others. The gist of the complaint was that the new Google Voice service does not connect all phone calls the way other phone companies are required to do. Not an earthshaking matter in itself, but a good example of the perils of neutrality regulation.

As the Journal wrote in its own editorial on Saturday:

Our own view is that the rules requiring traditional phone companies to connect these calls should be scrapped for everyone rather than extended to Google. In today’s telecom marketplace, where the overwhelming majority of phone customers have multiple carriers to choose from, these regulations are obsolete. But Google has set itself up for this political blowback.

Last week FCC Chairman Julius Genachowski proposed new rules for regulating Internet operators and gave assurances that “this is not about government regulation of the Internet.” But this dispute highlights the regulatory creep that net neutrality mandates make inevitable. Content providers like Google want to dabble in the phone business, while the phone companies want to sell services and applications.

The coming convergence will make it increasingly difficult to distinguish among providers of broadband pipes, network services and applications. Once net neutrality is unleashed, it’s hard to see how anything connected with the Internet will be safe from regulation.

Several years ago, all sides agreed to broad principles that prohibit blocking Web sites or applications. But I have argued that more detailed and formal regulations governing such a dynamic arena of technology and changing business models would stifle innovation.

Broadband to the home, office, and to a growing array of diverse mobile devices has been a rare bright spot in this dismal economy. Since net neutrality regulation was first proposed in early 2004, consumer bandwidth per capita in the U.S. grew to 3 megabits per second from just 262 kilobits per second, and monthly U.S. Internet traffic increased to two billion gigabytes from 170 million gigabytes — both 10-fold leaps. New wired and wireless innovations and services are booming.

All without net neutrality regulation.

The proposed FCC regulations could go well beyond the existing (and uncontroversial) non-blocking principles. A new “Fifth Principle,” if codified, could prohibit “discrimination” not just among applications and services but even at the level of data packets traversing the Net. But traffic management of packets is used across the Web to ensure robust service and security.

As network traffic, content, and outlets proliferate and diversify, Washington wants to apply rigid, top-down rules. But the network requirements of email and high-definition video are very different. Real-time video conferencing requires more network rigor than stored content like YouTube videos. Wireless traffic patterns are more unpredictable than those of residential networks because cellphone users are, well, mobile. And the next generation of video cloud computing — what I call the exacloud — will impose the most severe constraints yet on network capacity and packet delay.
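For the technically inclined, here is a minimal sketch of the class-based packet scheduling at issue. The application names and priority levels are illustrative assumptions (real networks mark packets with DSCP code points rather than application labels):

```python
import heapq
import itertools

# Illustrative priority map: lower number = served first. Hypothetical values.
PRIORITY = {"video_conference": 0, "voip": 0, "streaming_video": 1, "email": 2}

class PriorityScheduler:
    """Strict-priority egress queue: latency-sensitive packets jump ahead."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker preserves FIFO per class

    def enqueue(self, app, packet):
        prio = PRIORITY.get(app, 2)
        heapq.heappush(self._heap, (prio, next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

sched = PriorityScheduler()
sched.enqueue("email", "e1")
sched.enqueue("video_conference", "v1")
print(sched.dequeue())   # -> "v1": the real-time packet goes first
```

A flat ban on packet “discrimination” would reach precisely this sort of prioritization, even when it is what makes the real-time application usable.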

Or if you think entertainment unimportant, consider the implications for cybersecurity. The very network technologies that ensure a rich video experience are used to kill dangerous “botnets” and combat cybercrime.

And what about low-income consumers? If network service providers can’t partner with content companies, offer value-added services, or charge high-end users more money for consuming more bandwidth, low-end consumers will be forced to pay higher prices. Net neutrality would thus frustrate the Administration’s goal of 100% broadband.

Health care, energy, jobs, debt, and economic growth are rightly earning most of the policy attention these days. But regulation of the Net would undermine the key global platform that underpins better performance on each of these crucial economic matters. Washington may be bailing out every industry that doesn’t work, but that’s no reason to add new constraints to one that manifestly does.

— Bret Swanson

The first day of the rest of the Internet

Yesterday, the Joint Project Agreement between the U.S. Department of Commerce and ICANN expired. Today, a new “Affirmation of Commitments” goes into effect.

Key points from the new Affirmation:

  • ICANN will remain an independent, private-sector led organization.
  • Nations from around the world will have new input through the Government Advisory Committee (GAC).
  • Overall transparency and global involvement should improve.
  • But this Affirmation should extinguish any notions that the UN, EU, or other international players might gain new power over ICANN.
  • ICANN must focus its efforts on ensuring three core objectives: that the Internet remains
  1. always on
  2. free and open
  3. secure and stable

More big issues coming down the pike. But for now, I think, a fortuitous development.

The Real Deal

We Hoosiers are lucky:

Perhaps most appreciated was the governor’s overhaul of the Bureau of Motor Vehicles. It’s gone from one of the worst in the country—a place, he says, “where people would take a copy of ‘Crime and Punishment'”—to one of the best, with an “average visit time of seven minutes and 36 seconds.”

I had my own experience about four years ago, before the BMV was overhauled, when I made some seven trips to the license branch and various other government offices over a period of weeks just to renew my driver’s license.

But as Kim Strassel tells us in her interview with Mitch Daniels, this is only the very tip of the iceberg. In a state challenged by our reliance on the automobile industry in particular and manufacturing in general, Indiana did not implode like Michigan or profligate California. Instead, we had a governor whose common sense, hard work, business savvy, and courageous budgeting left the state in a much better spot than many others. Especially given our special old-economy obstacles.

Does Google Voice violate neutrality?

This is the ironic but very legitimate question AT&T is asking.

As Adam Thierer writes,

Whatever you think about this messy dispute between AT&T and Google about how to classify web-based telephony apps for regulatory purposes — in this case, Google Voice — the key issue not to lose sight of here is that we are inching ever closer to FCC regulation of web-based apps! Again, this is the point we have stressed here again and again and again and again when opposing Net neutrality mandates: If you open the door to regulation on one layer of the Net, you open up the door to the eventual regulation of all layers of the Net.

George Gilder and I made this point in Senate testimony five and a half years ago. Advocates of big new regulations on the Internet should be careful what they wish for.

End-to-end? Or end to innovation?

In what is sure to be a substantial contribution to both the technical and policy debates over Net Neutrality, Richard Bennett of the Information Technology and Innovation Foundation has written a terrific piece of technology history and forward-looking analysis. In “Designed for Change: End-to-End Arguments, Internet Innovation, and the Net Neutrality Debate,” Bennett concludes:

Arguments for freezing the Internet into a simplistic regulatory straitjacket often have a distinctly emotional character that frequently borders on manipulation.

The Internet is a wonderful system. It represents a new standard of global cooperation and enables forms of interaction never before possible. Thanks to the Internet, societies around the world reap the benefits of access to information, opportunities for collaboration, and modes of communication that weren’t conceivable to the public a few years ago. It’s such a wonderful system that we have to strive very hard not to make it into a fetish object, imbued with magical powers and beyond the realm of dispassionate analysis, criticism, and improvement.

At the end of the day, the Internet is simply a machine. It was built the way it was largely by a series of accidents, and it could easily have evolved along completely different lines with no loss of value to the public. Instead of separating TCP from IP in the way that they did, the academics in Palo Alto who adapted the CYCLADES architecture to the ARPANET infrastructure could have taken a different tack: They could have left them combined as a single architectural unit providing different retransmission policies (a reliable TCP-like policy and an unreliable UDP-like policy) or they could have chosen a different protocol such as Watson’s Delta-t or Pouzin’s CYCLADES TS. Had the academics gone in either of these directions, we could still have a World Wide Web and all the social networks it enables, perhaps with greater resiliency.

The glue that holds the Internet together is not any particular protocol or software implementation: first and foremost, it’s the agreements between operators of Autonomous Systems to meet and share packets at Internet Exchange Centers and their willingness to work together. These agreements are slowly evolving from a blanket pact to cross boundaries with no particular regard for QoS into a richer system that may someday preserve delivery requirements on a large scale. Such agreements are entirely consistent with the structure of the IP packet, the needs of new applications, user empowerment, and “tussle.”

The Internet’s fundamental vibrancy is the sandbox created by the designers of the first datagram networks that permitted network service enhancements to be built and tested without destabilizing the network or exposing it to unnecessary hazards. We don’t fully utilize the potential of the network to rise to new challenges if we confine innovations to the sandbox instead of moving them to the parts of the network infrastructure where they can do the most good once they’re proven. The real meaning of end-to-end lies in the dynamism it bestows on the Internet by supporting innovation not just in applications but in fundamental network services. The Internet was designed for continual improvement: There is no reason not to continue down that path.
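Bennett’s point about separable retransmission policies is still visible today at the socket API, where the same IP layer carries both a reliable, ordered byte stream (TCP) and fire-and-forget datagrams (UDP). A minimal illustration using standard Python sockets (the address is illustrative only):

```python
import socket

# Both transports run over the same IP layer; only the recovery policy differs.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # reliable, ordered
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # fire-and-forget

# A UDP datagram launches with no handshake and no delivery guarantee.
udp.sendto(b"hello", ("127.0.0.1", 9999))

# TCP, by contrast, carries no bytes until connect() establishes a session
# with acknowledgments, retransmission, and in-order delivery.
tcp.close()
udp.close()
```

That the application chooses its retransmission policy, rather than the network imposing one, is the legacy of the TCP/IP split Bennett describes.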

A QoS primer

In case my verses attempting an analysis of Quality-of-Service and “net neutrality” regulation need supplementary explanation, here’s a terrifically lucid seven-minute Internet packet primer — in prose and pictures — from George Ou. Also, a longer white paper on the same topic:

Seven-minute Flash presentation: The need for a smarter prioritized Internet

White paper: Managing Broadband Networks: A Policymaker’s Guide

Leviathan Spam

Leviathan Spam

Send the bits with lasers and chips
See the bytes with LED lights

Wireless, optical, bandwidth boom
A flood of info, a global zoom

Now comes Lessig
Now comes Wu
To tell us what we cannot do

The Net, they say,
Is under attack
Stop!
Before we can’t turn back

They know best
These coder kings
So they prohibit a billion things

What is on their list of don’ts?
Most everything we need the most

To make the Web work
We parse and label
We tag the bits to keep the Net stable

The cloud is not magic
It’s routers and switches
It takes a machine to move exadigits

Now Lessig tells us to route is illegal
To manage Net traffic, Wu’s ultimate evil

A New Leash on the Net?

Today, FCC chairman Julius Genachowski proposed new regulations on communications networks. We were among the very first opponents of these so-called “net neutrality” rules when they were first proposed in concept back in 2004. Here are a number of our relevant articles over the past few years:

Political Noise On the Net

With an agreement between the U.S. Department of Commerce and ICANN (the nonprofit Internet Corp. for Assigned Names and Numbers, headquartered in California) expiring on September 30, global bureaucrats salivate. As I write today in Forbes, they like to criticize ICANN leadership — hoping to gain political control — but too often ignore the huge success of the private-sector-led system.

How has the world fared under the existing model?

In the 10 years of the Commerce-ICANN relationship, Web users around the globe have grown from 300 million to almost 2 billion. World Internet traffic blossomed from around 10 million gigabytes per month to almost 10 billion, a near 1,000-fold leap. As the world economy grew by approximately 50%, Internet traffic grew by 100,000%. Under this decade of private sector leadership, moreover, the number of Internet users in North America grew around 150% while the number of users in the rest of the world grew almost 600%. World growth outpaced U.S. growth.
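Checking those ratios with simple arithmetic:

```python
users_growth = 2_000_000_000 / 300_000_000    # Web users, 10-year multiple
traffic_growth = 10_000_000_000 / 10_000_000  # monthly gigabytes, 10-year multiple

print(users_growth)                  # -> ~6.7x
print(traffic_growth)                # -> 1000.0, the "near 1,000-fold leap"
print((traffic_growth - 1) * 100)    # -> 99,900%, i.e. roughly "100,000%" growth
```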

Can we really digest this historic shift? In this brief period, the portion of the globe’s population that communicates electronically will go from negligible to almost total. From a time when even the elite accessed relative spoonfuls of content, to a time in the near future when the masses will access all recorded information.

These advances do not manifest a crisis of Internet governance.

As for a real crisis? See what happens when politicians take the Internet away from the engineers who, in a necessarily cooperative fashion, make the whole thing work. Criticism of mild U.S. government oversight of ICANN is hardly reason to invite micromanagement by an additional 190 governments.

Exploring Optimums on Multiple Political Economic Axes

What is the optimal economic arrangement to produce innovation and growth? And what is the optimal political arrangement needed to encourage and sustain such an economic order? I spend a lot of time thinking about these questions (as here in a paper on the rise of China). And so I’d recommend this thoughtful blog post by economist Scott Sumner. Sumner’s been blogging a lot on his recent trip to China and on the macroeconomics of the financial crisis/recession/rebound.

I disagree with a number of Sumner’s conclusions on the macro and political-economy fronts, but it’s insights like the one below that keep me reading Sumner.

Switzerland’s high level of democracy doesn’t just come from referenda, it also comes from its extreme decentralization.  This makes it a highly successful multiethnic society, and not just when compared to places like Yugoslavia and Iraq, but even in comparison to Belgium or Canada.  Another advantage of decentralization is that small places are less likely to be protectionist, as the gains from trade are much more obvious.  In addition, it is much easier to monitor and root out rent seekers in a community where most people know each other.

Quote of the Day

“TANGIBLE COMMON EQUITY, n. unknown origin. Definition unknown; purpose unknown; how it’s calculated, unknown; what federal regulators think it means, unknown. Usages: “Macbeth,” Shakespeare, W., Act II, Scene (i): “Is this TCE which I see before me…I have thee not, and yet I see thee still.”

“TARP, n. acronym. 1. A synthetic device designed to cover up an unsightly mess, or to protect perishable goods (firewood, banks) from the ravages of the elements, typically costing somewhere between $12.99 and $700 billion. 2. Prime example of how governments use otherwise anodyne acronyms, abbreviations and sports metaphors to disguise matters of controversy. See also TALF, TLGP, TURF, FHFA, BACKSTOP, WRAP, OFHEO and SPECTRE.”

— example entries from the “Devil’s Dictionary: Financial Edition,” by Matthew Rose, The Wall Street Journal, September 15, 2009

What price, broadband?

See this new paper from economists Rob Shapiro and Kevin Hassett showing how artificial limits on varied pricing of broadband could significantly delay broadband adoption.

To the extent that lower-income and middle-income consumers are required to pay a greater share of network costs, we should expect a substantial delay in achieving universal broadband access. Our simulations suggest that spreading the costs equally among all consumers — the minority who use large amounts of bandwidth and the majority who use very little — will significantly slow the rate of adoption at the lower end of the income scale and extend the life of the digital divide.

If costs are shifted more heavily to those who use the most bandwidth and, therefore, are most responsible for driving up the cost of expanding network capabilities, the digital divergence among the races and among income groups can be eliminated much sooner.
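As a toy illustration of the mechanism (emphatically not Shapiro and Hassett’s actual simulation, and with invented willingness-to-pay and usage numbers), consider ten hypothetical households sharing one network’s monthly cost:

```python
# Toy model: a network's monthly cost is split among subscribers either
# equally or in proportion to usage; a household subscribes only if its
# price stays at or under its willingness to pay. All numbers are invented.

# (willingness_to_pay_$, monthly_usage_GB) for ten hypothetical households
households = [(15, 2), (18, 3), (20, 4), (25, 5), (30, 8),
              (35, 10), (45, 20), (60, 60), (70, 90), (80, 150)]

NETWORK_COST = 300.0   # assumed total monthly cost to recover

def flat_price(usage, total_usage):
    """Equal shares: every subscriber pays the same."""
    return NETWORK_COST / len(households)

def metered_price(usage, total_usage):
    """Usage-based: heavy users shoulder more of the cost."""
    return NETWORK_COST * usage / total_usage

def adopters(price_fn):
    """Count households whose price is at or below their willingness to pay."""
    total_usage = sum(u for _, u in households)
    return sum(1 for wtp, usage in households
               if price_fn(usage, total_usage) <= wtp)

print(adopters(flat_price))     # -> 6: the flat $30 share prices out the
                                #    four lowest-income, low-usage households
print(adopters(metered_price))  # -> 8: usage-based pricing keeps them on;
                                #    only the two heaviest users opt out
```

Crude as it is, the sketch shows the paper’s basic logic: spreading costs equally prices out exactly the low-usage, low-income households the digital-divide debate worries about.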
