How Did We Get Here? A Brief History of Cloud Computing, Paul Mockapetris, Chairman and Chief Scientist, Nominum, Inc.
Paul notes that his (Lessig-style) presentation is adapted from work by Benjamin Black.
1. Packet switching, 1960-62: build a communications network that can survive a nuclear war (DoD funded). How to organize networks for survivability and quick rebuild/reconfigure ==> decentralized network. Packet switching was invented 3 times, in 3 places, for 3 different reasons.
That it was invented 3 times, in 3 places, for 3 different reasons is very relevant to the cloud computing conversation: government has different drivers and requirements than enterprise or startups.
Two takeaways: (1) it is imperative to use new resources; (2) don't confuse the hat with the cattle. Hat: nuclear war. Cattle: statistically sharing the infrastructure in a way that is safe for participants.
2. IP, Vint Cerf and Bob Kahn, 1973: simple and open, separation of concerns (IP & TCP). IP glues together multiple networks (networks and network types); TCP takes care of reliable delivery. ISO stack model.
Lessons: separation of concerns, the importance of not being important, layering for flexibility.
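The separation of concerns above can be sketched in code. A minimal illustration using Python's standard `socket` API: the application asks only for a reliable byte stream (`SOCK_STREAM`, i.e. TCP), while the IP layer underneath handles addressing and delivery across whatever networks sit in between. This is just a loopback echo, not anything from the talk itself.

```python
# Sketch of the IP/TCP separation of concerns: the application requests
# a reliable byte stream (TCP); IP addressing/routing is invisible to it.
import socket
import threading

def echo_server(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)          # TCP guarantees ordered, reliable delivery

# Bind to loopback; IP provides the address, TCP provides the stream.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_server, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, layers")
reply = client.recv(1024)
client.close()
t.join()
server.close()

print(reply.decode())
```

Neither side cares whether the packets crossed Ethernet, Wi-Fi, or (here) the loopback device; that indifference is the layering lesson.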
3. Explosion of communication demand ==> Explosion of communication protocols
Lessons: separation of concerns, federated/hierarchical model. Now: IP over * and * over IP.
Inclusion leads to economies of scale; the market is a better place to select standards winners than standards committees.
4. Partly cloudy: it won't take 40 years to standardize this; he predicts standards will come together in less than 10 years. What we can expect:
- wild proliferation of protocols
- features move up the stack into applications
- then, protocols will converge
Paul Baran's diagrams (centralized, decentralized, and distributed) will reappear. People want to be free to choose. Mentions a Google server farm on a ship that uses flowing water for cooling. Cool, but not everyone has the economy of scale to do this.
Applications in the cloud will follow these models. We need simple and open protocols: separate out the parts that matter, decouple from hardware. A federated or hierarchical structure is where we want to go.
- things that sometimes make sense: distribution, centralization, standards, internet, intranet, extranet
- One thing that always makes sense: user is free to choose (Milton Friedman as Cloud czar)
  - Different choices for different parts of the problem
  - User-defined metrics (service levels)
Bottom line: don’t get trapped by wrong choices early [or, as I would harp on: architecture constructs / practices always matter]