Friday, August 29, 2014

How And Why Google Is Open Sourcing Its Data Centers

Meet the man behind the plan. Google's back in the open source game, and in a big way. In a world increasingly bent on opening up innovation outside the company firewall, Google just took a major step forward.

In June, Google made cloud headlines when it open sourced its Kubernetes project for managing Linux application containers. (These containers are a sort of software "wrapper" that makes it much easier to run a given program on any computer without a lot of laborious customization work.) In effect, Google has offered the open source community an application architecture modeled after its own internal tools; in Greek, Kubernetes literally means "helmsman of a ship."
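
To make the "wrapper" idea concrete, here is a minimal sketch, in Python, of running a published container image on any machine with Docker installed; the nginx image and port numbers are illustrative assumptions, not anything from Google's announcement.

```python
# A rough illustration of the container "wrapper" idea: the same
# published image runs unchanged on any machine that has Docker
# installed. The nginx image is just a stand-in for any packaged app.
import subprocess

# Fetch the image, then start it in the background, mapping the
# container's port 80 to port 8080 on the host.
subprocess.check_call(["docker", "pull", "nginx"])
container_id = subprocess.check_output(
    ["docker", "run", "-d", "-p", "8080:80", "nginx"]
).strip()
print("container started: " + container_id.decode())
```

The point is that the container, not the host, carries the application's dependencies, so the same two commands work on a laptop, a data center machine, or a cloud VM.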

See also: Docker Promises A Standard Way To Package Apps To Run Virtually Anywhere


Craig McLuckie
Google isn't just releasing its code into the wild all by its lonesome, either: Microsoft, Red Hat, IBM and the open-source project Docker have also joined in support.

Google’s goal with Kubernetes is to allow developers to build applications and services at Web scale
using the same principles that Google uses within its own data centers. Kubernetes is built as a portable framework for managing Linux application containers. It lets customers manage their applications the way that Google manages hyper-scale applications like Search and Gmail.
Docker creates a great developer experience for building container-based applications, and Kubernetes enables customers to operationalize those Docker applications.

By moving to Kubernetes, customers move away from having to think about managing their applications and instead let a smart system do it for them. Inside Google, this shift has had a tremendous impact on developer productivity. Coders build their applications, then rely on internal systems to run, monitor and scale them. Kubernetes lets developers outside of Google work the same way.
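
As a rough sketch of what "letting a smart system do it" looks like in practice, the example below declares a desired state (three replicas of a web container) and hands it to a Kubernetes API server; the local API address, endpoint path, and field names follow the early v1beta1 API of the time and are assumptions for illustration only.

```python
# Declare a desired state and let Kubernetes keep it true. Endpoint and
# field names mirror the early (v1beta1) API and are illustrative
# assumptions, not a definitive recipe.
import requests

API = "http://localhost:8080/api/v1beta1"  # assumed local API server

controller = {
    "id": "web-controller",
    "kind": "ReplicationController",
    "apiVersion": "v1beta1",
    "desiredState": {
        "replicas": 3,                       # keep three copies running
        "replicaSelector": {"name": "web"},
        "podTemplate": {
            "desiredState": {
                "manifest": {
                    "version": "v1beta1",
                    "id": "web",
                    "containers": [{
                        "name": "web",
                        "image": "nginx",
                        "ports": [{"containerPort": 80}],
                    }],
                }
            },
            "labels": {"name": "web"},
        },
    },
    "labels": {"name": "web"},
}

# Submit the desired state; from here on, the system runs, monitors and
# rescales the pods without the developer placing containers by hand.
resp = requests.post(API + "/replicationControllers", json=controller)
resp.raise_for_status()
print(requests.get(API + "/pods").json())
```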

Why Give The Store Away?

I spoke to Craig McLuckie (@cmcluck), a senior product manager at Google, about why Google is open sourcing the secret sauce for its cloud, and about the recent string of industry collaborations around Kubernetes.

ReadWrite: One of Google's biggest assets is its homegrown data center technology. Until now, Google has been pretty secretive about how it runs things. What's changed?


Craig McLuckie: We have actually been sharing technologies we built at Google with the open-source community for years. For example, [publications describing technologies such as] MapReduce and BigTable have been quite influential in shaping the world of distributed systems. More recently, we are helping to power the container initiative through projects we've created and contributed, including cgroups, Go, lmctfy, and cAdvisor.

We continue to share our work with the community because we benefit hugely from open-source software and the community support of our efforts. In addition, we are betting big on our Google Cloud Platform business, and as Urs Hölzle underscored earlier this year, Google has gone “all in” on this opportunity.

One of the ways we differentiate our offering is by being open with our customers and providing insight into the technologies we offer.

Beyond that, we want our customers to reap the same benefits that we see internally, and Kubernetes can help them build and manage their applications using a portable, lean, open source system similar to the one we use ourselves.

ReadWrite: So why not go it alone? Why have you collaborated with so many others in the industry, including some cloud competitors, on Kubernetes?


McLuckie: There are two reasons. The first is that we have been incredibly impressed by the power of the open-source ecosystem. Time and again we have seen open-source projects evolve faster than closed alternatives as they benefit from the diverse perspectives and skillsets of the contributors.

We are already seeing fantastic activity on the Kubernetes GitHub repository driven by multiple members of the community. As an example, Red Hat brings deep experience in hardening open source technology for enterprise use.

The second is that our customers demand openness and flexibility. We want to win their trust by offering great infrastructure and services, not by locking them in. Working with the community means that strong alternatives will emerge that let our customers run in a multi-cloud world. That is an important prerequisite for many of them when adopting a new technology.

Cooperation With Competitors

ReadWrite: And it looks like there are still more announcements coming. You have been working with Mesosphere on Kubernetes since July. What's new today?

McLuckie: We've been working on a couple of things together. Mesosphere will be making substantive contributions to Kubernetes and adopting it as part of Mesos. Together with the Kubernetes community, we are working to bring the power of Mesos to applications that rely on the Kubernetes orchestrator. And it will of course be available everywhere.

See also: How To Make Data Services Scale Like Google

While some might jump to the conclusion that Mesos and Kubernetes are competitive solutions, I believe our customers will find them very complementary. We share a common vision for the dynamic management of applications and are working to create a framework that brings
sophisticated capabilities to the Kubernetes model for multiple cloud deployments.

It would take the community quite a while to evolve the basic Kubernetes system to the level that Mesos has already achieved in terms of offering things like resource and constraint-driven
scheduling, or high availability deployments. Having the ability to ‘drop in’ a richer framework really benefits our mutual customers.

The other really nice thing is that by combining Kubernetes and Mesos, users get the best of both worlds. Not just the richness of the Mesos distributed kernel, but a framework that lets you run existing workloads (including some of the very popular big data frameworks like Spark and
Hadoop) on the same shared physical resources as your new Kubernetes managed applications.

ReadWrite: So what does this mean for Mesosphere and Google Cloud Platform customers?

McLuckie: We are working together to make it simple for customers to benefit from the combination of Kubernetes and Mesos, and to deploy their Mesos workloads on Google's container-optimized infrastructure via Google Cloud Platform. For example, developers can visit Mesosphere.io's website and in just a few clicks have a solution up and running on Google Cloud Platform. Or, if they prefer, they can soon use the Click-to-Deploy feature directly from the Google Cloud Platform Dashboard to get their Mesosphere solution running.

Either path gets customers going quickly, and the union of the two technologies, Mesosphere and Google Cloud Platform, is quite natural.

Can't Contain The Container

ReadWrite: If you think containers are the thing of the future, does your investment in Kubernetes mean you're in the public cloud game for the long haul?


McLuckie: Yes, absolutely! Google has pivoted to focus a significant portion of its resources, in both engineering hours and infrastructure spending, on Google Cloud Platform. We think this will be Google's next big business.

However, this is not going to happen overnight. This opportunity will require significant technical innovation and investment, both things that Google is good at and committed to.

ReadWrite: You have seen great community momentum with the initial release of Kubernetes. What’s next?

McLuckie: We are starting to see broad adoption of Kubernetes, such as in OpenShift v3 from Red Hat and on Clever.com's website, and we are excited about incorporating the lessons from these use cases back into the Kubernetes code base. We've published a roadmap for Kubernetes on GitHub, and we invite the community to review and comment on what is needed.

Finally, once we get the Kubernetes API in good shape, we will offer a hosted version of the API on Google Cloud Platform that is more deeply integrated into our infrastructure.

Image courtesy of Shutterstock
