Why Does Google Want to Retain Development Control of those Three Projects?

A piece in Protocol on Google’s desire to control some of its open-source projects’ directions made me wonder why Google was willing to hand Kubernetes to the CNCF yet wants to retain control of the direction of Istio, Angular, and Gerrit.  What do a service mesh, an application platform, and a code review tool have in common, if anything?  There might not be a common thread here, of course.  But Google is a strategic player, so is there something strategic about those particular projects, something Google needs to keep on track for its own future?  Could there even be one single common thing?  It’s worth taking a look.

To be clear, Google isn’t making these projects proprietary; it’s just retaining governance of their development.  To some that’s a small distinction, and fears have been raised that popular projects could end up under Google’s iron control.  Why these three specific projects, though?

Istio is a fairly easy one to understand.  Istio is a service mesh, a technology that does just what the name suggests: it provides a means of accessing a pool of service instances that expands and contracts with load, balancing the work across instances and managing them as needed.

What makes service meshes critical to Google is cloud-native development.  Stuff that’s designed for the cloud has to have a means of linking the pieces of an application to create a workflow, even though the pieces will vary and even which instance of a given component handles a request will vary.
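The linking job a mesh does for every request can be sketched in a few lines.  This is a toy illustration, not Istio’s actual API; the `MiniRouter` class and all names in it are invented for the example:

```typescript
// A toy sketch of the routing decision a mesh sidecar makes on each call:
// pick a healthy instance from a set that grows and shrinks with load.

interface ServiceInstance {
  address: string;
  healthy: boolean;
}

class MiniRouter {
  private next = 0;

  // Round-robin over healthy instances; real meshes layer weighting,
  // retries, circuit breaking, and mutual TLS on top of this basic step.
  pick(instances: ServiceInstance[]): string {
    const healthy = instances.filter((i) => i.healthy);
    if (healthy.length === 0) throw new Error("no healthy instances");
    const chosen = healthy[this.next % healthy.length];
    this.next++;
    return chosen.address;
  }
}

const router = new MiniRouter();
const pool: ServiceInstance[] = [
  { address: "10.0.0.1:8080", healthy: true },
  { address: "10.0.0.2:8080", healthy: false },
  { address: "10.0.0.3:8080", healthy: true },
];
console.log(router.pick(pool)); // 10.0.0.1:8080
console.log(router.pick(pool)); // 10.0.0.3:8080
```

The point of a mesh is that none of this logic lives in the application itself; the caller just names the service, and the mesh absorbs the churn in the instance pool.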

Service mesh technology is also the higher layer in implementations of serverless computing that are integrated with a cloud software stack rather than offered as a cloud provider’s web service.  Google would certainly be concerned that open community interest could drive Istio in a direction that doesn’t fit the long-term Google cloud-native vision.

What’s the right direction?  One obvious candidate is “Istio federation”.  Recall that the current rage in Kubernetes revolves around means of “federating” or combining autonomous Kubernetes domains.  Service mesh technology, as an overlay to Kubernetes, might also benefit from the same kind of federating framework.  It would also create a bridge between, say, a Google Cloud with an Istio in-cloud feature set, and Istio software in a data center.

Another thing Google might be especially interested in is reducing Istio latency.  Complex service-mesh relationships could introduce a significant delay, and that would limit the value of service mesh in many business applications.  Improving service mesh latency could also improve serverless computing applications, notably those built on Knative.  Serverless doesn’t quite mean the same thing in a service-mesh framework, because you’d still perhaps end up managing container hosts, but it does improve the number of services that a given configuration (cluster) can support.

We might get a glimpse of Google’s applications for Istio by looking at the next software package, Angular.  The concept of Angular evolved from an original JavaScript-centric tool (now popularly called “AngularJS”) to the current model of a web-based application platform built on TypeScript, an enhancement to JavaScript to allow for explicit typing.  Because Angular is portable to most mobile, desktop, and browser environments, it can be used to build what are essentially universal applications.
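What explicit typing buys is easy to show.  This is a generic TypeScript sketch, not Angular code; the `Order` interface and `invoiceLine` function are invented for illustration:

```typescript
// Plain JavaScript would accept any argument here and fail (or silently
// misbehave) at runtime; TypeScript's explicit types catch the mismatch
// when the code is compiled.

interface Order {
  id: number;
  total: number;
}

function invoiceLine(order: Order): string {
  return `Order ${order.id}: $${order.total.toFixed(2)}`;
}

console.log(invoiceLine({ id: 42, total: 19.5 })); // Order 42: $19.50

// invoiceLine({ id: "42" }); // rejected by the compiler, not at runtime
```

Catching that class of error at build time rather than in production matters most when an application is assembled from many independently developed pieces, which is exactly the situation the rest of this piece describes.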

There are two interesting things about Angular, one organizational and one technical.  The organizational thing is that it’s a complete rewrite of the original AngularJS stuff, by the same development team.  That shows that the team, having seen the potential of their approach, decided to start over and build a better model to widen its capabilities.  The technical thing is that Angular’s approach is very web-service-ish, which means that it might be a very nice way to build applications that would end up running as a service mesh.

Angular has appeared as part of a reference microservice platform that includes Istio and builds an application from a distributed set of microservices in a mesh.  This creates a cloud-native model for a web-based application, but using a language that could take pieces of (or all of) the Angular code and host them on a device or a PC.

I have to wonder if Google is seeing a future for Angular as the way of creating applications that are distributable or fixed-hosted almost at will, allowing an application to become independent of where it’s supposed to be run.  If you could describe application functionality that way, you’d have a “PaaS” (actually a language and middleware) that could be applied to all the current models of application hosting, but also to the new cloud-native microservice model.  That would sure fit well with Istio plans, and explain why Google needs to stay at the helm of Angular development.

The connection between Istio and Angular seems plausible, but Gerrit is a bit harder to fit in.  Gerrit is an alternative to the familiar GitHub workflow, a Git-based repository model designed specifically to facilitate code review.  Organizations used to GitHub often find Gerrit jarring at first, and some will abandon it when the initial difficulties overwhelm them.  It’s best to start with just a few (or even one) main repositories and get used to the process before you try to roll Gerrit out across a whole development team.

Without getting into the details of either Gerrit or code review, can we say anything about why Google might see Gerrit as very strategic?  Well, think for a moment about the process of development, particularly rapid development, in a world of meshed microservices.  You are very likely to have multiple change tracks impacting some of the same components, and you surely need to make sure that all the code is reviewed carefully for compatibility with current and future (re-use) missions.

As I said up front, Google might have three reasons for three independent decisions on open-source direction in play.  The reasons might be totally trivial, too.  I think Google might also be looking at the future of software, to a concept fundamental to the cloud and maybe to all future development—the notion of virtualization.  If software is a collection of cooperative components that can be packaged to run in one place, or packaged to be distributed over a vast and agile resource pool, then it’s likely that developing software is going to have to change profoundly.

Would Google care about that, though?  It might, if the mapping of that virtual-application model to cloud computing is going to create the next major wave of cloud adoption.  Google is third in the public cloud game, and some even have IBM contending with Google for that position.  If Google wants to gain ground instead of losing it, would it benefit Google’s own cloud service evolution to know where application development overall is heading?

That’s what I think may be the key to Google’s desire to retain control over the direction of these three projects.  Google isn’t trying to stifle these technologies; it’s trying to promote them, while collaterally trying to align the direction of the projects (to which Google is by far the major contributor) with Google’s own plans for its public cloud service.

The early public cloud applications were little more than server consolidation onto hosted resources.  The current phase is about building cloud front-ends to existing applications.  Amazon has lost ground in the current phase because hybrid cloud isn’t its strength, and Microsoft and IBM are now more direct Google rivals.  IBM beat expectations in its latest quarterly earnings, and IBM Cloud made a strong contribution.  That has to be telling Google that if the hybrid-cloud game stays dominant, IBM and Microsoft will push Amazon and Google down.  Google needs a new game, one in which everything is up for grabs and Google could come out a winner.  Real cloud-native could be that game, and these three projects could be the deciding factor for Google.