Tactics versus strategy for the HPC market
By joe
I have given the Microsoft entry into cluster computing a great deal of thought. I want to see if this is a force to be reckoned with, or something else. Will they matter in the long term?
A tactic is something you execute to further a long-term goal. You may change tactics to achieve your goals. You may alter your tactical foci to adjust to market conditions. Individual tactics are not the important element; how they advance you toward your goals is. A tactic is not something you commit to.
A strategy is the big picture vision that you leverage in the process of achieving your goals. Strategies are often implemented by careful creation and execution of tactical efforts. You commit to a strategy, and create/change/drop tactics to implement the strategy.
Many people confuse these, referring to tactics as strategies and vice versa.
This is nice, but what does this have to do with HPC, supercomputing, or Microsoft?
Quite a bit I am afraid.
My initial thoughts were that Microsoft, a software house worth many tens of billions of dollars, with a few gamers thrown into the mix, had committed to high performance computing. That they saw this market for what it is: a vibrant, active, growing market in which they would be able to offer value. My hope was that they would work to fit in and grow with the market, leveraging synergies and exploiting the potential value they could bring. Rising tides, all boats, yadda yadda.
Then a few things happened. Words appeared in the press. Conversations happened. Thoughts crystallized.
First, I had a talk with Kyril Faenov and Patrick O’Rourke of Microsoft. Both are nice people, and obviously Microsofties to the core. There is nothing wrong with that, and I admire any company that can engender such a strong bond among its employees. During our short conversation, Kyril implied that until Microsoft’s entry into the market, clustering was either hard, expensive, or impossible. I am not trying to misrepresent this, so if any of the people in that conversation wish to clarify/respond, please, be my guest. Their idea was that Microsoft’s entry would be the light-unto-clusters, showing us what could be done if Microsoft did it. That may be a somewhat flippant way to say it, but that is how it came across to me.
I’ll address this in a moment, but suffice it to say that I found flaws in their premise.
Then I ran across a number of articles from various Microsoft PR folk and others. Finally I ran into this article. On page 3, we see the plan.
Quoting “This is the year where we’re well-equipped to come back into the Linux strongholds and take some share. We have our Windows Compute Cluster Edition. We’ll get back into high-performance computing, which, at the end of the day, is something like 30 percent Linux servers.” This is in the section where Mr. Ballmer talks about how Linux isn’t growing, and how they are going to take share from something which isn’t growing.
One must wonder: if it isn’t growing, and hasn’t been growing, as he alleges, then where did that market share come from? Unix had been dying long before Linux became a serious player; Linux merely hastened Unix’s demise. It couldn’t have been all Unix share that Linux took, either. Since the server market was flat or only slightly growing in this time, that market share must have come from somewhere. You can’t go from 0 to X% without growing (for any X > 0).
This is beside the point. The point is the difference between HPC as a strategy, as it is at companies like Scalable Informatics, Basement Supercomputing, Linux Networx, Panta, and many others, and HPC as a tactic to ward off an encroaching competitor.
What we see is what Mr. Ballmer perceives to be a good tactic in service of the strategy of dealing with Linux as a threat. The tactic is to enter HPC and attack Linux market share. That is, HPC is not a core part of Microsoft’s business. It is at the periphery, at the fringes. They have left it before, as RGB of the infamous Beowulf list has indicated, and as Mr. Ballmer himself alluded to in this article. HPC is a means to the end (their real goals and strategy). The end is to reduce Linux’s impact in the market, and the means to accomplish this are the tactics they will use. Microsoft’s HPC tactic.
HPC is not core to Microsoft. Not like it is at my company, or others with a similar HPC focus. We do HPC and anything that surrounds it. Little else. So when Microsoft, the world’s largest software company, says it is entering the market, we hope to see a strategy we can buy into. A grand vision. Something we can work with and add to.
Some of what we heard were things that we have been preaching for years (decades in some cases), using the same language we use. HPC must be easy to use. It must be transparent. And, as our customers tell us again and again, it must be free from vendor lock-in. This is important. On the second Tuesday of every month, most of the corporate world deals with the after-effects of a monocultural vendor lock-in. I haven’t met a single customer yet who appreciates this.
Back to the point.
HPC is a tactic that Microsoft plans to use to get traction against Linux. An HPC strategy would include the realization that this market is going to be significantly larger over time, and that it is fueled by applications running on Linux clusters. A strategy would work within the existing market to grow it, enhance it, add value to it.
Remember that statement by Microsoft, that Linux clusters are too hard to use? If they are, then why is the market growing at 60+% per year? They can’t be that hard, or people wouldn’t buy them. What about their concern about integration? Again, this has not been an issue for our customers. They know how to use web pages and Windows Explorer to navigate; therefore they can use the clusters we sell and support. What about their technologies, such as integrated job schedulers? The market already has a number of schedulers that offered, years ago, the features Microsoft talks about in its future.
That is, a strategy would not be one where you try to replace that which works well now, but one where you augment and grow that which works now. You want to hook in your scheduler? Here is a great DRMAA API to do it. You want to write “shell script code”? Fine, here is .Net on Windows, and you can do all the same stuff on the cluster with mono, so let’s support that. Note that this would likely work well to help Microsoft defend itself against the charges of monopolistic practices leveled at it in the past. You want to write web services APIs? Great, here is .Net/mono for it. Works across all platforms. Have nice front-ends on Windows and Linux, nice back-ends wherever. Lower the barriers.
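To make the scheduler hook-in concrete, here is a minimal sketch of job submission through DRMAA, using the Python bindings. It assumes a DRMAA-capable scheduler (Grid Engine, Torque, and others ship DRMAA libraries); the application path and arguments are hypothetical placeholders, not anything from a real deployment.

```python
# Minimal DRMAA submission sketch (Python "drmaa" bindings).
# Assumes a DRMAA-capable scheduler and its DRMAA library are installed.
import drmaa

session = drmaa.Session()
session.initialize()

jt = session.createJobTemplate()
jt.remoteCommand = '/opt/apps/my_solver'   # hypothetical HPC application
jt.args = ['--input', 'model.dat']         # hypothetical arguments

job_id = session.runJob(jt)
print('Submitted job:', job_id)

# Block until the scheduler reports the job done, then report its status.
info = session.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
print('Exit status:', info.exitStatus)

session.deleteJobTemplate(jt)
session.exit()
```

The point is not this particular binding; it is that a vendor who wants to interoperate writes to an open interface like DRMAA instead of replacing every scheduler already in the field.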
Think of it this way. Java doesn’t run everywhere. There are no nice shiny new Java bits for Itanium2 boxen; you are stuck with older stuff if you have one of those. Use mono/.Net and the argument for Java pretty much vanishes overnight. But you have to support mono, not merely grudgingly admit its existence until you bring your patents to bear. The nice thing is that with a blessed, supported mono, .Net really does run everywhere. Even on architectures that .Net never anticipated. And the development and support are free. Lower the barriers.
These are things they could have done to show that they are serious about working with the exploding cluster market. Instead, they went with the raised-barrier model, proclaiming that the Linux clustering efforts were unreasonable and theirs was reasonable. If Linux clustering were unreasonable for end users to implement, why is it the engine pushing the growth of the HPC market? It seems to me that this argument fails in the face of market data. No strawmen needed; their argument is decimated perfectly well by three years of data from IDC and others.
They are a late-comer to this market. The market was in high gear before they arrived, and it shows no signs of slowing down. Their product is not cost-competitive with the current market leaders. You can load a 32-node cluster with Linux for $0 USD in software/tools acquisition cost, and get some of the highest quality scheduling, interoperability, web services, development, and other tools. Your high performance computing applications will work out of the box, as most are targeted at Linux and Linux clusters. In many cases you can be productive in under an hour from starting, if you use default or preset configurations. This is what they are competing with. Whether or not this was a good idea will be answered in the future. Since it appears to be a tactical path rather than a long-term strategy, who knows how long Microsoft will be committed to it. If it fails to pan out, they could drop it. Or not.
I like telling people that systems designed to fail often do. I do not see the Microsoft HPC-as-a-tactic approach as being designed to succeed. It alienates the community of users who have been, and will continue to be, the ones successfully driving HPC forward.
I do hope that they revisit their thinking, decide that HPC as a strategy is a good thing, and abandon HPC as a tactic to attack Linux. The market would be much more interesting if they were going to work with the existing community. There is much that could be done together in that case.
Microsoft may be banking on creative destruction, replacing the existing market with one of their own making. This would be good if they could demonstrate better/cheaper/faster. From where I sit, I see more costly, and slower, due to all the antivirus software and firewalls you must run (in corporate America, anyway) on every Windows machine, regardless of its function. Better? I disagree that this model is better. YMMV. Today you can do quite a bit with your Linux platform, from laptop to supercomputer. Easily. No Patch Tuesday. It just works, and it is easy to set up and manage.
As for HPC strategies, these require a vision of where you want to be. Mr. Gates did a good job of articulating what we and many others have been saying: you need HPC to be integral and invisible. It has to just work.
Briefly, my vision of HPC is that I would like to put on a data glove, enter a virtual world, adjust my atoms, and start my molecular dynamics simulation. Stop it, move some things, measure stuff. All in this immersive virtual world. A world where I can stop thinking about how to code up an FFT or xGEMM to get the optimal performance out of the silicon, and where I can start thinking about the science I can do with the tools.
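As an aside on the FFT/xGEMM point: the machinery for this already exists at the library level. Here is a minimal sketch in Python with NumPy, where one call dispatches to a tuned FFT and another to an optimized BLAS xGEMM, with no hand-coded kernels required (the array sizes are arbitrary):

```python
# Tuned library calls replace hand-coded FFT / xGEMM kernels.
import numpy as np

signal = np.random.rand(4096)      # arbitrary sample data
spectrum = np.fft.fft(signal)      # FFT: one call, optimized underneath

a = np.random.rand(512, 512)
b = np.random.rand(512, 512)
c = a @ b                          # xGEMM: matrix multiply via the BLAS
```

That frees up time for the science, which is the whole point of the vision above.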
That vision is what I have been talking about since at least 1990, if not a little before. We are still a ways away. I believe that before I retire I may get close to being able to do this.
Today I can run the same code that I run on my supercomputers on my laptop. And on some PDAs. And cell phones. And game consoles. All of them run Linux. I suspect the system that will enable me to do that sort of immersive molecular dynamics will also likely be running Linux. Maybe something else, but it would need to be much better. Can’t get cheaper.
It’s not the platform that matters, it’s what you do with it.