Tuesday, May 29, 2012

What's Next in the Cloud Foray - UCaaS

In past posts I have talked about the changes coming in the cloud space.  I wrote several posts on Infrastructure as a Service.  One of the big things we can see is that Infrastructure as a Service has rapidly become a commodity, with prices steadily dropping.  It is very hard to differentiate hardware in the market; it is all about who is fastest, cheapest, and has the best support offering for the money.  There is also a proliferation of vendors able to provide the service.  At some point the market will start to consolidate and this business model will become unprofitable.  The next wave of cloud will be an expansion of the SaaS model, because SaaS offerings provide a means for differentiating services.

One of the recent directions in the last year or so is the move to push voice traffic out to the cloud.  This type of computing is called Unified Communications as a Service (UCaaS).  I know a lot of users would say: OK, isn't that where it came from originally?  AT&T and Verizon and others have been providing this service for years.  So what has changed?

Several years ago businesses were paying a lot of money to have someone manage their telecommunications equipment or provide dial tone.  Users were charged by the minute for what they got.  In the early 90's, with the advent of the Internet and the increasing bandwidth available, enterprises started to look at hosting their own PBX and voice equipment and managing it on their own with VoIP.  This helped save some money, but the complexity shifted from the carriers to the enterprises, which had to have the expertise in-house and had to deal with refresh cycles.

So why is this important?  UCaaS is a way to reduce some of that complexity by having other organizations focus on it.  Is it bound to take the market by storm?  Maybe.  Some people see it as stuck in a rut, not growing to its full potential.  One of my colleagues from Australia wrote a post about the early adopters of the technology and where this movement may be going:  http://gavinhill.dimensiondata.com/

If we look at many economic curves, this seems to be consistent with product life cycles in general.  For successful products there is typically a bell curve: early adopters move into a growing market segment, followed potentially by decline as the technology becomes stale or new ones emerge.  I see the chasm we are in right now as the market deciding what it wants to do.  As more vendors jump in and convince users to move to the new technology, there will be a good growth rate followed by new innovations.  Time will tell which model will prevail, but we will be hearing a lot more about UCaaS in the coming years.

Wednesday, March 28, 2012

The Forgotten Cost of New Hardware

In recent days I have been thinking a lot about the costs of owning hardware.  In my household we have six kids.  On a daily basis at least one or two of them say they have a homework assignment that involves using a computer.  It has become painfully apparent to the kids that a new PC might be needed.  Right now I have two work laptops, my wife has another laptop, and we have a castoff PC that I reimaged so the kids could use it.  Unfortunately the castoff PC is at the end of its useful life, and over the last 6 months I have replaced the power supply in my wife's PC twice.  So it is time for me to start thinking about something new.  The PC that I bought for my wife is probably only 3 years old, which doesn't seem that old to me, but it is having its issues.

So what do my family's issues have to do with a forgotten cost?  The issue I am experiencing at home is the same one IT departments are grappling with on a daily basis.  Most electronic equipment is meant to have a useful life of around 3-5 years.  By the time you hit the 2 to 3 year mark you start seeing that the applications you have running just aren't performing like they used to.  Also, things just start to break.  Fans get noisy, hard drives crash, strange errors start to appear, etc.

All these things bring on the desire to replace infrastructure every three years or so.  In a home setting that typically is not that big a deal, but it can cause some disruption.  You have to go through the old computer and decide which stuff you need and which stuff is worth keeping.  You need to look at the applications you have on the old PC and move them to the new one.  You have to find all the software that you loaded on the old PC and hope that it will work on the new PC (and hope that the license keys will still be good).  More than likely there is a new version of the operating system, so you need to learn where Microsoft moved everything so you can do the most mundane tasks.  There are also 100 other things a home user needs to do to get up and running, and it all takes time.

This same thing happens for businesses when they have to refresh their technology.  They need to go through the process of figuring out what is important on a computer and whether what they are using is still going to work on a new server.  Many organizations put the storage on external disk arrays so that the migration from one server to another becomes easier.  Virtualization technology has also helped make the move more seamless.  New software may need to be purchased, the new server may need to be reloaded, and multiple groups may need to be involved in the configuration process.  Many organizations have become quite efficient at migrating and standing up new environments, but migrations have always been high on the risk meter.  Let's also assume that you are back revisions on the software and you need to do upgrades.  That can lead to enormous amounts of work.

When considering replacing equipment, a lot of people get caught up in how much the equipment is going to cost them, and they will pit vendors against each other in a death match to provide the best price.  This, however, is one of the smaller costs of the transaction.  The true costs lie in getting an old system moved onto new hardware.  There is planning and testing, and change windows that need to be set up.  There is potentially a reconfiguration process that may need to be done, and a labor cost to set everything up again.  Most people may look at these costs as sunk costs because they have to pay their employees anyway, but the time that it takes to get everything up and running on the new versions of the software can be significant.  Add on upgrades to software or operating systems and you start talking real money.

This is why I have become an advocate for more cloud based approaches to computing.  Instead of footing the bill for new hardware every 3 years, you leave someone else to worry about that.  They are also responsible for making sure your systems still work at the end of the day.  I am particularly fond of SaaS services because the provider is in charge of not just the hardware but also making sure the application continues to work after an upgrade.  If you compare only the purchase price of new hardware every three years against a cloud subscription, you are probably going to come up with a small ROI.  If you think of it as a way to avoid the migration project that disrupts a lot of people every 3-5 years, the benefits become a lot more obvious.
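To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python.  Every figure in it is an assumption made up for illustration, not real pricing; plug in your own numbers.

    # Rough comparison of a 3-year hardware refresh cycle vs. a cloud subscription.
    # All dollar figures and hours below are illustrative assumptions.

    YEARS = 6  # two full 3-year refresh cycles

    # On-premises: hardware plus the "forgotten" migration labor at each refresh.
    hardware_per_refresh = 15_000   # assumed server/storage purchase
    migration_hours = 200           # assumed planning, testing, change windows
    labor_rate = 75                 # assumed fully loaded $/hour
    refreshes = YEARS // 3

    on_prem = refreshes * (hardware_per_refresh + migration_hours * labor_rate)

    # Cloud: a flat subscription; the provider absorbs refreshes and migrations.
    monthly_fee = 600               # assumed subscription fee
    cloud = monthly_fee * 12 * YEARS

    print(f"On-prem over {YEARS} years: ${on_prem:,}")
    print(f"Cloud over {YEARS} years:   ${cloud:,}")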

Well,  that still leaves me with my old PC at home that I need to replace.  Wish me luck as I go through my triennial pilgrimage to the land of laptop migrations.   Hopefully I don't hit many of those forgotten costs. 

Wednesday, March 14, 2012

Disruption in the IT Landscape


Computing as we know it will disappear.  Right now there is so much innovation in the compute space that it is hard to miss.  There are many things driving this, but the main cause is that a lot of creative people all over the world are looking at the way business is done and searching for the inefficiencies in the system.  They are looking for the areas where corporations have extracted large amounts of profit, and for disruptive ways to get a piece of the pie (or even make a new pie).

For example, I recently ran across a startup named The Currency Cloud that is looking to disrupt the foreign currency exchange (FX) market.  The FX market is a trillion-dollar business.  The big banks and clearing houses that convert currency each day make enormous amounts of money by playing on the swings in currency values between countries, and any business that deals with this is at their mercy.  It has been a very profitable sector for many years.  Many businesses hate it because it introduces uncertainty into the business of doing global transactions.  How do you set prices for products and services and still remain profitable in different parts of the world?  The Currency Cloud has created a quicker, cheaper alternative to the traditional structure.  If they are successful, the traditional forex markets will be disrupted.

A second example of this is related to investments.  People have been trading stocks, bonds, and commodities for years.  A small group of companies and investors has had the capability to get in on the big ideas before they become big; the rest of the people had to wait on the sidelines until those companies went public.  Second Market has started to change that by offering a way for regular investors to get in on emerging companies before it is too late.  Government regulations make it a little bit difficult for the small investor to use Second Market (you need to have $1 million of net worth, which rules out a lot of people), but it is an example of another disruptive technology.

While attending an entrepreneurs group in NYC I ran across a company that is creating a new paradigm for providing wireless connectivity to underserved communities.  In poorer areas many people cannot afford their own Internet connection, so a company called Keywifi has come up with a way to share the bandwidth of your wireless network.  In poorer neighborhoods a couple of people can sign up to be “hotspots” and people can rent bandwidth from them at a lower cost.  This could have huge implications in developing regions of the globe.

These three examples are just scratching the surface of how a difficulty with the way technology is used today opens up new markets in the future.  Right now there is an enormous amount of change going on in IT: Cloud Computing, Tablet Devices, Video, Social Networking, Converged Networks, fully integrated platforms (pre-staged networking, storage, and compute resources), etc.  All these things have a common thread.  Someone looked at what we were doing and said, “there has to be an easier, more efficient, more effective way to do this.”

Let’s take cloud computing for instance.  In the past, every company that had computational needs was forced to create or buy its own infrastructure.  Even small businesses would buy servers, some networking gear, and maybe some storage and backup gear to carry out day to day operations.  The problem is that they had to hire a staff to support that environment.  This is great for employment because lots of people have jobs, but for a business it added cost and complexity.  Supporting that environment meant a lot more than just buying some hardware.  There needed to be:
  • space for the equipment
  • power and cabling
  • cooling for the equipment
  • cables run to all the users’ desks so they could get access to the equipment
  • operating systems and software loaded on each of those systems
  • systems secured, backed up, and patched
When it is all said and done, the amount of work and expense needed to run even a small environment can be significant.  Many companies don’t actually realize the true cost and scale of running an IT department.
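As one illustration of a cost that rarely gets counted, here is a rough Python sketch of the annual power and cooling bill for a small rack, using the standard PUE (Power Usage Effectiveness) overhead factor.  All of the inputs are assumptions for illustration.

    # Estimate annual power + cooling cost for a small server room.
    servers = 10
    watts_per_server = 400      # assumed average draw per server
    pue = 1.8                   # assumed PUE: total facility power / IT power
    rate_per_kwh = 0.12         # assumed electricity rate in $/kWh
    hours_per_year = 24 * 365

    kw_load = servers * watts_per_server / 1000
    annual_cost = kw_load * pue * hours_per_year * rate_per_kwh
    print(f"Estimated annual power + cooling: ${annual_cost:,.0f}")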

This changed in the early 2000s when some companies started to offer subscription services for their software.  Instead of running your applications on your own hardware, they would run in someone else’s datacenter and you would just add your data.  This was not a new concept; IBM had pioneered it with their mainframe software, where you would pay for what you used.  The early innovators in the cloud space were able to say it is easier for one provider to run the entire infrastructure for all of these companies than for all of the companies to do it for themselves.  There were some factors that accelerated this move.  Virtualization (once again borrowing from the IBM mainframe concept) enabled companies to take advantage of less expensive hardware, making this sharing model more affordable to end users.  Increases in network bandwidth also made it more palatable.  If everyone was still running on 10 Mb Ethernet we would see fewer people willing to do this.  Now most desktops have 1 Gb connections, and 40 and 100 Gb networks are just starting to ship.  The wireless space has experienced the same bump: with 4G networks I can get my data almost as fast as on a Wireless N device.  Companies like Google and Amazon have been pioneers in this space.  And it was all because someone finally figured out an easier way for the consumer to do this.
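To put some numbers on the bandwidth point, here is a quick Python sketch of how long moving a single gigabyte takes at the nominal line rates mentioned above (the file size is an arbitrary assumption, and real throughput is lower than line rate).

    # Transfer time for 1 GB at various nominal Ethernet line rates.
    file_gb = 1.0
    links_mbps = {
        "10 Mb Ethernet": 10,
        "100 Mb Ethernet": 100,
        "1 Gb Ethernet": 1000,
    }

    bits = file_gb * 8 * 1000**3  # decimal GB to bits
    for name, mbps in links_mbps.items():
        minutes = bits / (mbps * 1_000_000) / 60
        print(f"{name:>16}: {minutes:6.1f} minutes")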

Converged networking is also a good example of this metamorphosis.  In the past there were multiple ways to attach to networking devices: Ethernet, Token Ring, FDDI, etc.  In the end one standard won out: Ethernet.  Not necessarily because it was the best technology; it was just easier.  In the late 90’s, attaching to external storage arrays became a new way to do business.  The reasoning was that it was more cost effective to share disks among multiple servers than to let them go under-utilized inside the servers.  This led to new ways to attach to storage, and new network-like infrastructures.  Fibre Channel and SCSI cables were introduced initially to attach the disks to servers.  As time went on, a separate infrastructure of switching equipment was built with its own connectors and cabling.  The complexity of the environment increased.

In this growing mess, the bandwidth that could be achieved by Ethernet and FC interfaces increased to an extent that outstripped the computer’s ability to move that data.  Somewhere some engineer said, “what would happen if we ran the storage traffic over the same cable and adapters?”  That would mean one less component in the servers, one less cable, and one less switch to deal with.  Converged networks were born.  Cisco was one of the innovators in this field, and the reasoning is obvious: they benefit the most by moving things onto a converged network infrastructure.  Now Cisco, HP, IBM, Dell, and a host of other companies all have their converged networking platforms.  All of this came about because existing networks were getting too complex.
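To see where the savings come from, here is a small Python sketch counting components for a rack of servers, assuming the common dual-fabric redundancy layout (two of every adapter and switch); all counts are illustrative.

    # Component counts: separate LAN + SAN vs. a converged network.
    servers = 20

    # Separate: 2 NICs + 2 FC HBAs per server, a cable for each,
    # plus a redundant pair of Ethernet switches and of FC switches.
    separate = {"adapters": servers * 4, "cables": servers * 4, "switches": 4}

    # Converged: 2 converged network adapters (CNAs) per server carry
    # both LAN and storage traffic to one redundant pair of switches.
    converged = {"adapters": servers * 2, "cables": servers * 2, "switches": 2}

    for item in separate:
        print(f"{item:>8}: {separate[item]:3d} -> {converged[item]:3d}")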

The old guard will continue to fight the erosion of their margins by the competitors that embrace the new changes.  I recently read an article entitled "Oracle has a cloud computing secret" about Oracle’s dilemma regarding the pricing of on-demand instances of its software.  They stand to lose a significant chunk of revenue if they adopt the model.  The main problem with this philosophy is that the train may have already left the station.  In numerous customer calls over the years I have dealt with people who are looking for ways around Oracle’s licensing, so much so that alternatives like MySQL (which is now owned by Oracle) and Microsoft SQL Server are having a great deal of success.  The cable companies are starting to get this as well: people are starting to say that they just want specific channels, not every channel.  Subscribers are shutting off traditional cable and are getting a lot of the same content from the Internet.  Just last night I watched the NCAA tournament on my computer.  Being the vendor stuck in the old model is a precarious state.
 
Right now is a very exciting time for IT.  It is changing very rapidly.  Barriers to innovation and the costs of innovating have been reduced.  It makes for a wild ride.

Tuesday, February 28, 2012

Private Cloud vs. Public Cloud


I recently attended a seminar put on by a consortium of hardware vendors extolling the virtues of private cloud architecture.  They were introduced by a reputable industry analyst group that had surveyed businesses about their adoption of private cloud technology.  Not surprisingly, the analysts and the vendors talked about how many organizations are looking to build a private cloud environment of their own.  After listening to the pitch, I thought it would be a great opportunity to compare the two approaches.

Just so I can lay all my cards on the table up front: I have spent the last 15 years as a presales architect in the business of hardware sales.  I have recently been working with NTT America in its public cloud services arm, so I think that I have a pretty good look at both options.  My personal opinion is that most technology will end up in a public cloud (as SaaS, PaaS, or IaaS).  Having said that, I think there is room for discussion on the subject.

For the most part, the vendors that put on the seminar were probably right about businesses taking their first cloud step with their own private clouds.  The reasons for this are pretty obvious.  Many organizations already have at least a portion of the infrastructure in place, and many of them have done a certain amount of virtualization, so the next logical step for these enterprises would seem to be turning their current infrastructure into a private cloud.  There may also be some political motives for maintaining the status quo and just extending the existing infrastructure that they own.

Fear and Politics 
  •   Jobs - People get nervous about jobs when they start thinking about their hardware going somewhere else.  There is a whole group of people in most organizations that do the sys admin work and the cabling and racking and stacking.  There is also a group that manages the physical infrastructure that would also be affected by a movement to a public cloud provider. 
  •   Loss of control - For many years I have worked in data centers, and there was nothing like the happiness you felt from the brightly flashing lights, the sound of the spinning disks, and the warmth coming off the back of the servers as you huddled behind them in a datacenter trying to stay warm.  It was like a security blanket.  If I sound like I am being sarcastic, I am.  Working inside of datacenters is a pain in the butt, but having one close by makes people feel better.  They feel that if something were to go wrong they can run right into the datacenter and fix it.

For example, I was recently working with a customer that was looking to create a virtual desktop environment.  When we went through some of the basic sizing questions we found out that the environment would never scale beyond 50 to 100 users.  If you design VDI solutions for full redundancy, you are typically looking at a minimum of 3-5 servers plus backend storage.  This was a small company, and in my mind the numbers didn’t make sense to even bother building something out.  When I suggested that a more appropriate solution might be a cloud offering for the VDI environment, the customer wouldn’t entertain the idea because of the control factor.
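A quick sizing sketch in Python shows why the numbers are hard to justify at this scale; the desktop density per host and the N+1 spare are assumptions for illustration.

    # Minimal VDI host count for a given user population.
    import math

    users = 100
    desktops_per_host = 40   # assumed VM density per server
    spare_hosts = 1          # assumed N+1 redundancy

    hosts_for_load = math.ceil(users / desktops_per_host)
    total_hosts = max(hosts_for_load + spare_hosts, 3)  # assumed 3-host cluster floor

    print(f"Hosts for load: {hosts_for_load}, with redundancy: {total_hosts}")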
  •   Security - If it is behind my firewall then it is protected.  Going back to my staffing argument from above, there are whole teams of security engineers making each corporation safer.  They also have the fear of job displacement.
  •   Hardware vendors - Vendors have less of an incentive to push public clouds because it shrinks the number of customers they can sell to.  If all computing were consolidated to a handful of providers, then in essence the hardware teams that sell to SMBs and other businesses would be out of work or would be chasing the big guys.  So for the most part it is not in their best interest to push a public cloud model.
So what keeps organizations in the safe place of creating a private cloud is fear.  Fear of losing jobs, losing control, security breaches, etc.  I would submit that much of this fear is based on the newness of cloud technologies and may not be completely warranted.  Showing a survey that says companies are looking to create private clouds, and treating that as the final word, may miss the point.  Asking whether a customer is looking at private clouds may be a leading question.  The question should have been what the better option for the customer is, but that is hard to ask in a survey.

Public Clouds

Public clouds offer users an alternative to the traditional way of doing things.  There are many advantages to using a public cloud over building your own.  However, based on some of my previous comments, it may not seem like an obvious choice for some organizations.  For example, if you have a datacenter that is already built out with lots of growth built into the design, and you have done your best to consolidate your infrastructure, a public cloud may seem more risky.
There are some benefits you get from using a public cloud that you wouldn’t get if you built your own:
  • Infrastructure - You no longer have to be concerned with managing and monitoring the infrastructure.  If a new series of servers comes out with different power supplies or requires more wattage, it is no longer your problem.  You also do not have to worry about maintaining the space from a facilities perspective: no diesel storage, no backup generators, no running fiber into the building for high speed connections.  That is all taken care of by your vendor.
  • Migrations - You no longer need to worry about the cost of upgrading your environment.  Every 3 years or so most organizations refresh their equipment, which means a capital outlay every few years, and during that time frame you need a group of people to manage the migrations.
  • Jobs - It is true that the job qualifications will change for your organization.  Fewer people will be needed for maintaining the basic infrastructure.  Those people could be redeployed in a way that provides higher value to your organization.
  • Location independence – I highlighted in my private cloud discussion the warmth I felt spending my days in the datacenter.  Those days are a thing of the past.  Remote access is expected, and with the ability to spin up new machines and create fault tolerant environments in the cloud, I think this argument is moot.  There are additional security measures you will want to put in place within the environment, so this is not completely a win for public clouds.
Public clouds do have their downsides as well. 
  • Virtualization type – If you have already invested in virtualization, there may be some additional steps to getting your environment to the cloud.  Some vendors base their solutions on different hypervisors, which may mean that a conversion is needed (see the sketch after this list).
  • Ability to migrate in and out of the environment – It takes time and expertise to move to and between cloud vendors.  As mentioned above, there isn’t one standard for virtualization, so it may take more effort to get there.  It is generally easier to start from scratch.
  • Storage options – You will be using whatever the vendor decided to buy, so if you don’t like what you get there isn’t any way to change it.
  • Security – While this fear may not be completely founded in reality, sharing infrastructure with other companies makes people nervous.
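As an illustration of the conversion step mentioned in the virtualization bullet above, here is a minimal Python sketch that shells out to the open-source qemu-img tool to convert a VMware-format disk image to a KVM-friendly format.  The file names are hypothetical, and a real migration involves much more (drivers, networking, testing) than a disk-format change.

    # Convert a hypothetical VMDK disk image to qcow2 using qemu-img.
    import subprocess

    src = "webserver01.vmdk"   # hypothetical source image
    dst = "webserver01.qcow2"  # hypothetical target image

    subprocess.run(
        ["qemu-img", "convert",
         "-p",            # show progress
         "-f", "vmdk",    # source format
         "-O", "qcow2",   # output format
         src, dst],
        check=True,
    )
    print(f"Converted {src} -> {dst}")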
I could go on with a list of 100 different reasons why you would choose one technology over the other.  I personally think that in the next 20 years the ROI for public cloud providers will be too compelling to justify building your own private cloud.  And why would you want to deal with the headache?  Right now public cloud has some good use cases, and in places where infrastructure is already in place it does not make as much sense.  So who wins the debate?  For the time being I am calling it a jump ball.  There are use cases for both, and each organization should evaluate its own situation.