Archive for March, 2016

Bare Metal Cloud vs Dedicated Servers

Posted by Adrien Tibi

Simply put, a bare metal cloud is made up of dedicated servers but is automated for near-instant provisioning.

  1. What is a bare metal cloud and what is a dedicated server?

There’s a lot of confusion around the bare metal cloud concept, largely because it’s still a relatively new term, only really coming into use since 2014. Even though the popularity of dedicated servers is in decline, the term is well recognised, so it’s easy to see how people get confused when they hear “bare metal cloud” and “dedicated server” used interchangeably.

Ready to deploy on bare metal? Create your free account and start configuring your bare metal servers here.

 

The reality is that bare metal cloud and dedicated servers are related, because one is part of the other. A bare metal cloud environment is made up of one or more single-tenant (dedicated) servers. A dedicated server is a standalone server, with a given specification, to which the customer has complete administrative access.

  2. So why not just call it dedicated or single-tenant cloud?

Probably because of the negative connotations. Dedicated servers have historically had the drawback of being manual or time-consuming to provision. This meant that if you needed an instance spun up quickly, you would want to avoid a dedicated environment as a matter of principle.

A bare metal cloud environment avoids this issue, as it automates provisioning. You can buy your kit online and get it up and running in a matter of minutes, as easily as you would with a public cloud provider like AWS. And unlike with virtual machines, a bare metal cloud lets you control everything from the infrastructure upwards, without having to own or operate datacentres or the wider network yourself.
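To make that concrete, here is a minimal sketch, in Python, of what automated provisioning looks like from the customer’s side. The endpoint, field names and API key below are hypothetical placeholders – every bare metal provider exposes its own API – but the workflow is representative: authenticate, describe the machine you want, then poll until it is live.

import time
import requests  # third-party HTTP client: pip install requests

API = "https://api.example-provider.com/v1"         # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # hypothetical auth scheme

# Describe the dedicated machine you want, much as you would a cloud instance.
spec = {
    "hostname": "web-01",
    "cpu_cores": 8,
    "ram_gb": 64,
    "storage": "2x 480GB SSD",
    "os": "ubuntu-14.04",
    "datacentre": "london",
}

# Request the server; the provider allocates a matching physical machine.
server = requests.post(f"{API}/servers", json=spec, headers=HEADERS).json()

# Poll until provisioning completes - minutes, not the days a manual
# dedicated server order used to take.
while server["status"] != "active":
    time.sleep(30)
    server = requests.get(f"{API}/servers/{server['id']}", headers=HEADERS).json()

print(f"Ready: ssh root@{server['ip_address']}")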


So who is bare metal cloud right for?

You’ll always need to analyse your hosting requirements to determine which environment will best support your workload. But for a great number of workloads, a bare metal cloud represents the most flexible method of running a hosted application as it is customisable, cost-effective, and scalable.

  3. OK, but is bare metal cloud right for me?

Maybe. Try asking yourself these questions:

 

If your answers are generally positive, then you may want to look into controlling a bare metal environment. If there are negatives, you’ll need to weigh your technical considerations against each other, or against your budget objectives.

When is Bare Metal the Right Choice for eCommerce?

Posted by Adrien Tibi

For the majority of eCommerce businesses, workloads are quite predictable, making bare metal cloud a much more cost-effective infrastructure choice than public cloud in many cases.

Swathes of eCommerce founders are understandably drawn to public cloud vendors such as AWS – enticed by low entry-point pricing and offers, as well as the success stories of prominent names. But there are two sides to the public cloud pricing equation, and low entry-point prices are invariably offset by premiums elsewhere in the portfolio. Typically, this impacts businesses that do not make use of the scalability aspects of the platform.


Don’t Mistake Yourself for Amazon

If an eCommerce store experiences massive peaks and troughs in its visitor numbers, the ability to access burstable capacity, and to avoid the need to over-provision to cope with those peaks, represents a very real financial benefit. The reality is, however, that most eCommerce businesses, even the most successful, are quite unlike Amazon, which built AWS to meet its own need for scalability first.

Amazon’s eCommerce business faces unique challenges that arise from its size and success – massive fluctuations in visitor numbers are only the beginning. Amazon also contends with a truly global audience, calling for infrastructure in every region in order to cater for demand while delivering a satisfactory user experience.

On top of this, Amazon’s Big Data collection and storage, analytical power and back office systems all need to scale in line with customer demand too. Its need for burstable capacity is unprecedented.

Add to this its marketplace offering, which accommodates millions of independent sellers, and the magnitude of these infrastructure challenges becomes even more impressive.

But Amazon is a one-off. The vast majority of eCommerce businesses have very different infrastructure requirements.

 

The Right Infrastructure for Your eCommerce Business

Fluctuations in visitor numbers to eCommerce websites rarely reach the point where the difference between the average and peak demand for computation power makes paying a premium for public cloud scalability worthwhile.

And while visitor numbers fluctuate, the volume of data being stored for transactions and other processes grows linearly – predictably and slowly in relative terms. The demand for analytical power and business intelligence, meanwhile, though driven by visitor numbers, is in many cases outsourced to third parties and so does not impact the core infrastructure requirements at all.

This predictability in demand, and therefore infrastructure requirements, is further supported by the tendency for successful eCommerce businesses to serve specific markets and geographies well, rather than taking the multi-national broad-line approach.

It rarely makes sense for an eCommerce business to host all of its infrastructure – if any of it – on the public cloud.

 

The Business Case for Bare Metal

Unless your workloads are like Amazon’s and you can benefit from access to premium, burstable, pay-as-you-go capacity, bare metal cloud will give you better ROI. It’s well known that, for most always-on instances, a dedicated server within a bare metal cloud environment will cost a fraction of the public cloud alternative – once attractive new-customer deals have expired, of course.
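A back-of-the-envelope comparison shows the shape of the argument. The prices below are illustrative placeholders, not quotes from any provider; the point is structural – an always-on instance pays the public cloud’s hourly premium for every hour of the month, while a dedicated server is a flat rate.

HOURS_PER_MONTH = 730

# Illustrative placeholder prices - substitute real quotes for your workload.
public_cloud_per_hour = 0.50   # hypothetical on-demand rate, comparable spec
bare_metal_per_month = 150.00  # hypothetical flat rate for a dedicated server

always_on_public = public_cloud_per_hour * HOURS_PER_MONTH
print(f"Public cloud, always on: ${always_on_public:.2f}/month")  # $365.00
print(f"Bare metal, flat rate:   ${bare_metal_per_month:.2f}/month")

# The elastic premium only pays off if you actually scale down. A genuinely
# bursty workload running at, say, 30% average utilisation looks different:
print(f"Public cloud, 30% utilised: ${always_on_public * 0.30:.2f}/month")  # $109.50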

In addition to the cost advantages, bare metal instances deliver consistently better performance, because they are single-tenant and never share resources with other users. Given how vitally important page load and server response times are to the customer experience in eCommerce, this is a distinct advantage in bare metal’s favour.

 

In conclusion, the low-cost ticket price of public cloud is not what you will end up paying if your workloads are steady or your instances always on. In every case, the best way to ensure you get maximum ROI from your infrastructure, in terms of both cost and performance, is to match it closely to your workloads. Capacity planning is an essential step in cloud deployment for eCommerce businesses and should not be overlooked.

Amazon AWS is a Bare Metal Cloud – for Amazon

Posted by Adrien Tibi

For its users, AWS is a public cloud; for Amazon, it is its own cost-effective bare metal cloud.

Understanding how Amazon’s AWS supports the needs of its Amazon.com business is useful when considering the economics of building a cloud environment for your eCommerce business.

Undeniably the most popular and successful eCommerce business of our time, Amazon is unique. And with this success and scale have come challenges that no other eCommerce company has faced previously. Overcoming these challenges is what led to the birth of AWS and is why Amazon.com is now built fully upon it.


Amazon’s Unique eCommerce Challenges

Perhaps the easiest component of the eCommerce business is providing an online store on which customers can browse and purchase products. Pages are highly templated, while images and content are driven by a database of products. The real challenges for an e-tailer with Amazon.com’s success all revolve around scaling this online store while meeting the demands of customers.

Delivering the best possible user experience, no matter where or when customers are using the site, requires infrastructure with enormous scalability. In order to maximise sales and minimise abandonment, product images and content need to load lightning fast, categories must be searchable and filterable, and the site must stay responsive as customers add products to baskets and move through the checkout process. Slow sites leak orders.

Amazon.com’s visitor numbers also vary massively over time. For instance, the number of site users on an average Wednesday will be dwarfed by the Black Friday rush. Provisioning a fixed infrastructure to cater for the peaks in demand, while simultaneously meeting customer expectations, would simply be economically unfeasible. The only option for Amazon was to build on a shared platform that could offer the kind of scalability required.

The issue is further complicated by Amazon.com’s global reach. The same kind of scalability and performance is needed in every region – images can’t be transferred across the globe as customers try to browse products. Instead, images and content need to reside near the user in order to provide a satisfactory customer experience. The only way to achieve this is with infrastructure close to the end-user, meaning huge estates at multiple points across the globe.

 

AWS = Bare Metal Cloud

Amazon will have very quickly realised that, while third party public clouds offer the scalability it needs, the economics are not in its favour.

In order to be able to offer massive scalability, a public cloud vendor needs to be able to sell unutilised capacity quickly and easily. This means charging low entry points to bring customers on-board for short-term or low-level usage but then charging a premium in other areas to offset the potential cost of under-utilisation. The result is that larger businesses end up paying more for their resources.

For Amazon, building its eCommerce business on someone else’s public cloud would be too expensive. So it built its own.

But when you build your own cloud, you build a bare metal cloud: a collection of physical machines, networked and at your disposal, whether used in dedicated form for databases and containerised apps, or in virtualised form for web servers and the like. In AWS’s case, this bare metal cloud consists of more than two million Linux servers.

Public cloud is simply an economic model applied to such bare metal infrastructures. By building its own bare metal cloud, Amazon was able to remove the cost premium of building Amazon.com on someone else’s public cloud, and create an additional revenue stream selling the public cloud model to others.

The lesson for eCommerce businesses is to never make assumptions about which type of infrastructure is right for your needs and to always look beyond attractive new-user pricing. The right infrastructure for you always depends upon the type of workloads you will be running, the patterns in demand, locations of users and the utilisation of resources.

The right solution may well be a combination of public cloud and other platforms, like private or bare metal clouds and dedicated servers. The only way to discover this is to thoroughly understand your requirements or consult with experts who can advise you on all the available options.

Is the Size of Your Dev Team Harming Your Productivity?

Posted by Adrien Tibi

Jeff Bezos, Amazon CEO, famously claimed that if a team couldn’t be fed with two pizzas it was too big. Of course, he also declared that “communication is terrible!” – so, was he making a valid point or just grandstanding?

It can seem counterintuitive to suggest that two heads are worse than one, but there’s actually a good body of evidence to suggest that Bezos knows what he is talking about.

A number of studies show a variety of effects that create sluggishness and lower productivity in large teams, regardless of who’s in them.

 

Sluggishness in Large Scrums – Jeff Sutherland

The Scrum Guide has evolved its “7 ± 2” people rule for team size into “3-9 people” over the years, showing a growing recognition of the value of even the smallest teams.

But Jeff Sutherland, one of the inventors of Scrum and co-author of The Scrum Guide, is unequivocal on the matter – keep it at 7 people or fewer.

In an experience report he produced for Craig Larman’s book Agile and Iterative Development: A Manager’s Guide, Sutherland described a situation he observed within a 500-person dev group:

A few teams within the group were generating production code at five times the industry average rate. But most only managed double the industry average, despite good Scrum execution. All the teams able to work hyper-productively consisted of 7 people or fewer, and Sutherland surmises that the larger team sizes (usually around 15) were the reason behind the others’ relative sluggishness.

He also points out that (at the time of writing) Rubin’s Worldwide Benchmark database gives the average cost per function point, across over 1,000 projects, as $2,970; for teams of 7 people, the average was just $566.

“Any team over 7 in size should be split up into multiple Scrums.” – Jeff Sutherland

 

Social Loafing – Ringelmann, Latané et al

A much earlier case for small teams was made by Maximilien Ringelmann, with his 1913 findings now referred to as the Ringelmann Effect.

Essentially, the Ringelmann Effect suggests that the larger a team becomes, the less productive each of its members is. The more bodies in a group, the more difficult coordination becomes, with teamwork and cooperation suffering. Ringelmann highlighted this with his renowned “rope pulling experiment” – he found that when he asked a group of men to pull on a rope together, each made less effort than when tugging alone.

Ringelmann’s findings are backed up by the experiments of Bibb Latané et al, who studied the phenomenon known as social loafing.

 

Social psychologist Latané demonstrated the social loafing effect in a number of ways. A key experiment showed that, when tasked with creating the loudest noise possible, people in a group would shout at only a third of the capacity they demonstrated alone. Even just (mistakenly) believing they were in a group was enough to significantly reduce a subject’s performance.

“When groups get larger, you experience less social pressure and feel less responsibility because your performance becomes difficult, or even impossible, to correctly assess amidst a crowd. It’s no wonder that when credit and blame get harder to assign, you start to feel disconnected from your job.” – Bibb Latané

The Pain of Link Management – J. Richard Hackman

It was published a number of years ago now, but Diane Coutu’s interview with J. Richard Hackman on Why Teams Don’t Work in the Harvard Business Review is worth a read for a further look at the many reasons that teams can hinder more than help.

To Hackman, one of the key stumbling blocks for teams is link management. As a group grows, the number of links between everyone within the team rises steeply. The number of links created by a team can easily be calculated with the equation:

# of links = n(n-1)/2
(where n = number of people in the team)

So a team of six will create 15 links, but a team of twelve racks up 66.
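A few lines of Python confirm the arithmetic, and show why splitting a team pays off so quickly:

def links(n):
    """Communication links in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (6, 7, 12):
    print(f"team of {n:>2}: {links(n)} links")
# team of  6: 15 links
# team of  7: 21 links
# team of 12: 66 links

# Splitting one team of 12 into two teams of 6 drops the internal link
# count from 66 to 2 * 15 = 30, before counting any cross-team links.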

The more links needing maintenance, the higher the potential for mismanagement and miscommunication. Keeping everyone in the loop and coordinated can eat into productive time. Or, as Hackman bluntly puts it, “big teams usually wind up just wasting everybody’s time”.

 

Relational Loss – Jennifer Mueller

Racking up links can also take a more personal toll. Psychologist and professor of management Jennifer Mueller proposed “relational loss” – the feeling that you are receiving diminishing support as your team grows – as another issue created by larger teams.

Mueller studied 212 knowledge workers across a number of companies, in teams ranging in size from three to nineteen. Across data derived from performance evaluations and questionnaires on motivation, connectedness, and coordination, she found “compelling evidence for relational loss.” The larger the team, the less supported people felt and the more their performance suffered.

Software Project Teams – Brooks’s Law

“Adding manpower to a late software project makes it later.” – Fred Brooks

Most developers will be familiar with Brooks’s Law of software project management – the idea that, beyond a certain point, each incremental person added to a project makes it take longer – along with the arguments that refute it.

Whether the law is gospel or “an outrageous simplification” as Brooks himself claimed, three sound factors underpin his point:

Firstly, new team members are rarely immediately productive – they need what Brooks refers to as “ramp up” time. During ramp up time, existing members of the group may lose focus as they dedicate time and resources to training the newcomer. Far from creating an immediate improvement, the new worker may even make a negative contribution, for instance introducing bugs.

Secondly, personnel additions increase communication overheads – everyone needs to keep track of progress, so the more people in the team the longer it takes to find out where everyone else is up to.

Thirdly, there’s the potential issue of limited task divisibility. Some tasks are easily divided but others are not, as illustrated by Brooks’s charming example that, while one woman needs nine months to make one baby, “nine women can’t make a baby in one month”.

 

Larger Teams Breed Overconfidence and Under-performance – Staats, Milkman and Fox

Not only does larger team size seemingly make people more complacent and less productive, but it also breeds overconfidence. There’s a tendency “to increasingly underestimate task completion time as team size grows,” say researchers Bradley Staats, Katherine Milkman, and Craig Fox. In one of their experiments, in which teams had to assemble identical Lego figures, teams of four people were almost twice as optimistic as teams of two about how quickly they could finish – yet they actually took over 44% longer.

If four people struggle to work together to build some Lego, then the outlook doesn’t exactly look great for a complex development project.

 

Back to Bezos (and Pizza)

In practice, the two-pizza rule translates to splitting your personnel into autonomous task forces of five to seven people, which sits comfortably alongside the advice of Sutherland and Hackman and should minimise social loafing and the sense of relational loss.

And when Bezos said “communication is terrible”, he was really saying that cross-team exchange gets in the way of team independence and creates group-think. Not everywhere has a culture that relies on creative conflict like Amazon, but limiting dysfunctional decision-making and being discerning about how and where communication is needed would be beneficial to all.

 

If you’re mystified by a lack of productivity from a highly skilled and talented dev team, it may be that you’ve just got too many talented people trying to work together. Splitting your group into two or more two-pizza sub-teams might be all it takes to radically change your output.

 

Build your bare metal cloud

Speak to an advisor for a completely free consultation, or create a free account and start configuring servers now.