Blog

Product-focused DevOps mustn’t forget the customer

Synopsis

As an organisation becomes product-focused and teams begin to focus their attention on their suite of microservices, the organisation must find a way to ensure there is still appropriate focus on the quality of the service being provided to customers.

The power of a product-focused approach

I became convinced of the power of a product-focused approach to DevOps when I was working at EA Playfish. Our games teams were product-focused, cross-functional and, to an extent, multi-discipline. They were incredibly successful. They had their problems, but they were updating their games weekly with a good cadence between content updates and game changes. They reacted to changing situations very quickly and they even pulled together on programme-wide initiatives well. Playfish's games teams were my inspiration for wanting to scrap the old development, test and ops silos and create teams aligned behind building and supporting products and services.

The insidious risk of having a product-focus

In Next Gen DevOps I examine the implications of taking a product-focused approach to a DevOps transformation. The approach has since been validated by some high-profile success stories that I examine in the second edition of the book. Some more recent experiences have led me to conclude that, while the approach is still the right one, I, and several others, have missed a trick.

In breaking down a service into microservices, or even starting out by considering the bounded contexts of a problem, it's easy to lose sight of the service from the customer perspective.

Background

One of the functions the Ops teams of the past assumed was maintaining a holistic view of the quality and performance of the service as a whole. It's this trait more than any other that led to friction with development teams. Development teams were often tasked with making changes to specific aspects of a service. Ops would express their concerns about the impact of the proposed changes on performance or availability. The developers were then caught between an Ops team expressing performance and reliability risks and a Product Manager pushing for the feature changes they needed to improve the service. None of the players had the whole picture, and very few people in those teams had much experience of expressing non-functional requirements in a functional context, so the result was conflict. While this conflict was counter-productive, it ensured there was always someone concerned with the availability and performance of the service-as-a-whole.

Product-focused DevOps is now the way everyone builds their Technology organisations. There are no Operations teams. Engineers are expected to run the services they build. The most successful organisations hire multi-discipline teams, so while everyone codes, some people are coding tests, some infrastructure and data structures, and some business logic. The microservices movement drove this point home as it's so much easier to deliver smaller, individual services when teams are aligned to those services.

Considering infrastructure configuration, monitoring, build, test execution, deployment and data lifecycle management as products is a logical extension of the microservices or domain-driven development pattern.

However here there be dragons!

When there’s clear ownership of individual microservices who owns the service-as-a-whole? If the answer to that question is everyone then it’s really no-one.

Key Performance Indicators

In many businesses Key Performance Indicators (KPIs) are defined to determine what success looks like. Common KPIs include sales figures, conversion rates, infrastructure costs as a percentage of revenue, Net Promoter Score (NPS), customer retention rate and net profit.

A few years ago the DevOps community got into a discussion about the KPIs that could be used to measure the success of a DevOps transition. As with most such discussions we ended up with some agreement around some common sense measures and a lot of debate about some more esoteric ones.

The ones most people agreed with were:

  • Mean Time To Recovery.
  • Time taken to deploy new features measured from merge.
  • Deployment success (or failure) rate.

Due to the nature of internet debate, most of the discussion focussed on what the KPIs should be, and very little on how the KPIs should be set and managed.

This takes us to the trick I missed when I wrote Next Gen DevOps and the trick many others have missed when they’ve tackled their DevOps transitions.

In our organisations we have a group of people who are already concerned with the whole business and are very focussed on the needs of the customers. These people are used to managing the business with metrics and are comfortable with setting and managing targets. They are the c-level executives. Our COOs are used to managing sales targets and conversion rates, our CFOs are used to managing EBITDA and Net Profit targets. CMOs are used to managing Net Promoter Scores and CTOs are used to managing infrastructure cost targets.

I think we’re making a mistake by not exposing the KPIs and real-time data we have access to so that our executives can actively help us manage risk, productivity and quality of service.

The real power of KPIs

In modern technology organisations we have access to a wealth of data in real-time. We have tools to instantly calculate means and standard deviations from these data points and correlate them to other metrics. We can trend them over time and we can set thresholds and alerts for them.
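As an illustrative sketch of that kind of calculation (the metric name and the numbers here are invented), flagging data points that drift beyond a threshold of standard deviations takes only a few lines:

```python
from statistics import mean, stdev

def anomalies(samples, threshold=2.0):
    """Return the samples that sit more than `threshold` standard
    deviations away from the mean of the series."""
    mu = mean(samples)
    sigma = stdev(samples)
    return [s for s in samples if abs(s - mu) > threshold * sigma]

# Hypothetical page-load times in milliseconds, with one obvious outlier.
load_times = [120, 131, 118, 125, 129, 122, 950, 127]
print(anomalies(load_times))  # [950]
```

In practice you'd compute this over a rolling window in your monitoring system and alert whenever the list is non-empty, but the principle is the same.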

Every organisation I've been in has struggled to manage prioritisation between new feature development and non-functional requirements. The only metrics available to most of the executives in those organisations have been availability figures and, if they're lucky, page load times.

Yet it's fairly logical that if we push new feature development and reduce the time spent on improving the performance of the service, service performance will degrade. It's our job as engineers to identify, record and trend the metrics that expose that degradation. We then need to educate our executives on the meaning of these metrics and give them the levers to manage those metrics accordingly.

Some example KPIs

Let’s get right into the detail to show how executives can help us with even the most complex problems. Technical debt should manifest as a reduction in velocity. If we trend velocity then we can highlight the impact of technical debt as it manifests. If we need to reduce new feature production to resolve technical debt we should be able to demonstrate the impact of that technical debt on velocity and we should be able to see the increase in velocity having resolved the technical debt.
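To make the velocity point concrete, here's a hedged sketch (the sprint figures are invented) of trending velocity with a simple least-squares slope; a persistently negative slope is the signal that technical debt may be biting:

```python
def velocity_trend(velocities):
    """Least-squares slope of story points per sprint: negative means
    velocity is falling over time, positive means it's recovering."""
    n = len(velocities)
    x_mean = (n - 1) / 2                      # mean of sprint indices 0..n-1
    y_mean = sum(velocities) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(velocities))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# Six sprints of hypothetical, steadily declining velocity.
print(round(velocity_trend([34, 32, 31, 28, 27, 24]), 2))  # -1.94
```

A losing nearly two story points a sprint is exactly the kind of number an executive can weigh against a feature roadmap.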

For those organisations still struggling with inflexible infrastructure and software, consider the power of Mean Time Between Failures (MTBF). If you're suffering reliability problems due to under-scaled hardware or ageing software and are struggling to get budget for upgrades, MTBF is a powerful metric that can make the point for you.
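As a sketch of what that measurement might look like (the incident timestamps are made up), MTBF is just the average gap between consecutive failures:

```python
from datetime import datetime

def mtbf_hours(failure_times):
    """Mean Time Between Failures in hours, from a chronologically
    sorted list of failure timestamps."""
    gaps = [
        (later - earlier).total_seconds() / 3600
        for earlier, later in zip(failure_times, failure_times[1:])
    ]
    return sum(gaps) / len(gaps)

# Three hypothetical outages: gaps of 84 and 108 hours.
failures = [
    datetime(2016, 1, 1, 3, 0),
    datetime(2016, 1, 4, 15, 0),
    datetime(2016, 1, 9, 3, 0),
]
print(mtbf_hours(failures))  # 96.0
```

Trend that number across hardware or software generations and the budget conversation gets much easier.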

A common stumbling block for many organisations in the midst of their DevOps transformations is the deployment pipeline. Two words that sum up a wealth of software and configuration complexity. Often building and configuring the deployment pipeline falls to a couple of people who have had some previous experience, but no two organisations are quite the same and so there are always some new stumbling blocks. If you trend the time taken to deploy new features, measured from merge, you can easily make the case for getting help from other people around the organisation to build a better deployment pipeline.
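A minimal sketch of that measurement (the release timestamps are invented) might be:

```python
from datetime import datetime, timedelta

def mean_lead_time(releases):
    """Average time from merge to production deployment.
    `releases` is a list of (merged_at, deployed_at) pairs."""
    deltas = [deployed - merged for merged, deployed in releases]
    return sum(deltas, timedelta()) / len(deltas)

# Two hypothetical releases: lead times of 6 hours and 48 hours.
releases = [
    (datetime(2016, 3, 1, 10, 0), datetime(2016, 3, 1, 16, 0)),
    (datetime(2016, 3, 2, 9, 0), datetime(2016, 3, 4, 9, 0)),
]
print(mean_lead_time(releases))  # 1 day, 3:00:00
```

Most CI servers expose merge and deployment timestamps through their APIs, so feeding this from real data is usually a small job.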

The trick with all of this is to measure these metrics before you need them so you can demonstrate how they change with investment, prioritisation and other changes.

Implementation

Get your technical leadership team to meet with the executive team, discuss the KPIs that matter to you and some that don't matter yet but might. Educate everyone in the room about the way the KPIs are measured so the metrics have context and people can have confidence in them. Create a process for managing the KPIs, then start measuring them in real time and display them on dashboards. Set up sessions to discuss the inevitable blips and build a partnership to manage the business using the metrics that really matter.

Article image courtesy of: http://maxpixel.freegreatpicture.com/Stock-Finance-Monitor-Desk-Trading-Business-1863880

DevOps Journeys 2.0

Last year Linuxrecruit published an ebook called DevOps Journeys. Over its 30 pages various thought leaders and practitioners shared their thoughts and experiences of implementing DevOps in their organisations. It was a great read for anyone outside the movement to understand what it was all about. For people inside the movement it presented an opportunity to learn from the experiences of some of the UK's foremost DevOps luminaries.

Linuxrecruit have recently published the follow-up: DevOps Journeys 2.0. This one’s even better because I’m in it!

A year on we're in a different place: most organisations now have DevOps initiatives, we're in the midst of a critical hiring crisis, new technologies are on the hype train and large companies are jumping on last year's bandwagons. More companies are now starting to encounter the next generation of problems that arise from taking a product-focussed approach to DevOps.
I've worked with several of the contributors to DevOps Journeys 2.0 and they are some seriously capable people. If you're interested in the challenges facing organisations as they embark on or progress along their DevOps journeys, DevOps Journeys 2.0 is a great read.

Enterprise DevOps Lessons Learned: TDA

I’ve been working at the Department for Work and Pensions (DWP) for the best part of a year now. If you’re not aware the DWP is the largest UK Government department with around 85,000 full time staff augmented by a lot of contractors like myself.

DWP is responsible for:

  • Encouraging people into work and making work pay
  • Tackling the causes of poverty and making social justice a reality
  • Enabling disabled people to fulfil their potential
  • Providing a firm foundation, promoting saving for retirement and ensuring that saving for retirement pays
  • Recognising the importance of family in providing the foundation of every child’s life
  • Controlling costs
  • Improving services to the public by delivering value for money and reducing fraud and error

Taken from A Short Guide to the Department for Work & Pensions published by the National Audit Office June 2015.

The DWP paid more than 22 million customers around £164 billion in benefits and pensions in 2013-14.

After decades of outsourcing its Technology development and support, the government decided that it should provide its own Technology capability. Transforming such a large Technology organisation from what was primarily an assurance role to a delivery role is no mean feat.

Having been a part of this journey for almost a year I thought it might be useful if I shared some of the things that have worked well and some of the challenges we haven't yet overcome.

Today I want to talk about the Technical Design Authority (TDA). I've never worked anywhere with a TDA before and I didn't know what to expect. Established by Greg Stewart, CTA / Digital CTO at DWP, not long after he joined, the TDA has a dual role.

The TDA hold an advisory meeting where people can introduce new projects or initiatives and discuss them with peers and the Domain Architects. In an organisation as large as the DWP this really helps find people with similar interests and requirements. It reduces the chance of accidental duplication of work and introduces people who are operating in similar spaces. Just finding out who is working in a similar space has been tremendously valuable.

The TDA also hold a governance session where they review project designs. The template they provide for this session is really useful. It forces the architect or developers to consider the data types stored, data flows including security boundaries, and high-availability and scaling mechanisms. That's not to say every project needs those things, but the review ensures that a project that does need them has them.

I can't count the number of projects I've been involved with over the years that would have benefitted from a little forethought about non-functional requirements.

A TDA is a must-have for an enterprise DevOps transformation. It makes sure Technology people working on similar projects in different parts of the organisation are aware of and can benefit from each other's work. It ensures that projects pay adequate attention to the non-functional as well as the functional requirements, that where standards are required they are promoted, and that where experiments are needed they are managed appropriately.

NEXT GEN DEVOPS Second Edition!

NEXT GEN DEVOPS: Creating the DevOps Organisation is getting a second edition!

I've been working on it for a while but it's been my sole focus since I published the NEXT GEN DEVOPS TRANSFORMATION FRAMEWORK. The first edition came out around a year ago and a lot has changed since then.

The conversation now seems to be how organisations should approach DevOps rather than whether they should consider it. Friends and I are now talking about dropping the term DevOps because we feel it’s just good software engineering practice. Patterns that I dimly glimpsed two years ago are now clearly defined and have several supporting case studies.

The core theme of the book hasn't changed. In fact none of the existing content has changed at all. I've corrected a few formatting mistakes here and there and I've been able to add some great photos that I think really bring the history chapter to life. Everyone who's spoken to me about the book has commented that it's their favourite chapter and now it's even better!

All new content!

Apart from a redesigned cover, I've added several new chapters. The first, entitled The only successful DevOps model is product-centric, looks at the four organisations most frequently held up as DevOps exemplars (Etsy, Netflix, Facebook and Amazon) to see what they have in common and what lessons other organisations can learn from their successes and failures. I wrote this chapter to address a comment I've heard from several readers that they wanted more explicit instructions about how to transform their teams and organisations.

That's also the reason I added the next new chapter: The Next Gen DevOps Transformation Framework. This chapter provides explicit instructions describing how my DevOps Transformation Framework can be used to transition a business towards DevOps working practices. It's impossible to re-format a framework designed to be used interactively on an HD screen for a 6×9″ paperback, but I've been able to provide some supporting contextual information as well as some example implementations. This, combined with the Playfish case studies I've published here on the blog, should provide people with everything they need to begin their journey to DevOps.

The final bit of new content is an appendix to the history chapter. I learned a lot while I was researching the history chapter, far more than I could include without completely losing the thread of the chapter. What interested me most of all was the enormous role played by women in the development of the IT profession. I've worked with some great men and women in my 20 years in IT but I've only met two female Operations Engineers. Where are the rest? At Playfish I worked with a lot of female developers but whenever I was hiring I never met any women interested in careers in Operations. I couldn't shake the thought that something was wrong with this situation. Over the past year I've read a lot about the declining numbers of women in IT so I decided to share what I learned while writing the history chapter and do a little research of my own.

Reduced price!

I need to eat some humble pie now. I think I made a mistake when I initially priced the book. When I was writing the book my focus was not on book sales. I know a couple of people who have authored and co-authored books and read numerous articles about how writing will not make you rich so I was under no illusions about my future wealth. I chose the price because I felt that it would lend credibility.

I didn't factor in that ebooks and self-publishing have changed the market. When I published the book Amazon displayed a little graph demonstrating that $9.99 was a sweet spot for pricing and that I'd make more money publishing at that price. I'm not doing this for the money, so what do I care?

That’s where I made a mistake. I don’t care about the money but I do want my message to get out. I think my book is unique because very few authors have spent 17 years operating online services and very few authors had the unique opportunity to work on one of the first examples of continuous delivery.

So the 2nd edition will be priced at $9.99. I understand that people who have paid the higher price for the first edition will quite rightly feel a little put out by this, so I intend to publish the second edition as an update to the first. This means that those who bought the book on Kindle can simply update their copy to get the second edition.

I can’t update the paperback version and I can’t give them away for free but I do have a plan. I’m going to be publishing a PDF edition of the second edition and selling it through my own online store. I can’t get details of who bought my book so I’m going to do my best to operate an honour system. If you bought a paperback version of Next Gen DevOps and want a PDF copy of the second edition email grant@nextgendevops.com and I’ll send you the PDF version.

While I don’t know who has bought paperbacks I do know how many paperbacks I’ve sold so once I’ve given away that many PDF copies the giveaway will be over so email me asap to ensure you get your free copy.

The second edition will be published in the next couple of weeks and will be accompanied by a formal press-release.

Next Gen DevOps Transformation Framework: A case study of Playfish circa 2010 pt. 2

Recap

Last week I used Playfish, as it was in 2010, as a case study to show how the Next Gen DevOps Transformation Framework can be used to assess the capability of an organisation. We looked at Playfish’s Build & Integration capability, which the framework classified at level 0, ad hoc and 3rd Party Component Management which the framework classified at level 4.

This week

We'll take a look at what the Next Gen DevOps Transformation Framework recommends to help Playfish improve its Build & Integration capabilities and we're going to contrast that with what we actually did. We'll also look at why we made the decisions we made back then and talk a little about where that took us. Finally I'll end with some recommendations that I hope will help organisations avoid some of the (spring-loaded, trapped and spiked) pitfalls we fell into.

Build & Integration Level 0 Project Scope

Each level within each capability of the Next Gen DevOps Transformation Framework has a project scope designed to improve an organisation's capability, encourage collaboration between engineers and enable additional benefits for little additional work.

The project scope for Build & Integration Level 0 is:

Create a process for integrating new 3rd party components.
Give one group clear ownership and budget for the performance, capability and reliability of the build system.

These two projects don't initially seem to be related, but it's very hard to do one without the other.

Modern product development demands that developers use many 3rd party components. Sometimes these are frameworks like Spring; more often than not they're libraries like JUnit or modules like the Request module for Node.js.

Having a process for integrating new 3rd party components ensures that all engineers know the component is available. It provides an opportunity to debate the relative merits of alternatives and reduces unnecessary work. It also, crucially, provides the opportunity to ensure that 3rd party components are integrated in a robust way. It's occasionally necessary to use immature components; if there's a chance that these may be unavailable when needed then they need to be staged in a reliable repository or an alternative needs to be maintained. Creating a process for integrating 3rd party components ensures these issues are brought out into the open and can be addressed.

Having talked about a process for integrating 3rd party components, an organisation is then in a great place to decide who should be responsible for the capability and reliability of the build system. Giving ownership of the build system to a group with the skills needed to manage and support it enables strategic improvement of the build capability. Only so much can be achieved by engineers, no matter how talented and committed they are, without funding, time and strategic planning.

How Playfish improved its build capability

I don't know how Playfish built software before I joined, but in August 2010 all the developers were using an instance of Bamboo hosted on JIRA Studio. JIRA Studio is a software-as-a-service implementation of a variety of Atlassian products. I haven't used it for a while but back in 2010 it was a single server hosting whatever Atlassian components you configured. Some Playfish developers had set up Bamboo as an experiment and by August 2010 it had become the unofficial standard for building software. I say unofficial because Operations didn't know that it had become the standard until the thing broke.

Playfish Operations managed the deployment of software back then, and that meant copying some war files from an artefact repository, updating some config and occasionally running SQL scripts on databases. The processes that built these artefacts were owned by each development team. The development teams had a good community and had all chosen to adopt Bamboo.

Pause for context…

Let’s take a moment to look at the context surrounding this situation because we’re at that point where this situation could have degenerated into one of the classic us vs. them situations that are so common between operations and development.

When I was hired at Playfish I was told by the CEO, the Studio Head and the Engineering Director that I had two missions:

  1. Mature Playfish’s approach to operations
  2. Remove the blocks that slowed down development.

Playfish Operations were the only group on-call. They were pushing very hard to prevent development teams from requesting deployments on Friday afternoons and then running out of the building to get drunk.

I realised straight away that the development teams needed to manage their own deployments and take responsibility for their own work. That meant demystifying system configuration so that everyone understood the infrastructure and could take responsibility for their code on the live systems. I also knew that educating a diverse group of 50-odd developers was not a “quick-win”.

This may explain why I didn't want operations to take ownership of the build system even though that's exactly what some of my engineers wanted me to do. Operations weren't building software at the level of complexity that required a build system back then. Operations code, and some of it was quite complex, wasn't even in source control, so if we'd taken control of the build system we'd have been just another bunch of bureaucrats managing a system we had no vested interest in.

…Resume

When the build system reached capacity and everyone looked to operations to resolve it I played innocent. Build system? What build system? Do any of you guys know anything about a build system? It worked, to a certain extent, and some of the more senior developers took ownership of it and started stripping out old projects and performance improved.

While this was happening I was addressing the education issues. Everyone in the development organisation was telling me that the reason deployments were so difficult was because there were significant differences between the development, test and live environments. Meanwhile the operations engineers were swearing blind that there were no significant differences. Obviously there will always be differences; sometimes these are just different access control criteria, but there are differences. As the Playfish Operations team were the only group responsible for performance and resilience they were very protective about who could make changes to the infrastructure and configuration. This in turn led them to being unwilling to share access to the configuration files, and that led to suspicion and doubt among the development teams. This is inevitable when you create silos and prevent developers from taking responsibility for their environments.

To resolve this I took it on myself to document the configuration of the environments and highlight everywhere there were differences. This was a great induction exercise for me (I was still in my first couple of weeks). I discovered that there were no significant differences; all the differences were little things like host names and access criteria.

Deployment problems were now treated completely differently. Just lifting the veil on configuration changed the entire problem, and we then found the real root cause. The problem was configuration, it just wasn't system configuration, it was software configuration. There was very little control over the Java properties and there were frequent duplications with different key names and default values. This made application behaviour difficult to predict when the software was deployed in different environments. There was then a years-long initiative to ban default values and to identify and remove duplicate properties. This took a long time because there were so many games and services and each was managed independently.
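The kind of tooling that helps with that clean-up is straightforward. Here's a hedged sketch (the property names are invented, and the parser deliberately ignores comments and line continuations) that flags keys set to different values in different environments:

```python
def parse_properties(text):
    """Very simplified parser for Java-style .properties content:
    ignores comments, blank lines and line continuations."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith(("#", "!")) and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

def conflicting_keys(environments):
    """Given {env_name: properties_dict}, return the keys that are
    set to different values in different environments."""
    seen = {}
    for env, props in environments.items():
        for key, value in props.items():
            seen.setdefault(key, {})[env] = value
    return {
        key: values for key, values in seen.items()
        if len(set(values.values())) > 1
    }

dev = parse_properties("db.pool.size=10\ncache.ttl=300")
live = parse_properties("db.pool.size=50\ncache.ttl=300")
print(conflicting_keys({"dev": dev, "live": live}))
# {'db.pool.size': {'dev': '10', 'live': '50'}}
```

Run against every environment's files, a report like this makes the duplications and divergent defaults visible instead of leaving them to surface as deployment surprises.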

Conclusion

I won't head into the subject of integration for now as that's a whole other rabbit hole, and we have enough to contrast the approach we took at Playfish with the recommendation made in the framework.

The build system at Playfish had no clear ownership. Passionate and committed senior developers did a good job of maintaining the build system’s performance but there was no single group who could lead a strategic discussion. That meant there was no-one who could look into extending automated build into automated testing and no one to consider extending build into integration.

This in turn meant that build and integration were always separate activities at Playfish. This had a knock-on effect on how we approached configuration management and significantly extended the complexity and timescales of automating game and service deployment.

The Next Gen DevOps Transformation Framework supports a product-centric approach to DevOps. In the case of Build & Integration it recommends an organisation treats its build process and systems as an internal product. This means it needs strategic ownership, vision, roadmaps, budget and appropriate engineering. At Playfish we never treated build that way; we assumed that build was just a part of software development, which it is, but we never sufficiently invested in it to reap all the available rewards.

Next Gen DevOps Transformation Framework: A case study of Playfish circa 2010

Introduction

This is going to be part one in a two-part series. In this article I'm going to run a case study capability assessment using my newly published Next Gen DevOps Transformation Framework: https://github.com/grjsmith/NGDO-Transformation-Framework.

I'm going to use Playfish (as it was in August 2010 when I joined) as my target organisation. The reason for this is that Playfish no longer exists and there's almost no one left at EA who would remember me, so I'm not likely to be sued if I'm a bit too honest. I'm only going to review two capabilities, one that scores low and one that scores high, otherwise this blog post will turn into a 20-page TL;DR fest.

Next week I'll publish the second part of this series, where I'll discuss the recommendations the framework makes to help Playfish level up its capabilities, contrast that with what we actually did, and talk about some of the reasons we made the choices we did and where those choices led us.

But first, a little context

Playfish made Facebook games, and that was just one of many things that made Playfish remarkable. Playfish had extraordinary vision, diversity and capability, but coming from AOL the thing that most stood out for me was that all Playfish technology was underpinned by Platform-As-A-Service or Software-As-A-Service solutions.

Playfish's games were really just complex web services. The game the player interacts with is a big Flash client that renders in the player's browser. The game client records the player's actions and sends them to the game server. The server then plays these interactions back against its rule set, first to see if they are legal actions and then to record them in a database of game state. The game is asynchronous in that the client allows the player to do what they want and then validates the instructions. This meant that Playfish game sessions could survive quite variable ISP performance, allowing Playfish to be successful in many countries around the world.

The game server then interacts with a bunch of other services to provide CRM, Billing, item delivery and other services. So we have a game client, game service, databases storing reference and game state data and a bunch of additional services providing back-office functions.

From a technical organisation perspective Playfish was split into 3 notional entities:

The games teams were in the Studio organisation. They were cross-functional teams made up of producers, product managers, client developers, artists and server developers, all working together to produce a weekly game update.

The Engineering team developed the back-office services; there were two teams who split their time between something like 10 different services. These teams weren't pressured to provide weekly releases, rather they developed additional features as needed.

Finally the Ops team managed the hosting of the games and services and all the other 3rd party services Playfish was dependent on, like Google Docs, JIRA Studio and a few other smaller services.

External to these teams there were also marketing, data analysis, finance, payments and customer service teams.

Just before we dive in, let me remind you that we're assessing a company that released its first game in December 2007. When I joined it was still less than three years old and had grown to around 100 people, so this was not a mature organisation.

Without further ado let’s dive into the framework and assess Playfish’s DevOps capabilities as they were in August 2010.

Build & Integration

We'll look at Build & Integration first as this was the first major problem I was asked to resolve when I took on the Operations Director role at Playfish.

Build and Integration: ad-hoc, Capability level 0, description:

Continuous build ad hoc and sporadically successful.

This seems like an accurate description of the Playfish I remember from 2010.

Capability level 0 observed behaviours are:

Only revenue generating applications are subject to automated build and test.

Automated builds fail frequently.

No clear ownership of build system capability, reliability and performance.

No clear ownership of 3rd-party components.

This isn’t a strict description of the state of Playfish’s build and integration capabilities.

The games and the back-office services all had automated build pipelines, but there was only very limited automated testing. The individual game and service builds were fairly reliable, though that was due in part to the lack of any sophisticated automated testing. The games teams struggled to ensure their games worked with the back-office components: Playfish had only recently transitioned to multiple back-office services when I joined and was still suffering some of the transitional pains. There was no clear ownership of the build system. Some of the senior developers had set one up and begun running it as an experiment, but pretty soon everyone was using it. It was considered production, yet no one had taken ownership of it, so when it under-performed everyone looked at everyone else. 3rd-party components were well understood and well owned at Playfish. Things could have been a little more formal, but at Playfish's level of maturity that wasn't strictly necessary.

Let’s take a look at the level 1 build and integration capabilities before we settle on a rating. Description:

Continuous build is reliable for revenue generating applications.

Observed behaviours:

Automated build and test activities are reliable in the build environment.

Deployment of applications to production environment is unreliable.

Software and test engineers concerned that system configuration is the root cause.

Automated build and test were not reliable. Deployment was unreliable and everyone was concerned that system configuration was to blame for the unreliability of deployments.

So in August 2010 Playfish’s Next Gen DevOps Transformation Framework Build & Integration capability was level 0.
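The assessment process above can be sketched in a few lines of code. This is my own illustrative representation, not part of the official framework: each capability level's observed behaviours either match what you see in the organisation or they don't, and the capability rating is simply the highest level whose behaviours match.

```python
# A minimal sketch of a framework assessment. The mapping of level to
# "did the observed behaviours match?" is filled in by hand from the
# kind of evidence discussed above; the data structure is my own.

def assess(observed_matches):
    """Return the highest capability level whose observed behaviours
    match the organisation, or 0 if none match."""
    matched = [level for level, matches in observed_matches.items() if matches]
    return max(matched) if matched else 0

# Playfish, August 2010, Build & Integration: level 0 behaviours matched,
# level 1 did not (automated build and test were not reliable).
build_and_integration = assess({0: True, 1: False})
print(build_and_integration)  # 0
```

Notice this also captures the common-sense escape hatch used later in this post: an organisation can fail a level's *description* (no documented list, no roadmaps) while still exhibiting the *behaviours* of a much higher level, and it's the behaviours that drive the rating.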

Next we'll look at 3rd Party Component Management. I'm choosing this because Playfish was founded on the principle of using PaaS and SaaS solutions wherever possible, so it should score highly, but I suspect it will be interesting to see how.

3rd Party Component Management

Capability level 0 description:

An unknown number of 3rd party components, services and tools are in use.

This wasn't true: Playfish didn't really have enough legacy to lose track of its 3rd-party components.

Capability level 0 behaviours are:

No clear ownership, budget or roadmap for each service, product or tool.

Notification of impacts are makeshift and motley causing regular interruptions and impacts to productivity.

All but one 3rd-party provided service had a clearly defined owner. Playfish had clear owners for the relationship with Facebook, and the only tool in dispute was the automated build system. There were a variety of 3rd-party libraries in use, and these were never used from source, so they never caused any surprises. While there were no clear owners for all of these libraries, all the teams kept an eye on their development and there were regular emails about updates and changes.

There were no formal roadmaps for the products and tools, but their use was constantly discussed.

So it doesn't seem that Playfish was at level 0.

Capability level 1 description:

A trusted list of all 3rd party provided services, products and tools is available.

There was definitely no documented list of all the 3rd-party services, products and tools, so it may be that Playfish should be considered to be at level 0. But let's apply some common sense (required when using any framework) and take a look at the observed behaviours.

Capability level 1 observed behaviour:

Informed debate of the relative merits of each 3rd party component can now occur.

Outages still cause incidents and upgrades are still a surprise.

There was regular informed debate about the relative merits of almost all the 3rd party services, products and tools. No planned maintenance of 3rd party services, products or tools caused outages.

So while Playfish didn't have a trusted list of all 3rd-party provided services, products and tools, it didn't experience the problems that might be expected. This was because it was a very young organisation with very little legacy and a very active and engaged workforce. Since we don't see the expected observed behaviours, let's move on to level 2.

Description for Capability level 2:

All 3rd party services, products and tools have a service owner.

While there was no list, it was well understood who owned all but one of the 3rd-party services, products and tools.

Capability level 2 observed behaviour:

Incidents caused by 3rd party services are now escalated to the provider and within the organisation.

There is organisation wide communication about the quality of each service and major milestones such as outages or upgrades are no longer a surprise.

There is no way to fully assess the potential impact of replacing a 3rd party component.

Incidents caused by 3rd-party services, products and tools were well managed. There was organisation-wide communication about the quality of 3rd-party components, and 3rd-party upgrades and planned outages did not cause surprises. The 3rd-party components in use were very well understood, and debates about replacing them were common. We even used Amazon's cloud services carefully to ensure we could switch to other cloud providers should a better one emerge. We once deployed the entire stack and a game to OpenStack and it ran with minimal work (although this was much later). The use of different components was frequently debated, and it wasn't uncommon for us to use multiple alternative components on different infrastructure within the same game or service to see real-world performance differences first-hand.

So while Playfish didn't meet the description of capability level 2, its behaviour exceeded that predicted in the observed behaviours.

Let’s take a look at Capability level 3:

Strategic roadmaps and budgets are available for all 3rd party services, products and tools.

There definitely weren't roadmaps and budgets allocated for any of the 3rd-party services. To be fair, when I joined, Playfish didn't really operate budgets.

Capability level 3 observed behaviour:

Curated discussions supported by captured data take place regularly about the performance, capability and quality of each 3rd party service, product and tool. These discussions lead to published conclusions and actions.

Again, Playfish's management of 3rd-party components doesn't match the description, but the observed behaviour does. Numerous experiments were performed assessing the performance of existing components in new circumstances, or comparing new components in existing circumstances. Debates were common and occasionally resolved into experiments. Tactical decisions were made based on data gathered during these experiments.

Let’s move on to capability level 4:

Continuous Improvement

There was a degree of continuous improvement at Playfish but let’s take a look at the observed behaviours before we draw a conclusion:

3rd party components will be either active open-source projects that the organisation contributes to or they will be supplied by engaged, responsible and responsive partners.

This description fairly accurately matches Playfish’s experience.

So in August 2010 Playfish’s 3rd Party Component Management capability was level 4.

It should be understood that Playfish was a business set up around the idea that 3rd-party services, products and tools would be used as far as possible. It should also be remembered that at this stage the company was only about 18 months old, hence the behaviours were good even though it hadn't taken any of the formal steps necessary to ensure them.

Conclusion

Using the Next Gen DevOps Transformation Framework to assess the DevOps capabilities of an organisation is a very simple exercise. With sufficient context it can be done in a few hours. If you want someone external to run the process it will take a little longer as they will have to observe your processes in action.

Look out for next week’s article when I’ll examine what the framework recommends to improve Playfish’s Build & Integration capabilities and contrast that with what we actually did.

The First Blue/Green Production Deployment circa 2005

Just a short post from me this week as I focus on publishing the Next Gen DevOps Transformation Framework. My friend and former colleague Phil Hendren, now of Mind Candy fame, has just published an article about our first experience with Continuous Delivery back at AOL in 2004/5. This story has been told many times by some of the amazing people we were privileged to work with on that project, but they have usually told it from a software engineering perspective. This is the first time the tale has been told from an operations perspective. There are some interesting nuances to this story. I won't share them here; for now you should just go and read Phil's article. However, I'd like to leave you with one thought before you click away. Six weeks after we went live with the new system and the deployment mechanism Phil writes about, we stopped getting bugs in the live environment. Six weeks after that, everyone stopped caring about when the new update would be deployed. If we could achieve that a decade ago, imagine what we can do now.
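Phil's article describes the real mechanism far better than I can here, but the core idea of a blue/green deployment is worth sketching. This is a generic illustration with hypothetical names, not the AOL system: keep two identical environments, deploy to the idle one, verify it, and only then flip the live pointer, so a broken release never touches production traffic.

```python
# Generic blue/green switch: deploy to the idle environment, run a
# health check against it, then atomically repoint "live" at it.
# All names here are hypothetical illustrations.

class BlueGreen:
    def __init__(self):
        self.environments = {"blue": None, "green": None}  # deployed version
        self.live = "blue"

    @property
    def idle(self):
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version, healthy):
        target = self.idle
        self.environments[target] = version
        if not healthy(target):
            # Live traffic never saw the bad release; fix and retry.
            raise RuntimeError(f"{version} failed health checks on {target}")
        self.live = target  # the actual cut-over is a single pointer flip
        return target

router = BlueGreen()
router.deploy("v2", healthy=lambda env: True)
print(router.live, router.environments[router.live])  # green v2
```

The property that made bugs in the live environment stop being an event is visible even in this toy: validation happens on the idle side, and the cut-over is a single, instantly reversible pointer flip.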

Photo courtesy of Kylie, who delights in taking close-up photos while we're on holiday that annoy the hell out of me but make good featured images on blog posts 🙂