Thursday, June 28, 2018

Caution: The DevOps explosion

Everything in moderation - let's not overdo the DevOps!

Recently, working as a startup CTO and adviser, I have seen what I call a "DevOps" craze. Developers, especially younger ones, seem totally swept up in the DevOps revolution: continuous deployment, auto-scaling groups, hot back-ups, containers and so on.

Many of these implementations, whilst impressive, are unnecessary for the stage of the company or the product. They are often difficult to document, relatively expensive, hard to maintain and hard to debug. In many circumstances it would be far easier to rely on some old-fashioned deployment or back-up scripts until the company or product has some traction in the market and there are resources available to properly administer the complexity of a full DevOps setup. This is something of a paradox: you would expect that automating the mundane operational tasks the development team faces would save not only time and resource but also money. That is not always the case. Like anything, you should implement exactly what is needed and no more. Many service providers make individual DevOps tasks almost trivial to implement, so the temptation to over-engineer the DevOps is always there. It should be resisted.
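
To make the point concrete, here is a minimal sketch of the kind of "old-fashioned" script I mean - a single Python file that backs up the current release and then deploys a new build over SSH. The host names and paths are hypothetical placeholders, not a recommendation of any particular layout:

#!/usr/bin/env python3
"""Minimal deploy-and-backup sketch: one server, no pipeline, no containers.
All host names and paths below are hypothetical placeholders."""
import datetime
import subprocess

APP_DIR = "./build"                    # local build artefacts to ship
SERVER = "deploy@app.example.com"      # hypothetical production host
REMOTE_DIR = "/var/www/myapp"          # hypothetical install location
BACKUP_DIR = "/var/backups/myapp"      # hypothetical backup location

def run(cmd):
    """Run a command, echo it, and fail loudly if it breaks."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def backup():
    """Keep a dated tarball of the current release before overwriting it."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    run(["ssh", SERVER,
         f"tar czf {BACKUP_DIR}/release-{stamp}.tar.gz -C {REMOTE_DIR} ."])

def deploy():
    """Copy the new build over the old one with rsync."""
    run(["rsync", "-az", "--delete", f"{APP_DIR}/", f"{SERVER}:{REMOTE_DIR}/"])

if __name__ == "__main__":
    backup()
    deploy()

A script like this is easy to read, document and debug, and it can be retired in favour of a proper pipeline once the product has earned that complexity.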

The main reason to avoid over-engineering DevOps is not the potential to lose control of cost, nor the risk of spending much-needed development time on DevOps tasks. The main risk is creating an overall system that relies on a number of DevOps tool providers which, used in conjunction, produce something unnecessarily complex to understand and maintain. For example: you push your code to source control, which integrates with another third party; this runs some unit tests and pushes to production servers; those servers run a number of containers that are "networked" together; and the servers themselves have various firewall rules, scaling rules and so on. One can see that this gets very complex, very quickly. Yes, these tools are wonderful to have and the concept is sound in principle. But is it really needed? How often do you really need to deploy to production? How difficult is the deployment? How many people take the time to properly document the DevOps configuration and setup? Is DevOps becoming a risk to your business?

If you are not carefully managing this, DevOps can become more of a risk than an asset to the business.

Sometimes, less is more. Everything in moderation - especially when it comes to software engineering. 

Saturday, June 2, 2018

Hire for Engineers not for Frameworks

I have been faced with hiring great engineering teams on a number of occasions, and I am constantly amazed at how many hiring managers and recruiters focus on particular technologies and on experience in those technologies.

My experience has taught me to hire engineers, and I have always viewed the technology as a secondary consideration. I always focus on a grasp of engineering fundamentals and not on information about some current or past technology. In our current world these technologies are out of date by the time the ink on the contract is dry.

Recently, I was approached by a leading company for a very senior role. The in-house recruiter laid out his wonderful resume of world-leading software companies before affording me the chance to explain my career to date. He then proceeded to ask me a number of questions on specific Java libraries and how they are used. This is all information that is readily available on reference sites and in API documentation, and it does not highlight any engineering ability whatsoever.

Unfortunately, this practice is commonplace. Hiring tests are geared towards determining how good someone is with a particular language or technology, not how good they are at thinking or at learning a new framework or technology.

I have never hired teams this way. The best engineering or software teams are made up of good, solid engineers who can solve problems and can adapt to a fast-paced, constantly moving technological landscape.

Wednesday, May 16, 2018

The issues with GraphQL

I recently wrote a little about some of the issues with a technology like GraphQL. Whenever a new technology emerges that claims to solve age-old problems, one has to look closely at the assertions being made.

In the case of GraphQL the following is listed on their website:


  • Ask for what you need, get exactly that
  • Get many resources in a single request
  • Describe what is possible with a type system
  • Move faster with powerful developer tools
  • Evolve your API without versions
  • Bring your own data and code
The How to GraphQL site describes GraphQL as "the better REST":

  • No more Over and Under fetching
  • Rapid product iterations on the frontend
  • Insightful Analytics on the Backend
  • Benefits of a schema and type System
Whilst many of these assertions may be true, the reality is that they can also cause significant problems. Like anything in engineering, there are always trade-offs.

  1. Asking for what you need: The backend or API has to be able to generate the queries that give back exactly the data requested. As your project grows, if this is not carefully managed, the queries get bigger and more convoluted - eventually slowing performance and becoming unmanageable (see the sketch after this list).
  2. Getting many resources in a single request: This produces similar issues to the above; furthermore, if not managed, it can yield a system with many points of failure. If, for example, one of the sub-queries is not returning valid results or is performing badly, the entire request can suffer (unlike REST, where routes fail independently).
  3. Evolve your API without versions: Whilst this seems like a great idea, the reality is that in a complex GraphQL implementation, if something goes wrong on the backend it is much more difficult to debug and has the potential to significantly impact the system. You have to ensure that any changes to the API are backward compatible and don't break the system - and you may well find yourself reverting to versions anyway.
  4. Bring your own data and code: This alludes to the fact that the caller is oblivious to the backend data source. That is no different from REST.
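
To illustrate points 1 and 2, here is a hypothetical sketch (the endpoint, field names and services are invented) of how a dashboard query tends to grow: each nested sub-query pulls in another backend service, and every one of them is a potential point of failure inside the single request.

import requests  # third-party HTTP client; endpoint and schema below are invented

GRAPHQL_URL = "https://api.example.com/graphql"

# What started as "give me the user's name" slowly accretes into this:
DASHBOARD_QUERY = """
query Dashboard($id: ID!) {
  user(id: $id) {
    name
    orders(last: 20) {            # sub-query 1: hits the order service
      total
      items { sku price }         # sub-query 2: hits the catalogue service
    }
    recommendations(first: 10) {  # sub-query 3: hits a recommendation service
      product { sku name }
    }
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": DASHBOARD_QUERY, "variables": {"id": "42"}},
)
data = response.json()

# If any single resolver (say, recommendations) is slow or failing, the whole
# response suffers. The equivalent REST calls would fail independently:
#   requests.get(".../users/42")
#   requests.get(".../users/42/orders")
#   requests.get(".../users/42/recommendations")
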
On the claims in the How to GraphQL post mentioned above:

  1. No more over- and under-fetching: This is simply not true. Front-end developers will always take the path of least resistance, often without considering performance. In a REST world they may have had to make multiple queries to populate a web page (for example); now that there is a mechanism to do it all at once, it will happen. Over-fetching is a big problem in GraphQL implementations!
  2. Rapid iterations: Yes, this is true - but the quality of the overall system is impacted.
  3. Insightful analytics: I have found it much harder to measure API performance in a GraphQL world than in a REST world. After all, in a REST world one can simply analyse each route and easily find the bottlenecks. This is not the case when there is a single end-point (see the timing sketch below).
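
As a rough illustration of the analytics point (all names here are hypothetical), route-level timings are trivial to collect and interpret under REST, but they collapse into one opaque bucket when every query arrives at the same /graphql end-point, forcing you to instrument individual resolvers instead.

import time
from collections import defaultdict

timings = defaultdict(list)  # label -> list of request durations in seconds

def timed(label):
    """Decorator that records how long a handler takes under a given label."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                timings[label].append(time.perf_counter() - start)
        return inner
    return wrap

# REST: each route gets its own label, so a slow route stands out immediately.
@timed("GET /users/<id>")
def get_user(user_id): ...

@timed("GET /users/<id>/orders")
def get_orders(user_id): ...

# GraphQL: every request is "POST /graphql", so cheap and expensive queries
# land in the same bucket; to see anything useful the timing has to be pushed
# down into each individual resolver.
@timed("POST /graphql")
def graphql_endpoint(query, variables): ...

Per-resolver instrumentation is perfectly doable, but it is extra work that the per-route model gives you for free.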

In summary, whilst GraphQL is another tool in the engineering toolbox, it is not a miracle solution. If not managed carefully, it can become a problem in complex projects for the reasons mentioned above. Claims made by the creators of a new technology must always be evaluated and not taken at face value.