Tuesday, September 1, 2020

Insights & Suggestions for Effective Software Development


I decided to note down a few observations on software development, based on my experiences with a number of clients over a long career. This is a work in progress, and I would appreciate any further input or comments.

Topics

  1. Data science needs a systematic approach
  2. Without Specification you are in trouble
  3. Interview for excellence
  4. Agile does not mean "do nothing"
  5. DevOps first is not always productive
  6. Too many resources is as bad as, or worse than, too little
  7. Design before Develop, before Deploy

1. Data Science needs a `Systematic Approach`


We live in a `Data Science` age; it has become a buzzword. We have always had data science in some form or another; it's just that the availability of data, the techniques, the compute power and the data volumes have exploded over the last few years. The principles are not new, they are standard engineering practices; however, in the new generation of `Data Science` these standard practices are often overlooked.
Here are some general rules of thumb I have noted down:
  • It makes no sense to use machine learning to correct bad data that could be corrected by improving the data sources themselves. Machine learning should, in general, be used to classify data and gain insights.
  • Machine learning and other `learning` techniques are not guaranteed to converge. Use them with caution, and certainly do not rely on them as a primary source of truth.
  • Models work under certain conditions; you have to take the superset of possibilities into account as well.
  • Developing an algorithm that works on a clean sample data set is only the beginning of the problem. Running it in production with real (possibly real-time) data is another matter altogether.
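As a rough sketch of the first rule, fixing data at the source: a small validation gate (the field names and ranges here are invented for illustration) that rejects bad rows before they ever reach a model, rather than asking the model to learn around them:

```python
def validate_rows(rows, required_fields, valid_ranges):
    """Split raw source rows into clean rows and rejects.

    rows            -- list of dicts straight from the data source
    required_fields -- fields every row must contain
    valid_ranges    -- {field: (lo, hi)} sanity bounds; keys are
                       assumed to be among the required fields
    """
    clean, rejects = [], []
    for row in rows:
        if not all(f in row for f in required_fields):
            rejects.append(row)  # incomplete row: fix the source, don't impute
        elif any(not (lo <= row[f] <= hi)
                 for f, (lo, hi) in valid_ranges.items()):
            rejects.append(row)  # physically impossible value
        else:
            clean.append(row)
    return clean, rejects
```

A rising reject count is then a signal to fix the upstream source, not to bolt on a correction model.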

2. Without Specification you are in trouble


Somehow the message behind the [Agile Manifesto](https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto/) has been lost in translation. Most often I see organisations where the term `agile` is used to mean `no process`. This includes starting projects without any `specification` of what the project is or what is being built: it's `agile`, so it will be worked out as the project evolves. It is easy to reason away this lack of process, especially in an environment with rapidly changing or complex requirements. Further, if you throw enough resources at something, you might even produce an output.

You should always specify what you are doing, even if you are completely and utterly wrong!

This seems to be the most obvious statement of all time; however, time and time again I find software projects without specification. This leads to a host of secondary problems: poor or no estimation, arguments and organisational confusion, lack of accountability, poor performance, and others.

If you can't specify what you are doing, you can't estimate how long it will take, and therefore you can't tell what resources you will need. Your standard project management triangle (scope, time, resources) is thus broken, with all the vertices variable. Immediately the project is doomed to fail or underperform.

You have no accountability: without a written definition, it is simply a case of `he said, she said`. You also cannot iterate effectively, as there is no known starting point. You cannot negotiate, as nothing is defined, and most importantly you cannot design!

The entire software development process starts from definition, even if only for small chunks of the entire system. `Agile` methodologies have various levels of abstraction: Epics, Stories, Releases, etc. There is a reason for this: most methodologies focus on how to write definitions and how, thereafter, to estimate and allocate.

Without estimation you cannot allocate resources, and therefore you cannot produce roadmaps. This means the business has no visibility of what it is doing, which is a recipe for disaster.

Further, without specification, organisations tend to skip the architecture step, and the outcome may or may not converge on something robust and efficient. In fact, the likelihood of producing not only bad software, but software that cannot be maintained, is very high.

A good test of how well your project is defined is to monitor how long the development team can go without leadership intervening or answering questions about features. If the product owner constantly has to call meetings to explain things, the project is probably not well specified.

3. Interview for Excellence


In an environment where skilled resources are hard to find, it is tempting to be more flexible on the requirements in order to get `bums on seats`. This is a very dangerous game: engineering excellence needs the right team, with the right skill set and the right attitude; anything less and you are setting the organisation up for `engineering non-excellence`. The interview process should be taken very seriously and be very systematic.

Software is built by people, and it is very important to get the right people and the right teams in place. Always favour quality over quantity (unless there is some other hidden motivation).

4. Agile does not mean `do nothing`


As previously mentioned, `agile` is not a synonym for `do nothing` or `leave it till later`. Effective agile development requires a significant amount of planning, domain knowledge and `hypothesis testing` (from the Lean methodologies). Do not allow projects to drift without direction, be systematic in your approach.

5. `DevOps` first is not always productive (working code first, DevOps after)


We live in a world of `DevOps` and cloud services, where servers can be spun up from the outset to compensate for poor-quality engineering. If you can get software working reliably without these services, it will no doubt work well with them; the opposite, however, is not true. We often spend far too much time using the latest and greatest managed service without focusing on the core functionality and robustness of the software we are building. When things do go wrong, they will go horribly wrong, and in the best case carry a much higher operational cost than needed. Don't let `DevOps` be an excuse for poor engineering.

For a description of some potential issues see `Release It! 2nd Edition pg 46`.

Buggy code can to some extent be compensated for by `autoscaling`, but beware of a large bill!

6. Too many resources is as bad as, or worse than, too little


`The Mythical Man-Month` springs to mind: throwing resources at a problem is not always the best way to solve it. Often this leads to poor results, frustration and other organisational issues that are very hard to solve down the line. If you can effectively solve a problem with careful thought and limited resources, you can be sure that the solution has been thought through and is effective. Simply throwing more resources at a problem does not mean you will solve it quicker. In fact, it might slow you down:
  • It takes time for new people to bond with a team
  • It takes time for new people to understand complex problems and projects
  • Each new person adds a management overhead
  • Teams that are too large are hard to manage
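The overhead in the last two points can be made concrete with Brooks's well-known observation that pairwise communication channels grow quadratically with team size, n(n-1)/2:

```python
def communication_channels(team_size):
    """Pairwise communication channels in a team of n people: n * (n - 1) / 2."""
    return team_size * (team_size - 1) // 2

# Doubling a team of 5 to 10 goes from 10 channels to 45,
# more than quadrupling the coordination overhead.
```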

The phrase `too many cooks in the kitchen` also comes to mind: you don't want a lot of people and personalities jostling with each other; this is unproductive.

7. Design before Develop, before Deploy


Before one jumps in and starts developing, there must be an element of design. It is fine to prototype in order to inform design, but prototypes cannot become the thing that is deployed. Too often proofs-of-concept somehow become the real thing and land up in production. For obvious reasons, this is a bad idea.

It is always a good idea to mock the UI, and to use the mocks to help the UI designers figure out how they are going to develop the application. However, it is not a good idea to use the UI mocks as the specification for the product, especially since they miss all the backend considerations and the non-functional requirements, which in complex systems are vitally important.

Sunday, June 30, 2019

Challenges Facing IoT Systems in 2019




There is no doubt that we have made significant progress in many key components of IoT systems in recent years. These developments have in fact made IoT possible. Computer hardware has come down in cost and size. For many, the Raspberry Pi and similar devices have made it possible to experiment and innovate, coupled with the huge open-source community that has provided the operating systems and tools to unlock these devices.

Mobile and other communication networks have improved, driven initially by demand from mobile phone users. This has also driven the development of mobile devices, which in turn has driven improvements in batteries, screens, sensors and the entire software ecosystem that runs on these devices.

With all this development, there are still a number of significant challenges. In order to roll out significant quantities of IoT devices, deployment solutions need to improve. We live in a world of orchestrated containers; whilst these have enabled significant progress in continuous deployment, the technologies are network-intensive and will require further refinement to perform well in an IoT context. Engineers in IoT face similar challenges to those who tackled large-scale data issues 10 years ago, but with more constraints.

The logistics of deploying large volumes of sensors is prohibitive and hence dynamic software updates are almost mandatory.  

With the vast number of devices deployed, the need for dynamic updates, and the importance of the systems that come to rely on this data, security becomes a big concern. We need mechanisms that ensure data validity and authenticity in a way that is effective and performant.
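One standard building block for authenticity, sketched here under the assumption of a pre-shared per-device key, is an HMAC tag on each payload (Python's standard `hmac` module shown; a real deployment would also need key rotation and replay protection):

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> str:
    """Return a hex HMAC-SHA256 tag for a device payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, key: bytes, tag: str) -> bool:
    """Constant-time check that the payload matches its tag."""
    return hmac.compare_digest(sign(payload, key), tag)
```

The tag travels alongside the payload, so the receiving system can cheaply reject data that was tampered with or did not originate from a known device.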

Bandwidth is improving, but the improvement is being consumed by the growing volume of data and the number of applications for IoT data. Hence bandwidth remains, and will possibly always remain, a constraint.

With the proliferation of devices, we have challenges around processing this data. Technology is emerging that can capture and process huge volumes of data, however, the technology and tooling around these technologies requires further development. 

On top of all this data gathering, we are developing data models and machine learning models, and there are significant challenges in deploying these models. In many cases we have sensor data that needs to be processed, alongside more static data such as location, geography or weather.

Along with all of the above, we need systems to monitor the entire network. In many cases we cannot consider the IoT devices as "Edge" devices, rather they are becoming "part" of the network, and as such need a sufficient level of monitoring to ensure that the overall system remains reliable and stable.

Whilst IoT has certainly opened up the realms of possibility, it brings with it new challenges that we are yet to completely solve. We might look to how we solved similar problems in the past and apply some of that thinking to the new context, or we may have to look for new innovative approaches.

Monday, November 19, 2018

Developing on the Move


I am constantly amazed at the rate of progress in software development. Years ago, I recall writing tedious scripts using Perl, Tcl and Expect to log into switches and issue commands to provision them. Today we have Docker, Ansible, Kubernetes and a host of other tools; we have advanced compilers, transpilers, linters, formatters and various other tools. We can provision infrastructure on the cloud in seconds and then build complex systems quickly and efficiently. Obviously this means that the systems themselves can become ever more complex.

Computing power is such that AI now has practical applications, natural language processing, computer vision and other applications.

It occurred to me on my lengthy, rainy bus journey home this evening that the next step in our development evolution might be the ability to develop software and systems on the fly. I would love to be able to develop systems whilst I walked in the park or sat observing a lovely view. Would it be possible to talk to an interpreter and then run some process on the output to generate the code and the systems I am developing?

Presumably, we would need to define a language of common constructs that is extensible. Something like :


  • Project create
  • File create index javascript
  • Function create test
  • AWS Provision EC2

Each developer could then, using some `modules` kept in an open-source repository, run a parser on these instructions and produce some output: code in a language of their choice, and DevOps in the technology of their choice. The system would need a language definition that could easily be extended.
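A minimal sketch of such a parser, with a couple of invented handlers to show the extension mechanism (the command names and outputs are purely illustrative):

```python
HANDLERS = {}
EXTENSIONS = {"javascript": "js", "python": "py"}

def command(prefix):
    """Register a handler `module` for a command prefix."""
    def register(fn):
        HANDLERS[prefix] = fn
        return fn
    return register

@command("file create")
def create_file(rest):
    name, lang = rest.split()
    return f"would generate {name}.{EXTENSIONS[lang]}"

@command("project create")
def create_project(rest):
    return f"would scaffold project {rest or 'default'}"

def dispatch(line):
    """Route one spoken or typed instruction to its handler."""
    for prefix, handler in HANDLERS.items():
        if line.lower().startswith(prefix):
            return handler(line[len(prefix):].strip())
    raise ValueError(f"unknown command: {line}")
```

New constructs are added by registering another handler, which is roughly the extensibility the language definition would need.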

Interested to hear the thoughts of others on this ...

Thursday, June 28, 2018

Caution: The DevOps explosion

Everything in moderation - Let's not overdo the DevOps !

Recently, working as a startup CTO and adviser, I have seen what I call a `DevOps` craze. Developers, especially younger ones, seem totally swept up in the DevOps revolution: continuous deployment, auto-scaling groups, hot back-ups, containers, etc.

Many times these implementations, whilst impressive, are unnecessary for the stage of the company or the product's development. They are often difficult to document, relatively expensive, difficult to maintain and hard to debug. In many circumstances it would be far easier to have some old-fashioned deployment or back-up scripts until the company or product has some traction in the market, and there are resources available to properly administer the complexity of the DevOps setup.

This is somewhat of a paradox: initially it would seem that by automating many of the mundane operational tasks the development team faces, you would be saving not only time and resource but also money. However, this is not always the case. Like anything, you should implement exactly what is needed and no more. Many service providers make some of these DevOps tasks almost trivial to implement (individually), so the temptation exists to over-engineer the DevOps. This should be avoided.
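For scale, the kind of `old-fashioned` back-up script meant here can be a dozen lines with no third-party services at all (the paths are placeholders):

```python
import pathlib
import shutil
import time

def backup(src_dir: str, dest_dir: str) -> str:
    """Copy a directory into a timestamped .tar.gz archive.

    Trivial to read, document and debug -- often all an early-stage
    product needs until real traction justifies more automation.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = pathlib.Path(dest_dir) / f"backup-{stamp}"
    return shutil.make_archive(str(base), "gztar", src_dir)
```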

The main reason to avoid over-engineering DevOps is not the potential to lose control of cost, nor the risk of spending much-needed development time on DevOps tasks. The main risk is creating an overall system reliant on a number of DevOps tool providers which, used in conjunction, create systems that are unnecessarily complex to understand and maintain. For example: you push your code to source control, which integrates with another third party; this runs some unit tests and pushes to production servers; these servers host a number of containers that are `networked` together; and the servers themselves have various firewall rules and scaling rules, etc. One can see that this gets very complex, very quickly. Yes, it's wonderful to have these tools, and in principle the concept is wonderful. But is it really needed? How often do you really need to deploy to production? How difficult is the deployment? How many people take the time to properly document the DevOps configuration and setup? Is DevOps becoming a risk to your business?

If you are not carefully managing this, DevOps can become more of a risk than an asset to the business.

Sometimes, less is more. Everything in moderation - especially when it comes to software engineering. 

Saturday, June 2, 2018

Hire for Engineers not for Frameworks

I have been faced with hiring great engineering teams on a number of occasions, and I am constantly amazed at how many hiring managers and recruiters focus on particular technologies and on experience in those technologies.

My experience has taught me to hire engineers; I have always viewed the technology as a secondary consideration. I always focus on a grasp of engineering fundamentals, not on information about some current or past technology. In our current world, these technologies are out of date by the time the ink on the contract is dry.

Recently, I was approached by a leading company for a very senior role. The in-house recruiter laid out his wonderful résumé of world-leading software companies before affording me the chance to explain my career to date. He then proceeded to ask me a number of questions on specific Java libraries and how they were used. This is all information that is readily available on reference sites and in API documentation, and it does not highlight any engineering ability whatsoever.

Unfortunately, this practice is commonplace. Hiring tests are geared to determining how good someone is with a particular language or technology, not how good they are at thinking or at learning a new framework or technology.

I have never hired teams this way. The best engineering or software teams are made up of good solid engineers, who can solve problems and can adapt to a very fast paced and moving technological landscape. 

Wednesday, May 16, 2018

The issues with GraphQL

I recently wrote a little about some of the issues with a technology like GraphQL. Whenever a new technology emerges that claims to solve age-old problems, one has to look closely at the assertions made.

In the case of GraphQL the following is listed on their website:


  • Ask for what you need, get exactly that
  • Get many resources in a single request
  • Describe what is possible with a type system
  • Move faster with powerful developer tools
  • Evolve your API without versions
  • Bring your own data and code
The How to GraphQL site describes GraphQL as "the better REST":

  • No more Over and Under fetching
  • Rapid product iterations on the frontend
  • Insightful Analytics on the Backend
  • Benefits of a schema and type System
Whilst many of these assertions may be true, the reality is that these features can also cause significant problems. Like anything in engineering, there are always trade-offs.

  1. Asking for what you need: the backend or API has to be able to generate the queries that return exactly the data requested. As your project grows, if not carefully managed, the queries get bigger and more convoluted, eventually slowing performance and becoming unmanageable.
  2. Getting many resources in a single request: this produces similar issues to the above; furthermore, if not managed, it can yield a system with many points of failure. If, for example, one of the sub-queries is not returning valid results or is not performing, the entire system can suffer (unlike REST).
  3. Evolving your API without versions: whilst this would seem like a great idea, the reality is that in a complex GraphQL implementation, if something goes wrong on the backend it is much more difficult to debug and has the potential to significantly impact the system. You have to ensure that any changes to the API are backward compatible and don't break the system; you might once again revert to having versions.
  4. Bringing your own data and code: this alludes to the fact that the caller is oblivious to the backend data source. No different to REST.
In the How To Blog Post mentioned above:

  1. No more over- and under-fetching: this is simply not true. Front-end developers will always take the path of least resistance, often without considering performance. In a REST world they may have had to make multiple queries to populate a web page (for example); since you now provide a mechanism where this can be done all at once, it will happen. Over-fetching is a big problem in GraphQL implementations!
  2. Rapid iterations: yes, this is true, but the quality of the overall system is impacted.
  3. Insightful analytics: I found it much harder to measure API performance in a GraphQL world than in a REST world. After all, in a REST world one can simply analyse each route and easily find bottlenecks. This is not the case when there is a single endpoint.
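One common mitigation for the over-fetching problem is to cap query depth before execution. A sketch, modelling a parsed query as nested dicts rather than a real GraphQL AST (a production implementation would walk the AST from a GraphQL library):

```python
def query_depth(selection):
    """Depth of a query modelled as {field: sub-selection or None}."""
    if not selection:
        return 0
    return 1 + max(query_depth(sub) for sub in selection.values())

def enforce_max_depth(selection, limit):
    """Reject queries nested more deeply than the agreed limit."""
    depth = query_depth(selection)
    if depth > limit:
        raise ValueError(f"query depth {depth} exceeds limit {limit}")
    return depth
```

A depth (or cost) limit gives the backend a defensible performance boundary even though every client shares a single endpoint.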

In summary, whilst GraphQL is another tool in the engineering toolbox, it is not a miracle solution. If not managed carefully, it can become a problem in complex projects, for the reasons mentioned above. Claims made by the creators of new technology must always be evaluated and not taken at face value.

Tuesday, November 15, 2016

The Myth of Machine Learning

Recently there has been a surge in the number of companies claiming machine learning capabilities, from startups to large organisations. The media is full of "machine learning" claims, and investors seem to be dropping large amounts into startups claiming "machine learning" or "artificial intelligence" capabilities. From logo design companies to delivery companies, they all claim to have implemented "machine learning". I recently saw a press release for a US-based app development company that had raised a significant seven-figure sum, claiming they had developed "human-assisted machine learning"! One has to ask: what is that? Any machine "learning" has to be assisted by humans anyway; who else would configure the algorithms, applications and hardware, not to mention do the training?

Neural networks and genetic algorithms have been around for a long time. So why now? The answer is data (and to some extent computing power). In order to even attempt anything smart with "learning algorithms", one must have large sets of data. Since the rise of "Analytics" we have that data: sensors and devices connected to the Internet, and mini-computers in our pockets, always connected. So gathering data is not a problem. The problem is what to do with the data.

Fundamentally (despite outlandish claims by the media), neural networks can be "trained" to classify sets of input data into categories.


Picture a plane in 3D space that separates two sets of data (classification). If we project that plane into 2D space, we have a non-linear equation.
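That separating plane can be written as w · x + b = 0, and classifying a point is simply asking which side of the plane it falls on (the weights and points below are made up for illustration):

```python
def classify(point, w, b):
    """Return +1 or -1 depending on which side of w.x + b = 0 the point lies."""
    s = sum(wi * xi for wi, xi in zip(w, point)) + b
    return 1 if s >= 0 else -1

# The plane z = 1 in 3D corresponds to w = (0, 0, 1), b = -1.
```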




If there is a means of determining whether the classification was successful, the classifier can "learn" as the input evolves. Most current claims of "machine learning" are really simple rules that cluster input data; there is nothing much to these claims. They are more like the old "expert systems", which have a pre-programmed set of rules that help the computation determine some output result. You can imagine this as a large set of IF-THEN statements. It is a myth that these systems can learn, or that there is any form of "intelligence" in them. Somehow we have jumped from these very limited capabilities to machines running the Earth. I guess the one thing we can learn from the media of late (Brexit and Trump) is that you can't believe what you read!
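The IF-THEN flavour of such "expert systems" can be caricatured in a few lines; the rules below are invented, and nothing here learns anything:

```python
# An ordered rule base: the first matching condition wins,
# and the final catch-all rule acts as the default.
RULES = [
    (lambda d: d["temp"] > 30, "hot"),
    (lambda d: d["temp"] < 10, "cold"),
    (lambda d: True, "mild"),
]

def classify_reading(reading):
    """Apply the pre-programmed rules; no training or learning involved."""
    for condition, label in RULES:
        if condition(reading):
            return label
```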

Whilst I am positive that there are some very clever people out there working on all sorts of clever algorithms, I feel that, propelled by media hype, there is a big myth surrounding machine learning. In some ways it is in the interests of academics, business leaders, marketeers etc. to promote this myth, as it fuels their funding.