Accelerating Development

The main goal of every startup is to make money, and every part of the organization should be driven to make money for the company. This means creating products that users want to buy while decreasing the cost to produce them, all in a timely manner.

In a technology startup the product to deliver is, in many cases, code. The early phases of this process involve a lot of late nights, rapid development, and manual processes. As the company grows and the product becomes more complex, these practices become untenable and error prone. Consequently the speed of development slows and the ability to respond to the market diminishes.

To put this in another context, imagine building widgets. At the start, each widget is made the same as every other. The process is manually intensive and error prone, but the first batches are what is needed to get to market. The widgets take off, and soon there are requests for red widgets and blue widgets. More workers are hired to make widgets, but each makes them in their own way and on their own schedule. The widgets are no longer all made the same, and soon their quality is called into question. What went wrong?

Manufacturing has solved this issue. Most items are built by machines in a repeatable manner that decreases the likelihood of variation, and each product goes through quality analysis before it is finally shipped.

Development processes should work the same way: code goes in and is compiled, tested, and deployed with little or no human interaction. By removing this variability, errors become more obvious and are caught earlier in the process. The result is faster development and an easier time responding to market demands. Through a series of phases in a pipeline, code can be compiled, tested, assessed, and deployed before humans ever see it live.

Build

The first phase of this process is automatic builds that keep code flowing from source into a “deployable artifact”. This is akin to involving a machine in what was once a manual process. Taking the raw materials (code) and processing them into a product (a library or application) is the essential first step in ensuring code quality, because it removes the human factor from the build process.

An industry-standard build system is Jenkins. It lets developers define stages along a “build pipeline” that automate the steps of compiling source code and then publishing the resulting artifacts. This removes the human element and allows for continuous delivery of artifacts to the system.
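
As a concrete sketch, a minimal declarative Jenkinsfile might look like the following (the npm commands and the version value are illustrative assumptions, not prescriptions):

```groovy
// Jenkinsfile - a minimal sketch of a declarative build pipeline.
// The npm commands and the version value are illustrative assumptions.
pipeline {
    agent any
    environment {
        VERSION = '3.0.2' // MAJOR.MINOR.PATCH, bumped as releases stabilize
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm ci'        // install dependencies from the lockfile
                sh 'npm run build' // compile source into a deployable artifact
            }
        }
        stage('Test') {
            steps {
                sh 'npm test'      // fail the pipeline if any test fails
            }
        }
        stage('Publish') {
            steps {
                // tag the commit so the exact deployed version can be traced
                sh 'git tag v${VERSION} && git push origin v${VERSION}'
                sh 'npm publish'   // push the versioned artifact to the registry
            }
        }
    }
}
```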

When a product is built it is important to differentiate one build from another. This lets the developer pinpoint exactly what will be deployed, or which version to avoid if one turns out to be bad. All compiled source code should be tagged, and that tag pushed to the source code repository. It is best to treat these as patch or even SNAPSHOT releases; when the version has stabilized or is ready for a full deployment, the major or minor version can be incremented.

Example of versioning:

MAJOR.MINOR.PATCH, e.g. v3.0.2

Recommendation - Have all source code repositories build through Jenkins, and version and publish the resulting libraries.

Package

A growing trend in the technology industry is the use of containers. Containers provide a level of abstraction above the typical runtime: deployable artifacts that behave the same no matter what language or application they are running. This creates a stable development environment, because everyone is deploying on the same system even if they are using different languages.

Packaging programs in containers provides the same benefit described in the build section, a shippable product at the end; the difference is that these packages can be deployed anywhere. Most highly scalable and reliable systems run on containerized environments, which allow containers not only to be packaged and tagged but also deployed much faster. The deployment code becomes uniform across languages, and rolling back when there is an issue becomes much easier.

Recommendation - Build containers on the necessary runtime environment (e.g. Node), then package and publish them. This lets developers quickly start up their systems, and the same images can eventually become the basis for production deployments.
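
As a sketch, a container for a Node service could be defined with a Dockerfile like the one below (the base image, port, and file layout are assumptions):

```dockerfile
# Dockerfile - a minimal sketch for packaging a Node application.
# The base image, port, and file layout are illustrative assumptions.
FROM node:20-alpine

WORKDIR /app

# install production dependencies first so this layer caches between builds
COPY package*.json ./
RUN npm ci --omit=dev

# copy the compiled application into the image
COPY dist/ ./dist/

EXPOSE 3000
CMD ["node", "dist/index.js"]
```

Tagging the image with the same version scheme as the build (e.g. docker build -t widget-service:v3.0.2 .) makes a rollback as simple as redeploying the previous tag.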

Determine Quality

With the build pipeline in place and products being delivered, the next logical step is to add automatic quality analysis. This enables developers to find bugs faster and prevents bad code from being tagged and deployed. Quality analysis comes after the build stage deliberately: get code moving through the pipeline first, then take the time to add quality checks at the various stages before and after the build. Remember, the goal is to ship products first to meet demand, then build quality assurance around the process.

Static Code Analysis

The easiest way to assess quality is with tools that analyze the code for bugs based on known bad patterns. This is the first line of defense against bugs and requires little setup or interaction from the developer. Popular products like SonarQube work in conjunction with CI tools like Jenkins to analyze the code as it flows through the pipeline, before tests and before the build. Other extensions can check for security vulnerabilities, which saves time in the long run by addressing issues before they arise.
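
Setup is typically just a small configuration file checked into the repository. A minimal sonar-project.properties sketch (the project key and directory layout are assumptions):

```properties
# sonar-project.properties - a minimal sketch; the project key and
# directory layout are illustrative assumptions.
sonar.projectKey=widget-service
sonar.projectName=Widget Service
sonar.projectVersion=3.0.2
# analyze application code; keep dependencies out of the report
sonar.sources=src
sonar.tests=test
sonar.exclusions=**/node_modules/**
```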

Uniform Code (Optional)

Another way to make sure code can be quickly developed and understood is to establish a uniform code style. While it may seem like a hindrance at first, it lets developers get used to a specific flow instead of spending time figuring out why code was written a certain way. It also protects against code becoming sloppy; missing brackets, misaligned blocks, and unused packages or variables can all lead to bugs because the code is not easy to follow. While not strictly necessary, and viewed by many as an impediment to development, its benefits are usually seen in the long term as more people develop within the company.
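
A linter can enforce such a style automatically instead of leaving it to reviewers. For a Node code base, a minimal ESLint configuration (.eslintrc.json) might look like this; the specific rules are illustrative assumptions:

```json
{
  "extends": ["eslint:recommended"],
  "rules": {
    "curly": "error",
    "indent": ["error", 2],
    "no-unused-vars": "error"
  }
}
```

These rules map directly to the problems above: curly catches missing brackets, indent catches misaligned blocks, and no-unused-vars catches dead variables.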

Automated Testing

The best way to ensure the quality of a product is to test it. Automated tests deliver faster results, help prevent regressions in product quality, and ensure a level of stability. Tests range from basic interactions with the system to business logic; the important thing is to make sure the right areas are being tested and the proper results are being asserted.

Unit

Unit tests cover the smallest pieces of a software product, often things like data access or minor utilities. These tests are very small, run very quickly, and should be the largest set of tests in your code base. Testing individual units ensures that when they are assembled, each element works as expected.
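
For example, in a TypeScript code base a unit test with Jest might look like this (the slugify utility is a hypothetical example):

```typescript
// slugify.ts - a small, isolated unit (hypothetical example)
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse anything not URL safe into a hyphen
    .replace(/^-+|-+$/g, '');    // trim leading/trailing hyphens
}

// slugify.test.ts - unit tests run in milliseconds and catch regressions early
import { slugify } from './slugify';

test('lowercases and hyphenates a title', () => {
  expect(slugify('Red Widgets Rule')).toBe('red-widgets-rule');
});

test('strips characters that are not URL safe', () => {
  expect(slugify('50% off!')).toBe('50-off');
});
```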

Business Logic

In practice, though, units are not run on their own but in conjunction with one another. If the units themselves can be assumed to work properly, the next level of testing is around specific business logic: the rules that combine the disparate units. To test these rules, the units are mocked or stubbed out to behave as expected (since they have their own tests), and what remains are tests that look specifically at the business logic. Mocked tests like this also run quickly, and they are important for ensuring business logic stays intact when code is refactored.
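
A sketch of this pattern in TypeScript with Jest, where the data-access unit is stubbed out behind a simple interface (the repository and the discount rule are hypothetical examples):

```typescript
// pricing.test.ts - business logic tested with its units stubbed out.
// PriceRepo and the bulk-discount rule are hypothetical examples.
interface PriceRepo {
  basePrice(sku: string): number;
}

// the business rule under test: orders of 10+ units get 10% off
function orderTotal(repo: PriceRepo, sku: string, qty: number): number {
  const subtotal = repo.basePrice(sku) * qty;
  return qty >= 10 ? subtotal * 0.9 : subtotal;
}

// stub the repository: it has its own unit tests, so fake it here
const repo: PriceRepo = { basePrice: () => 5 };

test('applies the bulk discount at 10 units', () => {
  expect(orderTotal(repo, 'widget-red', 10)).toBeCloseTo(45);
});

test('charges full price below the bulk threshold', () => {
  expect(orderTotal(repo, 'widget-red', 9)).toBe(45);
});
```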

Integration

When the final product is built it is important to go beyond testing the code in isolation and run it as it will be used in a production system. Integration tests do what their name suggests: they test the program integrated with other systems. This is typically done for systems that interact with others across some I/O boundary. An integration test has its dependent systems running (databases, APIs, etc.) and runs a series of tests covering basic functionality. These tests are often brittle, because they rely on other systems working in conjunction with the code under test, but they are important because they surface problems that only arise while interacting with other systems: missing tables in the database, malformed payloads from a queue, bad requests against APIs, and so on. Because they are brittle, this set of tests should remain small and run fairly quickly; the system should already have been thoroughly tested through faster tests by this point, so these tests should focus on how the system integrates with its dependencies.
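
A small TypeScript sketch of such a test, which assumes the service and its database are already running (the URL and the /health route are hypothetical):

```typescript
// health.integration.test.ts - assumes the service and its database are
// already running; requires Node 18+ for the global fetch.
const BASE_URL = process.env.SERVICE_URL ?? 'http://localhost:3000';

test('service can reach its database', async () => {
  // a health endpoint that reports on dependent systems is a common
  // pattern; the /health route here is a hypothetical example
  const res = await fetch(`${BASE_URL}/health`);
  expect(res.status).toBe(200);

  const body = await res.json();
  expect(body.database).toBe('ok'); // fails fast on e.g. missing tables
});
```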

End to End

The final tests are end-to-end tests, which exercise the system as a whole with the new product in place. These should be the smallest set, because they can be brittle as well, but they serve an important role in making sure the system will work when the new code goes into production. They should change constantly to reflect changes to the system. They resemble integration tests, but mostly replicate user interactions with the system rather than calls against an API.
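
A sketch using a browser-automation tool such as Playwright (the URL, selectors, and flow are hypothetical):

```typescript
// checkout.e2e.spec.ts - drives the UI the way a user would.
// The URL, selectors, and flow are hypothetical examples.
import { test, expect } from '@playwright/test';

test('a user can add a widget to the cart', async ({ page }) => {
  await page.goto('https://staging.example.com/widgets');

  await page.getByRole('button', { name: 'Add to cart' }).click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');
});
```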

Smoke Tests

Some companies also establish longer-running tests that exercise an entire product in more detail. These are not part of the normal build pipeline, because they are long running (hours as opposed to minutes); their job is to surface regressions that were inadvertently added to the system. Treat them as alarms: failures should be brought to the developers' attention and addressed immediately. If a test is outdated or bad it should be fixed, but if a bug has been introduced, the fix becomes the priority. This keeps the product at an established level of quality.
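
In Jenkins this is usually a separate scheduled job rather than a stage in the per-commit pipeline; a sketch (the schedule and the test command are assumptions):

```groovy
// A separate Jenkins job for the smoke suite - too slow to run per commit.
// The schedule and the test command are illustrative assumptions.
pipeline {
    agent any
    triggers {
        cron('H 2 * * *') // run nightly, around 2 AM
    }
    stages {
        stage('Smoke Tests') {
            steps {
                sh 'npm run test:smoke'
            }
        }
    }
}
```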

Humans

In the end, the best way to determine quality is through humans; they can ultimately see how the product runs and how it should work. However, humans are a development team's most valuable asset and also its most error prone, so the time they spend reviewing the product should be minimal, or spent enhancing the other quality indicators listed above.

For example, humans will need to approve code before it is merged into the main code line, judging whether the code will work and catching any bugs. Prior to this review, however, code analysis, unit, and business logic tests can run, and a linter can ensure that the code is legible. All of this saves the reviewer time: instead of hunting for bugs and other issues, they can focus on making sure the code fits the group's standards.

Another example is QA testing of the product in a test environment. QA members shouldn't need to analyze every part of the system after a deploy; with confidence that most of the system was not impacted by the change, they can instead focus on testing the areas the ticket outlined. If a QA member finds a bug, proper steps should be taken to make sure that specific bug gets tests around it.

The ultimate goal is to ensure most of the process is automated to cut down on human error and save on human time.

Recommendation - Start with static code analysis and work up to higher level testing. Static code analysis will outline major issues while unit tests will give assurances that the code is working as expected. Integration and end to end testing are great long term items but can inhibit growth in the short term with small teams.

Deployments

In the end the product needs to be delivered to the end users. This is often a manual process done by a developer or system administrator. The trend should be to move away from manual deployments to automated deployments (even if it is a button push) to reduce variability and possible errors. Deployment scripts are used to help alleviate this problem but need to be maintained like all other code.

Test Environments

All code must go through some testing phase beyond unit tests (as described above). This should happen in a Test environment of some sort that replicates the production environment the code will eventually run on. This is not only to test the newest version of the code; it should also be used to test how the code is deployed. With a deployment environment similar to production, deployment scripts can be reused across environments and the kinks worked out before deployment day. Ultimately, deployments to a test environment should be triggered automatically, either by a successful run of the test suite or by a button push, in order to reduce variability.
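
In the Jenkinsfile sketched earlier, this becomes one more stage that runs only after the test suite has passed (the deploy.sh script and environment name are assumptions):

```groovy
// A later stage of the Jenkinsfile sketched above. The deploy.sh script
// and the environment name are illustrative assumptions.
stage('Deploy to Test') {
    steps {
        // runs automatically once every prior stage has passed
        sh './deploy.sh test v${VERSION}'
    }
}
```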

Production Environments

The eventual goal is to translate this flow into delivery to the production environment, taking everything done in the test environment and delivering to production in the same way. This reduces variability and allows for faster deployments, which in turn lets companies react to the market. By increasing the throughput of the product, in this case an application, more demand can be met.
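
The production stage can reuse the same script, gated behind an explicit button push via the Jenkins input step (again, the script and environment name are assumptions):

```groovy
// The final stage of the pipeline: same script, different environment,
// plus a manual gate. The deploy.sh script is an illustrative assumption.
stage('Deploy to Production') {
    steps {
        // the "button push": a human approves, the script does the work
        input message: "Deploy v${env.VERSION} to production?"
        sh './deploy.sh production v${VERSION}'
    }
}
```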

Going Forward

The biggest step toward success in any business is working with the market: meeting its demands and adapting to it. To do that, the product delivery mechanism needs high throughput and high quality. By replacing manual intervention with an automated workflow, these demands can be met.

What’s important is to realize the order in which these things should be done. The first step is to set up your pipeline to build a shippable product on its own. After that it’s a matter of priorities within the system, gradually replacing manual efforts with automated ones.

Once a system is in place it is important to continue to evaluate the pipeline and process to look for improvements. Each improvement should increase throughput and increase quality. This will aid in meeting the market’s needs.
