Best practices for DevOps Continuous Integration/Continuous Delivery (CI/CD) — Aqonta

Using version control: Collaborative development environments, where many developers work on the same code base simultaneously, pose multiple challenges:

Placing the code under a version control system makes the source code management system the single source of truth for the code. The source code becomes reproducible by adopting an effective merge process, with a mainline for development and separate branches for bug fixes and so on. Git is a popular source code management system, and GitHub is a cloud variant offered in a Software as a Service (SaaS) model.
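As a sketch of this mainline-plus-branch flow, the following Git commands create a mainline, fix a bug on a separate branch, and merge it back; the throwaway repository, branch names, and file contents are all illustrative:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b main                   # the mainline: single source of truth
git config user.email ci@example.com
git config user.name "CI Demo"
echo "v1" > app.txt
git add app.txt
git commit -q -m "initial mainline commit"
git checkout -q -b fix/crash              # short-lived branch for a bug fix
echo "v1, fixed" > app.txt
git commit -q -am "fix crash on startup"
git checkout -q main
git merge -q --no-ff -m "merge fix into mainline" fix/crash
```

The `--no-ff` merge keeps the bug-fix branch visible as a unit in the mainline history.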

Automate the build: A standardized, automated build procedure stabilizes the build process so that it produces dependable results. A mature build process must contain the build description and all the dependencies needed to execute the build with a standardized build tool installation. Jenkins is one of the most versatile tools for scheduling builds; it offers a convenient UI and plug-ins that integrate the most popular continuous integration tools.
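As a sketch, such a standardized build with its tests might be described in a declarative Jenkins pipeline; the stage layout and the Gradle commands are illustrative assumptions, not a prescribed setup:

```groovy
// Hypothetical Jenkinsfile: the build description lives in version control,
// so every build runs the same standardized steps.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // placeholder build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'       // unit tests run as part of the build
            }
        }
    }
}
```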

Tests in the build: Beyond the mere syntactical correctness of the code, several kinds of tests should be performed to validate its effectiveness and fitness:

  • Unit tests operate directly on build results
  • Static code checks on source code prior to developer check-in; Git pre-commit triggers or the CI system can be used to set up a gating or non-gating check
  • Scenario tests in which newly built applications are installed and started
  • Functional and performance tests of the code

Unit test frameworks are available for most source code technologies, such as JUnit for Java. The Selenium framework tests graphical user interfaces and browser behavior.

Running these tests early, on the developer’s workstation and as part of the build, saves the time and effort of addressing bugs discovered later in the development process.
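A gating pre-commit check of the kind mentioned above can be sketched as a Git hook. Here a simple grep for `TODO` stands in for a real static-analysis tool, and the throwaway repository exists only to demonstrate the gate:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Install a gating pre-commit hook that scans the staged files.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Block the commit if any staged file still contains a TODO marker
# (stand-in for a real linter or static-analysis run).
for f in $(git diff --cached --name-only --diff-filter=ACM); do
    if grep -q "TODO" "$f"; then
        echo "pre-commit: $f contains TODO; commit rejected" >&2
        exit 1
    fi
done
exit 0
EOF
chmod +x .git/hooks/pre-commit

echo "TODO: finish this" > work.txt
git add work.txt
if git commit -q -m "wip"; then echo "commit allowed"; else echo "commit blocked"; fi   # prints "commit blocked"
```

A non-gating variant would report the finding but exit 0 so the commit still goes through.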

Early and frequent commit of code: In a distributed development environment with multiple projects, each team or developer intends to integrate their code with the mainline, and changes on feature branches must likewise be integrated into the mainline. It is a best practice to integrate code early and quickly: as the delay between new changes and their merge into the mainline grows, so do the risk of product instability, the time the integration takes, and its complications, because the mainline evolves away from the baseline. Hence each developer working on a feature branch should push their code at least once per day. For projects with a largely inactive main branch, the high effort of constant rebasing should be weighed before adopting this practice.
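The daily integration rhythm described above can be sketched with plain Git commands; the local bare repository standing in for `origin`, the branch names, and the file contents are all illustrative:

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"

# Developer clone with a feature branch.
git clone -q "$work/origin.git" "$work/dev"
cd "$work/dev"
git config user.email dev@example.com && git config user.name Dev
git checkout -q -b main
echo base > app.txt && git add app.txt && git commit -q -m "mainline"
git push -q origin main
git checkout -q -b feature/login
echo feature > feature.txt && git add feature.txt && git commit -q -m "feature work"

# Meanwhile the mainline moves on (simulated by a second clone).
git clone -q -b main "$work/origin.git" "$work/peer"
cd "$work/peer"
git config user.email peer@example.com && git config user.name Peer
echo more >> app.txt && git commit -q -am "another team's change"
git push -q origin main

# Daily integration from the feature branch: fetch, rebase, publish.
cd "$work/dev"
git fetch -q origin
git rebase -q origin/main
git push -q origin feature/login
```

After a rebase that rewrites already-published commits, the final push may need `--force-with-lease`; here the feature branch had not been published before, so a plain push suffices.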

Developer changes must be incorporated into the mainline; however, they can potentially destabilize it, affecting its integrity for all the developers relying on the mainline.

Continuous integration addresses this with the best practice of a continuous build for every committed code change. Any broken build requires immediate action, since a broken build blocks the entire evolution of the mainline, and the cost grows with the frequency of commits. Such issues can be minimized by enforcing branch-level builds.

Push for review in Gerrit or a pull request in GitHub are effective mechanisms to propose changes and check their quality, identifying problems before they are pushed into the mainline and cause rework.
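Gerrit's convention is to push to the magic ref `refs/for/<branch>`, which opens a review instead of updating the branch directly. In the sketch below, a plain throwaway bare repository stands in for the Gerrit server, so the push merely creates the ref; a real Gerrit server would turn it into a reviewable change:

```shell
set -e
work=$(mktemp -d)
git init -q --bare "$work/review.git"       # stand-in for a Gerrit remote
git clone -q "$work/review.git" "$work/dev"
cd "$work/dev"
git config user.email dev@example.com && git config user.name Dev
git checkout -q -b main
echo change > file.txt && git add file.txt && git commit -q -m "proposed change"
git push -q origin HEAD:refs/for/main       # propose the change for review
```

On GitHub, the equivalent step is pushing the feature branch and opening a pull request against the mainline.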

Address build errors quickly: The best practice of building at branch level for each change puts the onus on the respective developers to fix build issues in their code immediately rather than propagating them to the main branch. This forms a continuous Change-Commit-Build-Fix cycle at each branch level.

Build fast: Quick turnaround of builds, results, and tests by automated processes is vital input for the developer workflow; a short wait time improves the overall cycle efficiency of the continuous integration process.

This is a balancing act between integrating new changes into the main branch securely and, at the same time, building, validating, and scenario testing them. These objectives can conflict, so trade-offs must be found between different levels of acceptance criteria, bearing in mind that the quality of the mainline is most important. Criteria include syntactical correctness, unit tests, and fast-running scenario tests for the changes incorporated.

Pre-production run: Differences between the many setups and environments at various stages of the production pipeline cause errors. This applies to developer environments, branch-level build configurations, and the central main build environment. Hence the machines on which scenario tests are performed should be similar to, and have a configuration comparable with, the main production systems. Manually keeping configurations identical is a herculean task; this is where DevOps adds value, with the core proposition of treating infrastructure setup and configuration like writing code. All the software and configuration for a machine are defined as source files, which enables you to recreate identical systems; we will cover this in more detail in future articles.
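As a sketch of configuration as code, a hypothetical Dockerfile can pin down the software and configuration of a test machine so that identical systems are recreated on demand; the image name and file paths are illustrative:

```dockerfile
# Hypothetical environment definition: every test machine built from this
# file gets the same base OS, runtime, application, and configuration.
FROM eclipse-temurin:17-jre
COPY app.jar /opt/app/app.jar
COPY config/production-like.properties /opt/app/config.properties
CMD ["java", "-jar", "/opt/app/app.jar"]
```

Because this file lives in version control alongside the application code, developer, branch-build, and central-build environments can all be rebuilt from the same definition.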

The build process is transparent: The build status and a record of the last changes must be available to everyone to ascertain the quality of the build. Gerrit is a change review tool that can be used effectively to record and track code changes, build status, and related comments. Jenkins flow plug-ins offer the build team and developers a complete end-to-end overview of the continuous integration process, spanning the source code management tool, the build scheduler, the test landscape, the artifact repository, and others as applicable.

Automate the deployment: Deployment is the automated installation of the application to a runtime system, and there are several ways to accomplish this:

  • Automated scenario tests should be part of the acceptance process for changes proposed. These can be triggered by builds to ensure product quality.
  • Multiple runtime systems, such as JEE servers, are set up to avoid the single-instance bottleneck of serialized test requests and to allow test queries to run in parallel. A single system also incurs the overhead of recreating the environment for every test case, causing a degradation in performance.
  • Docker or other container technology can be used to install and start runtime systems on demand in well-defined states and to remove them afterward (we will discuss container technology in future articles).
  • Since the frequency and timing of validations of new commits are unpredictable in most cases, scheduling a daily job at a fixed time is an option to explore, in which the build is deployed to a test system and a notification is sent after successful deployment.
  • Deployment to production is a manual, conscious decision that the change satisfies all quality standards and is appropriate for production. If even this step can be automated with confidence, that is the highest accomplishment: automated continuous deployment.
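The scheduled daily-deployment option above might be sketched as a cron entry; both script paths and names are hypothetical placeholders:

```
# Hypothetical crontab entry: every night at 02:00, deploy the latest
# successful build to the test system and send a notification on success.
0 2 * * * /opt/ci/deploy_to_test.sh && /opt/ci/notify.sh "nightly test deployment OK"
```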

Continuous delivery means that any change integrated is validated adequately so that it is ready to be deployed to production. It doesn’t require every change to be deployed to production automatically.