If there’s one thing a software developer absolutely hates, it’s tedious repetitive tasks.
That’s what I was suffering from during my most recent greenfield software development project.
After several intensive months of endless meetings, analysis and requirements gathering, discussions with many other software dev teams, and lots more discussions with the enterprise architect attached to my project, I was finally able to wrap my head around what I needed to accomplish, from a developer standpoint, to reach version 1.0 of our final project deliverables.
Such is the life of the enterprise software developer.
I can easily attest to the level of complexity I faced during the course of the project. It was probably one of the most challenging software projects I’ve ever been involved with.
Enterprise software development is all about figuring out how to wire up and connect different and disparate systems, applications and components together into a unified workflow.
And that’s only the tech side of things… there’s also the non-technical aspect of enterprise development, which can be just as challenging, if not more so, than the actual technical development. You’re dealing with lots of other people and teams whose cooperation you need in order to get your own deliverables done.
Lots of finesse, soft skills, negotiation, the whole nine yards. And it doesn’t come naturally for a lot of software developers, like myself, who prefer staying hunched in front of a monitor and keyboard all day.
When I finally reached MVP (minimum viable product) status with my project, I was naturally on cloud nine.
Yet there was still something bugging me about a key aspect of the project.
It was missing CONTINUOUS DELIVERY.
In other words, it was a big chore to move all my software project deliverables into a production environment so it could be ready for real-world usage.
Here’s what I had to do.
- Open the codebase in my development tool on my own development machine
- Ensure that all associated internal and 3rd party software libraries were properly loaded into the project
- Compile the code and verify it compiled successfully
- If compilation fails, investigate and fix the compile errors
- Run all associated unit and code tests
- If any tests fail, investigate and fix any failing tests
- Generate all associated code documentation from the codebase
- Copy the compiled runtime files to the remote development and/or production servers
- Verify the deployed files have been properly copied and deployed
- Perform a set of “smoke tests” against the deployed deliverables
- Send an e-mail or other notifications that the deployment either failed or completed successfully
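That checklist is exactly the shape of thing a script can run for you. Here’s a minimal sketch in Python; every step below is a hypothetical placeholder (a real pipeline would shell out to the project’s actual build, test and deploy tools), and all the function names are mine, not from any particular tool:

```python
# A minimal sketch of the deployment checklist as an automated pipeline.
# Every step is a placeholder; a real script would shell out
# (e.g. via subprocess.run) to the actual build/test/deploy tooling.

def run_pipeline(steps):
    """Run each named step in order, stopping at the first failure."""
    for name, step in steps:
        try:
            step()
        except Exception as exc:
            return f"FAILED at '{name}': {exc}"  # e.g. e-mail this to the team
    return "SUCCESS"

# Hypothetical steps mirroring the manual checklist above.
def fetch_dependencies(): pass  # ensure internal/3rd-party libraries load
def compile_code():       pass  # compile; raise on compiler errors
def run_tests():          pass  # run unit tests; raise on any failure
def generate_docs():      pass  # generate code documentation
def deploy_files():       pass  # copy runtime files to the remote servers
def smoke_tests():        pass  # verify the deployed build actually works

result = run_pipeline([
    ("fetch dependencies", fetch_dependencies),
    ("compile", compile_code),
    ("test", run_tests),
    ("docs", generate_docs),
    ("deploy", deploy_files),
    ("smoke test", smoke_tests),
])
print(result)  # "SUCCESS", or the name of the first failing step
```

The point isn’t the code itself but the shape: one entry point, steps in a fixed order, and a single success/failure answer at the end that can be sent out as a notification.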
Let’s also keep in mind there are a lot of other assumptions and dependencies I didn’t mention, which are also crucial to this workflow.
If another software developer needs to make changes to the code and deploy those changes to a development and/or production server environment, they must have proper account access both to the source code and to the development and production servers.
When performing the list of associated “smoke tests” against the newly deployed version of your software, you must be vigilant about performing every single test.
Even missing one test or misjudging whether a test succeeded or failed can cause lots of headaches and problems down the road for the end users and intended customers of the application.
And you can bank on the fact that I’ve messed up these steps in one way or another. After all, I’m only a puny human, and we puny humans make plenty of mistakes.
So anytime I made any significant code changes to the project… maybe a bug fix, or some new functionality, I’d have to manually perform each of the steps I just described in order to get my changes propagated from my development machine up to the development or production servers.
Not only tedious but quite time-consuming, especially if there were a lot of bug fixes or new enhancements you wanted to push out to the project.
And quite honestly, I was committing the ultimate sin as a computer programmer. I wasn’t doing something we computer programmers are supposed to excel at: harnessing the power of computers.
And what are the two things a computer can do much better than us humans?
- Perform repetitive tasks
- Perform those repetitive tasks at superhuman speed
Whenever you have a list of distinct and manual tasks that are repeatable, it’s a prime candidate for a computer to automate.
Computers can perform repetitive tasks at superhuman speeds, and even better, they NEVER make mistakes like we humans do.
Once the set of repetitive tasks has been clearly defined and expressed as a computer program or script, it can run as many times as we want, at any time of day, for as long as we want.
And that is exactly what I should have been doing with that list of compile, build, test, deploy tasks I just described.
Which raises the question … why didn’t I do that?
Ask any software developer, and you’ll hear a variation of the same theme.
THERE’S SIMPLY NO TIME TO AUTOMATE THOSE TASKS.
And, yes, I committed the same ultimate sin.
I, of all people, someone who gets paid for knowing how to program a computer to perform repetitive, manual tasks at superhuman speed, should have known that any time and effort spent up front to automate these tasks would pay for itself in spades, in time and effort saved, over the long term.
Because once you’ve perfected and thoroughly tested an automation script so it does exactly what you want it to do, it will work flawlessly and quickly for as long as you want it to.
Yet, believe it or not, many organizations still perform these tedious tasks manually.
I guess you can chalk it up to human nature… laziness? hubris?
Or just forgetting the simple fact that computers are infinitely better at performing repetitive and tedious tasks than us humans.
The benefits are enormous when you DO decide to create a continuous delivery workflow, which is what you are doing when you automate all these tasks and let the computer do all the heavy lifting.
In a continuous delivery environment, you can set things up so that as soon as you push new code changes to your codebase, with the press of a key or button, an automation script compiles the code, downloads and installs software libraries, runs the tests, and deploys to the production servers for you.
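This kind of push-triggered workflow is exactly what hosted CI/CD services offer out of the box. As a sketch only, a hypothetical GitHub Actions-style workflow might look like the following; all the script paths here are placeholders I made up, not part of any real project:

```yaml
# Hypothetical continuous delivery workflow; script paths are placeholders.
name: continuous-delivery
on:
  push:
    branches: [main]   # every push to main triggers the pipeline
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: ./scripts/install-deps.sh
      - name: Compile
        run: ./scripts/build.sh
      - name: Run tests
        run: ./scripts/test.sh
      - name: Deploy
        run: ./scripts/deploy.sh
```

Each step maps directly onto one of the manual chores from earlier, and if any step fails, the service stops the run and notifies the team automatically.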
What that provides is quicker software deliverables, whether it’s in the form of bug fixes or new enhancements/features.
And quicker turnaround time is the name of the game.
Especially in this brave new world of agile software development, which is all about providing software deliverables on a faster timeline than traditional waterfall development.
Creating a continuous delivery environment benefits a multi-developer team as well. Every team member can instantly access the state of a project, the last time it was successfully compiled and deployed into pre-production and production servers, and get an accurate “heartbeat” of the project codebase.
It also serves to keep developers mindful of what they plan on adding to the codebase, because in a properly defined continuous delivery environment, everyone on the team will know when a new code check-in breaks the build, and more importantly, everyone will know exactly WHO did it.
And believe me, NOBODY wants to be the one who breaks the project codebase.
You might as well sew a scarlet “A” on their clothing.