It’s still kind of amazing to me that my particular profession, software development, is one of those few career types where you need to continually learn new things at a breakneck pace.
Of course, you need to stay marketable in ANY profession, but in most other fields it’s rare that you must be ready and willing to completely throw out what you used to know and replace that knowledge with something brand new.
As a software developer, it’s not enough to learn a specific programming language or technology stack, spend a good number of years building expertise in that discipline, and then, once you feel you have gained mastery, rest on your laurels and coast on that knowledge for the rest of your career.
That kind of career path can be very perilous to follow. I have personally seen fellow software developer coworkers refuse to learn beyond their original core technology expertise.
This can be especially true for software developers who gained expertise in now-legacy technologies such as mainframe systems and extremely old programming languages … when I say ‘old’, I mean languages that survive mainly on legacy systems such as mainframes — languages like COBOL, FORTH, and the myriad flavors of BASIC.
Not that there’s anything wrong with these legacy technologies and languages per se. Many of these languages and technologies continue to work and pay bills for the companies and organizations that originally built them.
But like the old saying goes, “Nothing lasts forever”.
Eventually, companies and organizations are forced to move off legacy technologies, for one of two reasons:
1. The technology can no longer work within the organization’s current technology infrastructure.
2. The people who know the original technology are no longer around.
Reason #1 can be mitigated with things like virtualized environments and container technology like Docker and other emulation environments.
But reason #2 is pretty much an unsolvable problem once no one is left who knows the original technology. People come and go; they retire, or move on to other career opportunities.
The people who are left, whom companies assign to replace the original domain experts, no matter how competent and knowledgeable, will NEVER attain the same level of expertise. Usually they will know just enough to keep the old systems and technology up and running.
The worst case scenario is when the company decides they no longer NEED the people who have become domain and technology experts in their particular legacy technology.
Sadly, this can result in layoffs. The days of companies who take care of you until you can comfortably retire are long, long gone.
WHY DEVELOPERS NEED TO EMBRACE CHANGE
As a developer, the only way you can stay marketable and relevant is by EMBRACING CHANGE.
Probably above all else, this is the most important way a software developer continues to stay relevant and marketable in today’s job market.
Technology has always changed, but the rate at which it changes is, without a doubt, much faster now than it was five, ten, or fifteen years ago.
Take the original C programming language. It was created by legendary software engineer Dennis Ritchie at AT&T Bell Labs in 1972 — building on Ken Thompson’s earlier B language — on a PDP-11 minicomputer running Unix.
It wasn’t until 1979 that Bjarne Stroustrup, also at Bell Labs, began work on what would become the next major programming language, C++.
These two programming languages became the de facto standards that most software developers around the world embraced for decades.
It took nearly another fifteen years until the next major programming language, Java, invented by James Gosling at Sun Microsystems, took the world by storm.
Five years later, Microsoft introduced C# and the .NET Framework to compete with Java.
Up until the turn of the 21st century, technology moved forward, without a doubt, but at a steady and unrushed pace.
Developers had ample time to embrace a particular programming language and tech stack.
I jumped on the Microsoft tech-stack bandwagon in the mid-1990s, and took advantage of the relatively unrushed pace of technology to become proficient with Microsoft’s development tools.
However, a little something called the Internet completely changed all that.
The internet was truly a game changer in context of software development.
Before the rise of the internet, software development involved the tedious and often painful process of getting your software application successfully installed on each person’s computer.
When the internet arrived, all a person needed to install was a web browser. Once installed, every web page could be a potential software application.
If a web developer ever wanted to make changes to their software, they would change their code, deploy it once, and every person who revisited the web application would instantly see the revised changes.
You could literally reach every person around the world with an internet connection and a standard web browser.
JavaScript was initially used to do simple things like validating HTML <form> fields before a form and its data were submitted to the target web server, or adding fancy little animations with image objects to make web pages look more dynamic.
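A minimal sketch of the kind of client-side check described above — validating form fields before submission. The function and field names here are hypothetical, not from any particular library:

```javascript
// Validate a form's fields before allowing submission.
// Returns an array of error messages; an empty array means the form is OK.
function validateForm(fields) {
  const errors = [];

  // Require a non-blank name field.
  if (!fields.name || fields.name.trim() === "") {
    errors.push("Name is required.");
  }

  // A deliberately simple email check, not a full RFC 5322 validator.
  if (!fields.email || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(fields.email)) {
    errors.push("A valid email address is required.");
  }

  return errors;
}
```

In a real page, this would run in the form’s submit handler, cancelling submission when the returned array is non-empty — sparing a round trip to the server just to learn a field was blank.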
Assembly language is just one step above the level of a computer’s “native tongue”, which is and always has been binary — 1s and 0s.
When you program a computer at the assembly language level, you control exactly how data moves between the CPU, its registers, memory, and peripherals like disk drives and other hardware components.
It’s the lowest common denominator of all computer languages.
Which is pretty much ANY application these days, including web applications, mobile applications and even building applications that act and behave like traditional client desktop applications like Microsoft Word.
When I first started out in my programming career, the norm for software development was building and compiling applications for client desktop computers using traditional and strongly typed computer languages like C, C++ and Visual Basic.
Deploying client desktop applications was always a challenging task. You were constantly dealing with one-off problems where a certain user’s computer had trouble installing one of your applications, which would often require manual intervention by the developer.
But nowadays, the internet is the glue that holds applications together…rarely does an organization need a software application that has no dependency on internet connectivity.
Most software applications are built in “layers” or “tiers”.
The visual layer of an application, the piece that you actually see on a computer or phone or tablet screen, is the “UI” layer, short for user interface.
The UI layer sends data to, and receives data from, the next layer, often referred to as the “middle tier” or “business logic/rules” layer. This layer is where all your application business logic and rules reside.
The data layer is where your application data gets permanently stored, usually in some sort of database, be it relational like Oracle or SQL Server, or non-relational like MongoDB or CouchDB.
Traditionally, these layers were built with a wide variety of languages and tech stacks: the UI layer in one set of technologies, the middle tier in a completely different set, and the database accessed through SQL.
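The three layers described above can be sketched in miniature. This is an illustrative toy collapsed into one file, assuming hypothetical names (`OrderStore`, `placeOrder`, `renderOrders`) — in a real system each layer would live in its own process or service:

```javascript
// Data layer: an in-memory stand-in for a real database.
class OrderStore {
  constructor() {
    this.orders = [];
  }
  insert(order) {
    this.orders.push(order);
    return order;
  }
  all() {
    return this.orders;
  }
}

// Business logic layer: enforces application rules before persisting.
function placeOrder(store, order) {
  if (!order.quantity || order.quantity <= 0) {
    throw new Error("Quantity must be positive");
  }
  return store.insert({ ...order, placedAt: new Date().toISOString() });
}

// UI layer: in a real app this would render HTML or a mobile screen;
// here it simply formats the orders as display text.
function renderOrders(store) {
  return store.all()
    .map((o) => `${o.item} x${o.quantity}`)
    .join("\n");
}
```

The point of the separation is that each layer can change independently: swapping the in-memory store for a relational database, or the text renderer for a web page, leaves the business rules untouched.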
Change can be hard to embrace. It’s human nature to dislike and mistrust change. We all prefer our personal comfort zones, and software programmers are no exception.
Once we feel proficient in a particular programming language and technical stack, we feel very protective about it.
In fact, many software developers feel so protective about their particular technical expertise, they often get into technical religious wars about the superiority of their particular programming language/framework.
But this kind of loyalty and devotion to a particular technology will, over the long term, ultimately reduce our marketability and value in the ever-changing technical job market.
Embracing change and being aware of the long term trends in software development is the best way to stay relevant and marketable.