I always get a kick out of the way Hollywood portrays the act of computer programming and hacking.
I consider myself a fairly fast touch typist, but in movies and TV shows, you see computer whizzes and hackers pounding away on their keyboards at what seems like a million words a minute. And their code is ALWAYS written correctly the first time. When you’re perfect, why bother debugging your code?
And of course, the computer programmer is always sitting in front of a gazillion giant-sized 8K monitors, all displaying psychedelic pyrotechnic fireworks of dazzling graphics and animation. Go watch “Swordfish” and see how Hugh Jackman programs his computer intrusion virus … it looks like he’s building some virtual Tinkertoy.
Of course, the reality of computer programming is nothing like the movies. But that’s no surprise if you’ve ever actually written software in real life.
It’s rather mundane and boring, actually.
You fire up a text editor or code editor on your computer, wait for it to finish loading, and then … *drum roll please* … you start typing computer code into the editor.
Depending on the language you’re working with, your human-readable code is either interpreted directly or run through a compiler program that translates it into the native machine-language “tongue” of the computer. Either way, that’s pretty much the act of writing computer code.
Write the code, let it compile (whether you kick off the compiler yourself or the computer does it automatically for you), and execute it.
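Here’s a minimal sketch of that cycle, using TypeScript purely for illustration (the file name and everything in it are made up):

```typescript
// greet.ts -- a tiny program illustrating the write/compile/run cycle.
function greet(name: string): string {
  return `Hello, ${name}!`;
}

console.log(greet("world"));

// Compile the human-readable source into something the machine can run:
//   tsc greet.ts     -> produces greet.js
// Then execute it:
//   node greet.js    -> prints "Hello, world!"
```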
And that’s pretty much it. On an EXTREMELY productive day of writing code (which is pretty rare for me), I essentially sit hunched over a keyboard, write lots of lines of computer code into a code editor, and pray to the programming gods that everything works the first time … which, of course, it doesn’t.
I’m certainly not one of those computer whizzes you see in the movies where the code works perfectly on the very first try.
The actual act of writing computer code is the smallest piece of programming. Most of a programmer’s time is spent debugging code to make it work.
A LOT of time, actually. There are times I’ve spent days, if not WEEKS, pulling my hair out trying to fix a particularly nasty computer bug.
Now, we’ve come a long way since the earliest, first-generation computers and computer programmers back in the middle of the twentieth century. The cheapest, most underpowered computers and smartphones you can buy today are orders of magnitude faster and bigger in terms of processing power and storage capacity.
The typical smartphone that fits in your skinny jeans is literally a super-connected supercomputer that can access information from anywhere in the world. And at a price most people can afford with a paycheck or two.
It’s a far cry from the hulking, behemoth first-generation computers that took up whole buildings and required industrial-level HVAC air conditioning to keep them cool enough to operate correctly. Not to mention the millions of dollars it cost to purchase even one.
Yet as far as we’ve come in terms of raw computing power, the actual art of writing computer code hasn’t really changed much over the course of seventy years.
Those first-generation computer hackers and programmers wrote code in text editors not too different from the text editors and code editors we use today. They wrote code, compiled it, tested it, and debugged whatever needed fixing.
In fact, many computer programmers and hackers still write their code in nothing more sophisticated than the bare-bones text editor that comes pre-bundled with every computer operating system.
Of course, that’s not to say there hasn’t been SOME evolution in code editors.
Compilers, the software programs that translate your human-readable source code into the native machine language your computer requires, have evolved into very sophisticated programs that give you lots of useful information when they detect bugs and flaws in your code.
Code editors also have lots of useful analytic tools that can examine your code, even as you type it on the screen, and offer recommendations or display warnings when they think you’re about to do something that could break the code.
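For instance, here’s the kind of mistake a modern compiler will catch before the code ever runs (a contrived TypeScript snippet; the exact wording of the diagnostic can vary by compiler version):

```typescript
// A deliberate bug: calling a string method on a number.
const releaseYear: number = 2001;
console.log(releaseYear.toUpperCase());

// The TypeScript compiler refuses to let this slide, reporting:
//   error TS2339: Property 'toUpperCase' does not exist on type 'number'.
```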
There’s a particular feature called “IntelliSense” in the Microsoft code editor I use in my day-to-day work, Visual Studio, that displays a little pop-up window listing every method and property available on an object.
It saves the developer an enormous amount of time and eliminates the need to memorize the capabilities of every object.
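To illustrate (with a made-up class, and TypeScript standing in for whatever language you happen to be using), the moment you type the dot after an object’s name, the editor pops up everything that object can do:

```typescript
// A made-up class purely for illustration.
class MediaPlayer {
  play(): void { console.log("playing"); }
  pause(): void { console.log("paused"); }
  stop(): void { console.log("stopped"); }
}

const player = new MediaPlayer();

// Typing "player." here triggers a pop-up listing play(), pause(),
// and stop(). No memorization required.
player.play();
```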
Yet, it still feels like computer hardware has evolved at a much faster rate than the computer software tools we developers are using today.
But I think we’re on the verge of seeing major changes in the way software gets developed.
I seem to be hearing a lot about artificial intelligence and machine learning lately.
We have digital assistants like Siri and Cortana that we can speak to, and they respond to our questions and commands like digital genies.
If you’ve seen Stanley Kubrick’s science fiction masterpiece, “2001: A Space Odyssey”, you’ll remember the super-intelligent computer, the HAL-9000, which could speak with humans so naturally that it seemed almost like talking to a real person.
I think the day isn’t far off when my code editor will have a HAL-9000-style digital assistant that will be able to understand my commands and questions.
Is it that far-fetched to theorize that the code editor of the future will begin to know my habits, strengths, and weaknesses? What if it kept track of every time I wrote incorrect code or broke the build?
And what if, over time, it could suggest things to help me avoid the same coding mistakes in the future? Perhaps suggest technical concepts and topics that I’m weak in, or that are currently missing from my repertoire of knowledge, and help me become a more knowledgeable software programmer?
And what about the actual act of coding? Will I always be stuck in the paradigm of writing code into a text editor on a computer monitor?
What if I could write code in a sort of virtual-reality 3D environment like the Microsoft HoloLens?
What if I could create the computer objects I need by motioning in the air and grabbing or tossing digital objects out of thin air?
And if I needed some data from the internet, what if I could visually connect to the data and services I need on remote servers, in a visual representation of cyberspace? William Gibson, the sci-fi author of “Neuromancer”, made his living writing about futuristic worlds where computer hackers did exactly this kind of programming.
We seem to be getting darned close to the realization of his fiction.
Could computers become intelligent enough to write code by themselves, even without the intervention of us humans?
I certainly wouldn’t want to be the naysayer who thinks that’s impossible.
But at the same time, I hope that doesn’t happen until after I retire. I’m not sure what else I could do for a living.
I somehow doubt that putting “10 years of air guitar experience” on my resume will impress employers much.