Evolution of Distributed Computing


As a professional software programmer, I often have to force myself to not get jaded about the rapid advance of technology.

I know, I’m going to go into one of those anecdotes you used to have to listen to as a child, when your crusty grandpappy would boast about how “you young’un whippersnappers” have it so easy compared to his day. Walking to school, in the freezing cold, uphill, BOTH WAYS! And having to shoot squirrels for supper!

I’m wrapping up the first phase of a major enterprise integration project I’ve been working on for the better part of a year now, and I’m finally seeing the light at the end of the tunnel (no, not the one where you see guys with wings … or pitchforks).

When you work in technology, you often take for granted the underlying “amazingness” of what’s going on behind the scenes.

Let me paint the broad strokes of what my project is about.

One of our customers (one among thousands) keeps track of all their business activity (sales orders, fulfillment/shipping, customer information, etc.) on a computer server maintained by my company and running our core software and database system.

My team offers a specific product/service which that customer can choose to “subscribe” to.

When that happens, a completely different computer server, hosted at a different physical location on our network, gets notified about this new subscription event.

This “subscription” computer needs to communicate with a different hosted computer that my team is specifically responsible for.

Our team’s computer server then needs to communicate all this subscription/purchase information over to a web server, which, again, is a completely different computer server hosted elsewhere on our network.

This web server hosts an important web service which then communicates with one of our team’s databases, so that a new database record can be created to hold the new customer license information for one of our core products.

Finally, an e-mail notification needs to be sent to the customer, informing them either that their subscription was successful (the happy path scenario) or that something went wrong with the overall subscription request.

And then yet another computer server records this subscription event in a historical digital log, so that our team has an ongoing history of all existing and new subscription events.

So all in all, we’re talking about seven completely different computers, all having to coordinate communication with each other in a precise order for a successful subscription event to take place.
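
To make that chain of hand-offs a little more concrete, here is a rough, purely illustrative sketch in Python. Every function name and data field below is hypothetical; each one simply stands in for one of the separate servers described above.

```python
# Purely illustrative: each function stands in for one of the separate
# servers in the chain described above. None of these names are real.

def notify_subscription_server(customer_id, product):
    # The server that gets notified about the new subscription event.
    return {"customer_id": customer_id, "product": product}

def forward_to_team_server(subscription):
    # Our team's server, which packages up the subscription/purchase details.
    return {"request": subscription}

def call_licensing_web_service(request):
    # The web server hosting the licensing web service.
    return {"license_id": "LIC-0001", **request}

def create_license_record(license_info):
    # The team database where the new customer license record is stored.
    return license_info

def send_customer_email(customer_id, succeeded):
    # Notify the customer of success (happy path) or failure.
    print(f"email to {customer_id}: {'success' if succeeded else 'failure'}")

def log_subscription_history(record):
    # The historical log of all subscription events.
    print(f"history log entry: {record}")

def handle_subscription_event(customer_id, product):
    subscription = notify_subscription_server(customer_id, product)
    request = forward_to_team_server(subscription)
    license_info = call_licensing_web_service(request)
    record = create_license_record(license_info)
    send_customer_email(customer_id, succeeded=record is not None)
    log_subscription_history(record)

handle_subscription_event("ACME-123", "team-product-subscription")
```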

And the amount of actual computer code I had to write to do all this, while significant, wasn’t anywhere near the gobs and gobs of code you’d have to write to create an operating system. Or a word processor. Or even a typical mobile application.

How is this even possible?

It’s possible because of the huge advancements that have happened in the field of distributed computing.

Let’s switch to my deep and foreboding Charlton Heston voice from “The Ten Commandments”. Or maybe James Earl Jones voicing Darth Vader in the original “Star Wars”.

In the Beginning, There Was the Lone Computer

The earliest, first-generation computers, back around the middle of the twentieth century, were positively PRIMITIVE and underpowered compared to even the cheapest bargain-basement computers, smartphones, or smartwatches we have these days.

But even worse, they were lone wolves. Like the lone gunslingers Clint Eastwood played in all those spaghetti westerns, or the Duke, John Wayne, riding off into the sunset.

These first-generation computers couldn’t communicate with other computers. The concept of the networked computer was still decades away, so these hulking behemoths could only operate as standalone machines.

Yes, multiple people could use one of these machines through what was referred to as “timesharing,” but in reality everyone was still just sharing the same single machine.

But even back then, people realized the power and potential of being able to hook multiple computer systems together in an interconnected network.

Instead of saving data onto physical media like a reel-to-reel tape or cartridge and shipping it across the country by air freight or ground transport to a different computer, you could transfer the same information digitally, in a fraction of the time.

I’m certain even those earliest, first-generation programmers and hackers often dreamed about the potential of networked computing.

Fast forward to around the time I began my professional software programming career in the mid-1990s.

The modern-day World Wide Web, as we know it, was just in its infancy. Computing had advanced by leaps and bounds since those earliest days, but I was still working at a time when creating standalone software applications was the norm.

In other words, most of the software applications I built were designed to be entirely self-contained.

I worked primarily in the Microsoft stack, and object-oriented programming (OOP) was all the rage at the start of my programming career.

Microsoft developed a technology called COM (Component Object Model), which was their specific embodiment and flavor of object-oriented programming. COM objects were self-contained little digital “nuggets” that did whatever you wanted them to … open digital files, send an e-mail message, write data to a disk, etc.

As a software programmer, I would often pull lots of these pre-built Microsoft COM objects into my own software applications, so that I could take advantage of their functionality, avoid reinventing the wheel on the common things every programmer needs, and focus on the specific business needs of the application.
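
For a taste of what that reuse looked like, here is a small, hedged example of driving a stock Windows COM component. It uses Python and the third-party pywin32 package for brevity; at the time, this kind of thing was more commonly done from Visual Basic or C++.

```python
# Illustrative only: instantiating a pre-built Windows COM component from
# Python via the pywin32 package (Windows-only).
import win32com.client

# "Scripting.FileSystemObject" is a stock COM component that ships with Windows.
fso = win32com.client.Dispatch("Scripting.FileSystemObject")

# Reuse the component's file-handling methods instead of writing your own.
# (Assumes the C:\Temp folder already exists.)
text_file = fso.CreateTextFile(r"C:\Temp\hello.txt", True)  # True = overwrite
text_file.WriteLine("Hello from a reused COM component")
text_file.Close()
```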

COM objects could even communicate with other COM objects on a completely different computer system. Microsoft called this technology DCOM (Distributed Component Object Model).

It was a significant development milestone for Microsoft. They knew the future lay in distributed computing, and this was their first stab at it.

There were lots of drawbacks to DCOM.

Firstly, DCOM was a proprietary technology. Two computers could only take advantage of it if both were running the same underlying Microsoft technology.

That left every other computer system out in the cold … Macs, Unix, Linux, and others.

Secondly, it was a very difficult technology to set up properly. And versioning was a nightmare. Say you created a version 1.0 COM object, but down the road you needed to add enhancements and functionality and create a version 2.0.

If you weren’t careful, any machine previously deployed with the 1.0 version of the COM object could get thoroughly messed up if it wasn’t properly overwritten with the 2.0 version.

I specifically remember encountering this problem on a project where my development team initially tried to set up a DCOM-based application. After many days of frustrating debugging, we ended up ditching the entire DCOM architecture and resorting to something simpler.

The Java world, which came about around the same time, relied on similar distributed computing technologies: CORBA (Common Object Request Broker Architecture) and RMI (Remote Method Invocation).

These competing camps were the first stabs at distributed computing for ordinary personal computers, and while they were good first efforts, they had their shortcomings and flaws.

The next big advance in distributed technology was the concept of the WEB SERVICE.

Web services relied on two core technologies that helped make distributed computing interoperable across computers and systems with different underlying platforms:

1. The internet

2. XML

Web services harnessed the power of the internet, which suddenly gave software developers the ability to connect one computer to another halfway around the globe in the blink of an eye.

XML solved the problem of connecting two computer systems that didn’t share the same computing platform. With the power of the web service, your Mac could communicate with a Windows PC, as long as you used XML to describe and hold the data you wanted to transport.
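
As a tiny, made-up illustration: the subscription fields below are hypothetical, but any platform with an XML parser could read a payload like this, regardless of which operating system produced it (shown here with Python’s standard library).

```python
# Hypothetical subscription payload expressed as XML; any platform with an
# XML parser can read it, no matter what kind of system produced it.
import xml.etree.ElementTree as ET

payload = """
<subscription>
  <customerId>ACME-123</customerId>
  <product>core-product</product>
  <status>requested</status>
</subscription>
"""

root = ET.fromstring(payload)
print(root.find("customerId").text)  # -> ACME-123
print(root.find("status").text)      # -> requested
```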

XML-based web services are still very much in use today, but as we all know, technology never stands still.

The next major revolution in distributed computing is REST-based services.

REST-based services differ from XML/SOAP-based web services in two major ways.

The first is the data format. REST-based services use a much more lightweight format called JSON (JavaScript Object Notation), which is far more compact than XML, which in turn means less data to send across the wire to its destination.

Secondly, REST-based services are much easier to “consume” than SOAP-based services, which involve a lot more preparation and configuration to get working.
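
To show how little ceremony is involved, here is a hedged sketch of consuming a REST-style service with a JSON body. The URL and field names are made up, and it assumes the widely used third-party requests package is installed; the point is simply that one plain HTTP call is usually all it takes.

```python
# Hedged sketch: the endpoint URL and fields are hypothetical.
# Assumes the third-party "requests" package is installed (pip install requests).
import requests

new_subscription = {
    "customerId": "ACME-123",
    "product": "core-product",
}

# One HTTP POST with a compact JSON body; no WSDL, no SOAP envelope,
# no client-proxy generation step just to make the call.
response = requests.post(
    "https://api.example.com/subscriptions",  # hypothetical endpoint
    json=new_subscription,
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. the license record created on the other end
```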

Thanks to REST-based web service technology, I was able to create an enterprise integration connecting a whole bunch of different systems, all physically located in different parts of the country, in a quick and straightforward manner, and without having to write a Leo Tolstoy amount of computer code to accomplish it.

I often wonder what the next generation of distributed computing technology will be.

Whatever it ends up being, I hope it has something to do with Star Trek “Scotty, beam me up!” transporter technology …
