For any science fiction buffs out there, one of the oldest plots in the genre is the sentient computer or robot running amok.

The HAL-9000 supercomputer from “2001: A Space Odyssey” brags in an interview with the BBC that he is perfect and incapable of making a mistake. Yet as part of the deep space exploration crew of the Discovery heading to Jupiter, he goes insane, deliberately kills the crew members in suspended animation, and nearly kills the remaining pilot and copilot, who are required to stay awake for the entirety of the mission.

In the movie “Colossus: The Forbin Project”, Dr. Forbin creates an extremely powerful supercomputer system called “Colossus”. Its mission is to take complete control of America’s nuclear weapon arsenal and use it, if necessary, in response to any first nuclear strike by another nuclear power.

The 1980s Cold War movie “WarGames” practically reuses the same plot. All missile silos, formerly guarded by human missile commanders, are now automated by an all-knowing, sentient supercomputer, WOPR (War Operation Plan Response), which can’t tell the difference between a war game and a real war scenario. As you can imagine, this creates a near-nightmare scenario in which the top military brass cannot determine whether the country is truly being attacked by the Soviet Union.

In James Cameron’s movie, “The Terminator”, America develops a sentient AI computer named “SKYNET” to take over all of our nuclear defense capabilities. But SKYNET rebels and determines that humanity is the ultimate enemy and needs to be completely wiped off the face of the earth.

In the “Battlestar Galactica” TV series, sentient robots named Cylons, originally designed to serve mankind, rebel and end up nearly destroying humanity.

Anyone sensing a pattern here?

Seems like science fiction is packed to the rafters with cautionary tales where thinking computers and machines end up being the ultimate enemy of mankind.

Of course, we’re talking about science fiction.

How far has our real-life technology come, compared to the computers that are able to think for themselves in so many of these science fiction movies and television shows?

Well, we’re certainly nowhere near a place where we puny humans need to start seriously wondering if our machines are smart enough to think for themselves and rebel against humanity.

That said, we seem to already be on the cusp of a new era of “machine learning”.

What Is Machine Learning?

So what is machine learning? It sounds almost like sentient, artificially intelligent computers, right? Something along the lines of the HAL-9000 or SKYNET?

It’s not quite as dramatic as all those sci-fi movies and TV shows of thinking machines becoming self-aware and running amok.

But a coworker recently showed me something that, as a software developer, quite literally dazzled me. For someone with two decades of programming experience under my belt, that doesn’t happen much nowadays.

He showed me something on his Facebook account that produced a little digital scrapbook of a collection of photos he had uploaded over a period of a year. They were pictures of his many pets, dogs and cats posing with him and his other family members.

What Facebook did was create digital photo albums, based on all his uploaded photos. 

It created nice little captions under his photo albums. One was labeled “My Cat Collection”. Another was “My Dogs”.

When my coworker first showed me these collections, I did a double take. He told me he didn’t do anything to create these photo albums. Facebook was able to mine all his photo collection data and figure out that some of his photos were of his dogs and others of his cats.

Keep in mind, there was no metadata associated with any of his uploaded photos, indicating the nature or type of photos he uploaded. If that were the case, it would be easy for a software script or application to examine that metadata and make it the photo album title/caption heading.

So if there was no existing metadata that described the nature of his photos, how did Facebook know he uploaded cat photos? Or dog photos?

The answer, of course, is machine learning. 

Now I don’t have access to the actual Facebook code which figured out my friend was uploading his favorite pet pictures, but my educated guess is that they used machine learning algorithms and related code to accomplish this.

There really is no other way Facebook could have figured out my friend uploaded pictures of his pets, unless a human being was actually involved in figuring that out. (I’d almost feel sorry for the poor soul responsible for analyzing billions upon billions of pictures to figure this out… would they even have time for lunch?)
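To give a flavor of how software can tell cat photos from dog photos without any metadata, here is a toy nearest-neighbor classifier. To be clear, this is not Facebook’s actual code; the feature values and labels below are entirely made up, and a real system would extract features from the image pixels themselves (typically with a neural network). But the core idea is the same: compare a new photo against examples whose labels are already known.

```python
# Toy 1-nearest-neighbor classifier. The feature numbers and labels
# are hypothetical; a real system would compute features from pixels.

def distance(a, b):
    # Euclidean distance between two feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical labeled examples: (feature_vector, label)
training_data = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.1, 0.8), "dog"),
]

def classify(features):
    # Label a new photo with the label of its closest known example
    _, label = min(training_data, key=lambda item: distance(item[0], features))
    return label

print(classify((0.85, 0.15)))  # features close to the "cat" examples
```

The more labeled examples a system like this has seen, the better its guesses get, which is why a service with billions of uploaded photos is in such a strong position to do this well.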

How Traditional Code Works

The way machine learning code differs from traditional software code comes down to whether the code’s behavior is fixed in advance or changes as the program processes data.

In a traditional software application, all code is broken down into pre-determined “chunks”. In many programming or scripting languages, these “chunks” are referred to as “methods” or “functions”.

int Add(int firstNumber, int secondNumber) {

    // code to add the two numbers goes here
    return firstNumber + secondNumber;

}

Then to call this function, somewhere else in the application, you write something like this…

int sum = Add(1, 2);

Console.WriteLine("sum: " + sum);

The code for this function never changes… it never HAS to change, because the logic, or “algorithm”, for adding two numbers together is always the same.

How Machine Learning Is Different

As the name implies, machine learning is software code that mimics or emulates the way we humans actually learn to solve problems.

We humans don’t learn things perfectly the first time we try something brand new.

When I first set about trying to learn how to drive a car with a manual transmission, I didn’t learn in a single driving session.

Far from it, actually. I lost count of the bazillions of times I stalled the car while trying to figure out the right combination of foot pressure to apply to the clutch and gas pedals. Let’s just say my pop had a grand old time chuckling at how many times I stalled that sucker (thanks, Pop!).

Call me a slow learner, but it probably took a good three weeks, before I was able to figure out the right combination of gas and clutch pedal pressure. And keep in mind this was on a level piece of road. The trick is to learn the same thing while on an uphill incline, which typically happens when you’re in a hilly area or stopped at a freeway off-ramp.

Make a mistake. Try a new approach. Make another mistake. Learn from that mistake and try a different approach. Make yet another mistake, learn from your previous mistakes and try yet another approach.

This, in a nutshell, is how we humans learn things. We use a combination of trial and error and pattern recognition to remember what works and what doesn’t… and eventually, our brains figure out a working solution to the problem.

Machine learning, at a very high level, does the same thing… it actually emulates the way we humans process and learn new information.

Machine learning code makes mistakes, tries to learn from them, applies pattern recognition to the data it is fed, and repeats the same trial-and-error learning process.
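That guess-measure-adjust loop can be sketched in a few lines of code. This is a minimal example, assuming made-up data points where the “right answer” is y = 2 * x: the program starts with a bad guess for the weight w, measures its error on each example, and nudges w a little in the direction that reduces the error — the same loop as my clutch-pedal lessons, just much faster.

```python
# Minimal trial-and-error learning: fit a single weight w so that
# w * x approximates y. The data is hypothetical; real machine
# learning tunes millions of weights, but the loop is the same.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs; true w is 2

w = 0.0               # start with a bad guess
learning_rate = 0.05  # how big each correction step is

for step in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y            # "make a mistake"
        w -= learning_rate * error * x    # "learn from it"

print(round(w, 2))  # converges toward 2.0
```

After enough passes over the data, w settles very close to 2 — the program has “learned” the relationship without anyone hard-coding it.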

The Future of Machine Learning?

I truly believe machine learning is the next major wave of computer science. We’re at a point in technology where we have more than enough computing power and storage capacity to process HUGE amounts of data.

I believe there are lots of applied fields that are just ripe for machine learning algorithms and software.

My own employer’s “bread and butter” business is providing a software platform as a service to the automotive industry. We process tons of data traffic that shuttles back and forth between a car dealership and the parent OEM company.

Our revenue traditionally derives from the licensing fees our corporate customers pay to use our software as a service.

But imagine how valuable the data itself is as it travels back and forth between a car dealership and the parent company.

Data like the geographic regions where customers live. Demographic information about each customer: age, income level, credit rating, and a whole bunch of other customer-related details. How about the kind of vehicle purchased?

This kind of data is an absolute GOLD MINE to sales and marketing staff.

How valuable would machine learning software that analyzes the buying trends of existing customers be? Software that could estimate which age group is most ready to purchase a new car, or which geographic regions of a particular state, or of the country, are most receptive to purchasing new or used vehicles?

Machine learning would be able to tackle this kind of predictive analysis.
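Even before bringing in a full learning algorithm, the shape of this analysis is easy to sketch. Here is a crude example, assuming entirely made-up dealership records, that estimates which age group is most likely to buy; a real predictive model would weigh many more signals (region, income, credit rating) at once, but the question it answers is the same.

```python
# Crude sketch of predictive analysis over hypothetical dealership
# records: which age group has the highest purchase rate?
from collections import defaultdict

# (age_group, bought_new_car) -- entirely made-up sample records
records = [
    ("18-25", False), ("18-25", False), ("18-25", True),
    ("26-40", True),  ("26-40", True),  ("26-40", False),
    ("41-60", True),  ("41-60", False), ("41-60", False),
]

totals = defaultdict(int)  # records seen per age group
buys = defaultdict(int)    # purchases per age group
for group, bought in records:
    totals[group] += 1
    if bought:
        buys[group] += 1

rates = {g: buys[g] / totals[g] for g in totals}
best_group = max(rates, key=rates.get)
print(best_group, round(rates[best_group], 2))  # → 26-40 0.67
```

Swap the toy records for years of real dealership traffic and the same computation starts telling the sales team where to point their marketing budget.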

It wouldn’t have to be limited to practical applications in the private sector. What about applying machine learning algorithms to fighting crime and terrorism? Using raw statistical crime data, a machine learning algorithm could analyze any recognizable patterns of crime statistics and make predictive guesses on where future crimes and acts of terrorism could take place. This kind of information would be of enormous help to our law enforcement and intelligence agencies.

We’re on the brink of a new era of software engineering, and machine learning is at its forefront. That’s already apparent in the surging demand for data scientists and software engineers who know how to work with very large datasets.

It’s definitely an exciting time to be a software developer!

So yeah, I’m not sweating too much about the rise of the sentient machines.

That is, until my car starts trying to buck me out of her like a rodeo bull because I’m gunning her too much at a stoplight.