How Apple Future-Proofed Themselves


Every year since 2007, Apple has launched a new generation of iPhone into the world.

There’s always lots of pent-up demand and excitement from fans and the press, and gazillions of podcasts, videos and articles all speculating about what new gee-whiz features will end up on the new phones.

However, there was one recent article that caught my eye and didn’t seem to get much coverage on the internet.

Touch ID, Apple’s signature security authentication mechanism since the introduction of the iPhone 5s back in 2013, has been replaced on the iPhone X by a new mechanism called Face ID.

Instead of reading one of your fingers on the home button to authenticate who you are, the iPhone X scans your face. It’s Apple’s next-generation security mechanism for unlocking the phone and the apps on it (and perhaps, down the road, their other devices).

Such is the nature of technology, which marches forward at a relentless pace.

But the article mentioned something about Face ID that was welcome news to any software developer who had used the Touch ID fingerprint scanning mechanism to authenticate users of their applications.

Face ID automatically replaces Touch ID security without the developer having to lift a finger to make it work.

*cue singing angels*

If there’s one thing that pleases a software programmer to no end, it’s getting new functionality without paying for it in blood, sweat, and code monkey tears, hunched over a keyboard and pulling your hair out.

In this case, every Apple programmer who put in the time and effort to make Touch ID work in their application won’t have to write A SINGLE NEW LINE OF CODE to make Face ID work.

So when an iPhone X owner launches an app that formerly used Touch ID to authenticate a user, the iPhone X automatically knows to use Face ID recognition instead of Touch ID technology.

Better yet, backwards compatibility with iPhones that still use Touch ID technology continues to work as-is.

In one brilliant stroke, Apple has managed to transform every existing iPhone app that uses Touch ID technology into a Face ID app, all without any manual intervention from the original app developers.

Robert Heinlein, the legendary science fiction writer, popularized the acronym “TANSTAAFL”, short for There Ain’t No Such Thing As A Free Lunch.

Well kiddos, I’m here to say Apple might have just proved Heinlein wrong.

If you’re wondering how on earth Apple could seamlessly swap Face ID in for Touch ID, all without the original app developer lifting a finger to make the necessary changes, ask anyone you know who writes software for a living, and they might utter a single geeky-sounding word.

POLYMORPHISM.

The word comes from Greek roots meaning “having multiple forms”.

And it’s one of the most powerful concepts in object-oriented programming.

Why?

Because it makes your software easy to change.

And why is that important?

Because of the undeniable fact that technology changes. And rather quickly, I might add.

Apple is actually a good example of this. They are no strangers to adapting to changing technology. They’ve been doing it for pretty much their entire history.

The Power of Polymorphism

Apple’s first Macintosh personal computer introduced one of the first commercially successful operating systems with a Graphical User Interface, where you used a mouse to move a pointer around the screen and do things on your computer in a very visual and intuitive way. Before the advent of GUI-based operating systems, which we all take for granted today, you typed commands one at a time into a command-line interface: clunky, obtuse, and very frustrating for many computer users.

They had no problem ditching the older 5.25″ floppy disk for the smaller, higher-capacity 3.5″ disk. Later, they dropped optical media drives from their computers entirely.

More recently, they ditched the audio headphone jack on their iPhone 7.

And of course, they’re perfectly happy replacing their Touch ID technology with the newer Face ID authentication, as their next-generation security mechanism.

That is where the power of POLYMORPHISM in software development comes into play.

Polymorphism gives a software programmer the ability to make a codebase resilient to change.

The fact that Apple could declare every app currently utilizing Touch ID to automatically work with Face ID, without a single code change required from the app developer, was the clear tell-tale sign to me that Apple is using the power of polymorphism.

Let’s step back and think about what Touch ID and Face ID technology is really doing at the simplest level.

In a nutshell, both technologies are all about trying to IDENTIFY WHO YOU ARE.

If you are who the iPhone thinks you are, which is the current owner of that iPhone device, then you should be allowed to proceed to launch an app or some functionality within the app that requires that sort of security identification check.

You would obviously want to secure your app for doing sensitive things like checking your bank account and making transfers and withdrawals. Or making online purchases via your iPhone that charge your credit card or withdraw funds from your checking or savings accounts.

Or any number of other sensitive transactions that involve your personal account data.

So regardless of whatever specific security mechanism you utilize, whether it’s Touch ID, Face ID, or some other futuristic technology that will no doubt replace Face ID one day, you’re still trying to accomplish the high-level goal of AUTHENTICATION. That is, verifying that the person using your application is really who they say they are.

There is a specific object-oriented programming language concept referred to as the INTERFACE, which is the actual “secret sauce” behind making your software codebase resilient to future changes.

So how does a software programmer know when to use polymorphic code and when not to?

When you think there’s some code you may have to swap out because of changing requirements.

This happens ALL THE TIME in software development, trust me.

Let’s suppose you’re writing an application to keep track of sales order data from the sales division of your company, and your company uses an Oracle database to persist all their important sales related data.

But then the CIO of the company reads the latest issue of “Better CIOs and Gardens” magazine, where he learns about these newfangled databases called “NoSQL databases” that are all the rage these days in Information Technology.

The CIO suddenly realizes he doesn’t want to be the only one at the upcoming CIO soirée cotillion to admit his company still uses creaky old relational database technology like Oracle.

So the following Monday, he sends out an e-mail directive to all his minions that all Oracle databases must be replaced by NoSQL database technology by the end of the week, chop chop!

It’s a pretty sweeping change, one that’s going to require every software developer who uses an Oracle database to swap out any code that utilizes Oracle and replace it with new code that connects to a NoSQL database instead.

If the developer hasn’t used polymorphic code, he will have to manually swap out all the code that talks to Oracle and replace it with code for the new database. This could potentially involve hundreds, thousands, or many thousands of lines of code.
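To picture what that non-polymorphic code looks like, here’s a minimal sketch (the class and method names are invented for illustration, not from any real codebase). Every call site names the Oracle-specific class directly:

class OracleSalesDatabase {
    void insertOrder(String orderId, double amount) {
        // Oracle-specific SQL would go here
    }
}

public class SalesOrderService {
    public void recordSale(String orderId, double amount) {
        // The concrete Oracle class is hard-wired right here,
        // and at every other place in the app that saves data.
        OracleSalesDatabase db = new OracleSalesDatabase();
        db.insertOrder(orderId, amount);
    }
}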

Worse still, say that poor developer spends countless extra nights and weekends swapping the code over from Oracle to a NoSQL database.

But what happens when that same CIO reads about some newfangled database that’s suddenly become the all-new hotness?

That poor developer is going to have to do the same thing all over again.

That’s where the power of polymorphic interfaces can help insulate your code against these kinds of changes.

If you think there’s going to be something you implement in your codebase that is likely to change, it’s a good candidate for polymorphism.
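Here’s the same hypothetical sales example reworked with an interface (again, the names are invented for illustration, and each public class would live in its own file). The service now depends only on the interface, so switching databases means writing one new class and changing one line where the database is constructed:

public interface SalesDatabase {
    void insertOrder(String orderId, double amount);
}

public class OracleSalesDatabase implements SalesDatabase {
    @Override
    public void insertOrder(String orderId, double amount) {
        // Oracle-specific SQL would go here
    }
}

public class NoSqlSalesDatabase implements SalesDatabase {
    @Override
    public void insertOrder(String orderId, double amount) {
        // NoSQL-specific persistence code would go here
    }
}

public class SalesOrderService {
    private final SalesDatabase db;

    public SalesOrderService(SalesDatabase db) { // any implementation will do
        this.db = db;
    }

    public void recordSale(String orderId, double amount) {
        db.insertOrder(orderId, amount);
    }
}

When the CIO’s next directive lands, you write one more class that implements SalesDatabase and hand it to SalesOrderService. Not one line of the sales logic itself changes.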

In the case of Apple, the concept of AUTHENTICATION is a prime candidate for polymorphic behavior.

There will ALWAYS be a need for SECURITY AUTHENTICATION in software development.

The problem is that the actual specific mechanism for authentication is constantly changing.

Back in the old days, it meant manually typing a user ID and password into a login screen.

Then fingerprint authentication eventually replaced text passwords.

Then retina scanning authentication.

Now we have biometric security technology like Face ID.

So how do you write your code to properly authenticate a user in a way that is flexible enough to change as the security mechanism changes over time?

In object-oriented programming, you could create an interface that might look something like this:

public interface Security {
    void authenticateUser();
}

The purpose of this interface is to define the core behavior, but NOT the actual implementation.

You’ll notice it contains no actual code implementing the functionality of the authenticateUser() method.

You’d write the specific code that implements the actual technology to authenticate a user in a separate class code file that implements that interface.

Say you wanted to authenticate a user using fingerprint technology.

In the Java programming language, that separate class file might implement that interface like this:

public class FingerprintSecurity implements Security {
    @Override
    public void authenticateUser() {
        // actual code to implement fingerprint security goes here
    }
}

Now what if, down the road, you foresee fingerprint security getting replaced by some other biometric scanning technology, like scanning your retinas?

Then you could write a different class that implements that same security interface but using retina scanning technology:

public class RetinaSecurity implements Security {
    @Override
    public void authenticateUser() {
        // actual code to implement retina scanning security goes here
    }
}

By keeping the actual implementation code in its own separate classes, it becomes much easier to swap out one security authentication technology for another.
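To make the swap concrete, here’s a small sketch of what the calling code might look like (LoginScreen is an invented name). Only the single line that constructs the object ever changes:

public class LoginScreen {
    public static void main(String[] args) {
        // Today: fingerprints.
        Security security = new FingerprintSecurity();

        // Tomorrow: change that one line to 'new RetinaSecurity()'
        // and nothing else in the app has to know or care.
        security.authenticateUser();
    }
}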

This is what polymorphic programming is all about, and I am confident this is exactly how Apple pulled off the major feat of seamlessly swapping out Touch ID fingerprint scanning for facial scanning technology.

Every Apple developer is probably using a generic interface to authenticate a user, and Apple, behind the scenes, is swapping out the underlying implementation code to authenticate a user, without requiring the original developer to make any changes to their application code.

As long as the app developer is calling Apple’s polymorphic code to authenticate a user, Apple is taking care of the underlying authentication without the developer worrying about the actual implementation.
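Apple’s actual frameworks are written in Objective-C and Swift, and they haven’t published their internals, so what follows is only a conceptual sketch in this article’s Java notation, with an invented SystemSecurity provider. But the shape of the idea is the same: the app asks the platform for “the current authenticator”, and the platform decides which implementation to hand back.

// Hypothetical new implementation that ships with face-scanning hardware.
public class FaceSecurity implements Security {
    @Override
    public void authenticateUser() {
        // face-scanning code would go here
    }
}

// Hypothetical platform-side provider: the app never names a concrete class.
public class SystemSecurity {
    public static Security currentAuthenticator() {
        boolean hasFaceScanner = true; // stand-in for a real hardware check
        return hasFaceScanner ? new FaceSecurity() : new FingerprintSecurity();
    }
}

// The app's authentication call, written back in the Touch ID days,
// never changes:
public class MyBankingApp {
    public static void main(String[] args) {
        Security auth = SystemSecurity.currentAuthenticator();
        auth.authenticateUser();
    }
}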

A-B-C, easy as 1-2-3.

Polymorphism, when used properly, is one of the most powerful concepts of object-oriented programming.


It took me the longest time as a software developer to truly grok the concept of polymorphism, but when I did, I felt a little bit like Neo at the end of the first Matrix movie, when he truly saw what the world of the Matrix looked like, as digitized little atoms of information.
