
Details of how the feds broke into iPhones should shake up enterprise IT

Apple has an awkward history with security researchers: it wants to tout that its security is excellent, which means trying to silence those who aim to prove otherwise. But those attempts to fight security researchers who sell their findings to anyone other than Apple undercut the company’s security message.

A recent piece in The Washington Post spilled the details behind Apple’s legendary fight with the U.S. government in 2016, when the Justice Department pushed Apple to create a security backdoor for the iPhone used by a terrorist in the San Bernardino shooting. Apple refused, and the government pursued it in court. When the government found a security researcher who offered a way to bypass Apple’s security, it abandoned the legal fight. The exploit worked and, anticlimactically, nothing of value to the government was found on the device.

All of that is known, but the Post piece details the exploit the government purchased for $900,000. It relied on a hole in open-source code from Mozilla that Apple had used to permit accessories to be plugged into an iPhone’s Lightning port. That was the phone’s Achilles’ heel. (No need to worry now: the vulnerability has long since been patched by Mozilla, rendering the exploit useless.)

The Apple security feature that frustrated the government was a defense against brute-force attacks: the iPhone simply erases all of its data after 10 failed passcode attempts.

One threat researcher “created an exploit that enabled initial access to the phone — a foot in the door. Then he hitched it to another exploit that permitted greater maneuverability. And then he linked that to a final exploit that another Azimuth researcher had already created for iPhones, giving him full control over the phone’s core processor — the brains of the device,” the Post reported. “From there, he wrote software that rapidly tried all combinations of the passcode, bypassing other features, such as the one that erased data after 10 incorrect tries.”
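
To see why that final step was decisive, consider the arithmetic: a numeric passcode has a tiny search space once the retry limiter and the 10-attempt wipe are out of the way. Here is a minimal sketch of that kind of brute-force loop; passcodeIsCorrect is a hypothetical stand-in for whatever low-level check the exploit reached, not anything from Azimuth’s actual tooling.

    import Foundation

    // Illustrative only: the brute-force loop the Post describes, once the
    // retry limiter and the 10-attempt wipe have been bypassed.
    // `passcodeIsCorrect` is a hypothetical stand-in for the low-level check
    // the exploit reached; this is not Azimuth's actual code.
    func bruteForcePasscode(digits: Int,
                            passcodeIsCorrect: (String) -> Bool) -> String? {
        let combinations = Int(pow(10.0, Double(digits))) // 10,000 for 4 digits
        for candidate in 0..<combinations {
            // Zero-pad so "0042" is tried exactly as typed on the lock screen.
            let guess = String(format: "%0\(digits)d", candidate)
            if passcodeIsCorrect(guess) {
                return guess
            }
        }
        return nil // passcode wasn't numeric, or has more digits than assumed
    }

Even at a throttled guess rate, 10,000 four-digit combinations fall in hours, not years. The wipe, not the passcode itself, was the real barrier.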

Given all of this, what is the bottom line for IT and security? It’s a bit tricky.

From one perspective, the takeaway is that an enterprise can’t trust any consumer-grade mobile device (Android and iOS devices may have different security issues, but both have substantial ones) without layering on its own security mechanisms. From a more pragmatic perspective, no device anywhere delivers perfect security, and some mobile devices — iOS more than Android — do a pretty good job.

Mobile devices do deliver very low-cost identity verification, thanks to integrated biometrics. (Today, it’s almost all facial recognition, but I am hoping for the return of fingerprint sensors and — please, please, please — the addition of retinal scanning, which is a far better biometric method than finger or face.)

Those biometrics are important because the weak spot for both iOS and Android is getting authorized access to the device, which is what the Post story is about. Once inside the phone, biometrics provide a cost-effective additional layer of authentication for enterprise apps. (I’m still waiting for someone to use facial recognition to launch an enterprise VPN; given that the VPN is the initial key to ultra-sensitive enterprise files, it needs extra authentication.)
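
For the curious, here is roughly what that gate could look like using Apple’s LocalAuthentication framework. The biometric prompt is real API; startCorporateVPN() is a hypothetical hook for whatever VPN client an enterprise actually deploys.

    import LocalAuthentication

    func startCorporateVPN() {
        // Hypothetical: swap in your actual VPN client's connect call.
    }

    func unlockVPNWithBiometrics() {
        let context = LAContext()
        var error: NSError?

        // First confirm the device can do Face ID or Touch ID at all.
        guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                        error: &error) else {
            print("Biometrics unavailable: \(error?.localizedDescription ?? "unknown")")
            return
        }

        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Authenticate to open the corporate VPN") { success, _ in
            if success {
                startCorporateVPN()
            }
        }
    }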

As for the workaround the Post describes, the real culprit there is complexity. Phones are very sophisticated devices, with barrels and barrels of third-party apps, each with its own security issues. I am reminded of a column from about seven years ago, where we revealed how the Starbucks app was saving passwords in clear text where anyone could see them. The culprit turned out to be a Twitter-owned crash-analytics app that captured everything the instant it detected a crash. That is where the plain-text passwords came from.
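
The lesson generalizes: any crash reporter that snapshots app state will happily ship whatever that state contains. Here is a hedged sketch of the failure mode and the obvious mitigation; the names are mine, not the analytics vendor’s.

    import Foundation

    // Anti-pattern: the app logs a credential, then a crash reporter ships
    // the whole log buffer off the device.
    var breadcrumbLog: [String] = []

    func recordLogin(user: String, password: String) {
        breadcrumbLog.append("login attempt user=\(user) password=\(password)") // leaks!
    }

    // Mitigation: redact known-sensitive fields before any report leaves
    // the device. (Better still: never log the secret in the first place.)
    func buildCrashReport() -> [String] {
        breadcrumbLog.map { line in
            line.contains("password=") ? "login attempt [REDACTED]" : line
        }
    }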

This all raises a key question: how much mobile security testing is realistic, whether at the enterprise level (Starbucks, in this example) or the vendor (Apple) level? We found those errors courtesy of a penetration tester we worked with, and I still argue that there must be far more pentesting at both the enterprise and vendor levels. That said, even a good third-party tester won’t catch everything — no one can.

That gets us back to the initial question: what should enterprise IT and security admins do when it comes to mobile security? Well, we can eliminate the most obvious answer; abandoning mobile devices for enterprise data is simply not an option. Their benefits and massive distribution (they are already in the hands of almost all employees, contractors, third parties, and customers) make mobile impossible to resist.

But no enterprise can justify trusting the built-in security of these devices. That means partitioning enterprise data and requiring enterprise-grade security apps to grant access.
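
What that partitioning can look like at the file level, sketched with iOS’s built-in data protection (the path and function name here are illustrative, not any particular vendor’s API):

    import Foundation

    // A minimal sketch of data-at-rest partitioning for an enterprise app.
    // .completeFileProtection maps to NSFileProtectionComplete: the file's
    // encryption key is evicted whenever the device locks, so even a
    // compromised-but-locked phone can't read it.
    func saveEnterpriseRecord(_ record: Data) throws {
        let url = FileManager.default
            .urls(for: .documentDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("enterprise-record.dat") // illustrative path
        try record.write(to: url, options: .completeFileProtection)
    }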

Sorry, people, but there are simply too many holes — discovered and yet-to-be-discovered — that can be exploited. Inside today’s phones is code from thousands of programmers — many of whom never talk with each other — working for Apple or building third-party apps. No single person knows everything about all of the code inside the phone; that’s true of any complex device. And it’s begging for trouble.


