
CVE-2017-5689

It’s been a while since our last post, so we wanted to provide something useful this time around. Intel recently issued a fix for CVE-2017-5689, an escalation-of-privilege vulnerability in “Intel® Active Management Technology (AMT), Intel® Standard Manageability (ISM), and Intel® Small Business Technology firmware versions 6.x, 7.x, 8.x, 9.x, 10.x, 11.0, 11.5, and 11.6” – a vulnerability which is detectable remotely.

This vulnerability is widespread. In fact, attackers could simply use shodan.io to compile a list of vulnerable targets; see the attached image below for a sample of exposed hosts.
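To illustrate just how low that bar is, here is a minimal sketch of the enumeration step using the official shodan Python package. The API key is a placeholder, and the search string assumes the banner that AMT’s embedded web server is known to return; treat this as illustrative rather than as a scanning tool.

```python
#!/usr/bin/env python3
"""Sketch: compiling a list of exposed AMT hosts via Shodan."""
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"  # placeholder: substitute your own key

api = shodan.Shodan(API_KEY)
# Banner string Intel AMT's web server is known to return; 16992 is
# the default AMT HTTP port.
results = api.search('"Intel(R) Active Management Technology" port:16992')
print("total exposed hosts:", results["total"])
for match in results["matches"]:
    print(match["ip_str"], match.get("port"))
```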

Since it is so easy to detect targets using a simple HTTP GET request, we are providing a Python script to detect whether your web server is vulnerable! If you are vulnerable, Intel has made the advisory for this vulnerability available here, with steps toward a fix:

https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00075&languageid=en-fr
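The full script lives in the repo linked at the end of this post, but the core detection idea fits in a few lines. The following is a hedged sketch, not the released tool: it assumes AMT’s default web port (16992) and the Server banner the AMT web interface is known to return, and a hit only means the management interface is exposed, not that exploitation was verified.

```python
#!/usr/bin/env python3
"""Sketch of the detection idea behind the full script."""
import sys
import urllib.error
import urllib.request

AMT_PORT = 16992  # default Intel AMT web port (16993 for TLS)

def amt_exposed(host, timeout=5):
    """Return True if host answers on the AMT port with an AMT banner."""
    url = "http://{}:{}/".format(host, AMT_PORT)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            server = resp.headers.get("Server", "")
    except urllib.error.HTTPError as err:
        # AMT replies 401 to unauthenticated requests, but the Server
        # header is still present on the error response.
        server = err.headers.get("Server", "")
    except OSError:
        return False  # port closed, filtered, or host unreachable
    return "Intel(R) Active Management Technology" in server

if __name__ == "__main__":
    host = sys.argv[1]
    if amt_exposed(host):
        print("{}: AMT web interface exposed, see INTEL-SA-00075".format(host))
    else:
        print("{}: no AMT web interface detected".format(host))
```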

For even more information on the vulnerability, we recommend visiting the National Vulnerability Database entry for CVE-2017-5689. At the time of writing this blog post, the vulnerability is awaiting analysis; however, as more information comes to light, the NVD will update the page. That entry is available here:

https://nvd.nist.gov/vuln/detail/CVE-2017-5689

Happy hunting!

https://github.com/CerberusSecurity/CVE-2017-5689


Roaring in a den of Lions

Welcome back to CerberusSec. We were a little late with our second blog post because we have had a lot going on. What we’d like to talk about this month are the difficulties facing any small organization attempting to enter the security industry. These issues are rarely discussed, and we will examine how they came to exist in the first place.

When you report a vulnerability with the same level of detail as, say, a Google advisory, many CNAs will still require you to jump through extra hoops to prove its validity rather than investigate it themselves. If you’re a small research group, you may as well go the full distance and write a PoC exploit, because otherwise your report is ignored and given no credit. This makes entering the field difficult: you need both the skill set to discover vulnerabilities and the skill set to develop exploits. Speaking of credit, many organizations only pay bounties to researchers already known to them. Common sense would then say ‘well, throw the CNA a small vulnerability and save the big one you’ve found for a bounty’. But as previously stated, unless you have something truly substantial, you have to jump through extra hoops just to be considered by these CNAs, so you’ll need to give them your best simply to get in the door.

We’re very interested in reverse engineering and exploit development, and you can expect us to post about our progress and discoveries in that arena soon. We’d like to do something similar to this blog post: https://www.corelan.be/index.php/2013/02/26/root-cause-analysis-memory-corruption-vulnerabilities/ in the near future with one of our own discoveries. Additionally, we’d like to compare the reverse engineering process between macOS and Windows in a separate blog post.


The trouble with Mac Fuzzing…

Hello everyone! With the start of the new year, our team of researchers would like to get a monthly blog post running so that our readers can expect a regular cadence for our progress and findings. This post discusses some of the challenges that must be met in order to research vulnerabilities in macOS applications.

To begin, let’s go over why researchers should be looking at macOS at all. In 2016, Macs made up 7.4% of the global PC market. That seems like a fairly small share, but there’s more to be gleaned from macOS application vulnerabilities. Apple’s iOS is, at its core, essentially a locked-down version of macOS, and many of the stock Apple applications found on macOS are also found on iOS. Odds are that if you find a vulnerability in one of these applications on macOS, it exists on iOS as well. Approximately 43.5% of smartphone users were on an iPhone as of the end of 2016, and the deadliest iOS vulnerabilities can sell for up to a million dollars, making this a huge area of cutting-edge research. Since macOS gives you access to far more computational resources for analysis, you can perform research tasks like fuzzing more effectively there and then apply the vulnerabilities you find to iOS.

Let’s look at some potential targets shared between macOS and iOS, and how they compare to competing products in terms of vulnerabilities. To start, the entire suite of iWork applications is available on both iOS and macOS. As of the writing of this blog post, the last security update for iWork was in October of 2015. That’s an eternity in the security world. By comparison, Microsoft’s Office products are patched almost every month like clockwork. This suggests a few possible conclusions: either iWork products are inherently more secure, or researchers are not targeting them enough, or they are simply difficult to fuzz. One way a set of applications can be inherently more secure is to run within a sandbox. macOS and iOS launch all iWork applications inside a sandbox, meaning that for a vulnerability to affect the system, you additionally need a sandbox escape from within the iWork application. Finding an exploitable crash that also yields a sandbox escape requires considerable skill and system knowledge, which definitely raises the difficulty compared to Microsoft Office products.
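As an aside, here is a quick sketch of one way to confirm that a target application is sandboxed: shell out to Apple’s codesign tool and look for the App Sandbox entitlement. The app path is just an example, and codesign’s output format varies a little across macOS versions, so take it as illustrative.

```python
#!/usr/bin/env python3
"""Check whether a macOS app bundle is sandboxed."""
import subprocess
import sys

def is_sandboxed(app_path):
    # codesign prints the entitlements plist of the signed bundle;
    # sandboxed apps carry the com.apple.security.app-sandbox key.
    result = subprocess.run(
        ["codesign", "-d", "--entitlements", ":-", app_path],
        capture_output=True, text=True, errors="replace",
    )
    return "com.apple.security.app-sandbox" in result.stdout

if __name__ == "__main__":
    app = sys.argv[1] if len(sys.argv) > 1 else "/Applications/Pages.app"
    print(app, "sandboxed:", is_sandboxed(app))
```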

Outside of the technical difficulties, what else bars a researcher’s path to discovering vulnerabilities in these applications? For one, Mac hardware is expensive. Take fuzzing, for example: a typical approach is to create a server farm of virtual machines all running the target OS and application, then wait for crashes. This was something the late Apple CEO Steve Jobs was heavily against. He wanted to preserve an aspect of high-end quality in his OS, and as Apple has famously done time and time again, they used legal means to help enforce it. The end-user license agreement for macOS contains a clause that limits the researcher:

“to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software, for purposes of: (a) software development; (b) testing during software development; (c) using OS X Server; or (d) personal, non-commercial use.”

This means that for every Mac computer you own, you can run only two macOS virtual machines at the same time. That drives the cost of this research well above its Windows equivalent, simply because scaling up requires buying more Mac hardware.

Getting past all of these difficulties, how would a researcher go about fuzzing macOS applications? One obvious method is to run known fuzzing software such as AFL or Peach on macOS. Unfortunately, those fuzzers do not handle GUI programs very well: they expect simple executables with simple input, and they do not wait for satellite processes to run. This leads to inconsistent behavior and potential false negatives, since they skip past GUI rendering.

The better approach is to write our own customized macOS fuzzing software and take advantage of the diagnostic tools built into macOS. Once the harness is written, we can use macOS’s Console application to view crash reports for the applications we crash, then analyze those reports for vulnerabilities. Expanding on this method, we could write Automator scripts to gather more input files to fuzz, restart the fuzzing software upon completion of a set of input files, and begin the cycle anew. This is not a new or revolutionary method for fuzzing; its practices date back to the 1980s. However, the fact that renowned researchers still use it today shows that it is effective.
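To make that loop concrete, here is a minimal sketch of such a dumb GUI fuzzing harness in Python. The target application name, seed file, and timings are placeholder assumptions; a real harness would add smarter mutation, crash triage, and the Automator-driven input gathering described above.

```python
#!/usr/bin/env python3
"""Minimal sketch of a dumb GUI fuzzer for a macOS application.

Mutates a seed file, opens it in the target app, then looks for new
crash logs in the per-user diagnostic reports directory that the
Console application reads from.
"""
import glob
import os
import random
import subprocess
import time

APP = "Pages"        # hypothetical target application
SEED = "seed.pages"  # hypothetical (non-empty) seed input file
REPORTS = os.path.expanduser("~/Library/Logs/DiagnosticReports/*.crash")

def mutate(seed_path, out_path, flips=16):
    data = bytearray(open(seed_path, "rb").read())
    for _ in range(flips):  # flip a few random bytes
        data[random.randrange(len(data))] = random.randrange(256)
    open(out_path, "wb").write(bytes(data))

def run_once(case_path, settle=10):
    known = set(glob.glob(REPORTS))
    subprocess.run(["open", "-a", APP, case_path])
    time.sleep(settle)  # give the GUI time to parse the file
    subprocess.run(["pkill", "-x", APP])
    return set(glob.glob(REPORTS)) - known  # any new crash reports?

if __name__ == "__main__":
    for i in range(1000):
        case = "case_{}.pages".format(i)
        mutate(SEED, case)
        crashes = run_once(case)
        if crashes:
            print("crash on", case, "->", sorted(crashes))
        else:
            os.remove(case)  # keep only crashing inputs
```

Each crashing input is kept alongside the crash report it produced, so the analysis step can replay it under a debugger later.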