No negotiation: The rising threat of crypto ransomware


We all know the rule – you don’t negotiate with terrorists. When you react to their demands, you prove their tactics work. Worse, you give them a reason to continue.

In the world of technology, the same rule applies. You don’t negotiate with hackers, attackers, and criminals. You don’t line their pockets and send them on to the next helpless victim.

But while we know what’s right, it’s not always easy advice to follow.

Ransomware locks you out of your devices, holding them to ransom. But who cares about devices? The real threat is to your most valuable asset of all – your data.

And the bad news is it’s a threat that’s growing fast.

The crypto ransomware rampage

Crypto ransomware has been around a long time. In fact, PC Cyborg – the first recorded ransomware trojan – was encrypting data and holding it to ransom as far back as 1989.

But while crypto ransomware isn’t a new problem, it’s a threat that’s getting bigger all the time.

According to Symantec, data encryption was only present in 1.2 per cent of ransomware at the start of 2014. By the end of August, that figure hit a terrifying 31 per cent.

So why the sudden increase? Who’s to blame? The answer, at least in part, is CryptoLocker.

CryptoLocker was first detected in September 2013. Distributed through the established Gameover ZeuS botnet and infected email attachments, the trojan encrypted user data and displayed a screen demanding payment.

It was a huge success. According to CERT, reaching just 5,700 computers could lead to a profit of $33,600 in one day. CryptoLocker reached around 545,000 computers worldwide.
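
As a sanity check on those numbers – and assuming CryptoLocker’s widely reported ransom demand of roughly $300, which is our assumption, not CERT’s figure – the daily profit implies that only a small fraction of victims actually paid:

```python
# Back-of-the-envelope check on CERT's numbers. The $300 ransom is an
# assumption based on contemporary reporting, not part of CERT's estimate.

ransom = 300                    # approximate CryptoLocker demand, in USD
daily_profit = 33_600           # CERT's estimated profit per day
infected = 5_700                # machines reached per day in the estimate

payers = daily_profit / ransom  # number of victims paying per day
pay_rate = payers / infected    # fraction of infected machines that pay

print(f"{payers:.0f} payers/day, {pay_rate:.1%} of infected machines")
# 112 payers/day, 2.0% of infected machines
```

Even a payment rate of around two per cent was enough to make the scheme hugely profitable.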

Fortunately, a government and law enforcement effort saw the dismantling of both the Gameover ZeuS botnet and CryptoLocker in June 2014.

But the problem didn’t go away. Other criminals had seen CryptoLocker’s success and dollar signs lit up in their eyes.

Turning security against you

Why was CryptoLocker so successful? What made this trojan so potent? And why has it changed the IT security landscape forever?

The truth is it comes down to cryptography. Which is a lot more sinister than it sounds.

Back in 1989 when PC Cyborg held our retro computers to ransom, data was encrypted using symmetric cryptography. As a result, it was possible to reverse engineer the encryption and unlock your data.

But, since then, our security has evolved. Encryption has become far more sophisticated, which for the most part is a great thing. That is until attackers turn it against us.

See, CryptoLocker uses asymmetric cryptography, with two keys – one public, one private – required to encrypt and decrypt data. In this approach, the private key never leaves the attacker’s server, making reverse engineering impossible.
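
A deliberately tiny sketch in Python makes the asymmetry concrete. It uses the classic textbook RSA values, which are hopelessly insecure and purely illustrative – the point is that the public key alone can encrypt, while only the private exponent, which stays on the attacker’s server, can decrypt:

```python
# Toy RSA sketch (tiny textbook primes, NOT secure) illustrating why
# asymmetrically encrypted data can't be recovered without the private key.

def make_keys():
    p, q = 61, 53                # small primes, for illustration only
    n = p * q                    # modulus, shared by both keys
    phi = (p - 1) * (q - 1)
    e = 17                       # public exponent
    d = pow(e, -1, phi)          # private exponent (modular inverse)
    return (e, n), (d, n)        # (public key, private key)

def encrypt(message_byte, public_key):
    e, n = public_key
    return pow(message_byte, e, n)

def decrypt(ciphertext, private_key):
    d, n = private_key
    return pow(ciphertext, d, n)

public, private = make_keys()
cipher = [encrypt(b, public) for b in b"PAY UP"]
plain = bytes(decrypt(c, private) for c in cipher)
print(plain)  # b'PAY UP'
```

Real ransomware uses 2048-bit keys, so brute-forcing the private exponent is out of the question.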

And that’s the real issue – CryptoLocker was expertly distributed, suitably threatening, and impossible to remedy. In fact, it was so effective that police in Swansea, MA opted to pay the ransom when one of their own computers was infected.

Defend your data now

Faced with a threat that even the police can’t surmount, is it any surprise that people feel tempted to pay up? And, when people pay, is it any surprise that attacks are becoming more and more common?

It’s a bleak outlook and the only way to practically deal with the threat is to take action now by improving your defense.

We’d recommend:

  • Regularly updating your antivirus software to the latest threat database
  • Creating redundant backups of your data – so if a copy is encrypted, the data isn’t lost
  • Being aware of the files you’re opening – online or by email, only run files from sources you know and trust
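
On the backup point, a redundant copy is only useful if you can tell when it has been silently altered. Here is a minimal sketch, with illustrative filenames, that keeps a copy of a file and verifies it by hash:

```python
# Sketch: keep a redundant copy of a file and detect tampering (e.g. a
# silently encrypted backup) by comparing SHA-256 hashes.
# All paths and filenames here are illustrative.
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def backup(source, destination):
    shutil.copy2(source, destination)       # copy the file with metadata
    return sha256(destination)              # record a known-good hash

def verify(path, known_good_hash):
    return sha256(path) == known_good_hash  # False => the file has changed

# Example usage
original = Path("report.txt")
original.write_text("quarterly numbers")
good_hash = backup(original, "report.bak")
print(verify("report.bak", good_hash))  # True while the backup is intact
```

If `verify` ever returns False, the copy has been modified and should not be trusted.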

They’re things you should already be doing, but things that are easy to forget. But remember – when it comes to crypto ransomware, a robust defense may be your only hope.


Theft from the skies: Could cell phone data theft affect you?


Now that smartphones are everywhere, it’s easy to forget just how remarkable they really are. When you take a moment to think about everything a smartphone can do – from organizing your address book to taking care of your banking – it’s almost beyond belief.

But all of the functionality available in our small, pocket-sized devices depends on data. Unfortunately, these wireless, constantly-connected devices are also where the safety of your data is at most risk.

Just ask the NSA.

Dirtboxes do a dirty job

According to The Wall Street Journal, the NSA has been running a secret project with the Justice Department, using Cessna airplanes to intercept data transmitted from cell phones and wireless devices.

The planes are equipped with devices – affectionately referred to as ‘dirtboxes’ – designed to imitate cell phone towers. These fake towers receive all incoming data within a given range, and this is exactly why they create a complex privacy issue.

Most of us are happy with the idea that criminals and terrorists should forego the right to privacy. Intercepting their calls allows crucial evidence to be collected and analyzed, preventing crime and supporting legal proceedings.

But when a ‘dirtbox’ flies through the sky, it does not collect targeted sets of data – it grabs as much as it can. According to insiders, the unwanted data is then discarded.

This blanket approach to intelligence gathering is flat-out illegal. Of course, the Justice Department denies it exists.

Meanwhile, an anonymous source reported by USA Today claims that the program is large enough to cover the entire population of the United States.

Interception made easy

It would be natural to assume that cell phones are by nature insecure, since data is transmitted openly through the air. But while cell phones transmit data wirelessly, intercepting that wireless signal is difficult.

By design, Bluetooth is an easily discoverable type of radio signal, and relatively easy to intercept. However, most devices limit the data access available over Bluetooth connections, making this a negligible risk for most of us.

Conversely, it’s much more complex to intercept a 4G, 3G, or GPRS signal. The most effective technique is precisely the one allegedly used by intelligence agencies – pretend to be a cell phone tower. In fact, when you imitate a cell phone tower, even data encryption won’t help.

This requires an investment of time and money into research and development. It means building a physical device smart enough to trick cell phones into opening connections with it.

But once an imitation tower is up and running, scooping up call data is effortless. The terrifying truth is that criminals have enough to gain from intercepting your data that developing a false tower is worth the investment.

Defend your cell phone data

While connecting to a fake cell phone tower is a serious risk to your data, it’s not a risk that most of us will face day-to-day. But that doesn’t mean your data is safely stored on your device.

The biggest threat of all is the data we send through the internet. After all, when you go online to use an application or visit a website, you pass information through a chain of different servers – not all of them secured and verified. You leave a trail everywhere you go, and any data that you share is potentially exposed.

Finally, remember that the vast majority of cell phone data breaches don’t come from transmission at all – they happen when the entire handset falls into a criminal’s hands.

To protect against physical theft, be sure to take advantage of any on-board security measures including passwords, encryption, and remote wipe features. If you are selling your device, restore it to factory default to remove your data entirely.

And whatever you’re doing with your device – whether you’re sending emails, making calls, or browsing the web – always be aware of the wireless connections you’re opening and the data you could be sending.


FireEye discovers iOS Masque Attack, Apple downplays threat


Computer security experts are warning Apple customers about a new bug that affects iOS devices such as the iPhone and iPad. This post continues last week’s coverage of safety on your Apple devices.

The US Computer Emergency Readiness Team (US-CERT) said on Thursday that users of such devices running on the latest version of iOS should be careful about what they click on. The team also advised users not to install apps from anywhere other than their own organisation or Apple’s official App Store. The CERT further warned against opening an app if an alert says “Untrusted App Developer,” saying the user should instead click on “Don’t Trust” before immediately deleting it.

The exploit, dubbed “Masque Attack,” was reportedly discovered in July 2014 by FireEye and reported to Apple on the 26th of the same month (security researcher Stefan Esser of SektionEins may have discovered the same or a closely related exploit last year, which he presented at the SyScan 2013 conference).

In a blog post on Monday, FireEye said it believes new versions of iOS are still vulnerable and can be exploited via a Masque-based attack campaign dubbed “WireLurker.”

WireLurker is the first malware capable of spreading from an infected Mac OS X system to a non-jailbroken iOS device and has reportedly been downloaded over 350,000 times already.

US-CERT explained how Masque Attack works:

“This attack works by luring users to install an app from a source other than the iOS App Store or their organizations’ provisioning system. In order for the attack to succeed, a user must install an untrusted app, such as one delivered through a phishing link.

This technique takes advantage of a security weakness that allows an untrusted app—with the same “bundle identifier” as that of a legitimate app—to replace the legitimate app on an affected device, while keeping all of the user’s data. This vulnerability exists because iOS does not enforce matching certificates for apps with the same bundle identifier. Apple’s own iOS platform apps, such as Mobile Safari, are not vulnerable.”
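
The missing certificate check that US-CERT describes can be sketched in a few lines. The app records and field names below are invented for illustration; the point is that matching on bundle identifier alone lets an attacker-signed app replace a legitimate one:

```python
# Sketch of the flaw US-CERT describes: an install check that decides
# whether a new app may replace an existing one by bundle identifier alone.
# The app records and field names are hypothetical.

def vulnerable_may_replace(installed, incoming):
    # iOS (pre-fix) effectively checked only the bundle identifier,
    # so an attacker-signed app could overwrite a legitimate one.
    return installed["bundle_id"] == incoming["bundle_id"]

def safer_may_replace(installed, incoming):
    # Also enforcing a matching signing certificate closes the hole.
    return (installed["bundle_id"] == incoming["bundle_id"]
            and installed["signing_cert"] == incoming["signing_cert"])

real_app = {"bundle_id": "com.example.bank", "signing_cert": "VendorCert"}
fake_app = {"bundle_id": "com.example.bank", "signing_cert": "AttackerCert"}

print(vulnerable_may_replace(real_app, fake_app))  # True  – attack succeeds
print(safer_may_replace(real_app, fake_app))       # False – attack blocked
```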

The computer emergency team went on to say that an app installed in this manner could copy the user interface of the original app, thereby tricking the user into entering their username and password. It could also steal personal and other sensitive information from local data caches as well as perform background monitoring of the device. Lastly, it could grab root privileges on any iOS device it was installed on, all because it was indistinguishable from a real app.

With FireEye saying it has confirmed this type of attack, including the uploading of data to a remote server, you would think that Apple would have moved to patch the bug, especially as the security company has had little time to look into other potential related attacks that may yet surface.

That is not the case though, as Apple says no one has actually been affected by the vulnerability thus far, an assertion that flies in the face of a blog post from Kaspersky Lab which suggests that WireLurker has claimed victims, albeit a small number.

If you would like to learn more about how the iOS Masque attack works, FireEye has uploaded a demonstration video:


Android 5.0 Lollipop’s sweet new security features


Google’s Android Lollipop is the fifth version of its tablet and smartphone operating system, and it could very well find its way onto far more devices than any of its predecessors ever did. With the current trend in smart device proliferation through TVs, watches and even in-car entertainment systems not looking to abate any time soon, it had better be secure. If you’ve recently upgraded to Lollipop 5.0 you might want to click here for some handy tips on running ExpressVPN more smoothly.

So what’s new and what’s good in Lollipop?

From the perspective of an average user, the most important piece of news to accompany the release of Android 5.0 is the fact that Adrian Ludwig, head of Google’s Android mobile software security team, thinks that security on the device should be present but not heard, telling reporters recently that “I don’t think it’s realistic that the average person should care about security.” (We think he’s wrong, and that everyone needs a degree of security awareness in order to better protect themselves from breaches of their security and privacy.)

With Android holding around an 80% share of the smartphone market, it doesn’t appeal only to the technologically savvy segment, and so Google, in an homage to Apple, has gone down the road of turning on key security features by default, thus leaving the average user to get on with using their device without worrying about whether they are safe or not.

Ludwig explained Google’s approach, saying “When it comes to security, we’re not designing a single device, or millions of similar devices. We’re building a service which helps users be secure despite the myriad of different ways that Android might come into play.”

As for the threats faced by Android users, there are many, but the biggest comes in the form of device theft and loss.

According to Consumer Reports, over 3 million Americans had their smartphones stolen last year, a rise of almost 100% compared with 2012. Mobile security company Lookout paints a similar picture, saying ten percent of all smartphones in the US have been stolen.

With that in mind, Google has come up with a few different ways of protecting devices and the data stored within them. This is achieved through the lock screen, which can only be bypassed via facial recognition, PIN or passcode, as well as device encryption and the ability to remotely wipe a lost or stolen device.

Of more interest is the Factory Reset Protection option, which is the official name for what we know as the “kill switch.” When activated with the owner’s Google password, it will wipe all data from the phone and leave it totally inoperable.

Authorities are likely to welcome the kill switch, especially given that California law dictates it must be present on devices sold from 1 July 2015. The on-by-default encryption (don’t forget the PIN for your device if law enforcement asks for it), however, has already drawn collective gasps from the security services, who we all know, love and trust not to use any tools at their disposal to spy on us.

Other new security features are present too though and the most interesting by far is the implementation of guest accounts. Especially useful on devices that are used by more than one person, guest mode can allow other family members to enjoy using your device but without the added worry of later discovering that your settings have been accidentally changed, or a large bill has been incurred by a son or daughter who got carried away with in-app purchases in their favourite game.

Android Smart Lock is also a useful new addition that integrates Lollipop devices with Android Auto embedded systems and smartwatches. A user can set up their device with Smart Lock such that it will only be operable when within Bluetooth range of either their Android Auto system or smartwatch. This sounds like another great way of deterring thieves, though I cannot help but wonder if it could lead to a leap in smartwatch theft.

Business owners looking for a more secure means of managing a Bring Your Own Device (BYOD) policy have not been overlooked by Lollipop either. By using containerisation, Android Work will present a seamless experience to the user while allowing IT staff to apply differing security policies to work and personal data and apps.

Also, app deployment will allow IT admins to specify which Google Play apps are available for installation through a user’s work profile, and distribution can easily be controlled by associating apps with particular individuals or groups. Policies can be defined both per app and per user.
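
That distribution model can be pictured as a simple mapping from groups to approved apps. The group and package names below are invented for illustration:

```python
# Sketch of group-based app distribution of the kind Android Work enables:
# IT admins map groups to approved apps, and a user's work profile only
# offers apps from the groups they belong to. All names are invented.

APPROVED_APPS = {
    "sales":       {"com.example.crm", "com.example.mail"},
    "engineering": {"com.example.mail", "com.example.vpn"},
}

def available_apps(user_groups):
    apps = set()
    for group in user_groups:
        apps |= APPROVED_APPS.get(group, set())  # union of group whitelists
    return sorted(apps)

print(available_apps(["sales"]))
# ['com.example.crm', 'com.example.mail']
```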

Overall it seems clear that Google is continuing to progress in the right direction with Android security and the decision to turn on certain features by default is the right one, given how a more security aware consumer base remains little more than an ideal for now.

The fact that the greatest risk an Android user can face is losing their device or having it stolen shows that people arguably remain the weakest link in the security chain and that technical controls are largely irrelevant when physical risks remain a key factor.


Self-repairing software tackles malware! Software, heal thyself!


What if networks and applications could automatically detect malware intrusions, repair any damage done and then slam the door on further infections of the same type? Seems like something out of Star Trek-level science fiction, but thanks to researchers at the University of Utah, this kind of self-healing software is coming to a Linux-based business or military server near you. Malware: be afraid. Be very afraid.

I See What You’re Doing There

The biggest problem with antivirus programs? They rely on lists: whitelists for legitimate code and blacklists for software that comes with a malicious payload. But since hackers make it their mission to create new and ever-more-hidden infections, virus detectors are always one step behind the bad guys. This puts companies in a tough spot. High-performance antivirus can bog down a network and even take servers offline, while opting for a “what may come” approach could see your system going down in flames.

Not so with A3, or Advanced Adaptive Applications, which isn’t bound by typical search-and-destroy rules. Along with defense contractor Raytheon BBN and an awkwardly-named DARPA program — Clean-Slate Design of Resilient, Adaptive, Secure Hosts — the University of Utah’s Eric Eide and his team came up with a way for A3 to detect, repair and shore up network defenses on any Linux-based virtual machine (VM).

Here’s how it works: A3 first uses a set of “stackable debuggers”, which all run in real time and search the VM for any strange activity. And unlike typical virus software, this security program isn’t looking for specific code but for any computer behavior that’s out of the ordinary. If malware is found, A3 stops whatever process has been started, approximates a fix for the damage and then adds the bug to its list of no-go code. And it really, really works: the team tested it against Shellshock for DARPA officials in Jacksonville, and A3 not only found but repaired the damage in just four minutes. Now that it’s past the testing phase, the future looks bright for this self-healing software, although there’s a caveat: it isn’t available for consumer use on desktops or smartphones. According to Eide, “we haven’t tried those experiments yet.”
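
The behaviour-based idea – flagging whatever deviates from a learned baseline rather than matching known-bad signatures – can be sketched roughly as follows. The event names are invented, and a real system like A3 is vastly more sophisticated:

```python
# Rough sketch of behaviour-based detection in the spirit of A3: learn a
# baseline of normal event sequences, then flag anything unseen, instead of
# matching known-bad signatures. Event names are invented for illustration.

def ngrams(events, n=2):
    return {tuple(events[i:i + n]) for i in range(len(events) - n + 1)}

def build_baseline(normal_runs, n=2):
    baseline = set()
    for run in normal_runs:
        baseline |= ngrams(run, n)     # collect every observed sequence
    return baseline

def anomalies(run, baseline, n=2):
    grams = [tuple(run[i:i + n]) for i in range(len(run) - n + 1)]
    return [g for g in grams if g not in baseline]

normal = [
    ["open", "read", "close"],
    ["open", "write", "close"],
]
baseline = build_baseline(normal)

suspicious = ["open", "read", "exec_shell", "connect_out"]
print(anomalies(suspicious, baseline))
# [('read', 'exec_shell'), ('exec_shell', 'connect_out')]
```

A run that matches the baseline produces no alerts; anything novel stands out immediately – which is exactly why this approach isn’t one step behind the attacker.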

Other Avenues

While A3 is the latest and greatest in the world of responsive malware detection, it’s not the first stab at this kind of thing. For example, HP launched a self-healing BIOS last year to combat malware that runs before an OS is loaded. If attackers are able to gain root access to a computer, it’s possible to alter the BIOS and force malicious code into the system; HP’s BIOSphere compares the to-be-run BIOS against an embedded image of the machine’s original BIOS — if they differ, the original is always loaded.
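
The core of that comparison can be sketched as a hash check against a protected golden image. The BIOS “images” below are just illustrative byte strings, not real firmware:

```python
# Sketch of the self-healing idea behind HP's BIOSphere: before boot,
# compare a hash of the BIOS about to run against a protected golden copy,
# and load the original if they differ. All data here is illustrative.
import hashlib

def fingerprint(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def boot_bios(current: bytes, golden: bytes) -> bytes:
    if fingerprint(current) != fingerprint(golden):
        return golden            # mismatch: heal by loading the golden copy
    return current               # unmodified: boot as-is

golden_image = b"ORIGINAL_BIOS_v1.0"
tampered_image = b"ORIGINAL_BIOS_v1.0+rootkit"

print(boot_bios(tampered_image, golden_image) == golden_image)  # True
```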

Retail giant Amazon is also on the self-healing bandwagon. The company just announced Amazon Aurora, a MySQL-compatible database engine paired with their Relational Database Service. According to the press release, Aurora is “fault-tolerant, transparently tolerating the loss of disks and Availability Zones, and self-healing, automatically monitoring and repairing bad blocks and disks.” This is the holy grail, and what A3 is also shooting for: repairs on the fly, without the need to shut down servers or repopulate data.

Turn the Beat Around

It’s worth mentioning, however, that A3 is open source. On the face of it, this is a good thing: other white-hat users can take Eide’s work and adapt it, perhaps for mobile devices, Windows servers or even the Internet of Things.

There’s also a dark side, however. Malicious actors are, by and large, interested in whatever kind of attack returns the biggest benefit for the smallest outlay of work. A few, however, are innovators, and it’s not hard to imagine the risk of a re-purposed A3 or similar self-healing technology: malware designed to scan for antivirus activities, shut them down and “repair” them, rendering them useless. In an already self-healing system this might lead to a stalemate, but as CIO Today points out, many companies can’t keep up with constantly morphing malware. Add self-healing (or destructing) to that list and things get interesting.

A3 and similar self-healing software efforts show real promise in the fight against malware, but don’t get complacent. Infection control and software repair are an all-hands-on-deck situation — there’s no silver bullet here.


Just looking? ISPs are watching you browse


You are being watched. Your IP address is visible to everyone with an internet connection.

What’s more, you’re paying for the privilege. Every time you’re online, your Internet service provider (ISP) is keeping tabs on what sites you visit and how long you stay to create a unique user profile. And while ISPs say they’re not selling this data to companies or handing it over to government agencies, the year-old Snowden revelations and the more recently discovered Verizon “perma-cookies” point to something altogether different. So what’s the real story?

Not So Bad?

As noted by Vuze, ISPs have some legitimate reasons to track your browsing habits. For example, the Copyright Alert System (CAS) allows them to detect infringement and protect rights holders, and in Australia ISPs may soon be required by law to store consumer surfing data for two years in an effort to combat cybercrime. Both ISPs and consumers have rallied against the idea, saying it’s invasive and costly, especially in terms of data storage.

In the United States, meanwhile, ISPs aren’t required to track IP and port connections, but many do and many hold on to that information for a period of time, possibly as much as a year or more. Still, this doesn’t sound so bad — basic IP data that’s relatively anonymous could be used to improve service delivery or justify infrastructure expansions.

But here’s the thing: ISPs could do more — much more — if they had a mind to. This includes discovering exactly what kind of content you view, what you write in emails and what you purchase online. Most don’t because of the potential backlash that comes with violating consumer privacy rights, but it’s still a good idea to consider the use of a proxy service or virtual private network (VPN) to make tracking you far harder. Even if you have nothing to hide, companies should have to ask rather than assume your cooperation in recording your online movements.

Cookie Monsters?

If that’s where all this ended — ISPs occasionally tracking you to pad their bottom line or broker deals with big retailers for better customer data — most users would probably chalk it all up as the price of being online. But the discovery of a “perma-cookie” used by Verizon, and a similar scheme in development at AT&T, has customers feeling ill.

It goes like this: over the last few years, Verizon has been dropping a string of 50 letters, numbers and special characters into all wireless traffic between users and websites. This string forms a consumer’s Unique Identifier Header (UIDH), which Robert McMillan of Wired calls a “short-term serial number that advertisers can use to identify you on the web.” These UIDH strings — and AT&T “tracking beacons” — persist for several days and unlike regular cookies, cannot be blocked or disabled by turning on private browsing or clearing your cookie cache. Verizon says it doesn’t use the identifiers to form customer profiles and gave consumers the chance to opt out. The caveat? You need to contact the company directly. Using a VPN will also block these cookies, as will encrypted proxy browsing, but it’s possible for proxies to be disabled by ISPs at will.
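
Why can’t private browsing or clearing cookies help? Because the tag is added by the carrier in transit, not stored on your device. A rough sketch (the `X-UIDH` header name matches reports; the token is made up) shows how trivially a gateway can rewrite a plaintext request – and why TLS or a VPN, which turns the stream into opaque ciphertext, leaves it nothing to rewrite:

```python
# Sketch of how a carrier-side gateway can inject a tracking header such as
# Verizon's UIDH into plaintext HTTP traffic. The header name follows press
# reports; the token value is invented. Encrypted (HTTPS/VPN) traffic is
# opaque bytes to the carrier, so there is nothing for it to rewrite.

def inject_uidh(raw_http_request: bytes, token: str) -> bytes:
    head, _, body = raw_http_request.partition(b"\r\n\r\n")
    head += b"\r\nX-UIDH: " + token.encode()  # carrier appends its identifier
    return head + b"\r\n\r\n" + body

request = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
tagged = inject_uidh(request, "OTgxNTk2NDk0ADJVquRu5NS5")

print(b"X-UIDH" in tagged)  # True – every plaintext request carries the tag
```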

Who Watches the Watchers?

There is some oversight of ISPs via the Federal Communications Commission, which compels Internet providers to disclose their network management practices under the Open Internet Transparency Rule. The problem? Perma-cookies aren’t illegal, meaning ISPs can keep doing what they’re doing so long as they report their activities to the FCC.

Some popular websites are taking matters into their own hands: as noted by Technology Review, Facebook recently launched a “dark” version of its social media site at facebookcorewwwi.onion. The new site is only accessible using Tor anonymity software and ensures that users won’t be tracked by governments or ISPs, ideal for users in countries that ban social media or government workers concerned about censorship. It’s worth mentioning, however, that Facebook will still be collecting some data about its dark-side visitors, albeit much less than average.

So yes, you’re being watched, and companies are getting bolder. If you prefer anonymity there are a number of choices: talk to providers directly and opt-out of the more onerous policies, browse only “dark” websites or invest in a private network to provide total coverage.


Laura Poitras and the Digital Exiles


This year, documentary filmmaker Laura Poitras released what may be her most controversial project yet: Citizenfour. The film focuses on NSA hero Edward Snowden and not just what he disclosed to journalists in a Hong Kong hotel room, but more importantly why. 

While many people know that Snowden is now a resident of Russia, it’s less common knowledge that Poitras moved as well — to Berlin, in order to protect the source material for her documentary.

There, she’s part of a growing community called the “digital exiles,” a group of expert journalists, software developers and even MI5 agents all fighting for our online freedom. All have fled to Germany because of its strict policies on digital privacy. The country’s secret service, BND, is not permitted to spy on citizens, and any attempts to curtail personal freedoms are met with staunch resistance.

Who’s Who?

So who are the digital exiles? It’s hard to say, since they don’t hold weekly meetings or make obvious declarations of their whereabouts on social media. They’ve come to Berlin for a reprieve from the scrutiny of their own governments — in Poitras’ case, she was already on the NSA’s watchlist after two films examining the U.S. war on terror, and was frequently pulled out of lines at airports or singled out on planes for extra questioning. Citizenfour simply ramped up the government’s interest in her activities.

According to Martin Kaul, social movements editor of Die Tageszeitung, “they are very high profile, the exiles, but I don’t think there are hundreds of them here, or even dozens.”

He points out, however, that hacker culture is strong in Germany, and many citizens were already concerned with their online freedom. Laura Poitras and the Digital Exiles have lent the movement against government surveillance a sense of international urgency.

Key Players

While digital exiles aren’t looking for fame and fortune, it is possible to track some of them down. In a recent piece for The Guardian, Carole Cadwalladr had the chance to interview not just Poitras but several others including Jacob “Jake” Appelbaum and Annie Machon. Appelbaum was partially responsible for the creation of the Tor network, which renders users anonymous, and also worked with WikiLeaks.

Machon, meanwhile, is a former spy with the British agency MI5. After the agency went public in 1989, it enjoyed a period of resounding public support; however, in 1997 Machon revealed secret—and illegal—wiretaps, files held on government ministers, and the illegal incarceration of citizens. She now lives part time in Berlin and offers assistance to other whistleblowers.

Pay Attention

The exile of Laura Poitras and the digital activists to Germany is a stark reminder to us that government surveillance is very, very real.

So how can we take measures to ensure our online privacy and hamper governments’ attempts at spying on us? Using a VPN to encrypt your connection is one part of the equation, but it’s only a start. Privacy is a multi-pronged approach, and for this reason, we highly recommend checking out Reset the Net’s privacy pack, which is filled with tools and information for making mass surveillance more difficult.

Ultimately, we admire Laura Poitras for her tenacity and bravery, and for the great personal risk she took to create Citizenfour. We are thankful that global audiences now have access, through the film, to Edward Snowden’s revelations and motivations, and that Poitras’ efforts are making people aware of how governments infringe on our right to privacy every day.


Bad apples? Malware bites Mac and iOS


Apple devices are immune to malware. That’s the prevailing wisdom, oft-repeated by those who own iPhones, iPads or Mac laptops as a way to offset the restrictive application policies enforced by the tech giant. These Apple lovers do have a point: the company’s Gatekeeper for Mac and “Trust” permissions for iOS allow devices to identify apps developed without a valid Apple Developer ID, and the vast majority (98 percent) of mobile malware targets Android-based devices.

But this doesn’t mean iPhones and Macbooks are entirely safe. In fact, a new malware family is now targeting Apple products specifically and could potentially cause some serious damage. Here’s the bottom line.

Watch Out for Lurkers

As noted in a recent Kaspersky Security Update, the newly discovered WireLurker malware is able to infect both iOS and Mac OS devices. The malware was first observed in a Chinese third-party application store called Maiyadi, says security firm Palo Alto Networks, and infected 467 OS X apps. According to Claud Xiao of Palo Alto, “in the past six months, these 467 infected applications were downloaded over 356,104 times and may have impacted hundreds of thousands of users.”

So how does it work? WireLurker starts by creating trojanized applications for sale in third-party app stores. When these are downloaded by jailbroken iPhones or Macs with Gatekeeper turned off, WireLurker looks for specific apps, creates copies, patches them with malicious code and then copies the infected apps back to the device. If you’re running a non-jailbroken phone, the best WireLurker can do is use a legitimate Enterprise Developer ID to install a non-malicious app, which Palo Alto says was a “test case.” Sound scary? It should, but if you’re not running a jailbroken phone or downloading apps from third-party stores and then overriding Apple’s Trust permissions, you’re probably safe.


Jekyll and Hide

Of course, it’s worth mentioning that in 2013, researchers from Georgia Tech found a way to get malicious payloads onto Apple devices by using a string of benign-seeming code. According to eWeek, these “Jekyll apps” could easily make it past Apple’s vetting process but later be “turned evil” and behave much like malware. The team also discovered a way to install malicious apps using a real developer ID and a fake USB charger; admittedly more difficult and low-tech, but still worrisome.

The biggest problem here? That despite iOS and Mac security measures, it’s still possible to design code that slips through the cracks and then causes real problems. While widespread attacks aren’t likely using either of these methods, Jekyll apps and similar exploits could pose a problem for high-profile targets such as government officials or Internet activists.

Nice Masque

Beyond morphing apps and third-party dangers, there’s another issue: the Masque Attack. Identified by security firm FireEye and short for “masquerading”, Masque attacks are a more sophisticated form of WireLurker that rely on Apple’s enterprise and ad-hoc provisioning system. It goes like this: Apple is fine with developers and enterprises distributing apps outside the App Store ecosystems using what’s known as a “provisioning profile”. This profile allows users to download applications directly from a link without using any kind of app store interface. While this method isn’t widespread, it’s a great way for enterprises and startups to develop or test their own applications in-house.

But there’s a loophole. It’s possible for infected applications to masquerade as, and then overwrite, legitimate apps on user devices, so long as the “bundle identifiers” are the same. Apple doesn’t require matching certificates for similarly-bundled apps, instead allowing them to be overwritten at will. This means an industrious attacker could potentially gain access to a corporate network and then push “new” versions of installed apps to all employee phones — with the right developer ID, users could be fooled into updating their applications and exposing themselves to malware. Worst case? A masquerading app that installs an infected version of a legitimate app, grabs the data stored in it and sends a stream of data to an unknown server.
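The loophole can be modelled in a few lines. The sketch below is purely hypothetical — the `App` and `Device` classes are invented stand-ins, not Apple’s actual installer code — but it shows the flaw FireEye described: if installs are keyed on bundle identifier alone and the signing identity is never compared, an impostor can silently replace the genuine app.

```python
# Hypothetical model of the Masque Attack loophole: the installer keys
# apps on bundle identifier alone and never compares signing identities.
from dataclasses import dataclass


@dataclass
class App:
    bundle_id: str   # e.g. "com.example.mail"
    signer: str      # certificate that signed the app
    payload: str     # stands in for the app binary


class Device:
    def __init__(self):
        self.apps = {}  # bundle_id -> App

    def install(self, app: App):
        # The flaw being modelled: an existing app with the same bundle_id
        # is overwritten even though the new app is signed by someone else.
        self.apps[app.bundle_id] = app


phone = Device()
phone.install(App("com.example.mail", signer="Example Inc.", payload="genuine"))
phone.install(App("com.example.mail", signer="RogueEnterpriseCert", payload="malicious"))

print(phone.apps["com.example.mail"].payload)  # → malicious
```

A certificate-aware installer would reject the second install because the signer changed; the model’s `install` accepts it, which is exactly the behaviour the Masque Attack exploits.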

Expiry Date?

WireLurker and Masque attacks have put the fear in some Apple users, but this fruit isn’t bad yet — your risk is minimal unless you like jailbreaking phones or surfing Chinese app stores. Still, it’s a sobering reminder that malware creators never rest, and that even Apple’s walled garden isn’t impenetrable. Do yourself a favor: surf smart with a secure VPN — no sense letting lurkers know what apps you’re into — and just like email attachments, don’t trust apps you don’t know. Apples from strangers are never a healthy choice.


Hackers make unsweet music with redirection


On 27 October, security researchers at Symantec discovered that a music website was redirecting visitors to the Rig exploit kit via an injected iframe.

Visitors to the popular music news and reviews website who were redirected in this way were subsequently infected with a range of malware.
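Injected iframes of this kind are typically sized to be invisible so the page looks untouched while the browser silently fetches the exploit kit’s landing page. As a rough illustration of how such an injection can be spotted — a simplified sketch, not a production scanner, and the page and URL below are invented — an HTML parser can flag iframes with a 0- or 1-pixel dimension:

```python
# Illustrative scanner: flag iframes sized to be invisible, a classic
# tell-tale of an injected redirect to an exploit kit landing page.
from html.parser import HTMLParser


class IframeScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []  # src attributes of suspect iframes

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = dict(attrs)
        # A 0x0 or 1x1 iframe renders as nothing the visitor can see.
        if a.get("width") in ("0", "1") or a.get("height") in ("0", "1"):
            self.suspicious.append(a.get("src"))


page = ('<html><body><p>Album reviews</p>'
        '<iframe src="http://attacker.example/landing" width="1" height="1">'
        '</iframe></body></html>')

scanner = IframeScanner()
scanner.feed(page)
print(scanner.suspicious)  # → ['http://attacker.example/landing']
```

Real-world injections also hide iframes via CSS (`display: none`) or script, so genuine scanners inspect the rendered DOM rather than raw markup.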

In a blog post, Symantec researcher Ankit Singh said that the Rig exploit kit took advantage of two Microsoft Internet Explorer use-after-free remote code execution (RCE) vulnerabilities (CVE-2013-2551 and CVE-2014-0322), an Adobe Flash Player RCE vulnerability (CVE-2014-0497), a Microsoft Silverlight double dereference RCE vulnerability (CVE-2013-0074), an Oracle Java SE memory corruption vulnerability (CVE-2013-2465), an Oracle Java SE remote Java runtime environment code execution vulnerability (CVE-2012-0507), and a Microsoft Internet Explorer information disclosure vulnerability (CVE-2013-7331).

Upon the successful exploitation of any of those vulnerabilities, an XOR-encrypted payload would be downloaded onto the victim’s computer. The exploit kit would then drop a variety of nasties including downloaders and information stealers such as Infostealer.Dyranges and the notorious Zeus banking Trojan.
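The XOR layer on such payloads is lightweight obfuscation rather than real cryptography — its job is to slip past naive signature scanning, not to resist analysis. A minimal sketch (the key and payload bytes here are invented, not Rig’s actual values) shows why: XOR with a repeating key is its own inverse, so once the key is known the payload decodes in one pass:

```python
# Minimal sketch of repeating-key XOR obfuscation, the kind of thin
# encoding exploit kits apply to payloads in transit. Key and data are
# invented for illustration.
from itertools import cycle


def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key; applying it twice restores the input."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))


key = b"\x5a"                 # invented single-byte key
plain = b"MZ\x90\x00"         # e.g. the first bytes of a Windows executable
obfuscated = xor_bytes(plain, key)   # what travels over the wire
recovered = xor_bytes(obfuscated, key)

assert recovered == plain
```

Because the transform is symmetric, analysts who spot the key — often hard-coded in the dropper — can recover the executable just as easily as the malware does.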

Previous research by Symantec revealed how the Rig exploit kit can also drop Trojan.Pandex, Trojan.Zeroaccess, Downloader.Ponik, W32.Waledac.D and ransomware Trojan.Ransomlock.

While the site is no longer compromised, the attack may have affected a great number of visitors, as the site is ranked amongst the top 7,000 most visited on the web, according to Alexa. With an Alexa ranking of around 2,800 in the US, visitors from that region may have been especially at risk, particularly as Symantec said it was unaware how long the site had been compromised prior to its discovery.

Talking to SC Magazine, Singh said the injected iframe redirected visitors to a highly obfuscated landing page for the Rig exploit kit, but he was unaware how the website was initially compromised.

He went on to say that when the user arrived at the landing page the exploit kit would first look to bypass any security software on their computer before searching for particular plugins which it could then exploit.

Singh added that “Infostealer.Dyranges checks the URL in the web browser for online banking services and intercepts traffic between the user and these sites; it may then steal user names and passwords entered into these sites’ login forms and send them to remote locations. Trojan.Zbot will gather a variety of information about the compromised computer, as well as user names and passwords, which it sends back to the [command-and-control] server. It also opens a backdoor through which attackers can perform various actions.”

Singh concluded that the way in which the exploit kit ran was such that a typical computer user would not be aware of its presence on their system.

According to Symantec, its security products already protect its users against such an attack and the same should be true for all other reputable brands of security software. We would, however, advise all users to ensure that their security software is kept fully up to date in order to protect them from the newest threats.


BlackEnergy malware plug-ins run rampant


Kaspersky Lab’s Global Research & Analysis Team last week published an interesting report detailing the crimeware-turned-cyber-espionage tool BlackEnergy.

First identified several years ago, BlackEnergy’s original purpose was the launching of DDoS attacks via its custom plug-ins. Over time, BlackEnergy2 and BlackEnergy3 evolved and were eventually spotted downloading additional custom plug-ins which were used for spam runs and harvesting online banking information, according to Kaspersky researchers Kurt Baumgartner and Maria Garnaeva. Lately, the malware has been adopted by the Sandworm Team, a group linked to cyber espionage including the targeting of industrial SCADA systems.

The Kaspersky report detailed two unnamed BlackEnergy victims which were attacked during the summer of 2014:

The first was spear-phished with an email containing a WinRAR exploit. The hidden executable file then dropped various BlackEnergy plug-ins.

The second victim was hacked using the first victim’s stolen VPN credentials, leading to the destruction of some business data. Whoever attacked the second victim was not best pleased with Kaspersky either, leaving the following message in a tcl script: “Fuck U, kaspeRsky!! U never get a fresh Black En3rgy.”

The ease with which the company’s Cisco routers, all of which were running different IOS versions, were compromised was also welcomed by the attackers, with the script’s author adding: “Thanks C1sco ltd for built-in backd00rs & 0-days.”

A recent blog posting from iSIGHT Partners details a Windows zero-day vulnerability (CVE-2014-4114) which affected all versions of Microsoft Windows as well as Windows Server 2008 and 2012. That vulnerability, the company said, facilitated a BlackEnergy-powered cyber espionage campaign that targeted NATO, Ukrainian government organisations, Western European governments, the energy sector in Poland, European telecoms companies and academic institutions within the US. iSIGHT attributed that campaign to Russia.

And, according to the US Department of Homeland Security, BlackEnergy has been hiding in key US computers since 2011 and is set to wreak havoc with critical infrastructure. ABC News says US national security sources have claimed to be in possession of evidence which also points a sturdy finger of blame in the direction of Russia, suggesting that the Sandworm Team may in fact be state-sponsored.

Kaspersky being a Russian company, it is perhaps unsurprising that its researchers stopped short of identifying mother Russia as the perpetrator behind the various BlackEnergy attacks. To be fair, though, they did discover that one of “the DDoS commands meant for these routers” targeted an IP address which, they say, “belongs to the Russian Ministry of Defense.” Another IP address identified by Baumgartner and Garnaeva belongs to the Turkish Ministry of Interior’s government site. These two discoveries, they say, make it unclear who is behind the attacks.

Baumgartner and Garnaeva’s research also reveals how the proliferation of plug-ins for BlackEnergy has given the tool a wide range of capabilities. These include a DDoS tool specifically made for ARM/MIPS systems, the ability to wipe drives or render them unbootable, and a variety of port-scanning and certificate-stealing plug-ins, as well as a backup communication channel in the form of Google Plus accounts that could be used to download obfuscated command-and-control data from an encrypted PNG image file. The researchers said the ‘grc’ plugin used in this instance was designed to retrieve a new command-and-control address, but they did not observe one being used.
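Hiding C2 data inside an image works because image viewers and most content filters ignore anything that doesn’t belong to the picture. As a hedged illustration of the general technique — this is not BlackEnergy’s actual file format or key, just a common pattern where an obfuscated blob is appended after the PNG’s closing IEND chunk — the idea can be sketched like this:

```python
# Illustrative sketch of image-based C2 smuggling: an XOR-obfuscated
# address is appended after a PNG's IEND chunk, where viewers ignore it.
# The stub image, key and C2 address are all invented for this example.
from itertools import cycle

PNG_STUB = b"\x89PNG\r\n\x1a\n" + b"...image data..." + b"IEND\xaeB`\x82"
KEY = b"k3y"  # invented obfuscation key


def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))


def embed(png: bytes, c2: str) -> bytes:
    """Append the obfuscated C2 address after the image's final chunk."""
    return png + xor(c2.encode(), KEY)


def extract(blob: bytes) -> str:
    """Everything after the IEND chunk name and its 4-byte CRC is payload."""
    tail = blob[blob.rindex(b"IEND") + 8:]
    return xor(tail, KEY).decode()


carrier = embed(PNG_STUB, "c2.example.net")
print(extract(carrier))  # → c2.example.net
```

The carrier still opens as an image, which is what makes a channel like this hard to spot: to network monitoring it is just a PNG fetched from an ordinary social-media account.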

Another curio mentioned in the Kaspersky report was that some plug-ins were designed to gather hardware information on infected systems, including motherboard data, processor information and the BIOS version in use. Other plug-ins were gathering information about attached USB devices, leading the researchers to conclude that further, as yet unidentified plug-ins may be deployed to inflict additional damage, based upon the information communicated back to the command-and-control centre.