USB Type-C: Simpler, faster and more powerful

posted Oct 6, 2014, 3:22 PM by Avesta Dayeny   [ updated Oct 6, 2014, 3:23 PM ]

The next generation of USB cables will be easier to use and able to push more data faster.

USB has become ubiquitous as the way to connect our mobile devices to power sources and to other devices. There are currently seven different types of USB connectors in use: USB 2.0 A, B, mini B and micro B; and USB 3.0 A, B and micro B. There's about to be one more: the USB Type-C.

In fact, the upcoming Type-C plug just might end up being the one plug to rule them all: A single USB connector that links everything from a PC's keyboard and mouse to external storage devices and displays.

"The Type-C plug is a big step forward," says Jeff Ravencraft, chairman of the USB Implementers Forum (USB-IF), the organization that oversees the USB standard. "It might be confusing at first during the transition, but the Type-C plug could greatly simplify things over time by consolidating and replacing the larger USB connectors."

The Type-C connector made its debut this month at the Intel Developer Forum in Shenzhen, China. It looks a lot like the current flat oval-shaped Micro USB plug, although at 8.3mm x 2.5mm, it is wider and thicker than the current Micro USB connector (which is 6.8mm x 1.8mm).

The new connector has a very specific difference from its predecessors, though: Like Apple's Lightning plug, the Type-C connector is vertically symmetrical with contacts on both sides.

As a result, unlike today's USB plugs, there's no up or down orientation required when inserting it; the connector works just as well either way. This can put an end to the awkward trial and error process of fumbling with a USB plug, trying to figure out the right way to plug it in. When it's correctly seated, it audibly clicks as a confirmation.

It's easy to see the difference between the older USB port (right) and the newer symmetrical Type-C port (left).

The Type-C plug arrives at an opportune time because the SuperSpeed USB 3.1 10Gbps spec, introduced last year, is gaining traction in the industry; new controller chips for devices, hosts and hubs that use the new standard are expected in the coming months. Called SuperSpeed+ for short, the new spec is backward compatible with the older USB specs and, with the right equipment on both ends, will be able to move up to 10Gbps of data back and forth. That puts it on a par with the Thunderbolt technology used by Apple, and represents a big step up from USB 2.0's peak of 480Mbps and the 4.8Gbps limit of the current first-generation USB 3.0 spec.

This doesn't only mean faster data backup and retrieval from, say, an external hard drive; it also potentially opens up USB 3.1 to a variety of new uses. For instance, it has roughly the same peak bandwidth as an HDMI 1.4 connection and is potentially capable of carrying a 4096 x 2304 video stream at 30 fps.
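That 4K figure is easy to sanity-check with a back-of-envelope calculation, assuming uncompressed 24-bit color (an assumption of ours; the article doesn't specify a pixel format):

```shell
# 4096 x 2304 pixels, 24 bits per pixel (assumed), 30 frames per second
echo $((4096 * 2304 * 24 * 30))  # prints 6794772480, i.e. ~6.8 Gbps
```

At roughly 6.8Gbps, such a stream fits under USB 3.1's 10Gbps ceiling with room to spare, but would exceed USB 3.0's limit.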

How USB deals with power has been updated as well. Currently, a typical micro USB plug can dole out roughly enough power to charge a phone or tablet. The new Type-C connectors implement the USB Power Delivery spec that was ratified in 2012. As a result, a Type-C plug can work with devices that require 5, 12 or 20 volts, and it tops out at delivering 100 watts of power.

This allows it not only to charge phones and slates; the extra power available can also be used to run hubs and displays -- it could even handle a 4K display or several monitors in an array. "USB is the only spec that can deliver power plus video over a single cable," says Ravencraft. "It's revolutionary, not evolutionary."

The changeover to the Type-C plug won't happen overnight. The plug's approval by the USB-IF is still pending, but will likely be ratified sometime during the summer. After that, if the past is any indication, there will be a six-to-nine-month period during which manufacturers of notebooks, tablets, phones and peripherals will evaluate the new spec and start to design it into their next-generation products.

Ravencraft says that the new plug already has momentum with manufacturers. "There's a lot of excitement in the industry about the Type-C connector, and they're pushing to get new products to market quickly," he says. He adds that there may be demos of both USB 3.1 and the Type-C plug at the 2015 Consumer Electronics Show in Las Vegas this coming January.

After that, it'll likely take a couple of years for equipment with the Type-C connector to start displacing the current USB plugs and cables. In fact, Ravencraft thinks that there will be adapters and dongles that will allow the new technology to coexist with the older USB cables and gear, although you might not get the full advantage of the speed and power upgrades.

Eventually, though, most computers, phones, tablets, hard drives and hubs will have the new plug. At that point, the Type-C connector will have arrived.

How to Protect your Server Against the Shellshock Bash Vulnerability

posted Oct 6, 2014, 3:18 PM by Avesta Dayeny


On September 24, 2014, a GNU Bash vulnerability, referred to as Shellshock or the "Bash Bug", was disclosed. In short, the vulnerability allows remote attackers to execute arbitrary code under certain conditions by passing strings of code following environment variable assignments. Because of Bash's ubiquitous status amongst Linux, BSD, and Mac OS X distributions, many computers are vulnerable to Shellshock; all unpatched Bash versions from 1.14 through 4.3 (i.e. all releases until now) are at risk.
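The mechanics are simple enough to sketch locally. Bash imports shell functions from environment variables whose value begins with "() {", and vulnerable versions mistakenly execute any code that follows the function body (the variable name greet below is arbitrary, chosen for illustration):

```shell
# An environment variable whose value starts with "() {" is treated by Bash
# as an exported function definition. Vulnerable versions also execute the
# trailing code ("echo INJECTED") when the new shell starts up.
env 'greet=() { echo hello; }; echo INJECTED' bash -c 'echo child shell started'
# A patched Bash prints only "child shell started"; a vulnerable one
# prints "INJECTED" first.
```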

The Shellshock vulnerability can be exploited on systems that are running services or applications that allow unauthorized remote users to assign Bash environment variables. Examples of exploitable systems include the following:
Apache HTTP servers that use CGI scripts (via mod_cgi and mod_cgid) that are written in Bash or launch Bash subshells
Certain DHCP clients
OpenSSH servers that use the ForceCommand capability
Various network-exposed services that use Bash
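CGI scripts are the classic vector because the web server copies request headers such as User-Agent verbatim into environment variables (e.g. HTTP_USER_AGENT) before launching the script. The step can be simulated locally as a sketch; on a vulnerable system the injected command would actually run:

```shell
# Simulate what a CGI web server does with an attacker-controlled header:
# the header value lands verbatim in an environment variable, and Bash is
# then started to run the script.
HTTP_USER_AGENT='() { :;}; echo INJECTED' bash -c 'echo normal CGI output'
# On a patched Bash this prints only "normal CGI output"; a vulnerable
# Bash would print "INJECTED" as well.
```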

A detailed description of the bug can be found at CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, and CVE-2014-7187.

Because the Shellshock vulnerability is very widespread--even more so than the OpenSSL Heartbleed bug--and particularly easy to exploit, it is highly recommended that affected systems are properly updated to fix or mitigate the vulnerability as soon as possible. We will show you how to test if your machines are vulnerable and, if they are, how to update Bash to remove the vulnerability.

Check System Vulnerability

On each of your systems that run Bash, you may check for Shellshock vulnerability by running the following command at the bash prompt:

env 'VAR=() { :;}; echo Bash is vulnerable!' 'FUNCTION()=() { :;}; echo Bash is vulnerable!' bash -c "echo Bash Test"

The echo Bash is vulnerable! portion of the command represents where a remote attacker could inject malicious code: arbitrary code following a function definition within an environment variable assignment. Therefore, if you see the following output, your version of Bash is vulnerable and should be updated:

Bash is vulnerable!
Bash Test

If your output does not include the simulated attacker's payload, i.e. "Bash is vulnerable" is not printed as output, you are protected against at least the first vulnerability (CVE-2014-6271), but you may be vulnerable to the other CVEs that were discovered later. If there are any bash warnings or errors in the output, you should update Bash to its latest version; this process is described in the next section.

If the only thing that is output from the test command is the following, your Bash is safe from Shellshock:

Bash Test
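If you have many machines to check, the same test can be wrapped in a small script (the name check_shellshock.sh is our own; note that this covers only the first CVE, CVE-2014-6271, not the later ones):

```shell
#!/bin/sh
# check_shellshock.sh (hypothetical helper): wraps the env-based test above
# and prints a one-line verdict for CVE-2014-6271 only.
out=$(env 'VAR=() { :;}; echo vulnerable' bash -c 'echo test' 2>/dev/null)
case "$out" in
  *vulnerable*) echo "VULNERABLE to CVE-2014-6271 -- update Bash now" ;;
  *)            echo "not vulnerable to CVE-2014-6271" ;;
esac
```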

Test Remote Sites

If you simply want to test whether websites or specific CGI scripts are vulnerable, use this link: 'ShellShock' Bash Vulnerability CVE-2014-6271 Test Tool.

Simply enter the URL of the website or CGI script you want to test in the appropriate form and submit.

Fix Vulnerability: Update Bash

The easiest way to fix the vulnerability is to use your default package manager to update the version of Bash. The following subsections cover updating Bash on various Linux distributions, including Ubuntu, Debian, CentOS, Red Hat, and Fedora.
APT-GET: Ubuntu / Debian

For currently supported versions of Ubuntu or Debian, update Bash to the latest version available via apt-get:

sudo apt-get update && sudo apt-get install --only-upgrade bash

Now check your system vulnerability again by running the command in the previous section (Check System Vulnerability).
End of Life Ubuntu / Debian Releases

If you are running a release of Ubuntu / Debian that is considered end of life status, you will have to upgrade to a supported release to use the package manager to update Bash. The following command can be used to upgrade to a new release (it is recommended that you back up your server and important data first, in case you run into any issues):

sudo do-release-upgrade

After the upgrade is complete, ensure that you update Bash.
YUM: CentOS / Red Hat / Fedora

Update Bash to the latest version available via yum:

sudo yum update bash

Now check your system vulnerability again by running the command in the previous section (Check System Vulnerability).
End of Life CentOS / Red Hat / Fedora Releases

If you are running a release of CentOS / Red Hat / Fedora that is considered end of life status, you will have to upgrade to a supported release to use the package manager to update Bash. The following command can be used to upgrade to a new release (it is recommended that you back up your server and important data first, in case you run into any issues):

sudo yum update

After the upgrade is complete, ensure that you update Bash.


Be sure to update all of your affected servers to the latest version of Bash! Also, be sure to keep your servers up to date with the latest security updates!

USB Flash Drives Could Be Your Biggest Security Risk

posted Oct 6, 2014, 12:35 PM by Avesta Dayeny   [ updated Oct 6, 2014, 12:38 PM ]

If you haven't turned off USB autoplay on your PC, it's conceivable that plugging in an infected USB drive could install malware on your system. The engineers whose uranium-purifying centrifuges were blown up by Stuxnet learned that the hard way. It turns out, though, that autoplay malware isn't the only way USB devices can be weaponized. At the Black Hat 2014 conference, two researchers from Berlin-based SRLabs revealed a technique for modifying a USB device's controller chip so it can "spoof various other device types in order to take control of a computer, exfiltrate data, or spy on the user." That sounds kind of bad, but in fact it's really, really dreadful.

Turn to the Dark Side

"We're a hacking lab typically focused on embedded security," said researcher Karsten Noll, speaking to a packed room. "This is the first time we looked at computer security, with an embedded angle. How could USB be repurposed in malicious ways?"

Researcher Jakob Lell jumped right into a demo. He plugged a USB drive into a Windows computer; it showed up as a drive, just as you'd expect. But a short while later, it redefined itself as a USB keyboard and issued a command that downloaded a remote access Trojan. That drew applause!

"We won't be talking about viruses in USB storage," said Noll. "Our technique works with an empty disk. You can even reformat it. This is not a Windows vulnerability that could be patched. We're focused on deployment, not on the Trojan."

Controlling the Controller
"USB is very popular," said Noll. "Most (if not all) USB devices have a controller chip. You never interact with the chip, nor does the OS see it. But this controller is what 'talks USB.'"

The USB chip identifies its device type to the computer, and it can repeat this process at any time. Noll pointed out that there are valid reasons for one device to present itself as more than one, such as a webcam that has one driver for video and another for the attached microphone. And truly identifying USB drives is tough, because a serial number is optional and has no fixed format.

Lell walked through the precise steps taken by the team to reprogram the firmware on a specific type of USB controller. Briefly, they had to snoop the firmware update process, reverse engineer the firmware, and then create a modified version of the firmware containing their malicious code. "We did not break everything about USB," noted Noll. "We reverse-engineered two very popular controller chips. The first took maybe two months, the second one month."


For the second demo, Lell inserted a brand-new blank USB drive into the infected PC from the first demo. The infected PC reprogrammed the blank USB drive's firmware, thereby replicating itself. Oh dear.

He next plugged the just-infected drive into a Linux notebook, where it visibly issued keyboard commands to load malicious code. Once again, the demo drew applause from the audience.

Stealing Passwords

"That was a second example where one USB echoes another device type," said Noll, "but this is just the tip of the iceberg. For our next demo, we reprogrammed a USB 3 drive to be a device type that's harder to detect. Watch closely, it's almost impossible to see."

Indeed, I couldn't detect the flickering of the network icon, but after the USB drive was plugged in, a new network showed up. Noll explained that the drive was now emulating an Ethernet connection, redirecting the computer's DNS lookup. Specifically, if the user visits the PayPal website, they'll be invisibly redirected to a password stealing site. Alas, the demo demons claimed this one; it didn't work.

Trust in USB

"Let's discuss for a moment the trust we place in USB," said Noll. "It's popular because it's easy to use. Exchanging files via USB is better than using unencrypted email or cloud storage. USB has conquered the world. We know how to virus-scan a USB drive. We trust a USB keyboard even more. This research breaks down that trust."

"It's not just the situation where somebody gives you a USB," he continued. "Just attaching the device to your computer could infect it. For one last demo, we'll use the easiest USB attacker, an Android phone."

"Let's just attach this standard Android phone to the computer," said Lell, "and see what happens. Oh, suddenly there is an additional network device. Let's go to PayPal and log in. There's no error message, nothing. But we captured the username and password!" This time, the applause was thunderous.

"Will you detect that the Android phone turned into an Ethernet device?" asked Noll. "Does your device control or data loss prevention software detect it? In our experience, most do not. And most focus only on USB storage, not on other device types."

The Return of the Boot Sector Infector

"The BIOS does a different type of USB enumeration than the operating system," said Noll. "We can take advantage of that with a device that emulates two drives and a keyboard. The operating system will only ever see one drive. The second only appears to the BIOS, which will boot from it if configured to do so. If it's not, we can send whatever keystroke, maybe F12, to enable booting from the device."

Noll pointed out that the rootkit code loads before the operating system, and that it can infect other USB drives. "It's the perfect deployment for a virus," he said. "It's already running on the computer before any antivirus can load. It's the return of the boot sector virus."

What Can Be Done?

Noll pointed out that it would be extremely difficult to remove a virus residing in USB firmware. Even if you get it out of the USB flash drive, it could reinfect from your USB keyboard. Even the USB devices built into your PC could be infected.

"Unfortunately, there is no simple solution. Almost all our ideas for protection would interfere with the usefulness of USB," said Noll. "Could you whitelist trusted USB devices? Well, you could if USB devices were uniquely identifiable, but they're not."

"You could block USB altogether, but that impacts usability," he continued. "You could block critical device types, but even very basic classes can be abused. Remove those and there's not much left. How about scanning for malware? Unfortunately, in order to read the firmware you must rely on functions of the firmware itself, so a malicious firmware could spoof a legitimate one."

"In other situations, vendors block malicious firmware updates using digital signatures," said Noll. "But secure cryptography is tough to implement on small controllers. In any case, billions of existing devices remain vulnerable."

"The one workable idea we came up with was to disable firmware updates at the factory," said Noll. "The very last step, you make it so the firmware can't be reprogrammed. You could even fix it in software. Burn one new firmware upgrade that blocks all further updates. We could conquer back a little of the sphere of trusted USB devices."

Noll wrapped up by pointing out some positive uses for the controller-modification technique described here. "There's a case to be made for people playing around with this," he said, "but not in trusted environments." I, for one, will never look at any USB device the way I used to.

The Heartbleed Bug

posted Apr 9, 2014, 3:17 PM by Avesta Dayeny   [ updated Apr 9, 2014, 3:27 PM ]

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.

What leaks in practice?
We have tested some of our own services from an attacker's perspective. We attacked ourselves from outside, without leaving a trace. Without using any privileged information or credentials, we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication.

How to stop the leak?
As long as the vulnerable version of OpenSSL is in use, it can be abused. Fixed OpenSSL has been released, and now it has to be deployed. Operating system vendors and distributions, appliance vendors and independent software vendors have to adopt the fix and notify their users. Service providers and users have to install the fix as it becomes available for the operating systems, networked appliances and software they use.


What is the CVE-2014-0160?
CVE-2014-0160 is the official reference to this bug. CVE (Common Vulnerabilities and Exposures) is the standard for information security vulnerability names maintained by MITRE. Due to coincident discovery, a duplicate CVE, CVE-2014-0346, was also assigned to us; it should not be used, since others independently went public with the CVE-2014-0160 identifier.

Why is it called the Heartbleed Bug?
The bug is in OpenSSL's implementation of the TLS/DTLS (transport layer security protocols) heartbeat extension (RFC 6520). When it is exploited, it leads to the leak of memory contents from the server to the client and from the client to the server.

What makes the Heartbleed Bug unique?
Bugs in individual software packages or libraries come and go and are fixed by new versions. However, this bug has left a large amount of private keys and other secrets exposed to the Internet. Considering the long exposure, the ease of exploitation and the fact that attacks leave no trace, this exposure should be taken seriously.

Is this a design flaw in SSL/TLS protocol specification?
No. This is an implementation problem, i.e. a programming mistake in the popular OpenSSL library that provides cryptographic services such as SSL/TLS to applications and services.

What is being leaked?
Encryption is used to protect secrets that may harm your privacy or security if they leak. In order to coordinate recovery from this bug, we have classified the compromised secrets into four categories: 1) primary key material, 2) secondary key material, 3) protected content and 4) collateral.

What is leaked primary key material and how to recover?
These are the crown jewels: the encryption keys themselves. Leaked secret keys allow the attacker to decrypt any past and future traffic to the protected services and to impersonate the service at will. Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed. Recovery from this leak requires patching the vulnerability, revoking the compromised keys and reissuing and redistributing new keys. Even doing all this will still leave any traffic intercepted by the attacker in the past vulnerable to decryption. All this has to be done by the owners of the services.

What is leaked secondary key material and how to recover?
These are, for example, the user credentials (user names and passwords) used in the vulnerable services. Recovery from this leak requires the owners of the service first to restore trust to the service according to the steps described above. After this, users can start changing their passwords and possible encryption keys according to the instructions from the owners of the services that have been compromised. All session keys and session cookies should be invalidated and considered compromised.

What is leaked protected content and how to recover?
This is the actual content handled by the vulnerable services. It may be personal or financial details, private communication such as emails or instant messages, documents or anything seen worth protecting by encryption. Only the owners of the services will be able to estimate the likelihood of what has been leaked, and they should notify their users accordingly. The most important thing is to restore trust to the primary and secondary key material as described above. Only this enables safe use of the compromised services in the future.

What is leaked collateral and how to recover?
Leaked collateral comprises other details that have been exposed to the attacker in the leaked memory content. These may include technical details such as memory addresses and security measures such as canaries used to protect against overflow attacks. These have only temporary value and will lose their value to the attacker when OpenSSL has been upgraded to a fixed version.

Recovery sounds laborious, is there a short cut?
After seeing what we saw by "attacking" ourselves, with ease, we decided to take this very seriously. We have gone laboriously through patching our own critical services and are in the process of dealing with the possible compromise of our primary and secondary key material. All this just in case we were not the first ones to discover this and it has already been exploited in the wild.

How does revocation and reissuing of certificates work in practice?
If you are a service provider, you have signed your certificates with a Certificate Authority (CA). You need to check with your CA how compromised keys can be revoked and new certificates reissued for the new keys. Some CAs do this for free; some may charge a fee.

Am I affected by the bug?
You are likely to be affected either directly or indirectly. OpenSSL is the most popular open source cryptographic library and TLS (transport layer security) implementation used to encrypt traffic on the Internet. Your popular social site, your company's site, commerce site, hobby site, the site you install software from or even sites run by your government might be using vulnerable OpenSSL. Many online services use TLS both to identify themselves to you and to protect your privacy and transactions. You might have networked appliances with logins secured by this buggy implementation of TLS. Furthermore, you might have client side software on your computer that could expose data from your computer if you connect to compromised services.

How widespread is this?
The most notable software using OpenSSL includes the open source web servers Apache and nginx. The combined market share of just those two out of the active sites on the Internet was over 66% according to Netcraft's April 2014 Web Server Survey. Furthermore, OpenSSL is used to protect, for example, email servers (SMTP, POP and IMAP protocols), chat servers (XMPP protocol), virtual private networks (SSL VPNs), network appliances and a wide variety of client side software. Fortunately, many large consumer sites are saved by their conservative choice of SSL/TLS termination equipment and software. Ironically, smaller and more progressive services, or those who have upgraded to the latest and best encryption, will be affected most. Furthermore, OpenSSL is very popular in client software and somewhat popular in networked appliances, which have the most inertia in getting updates.

What versions of the OpenSSL are affected?
Status of different versions:
OpenSSL 1.0.1 through 1.0.1f (inclusive) are vulnerable
OpenSSL 1.0.1g is NOT vulnerable
OpenSSL 1.0.0 branch is NOT vulnerable
OpenSSL 0.9.8 branch is NOT vulnerable

The bug was introduced to OpenSSL in December 2011 and has been out in the wild since the OpenSSL 1.0.1 release on 14th of March 2012. OpenSSL 1.0.1g, released on 7th of April 2014, fixes the bug.
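A quick way to see which branch a machine is on is to ask the installed binary for its version (with the caveat that the command-line tool and the library an application actually links against can differ):

```shell
# Print the installed OpenSSL version string.
# Vulnerable if it reports 1.0.1 through 1.0.1f;
# 1.0.1g and the 1.0.0 / 0.9.8 branches are not vulnerable.
openssl version
```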

How common are the vulnerable OpenSSL versions?
The vulnerable versions have been out there for over two years now and have been rapidly adopted by modern operating systems. A major contributing factor has been that TLS versions 1.1 and 1.2 became available with the first vulnerable OpenSSL version (1.0.1), and the security community has been pushing TLS 1.2 due to earlier attacks against TLS (such as BEAST).

How about operating systems?
Some operating system distributions that have shipped with potentially vulnerable OpenSSL version:
Debian Wheezy (stable), OpenSSL 1.0.1e-2+deb7u4
Ubuntu 12.04.4 LTS, OpenSSL 1.0.1-4ubuntu5.11
CentOS 6.5, OpenSSL 1.0.1e-15
Fedora 18, OpenSSL 1.0.1e-4
OpenBSD 5.3 (OpenSSL 1.0.1c 10 May 2012) and 5.4 (OpenSSL 1.0.1c 10 May 2012)
FreeBSD 10.0 - OpenSSL 1.0.1e 11 Feb 2013
NetBSD 5.0.2 (OpenSSL 1.0.1e)
OpenSUSE 12.2 (OpenSSL 1.0.1c)

Operating system distribution with versions that are not vulnerable:
Debian Squeeze (oldstable), OpenSSL 0.9.8o-4squeeze14
SUSE Linux Enterprise Server
FreeBSD 8.4 - OpenSSL 0.9.8y 5 Feb 2013
FreeBSD 9.2 - OpenSSL 0.9.8y 5 Feb 2013
FreeBSD Ports - OpenSSL 1.0.1g (At 7 Apr 21:46:40 2014 UTC)

How can OpenSSL be fixed?
Even though the actual code fix may appear trivial, the OpenSSL team is the expert in fixing it properly, so the latest fixed version, 1.0.1g or newer, should be used. If this is not possible, software developers can recompile OpenSSL with the heartbeat extension removed from the code via the compile time option -DOPENSSL_NO_HEARTBEATS.
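Concretely, that workaround looks something like the following build sketch (assuming an OpenSSL 1.0.1 source tree; exact configure options vary by platform, and upgrading to 1.0.1g or newer remains the proper fix):

```shell
# Build OpenSSL with the heartbeat extension compiled out (workaround only).
./config -DOPENSSL_NO_HEARTBEATS
make
make test
```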

Should heartbeat be removed to aid in detection of vulnerable services?
Recovery from this bug would benefit if the new version of OpenSSL both fixed the bug and disabled heartbeat temporarily until some future version. It appears that the majority, if not almost all, TLS implementations that respond to the heartbeat request today are vulnerable versions of OpenSSL. If only vulnerable versions of OpenSSL continued to respond to the heartbeat for the next few months, then a large scale coordinated response to reach the owners of vulnerable services would become more feasible.

Can I detect if someone has exploited this against me?
Exploitation of this bug leaves no traces of anything abnormal happening to the logs.

Can IDS/IPS detect or block this attack?
Although the content of the heartbeat request is encrypted it has its own record type in the protocol. This should allow intrusion detection and prevention systems (IDS/IPS) to be trained to detect use of the heartbeat request. Due to encryption differentiating between legitimate use and attack can not be based on the content of the request, but the attack may be detected by comparing the size of the request against the size of the reply. This seems to imply that IDS/IPS can be programmed to detect the attack but not to block it unless heartbeat requests are blocked altogether.

Has this been abused in the wild?
We don't know. The security community should deploy TLS/DTLS honeypots that entrap attackers and alert about exploitation attempts.

Can the attacker access only 64k of memory?
There is no total 64 kilobyte limit to the attack; that limit applies only to a single heartbeat. The attacker can either keep reconnecting or, during an active TLS connection, keep requesting an arbitrary number of 64 kilobyte chunks of memory content until enough secrets are revealed.

Is this a MITM bug like Apple's goto fail bug was?
No, this doesn't require a man in the middle (MITM) attack. The attacker can directly contact the vulnerable service or attack any user connecting to a malicious service. However, in addition to the direct threat, the theft of key material allows man in the middle attackers to impersonate compromised services.

Does TLS client certificate authentication mitigate this?
No, the heartbeat request can be sent, and is replied to, during the handshake phase of the protocol. This occurs prior to client certificate authentication.

Does OpenSSL's FIPS mode mitigate this?
No, OpenSSL Federal Information Processing Standard (FIPS) mode has no effect on the vulnerable heartbeat functionality.

Does Perfect Forward Secrecy (PFS) mitigate this?
Use of Perfect Forward Secrecy (PFS), which is unfortunately rare but powerful, should protect past communications from retrospective decryption. Please see how leaked tickets may affect this.

Can heartbeat extension be disabled during the TLS handshake?
No, the vulnerable heartbeat extension code is activated regardless of the results of the handshake phase negotiations. The only way to protect yourself is to upgrade to a fixed version of OpenSSL or to recompile OpenSSL with the heartbeat extension removed from the code.

Who found the Heartbleed Bug?
This bug was independently discovered by a team of security engineers (Riku, Antti and Matti) at Codenomicon and by Neel Mehta of Google Security, who first reported it to the OpenSSL team. The Codenomicon team found the Heartbleed bug while improving the SafeGuard feature in Codenomicon's Defensics security testing tools and reported it to NCSC-FI for vulnerability coordination and reporting to the OpenSSL team.

What is the Defensics SafeGuard?
The SafeGuard feature of Codenomicon's Defensics security test tools automatically tests the target system for weaknesses that compromise integrity, privacy or safety. SafeGuard is a systematic solution to expose failed cryptographic certificate checks, privacy leaks or authentication bypass weaknesses that have exposed Internet users to man in the middle attacks and eavesdropping. In addition to the Heartbleed bug, the new Defensics TLS SafeGuard feature can detect, for instance, the exploitable security flaw in the widely used GnuTLS open source software implementing SSL/TLS functionality and the "goto fail;" bug in Apple's TLS/SSL implementation that was patched in February 2014.

Who coordinates response to this vulnerability?
NCSC-FI took up the task of reaching out to the authors of OpenSSL and to the software, operating system and appliance vendors that were potentially affected. However, this vulnerability was found, and its details released, independently by others before this work was completed. Vendors should be notifying their users and service providers. Internet service providers should be notifying their end users where and when potential action is required.

Is there a bright side to all this?
For affected service providers, this is a good opportunity to upgrade the strength of the secret keys they use. A lot of software is getting updates that would otherwise not have been urgent. Although this is painful for the security community, we can rest assured that the infrastructure of cyber criminals and their secrets have been exposed as well.

Where to find more information?
This Q&A was published as a follow-up to the OpenSSL advisory when this vulnerability became public on 7th of April 2014. The OpenSSL project has made a statement, and NCSC-FI has published an advisory. Individual vendors of operating system distributions, affected owners of Internet services, software packages and appliance vendors may issue their own advisories.

NCSC-FI case# 788210 (published 7th of April 2014, ~17:30 UTC)

Step-by-Step: Build Your Private Cloud with System Center 2012, Windows Server 2012, Hyper-V and Windows Azure

posted Oct 8, 2013, 2:50 PM by Avesta Dayeny   [ updated Oct 8, 2013, 3:05 PM ]

Are you virtualizing your servers? Yes, of course!

Are you spending less time managing your servers as a result? Hmm … No!

Server Virtualization is Great, But …

Server virtualization has been a great set of technologies to reduce our capital expenses and some operating expenses by consolidating a larger number of virtualized server workloads in a smaller footprint of physical rack space. As a result, we’ve been able to purchase less data center hardware and likely have lower power and cooling costs in running our data center.

However, most IT Pros are not seeing a reduction in the amount of time they spend on day-to-day management of server operating systems and applications. Let’s face it … whether you have 100 physical servers or 100 virtual servers, you still have 100 server operating system instances to administer, configure, monitor, patch and update. In fact, because of the reduced capital costs of server virtualization, many IT Pros report that they are now faced with managing a much larger (and growing) number of operating system instances and applications – these days, it seems like everyone in the company wants their own VMs! As a result, IT Pros are forced to spend most of their day managing VMs and applications, and often don’t have enough time to spend on improving their IT environments.

Private Cloud … To The Rescue!

Well, Private Cloud is the answer! Private Cloud is not a product, but rather an approach to designing, implementing and managing your servers, applications and data center resources by reducing complexity, increasing standardization and automation, and providing elasticity – the ability to easily scale your data center up, down, in or out – to support evolving business and technical requirements.

Private Cloud applies the same principles used for scaling and managing the world’s largest public clouds to your private data center environment. Now, you can have your very own cloud!
Build Your Private Cloud – The Series

Technical Evangelists have authored a content series that steps through building your very own Private Cloud by leveraging Windows Server 2012, the FREE Hyper-V Server 2012, Windows Azure Infrastructure Services ( IaaS ) and System Center 2012 Service Pack 1.

Week-by-week, we walk through the steps to envision, plan and implement your very own Private Cloud to take your existing data center to the next level and give you the tools and time back in your day for improving IT services and being able to change and shift with your business / IT needs.

Below is the weekly breakdown of each topic that we've written in this series to help you build your own Private Cloud. Be sure to bookmark this page and check back daily to progress through building your Private Cloud this month!
WEEK 0 - Get ready to follow along!

Get prepared to follow along with our content series by downloading Windows Server 2012, the FREE Hyper-V Server 2012, System Center 2012 SP1, and Windows Azure. Once you have these components, you’ll be ready to follow along with us as we build your Private Cloud together!
DOWNLOAD the Bits You'll Need to Follow Along!

WEEK 1 – Build Your Private Cloud Foundation with Windows Server 2012
MODULE 1: What is a Private Cloud? ( Video ) 

DO IT: Download System Center 2012 SP1

DO IT: Download Windows Server 2012 and FREE Hyper-V Server 2012
WEEK 2 – Building Your Private Cloud Fabric with System Center 2012 SP1

WEEK 3 – Configuring and Optimizing Your Private Cloud with System Center 2012 SP1

WEEK 4 – Deploying and Servicing Applications in Your Private Cloud with System Center 2012 SP1

CONGRATULATIONS! But ... Let's Keep Going ...

You've built your Private Cloud, but you've still got to manage, protect and grow it as your business evolves. Over the next few weeks, you'll learn to extend your base Private Cloud fabric and prepare for MCSE Private Cloud certification ...
WEEK 5 – Extending and Protecting Your Private Cloud

WEEK 6 - Managing Hybrid Clouds, What's New in R2 and Disaster Recovery

MODULE 14: What's New in Private Cloud with the new R2 Releases

MODULE 15: Private Cloud Disaster Recovery and Business Continuance
WEEK 7+: Study and Get Certified on Private Cloud

Prepare for the MCSE: Private Cloud certification exams with these popular FREE exam study guides:

Become an Early Expert!

Backup and Recovery of Windows 8 & Windows 8.1

posted Oct 8, 2013, 2:45 PM by Avesta Dayeny

Update for Windows 8.1: Note that System Image Backup in Windows 8.1 has been moved to the lower left corner of the File History tool in Control Panel.

In addition, the Windows 7 File Recovery tool in Control Panel has been renamed to the Recovery tool in Windows 8.1.

Have you recently installed Windows 8? In this article, we'll introduce you to the new options that make Backup and Recovery in Windows 8 easier than ever, including Windows 8 File History, launching Windows System Backup and Windows 8 Refresh & Reset PC.
Windows 8 File History

File History is a new feature in Windows 8 that allows you to set up a schedule for automatically saving copies of documents located in your Libraries, Contacts, Favorites or SkyDrive to an external drive or network location. File History can be leveraged for those situations where you need to recover an older version of a particular document that you've overwritten. To turn on File History, use the Control Panel -> File History applet and click the "Turn On" button.

After you've turned on File History, you'll be able to further customize your Advanced settings, such as the schedule (default = every hour), the number of versions to keep (default = all versions) and the percentage of disk space (default = 5%) to use for caching changes when your File History location is offline. In addition, you can Exclude folders that you may not want to include in your File History, perhaps if you have applications that already use built-in versioning to save documents to your Libraries.

When you're ready to restore files from File History, click the "Restore personal files" link located on the left panel of the File History applet.
Launching Windows System Backup

Windows System Backup is still included in Windows 8! To launch the Windows Backup tool, open the Control Panel -> Windows 7 File Recovery applet and click the "Set up Backup" button. Alternatively, you can launch "sdclt.exe" from the Command Prompt to start this applet.

All of the old familiar options are there! Using Windows Backup, you can back up a full system image or selected files & folders to an external drive or network location. You can also create a system repair disc for repairing and restoring the system in the event that you encounter any boot issues.
Windows 8 Refresh and Reset PC

Sometimes, a system becomes so infected or corrupted that it's easier and safer to just reset the operating system and applications back to a "normal" factory default out-of-box (OOB) experience. In Windows 8, you have two options to make this process easier than ever: Refresh (which resets the OS and applications without losing your documents and other personal files) and Reset (which resets the OS and applications and removes everything else). You can also create a Recovery Drive on a USB flash drive so that you can perform a Refresh or Reset operation even if your PC isn't bootable.
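On Windows 8 you can also capture your own baseline image for Refresh with the built-in recimg tool, so that a Refresh restores your installed desktop applications as well. A rough sketch from an elevated Command Prompt (the target folder is an illustrative assumption):

```shell
REM Capture a custom Refresh image on Windows 8 (elevated Command Prompt).
REM The target folder below is illustrative.
mkdir C:\RefreshImage
recimg /createimage C:\RefreshImage
REM Confirm which image a Refresh operation will use:
recimg /showcurrent
```

Capturing the image can take a while, so plan to run it after you've finished installing your baseline applications.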

You'll find these new options in the Control Panel -> Recovery applet, along with other familiar tools like System Restore.

What do you think about these new Windows 8 Backup & Recovery features? Will they make your life easier as an IT Pro?

Get Ready for Windows 8.1 in One Page!

posted Oct 8, 2013, 2:43 PM by Avesta Dayeny

Today, Blogging Windows announced that the final release of Windows 8.1 will be available on October 18th! 

Windows 8.1 will be available, beginning October 18th as a free update to customers running Windows 8 via the Windows Store. In addition, Windows 8.1 will also be available for retail purchase and on new devices starting on this same date.
Are you ready for Windows 8.1?

There are approximately nine weeks between today and the worldwide launch of Windows 8.1. Below is a learning roadmap that you can complete over the next few weeks to help you get ready ...

Download: Windows 8.1 Enterprise Preview

Download: System Center 2012 R2 Preview for deploying and managing Windows 8.1 clients

Watch: 28 Awesome Demos on Windows 8.1

Watch: What’s New in Windows 8.1 for Enterprise IT Pros

Read: Windows 8.1 FAQ for IT Pros

Read: What’s Changed in Security for Windows 8.1?

Read: What's New in Internet Explorer 11 Preview

Read: Internet Explorer 11 FAQ for IT Pros

Read: Start your Windows 8.1 Deployment Planning

Read: Windows 8.1 Preview Product Guide

Do: Enabling Work Folders with Windows 8.1 and Windows Server 2012 R2

See Virtual Lab System Requirements for detailed PC requirements for launching lab.

Do: Learning Windows 8.1 in One Page by Doris Chen

Will you be ready for October 18th?

Best of BUILD 2013: What’s New in Windows 8.1 for IT Pros

posted Oct 8, 2013, 2:37 PM by Avesta Dayeny   [ updated Oct 8, 2013, 2:39 PM ]

At the Build 2013 conference in June, Microsoft formally announced the availability of Windows 8.1 Preview and Windows RT 8.1 Preview. If you’re interested in evaluating this pre-release of Windows 8.1 on non-production PCs, you can obtain the Preview bits at

Although much of the Build conference was targeted towards developers, lots of IT Pros are interested in the ways in which Windows 8.1 will improve enterprise end-user and management scenarios. John Vintzel, Senior Program Manager on the Windows team, delivered a great session on the new features in Windows 8.1 for business organizations. In this article, I’ll provide a summary of John’s session along with a clickable video index to the recording of this session.
What’s New in Windows 8.1 for the Enterprise?

Session Index – What’s New in Windows 8.1 for the Enterprise?

[ 01:04 ] Changes in the Enterprise
Anywhere, anytime expectations from end-users
BYOD goes mainstream
Dynamic, connected, global environment
Increasing need to enable mobile professionals
Mobility demands have grown from 21% to 29% in the past two years
80% of all workers do work outside a traditional office

[ 02:30 ] Session Agenda: Windows 8.1 for the Enterprise
Best business tablet and more
Windows apps for business
Enterprise grade security
Empower BYOD
Mobility for the enterprise

[ 03:40 ] Windows 8.1: UI Enhancements
Start “tip” to easily transition between Desktop and Start Screen
Start Screen Improvements
See the desktop only when you need it
Leave the desktop only when you want to
Windowed and improved multi-monitor support
New search experience

[ 05:30 ] Apps for your business
Apps share screen
Multi-monitor support
Higher DPI support
New contracts for apps
High-precision touchpad

[ 08:15 ] Reliability and Power Efficiency
Maximizing Battery Life of Connected Standby mode
Connected Standby Support for both Wireless and Wired network adapters
Improved App Reliability for Background Apps
Background Transfers –
Notification when background transfers complete or fail
Optimized background transfers that can switch between network connections

[ 12:43 ] Mobile Broadband for Anywhere Connectivity
Windows 8.1 PCs can become a Wi-Fi/mobile broadband hotspot
Auto-connect and disconnect capabilities to/from hotspot
Hardware OEMs integrating mobile broadband into System-on-Chip designs for improved battery life and smaller form factors.

[ 14:27 ] Printing Enhancements in Windows 8.1
Windows 8.1 makes printing simple and secure, from any device
Support for NFC tap-to-connect printing
Improved credential manager support for auto-mapping printers
Roaming printer information based on printer connectivity status
Support for Wi-Fi Direct printing

[ 17:55 ] Remote Desktop Enhancements in Windows 8.1
Touch-first, fast and fluid user experience
Remote Desktop and Remote App for BYOD scenarios
Reduced network bandwidth for Remote Desktop connections
Optimized reconnect time down to less than 10 seconds
RemoteFX GPU Offload enhanced to reduce overhead by ~50%

[ 21:58 ] Workplace Join in Windows 8.1
Enables users to associate their BYOD devices with company’s Active Directory without a full domain join
Supported on Windows 8.1 and Windows RT 8.1
Requires Windows Server 2012 R2

[ 23:01 ] DEMO: Workplace Join

[ 26:21 ] Web Application Proxy
Publish web applications to users from anywhere on any device
Can differentiate web applications published to domain-joined PCs, workplace-joined PCs, or any remote PCs.
Included in Routing and Remote Access ( RRAS ) role in Windows Server 2012 R2

[ 27:54 ] Expanding Device Support
Not Joined to AD
Workplace Joined
Domain Joined

[ 28:35 ] Expanded In-box VPN Support
Built into Windows
Secure and Stable
Performance with no compromise
Integrated experience
Apps that require VPN access can be integrated with VPN Profiles to trigger VPN connectivity on App launch
New 3rd party VPN support:
Juniper Networks
Check Point

[ 31:13 ] Work Folders in Windows 8.1
Access data from anywhere on different devices
Requires Windows 8.1 client and Windows Server 2012 R2
Work folders can be enabled via File Server Role in Windows Server 2012 R2
Creates synced off-line folder on client that can optionally be encrypted using EFS
IT Pro can control policies for authentication and encryption
Work folders can be configured via end-user opt-in or can be pushed to users via Group Policy, System Center, Windows Intune or Scripts

[ 33:35 ] Mobile Device Management ( MDM ) in Windows 8.1
In-box capabilities for mobile device management
Leverages Open Mobile Alliance Device Management ( OMA-DM ) protocol
Extended platform support to all Windows 8.1 SKUs
Added support for provisioning VPN profiles, wireless networks, certificates and Work Folders
Manage PCs through OMA-DM agent via Windows Intune or 3rd Party MDM solutions

[ 35:27 ] Security in Windows 8.1 - Selective Wipe
Mark and protect Work Folders with EFS
Access can be revoked on demand
Wipe business data remotely without affecting personal data
Per-app option: Wipe can be sent on an app-by-app basis or entire device

[ 37:59 ] Security in Windows 8.1 – Virtual Smart Cards
Ability to provide strong user authentication using capabilities built into your device

[ 39:18 ] Security in Windows 8.1 – Biometrics
Fingerprints fully integrated in Windows experiences

[ 40:32 ] DEMO: Virtual Smart Card and Biometric Authentication

[ 47:08 ] Session Summary
Windows 8.1 offers great experiences and devices along with enterprise-grade solutions
What’s Next? Evaluate Windows 8.1 Preview in your own shop!

Below, I’ve listed some great FREE resources available to help you evaluate Windows 8.1 Preview in your own shop. To evaluate Workplace Join, Work Folders, Selective Wipe, and Web Application Proxy, you’ll also need Windows Server 2012 R2 Preview.
DOWNLOAD: Windows 8.1 Preview
DOWNLOAD: Windows Server 2012 R2 Preview
ACTIVATE: Windows Azure Free Trial to evaluate Windows Server 2012 R2 Preview online
DOWNLOAD: Windows Assessment and Deployment Kit ( ADK ) for Windows 8.1 Preview
DOWNLOAD: Remote Server Administration Tools ( RSAT ) for Windows 8.1 Preview

READ: What’s New in Windows 8.1
READ: Windows 8.1 Preview Product Guide
READ: Windows 8.1 Compatibility Cookbook

Get started as an "Early Expert" on Windows Server 2012 R2 with this FREE eBook!

posted Oct 8, 2013, 2:07 PM by Avesta Dayeny   [ updated Oct 8, 2013, 2:12 PM ]

Research has shown that approximately 65% to 70% of organizations today have more than one hypervisor deployed. This presents a large opportunity for IT Professionals to increase their technical differentiation and value in the IT marketplace by supporting multiple hypervisors. Using this free eBook and the "Early Experts" study program, IT Professionals can easily extend their professional knowledge and skills to Windows Server 2012 R2 and Hyper-V on a flexible schedule as time permits.

In this article, get started down your path as an "Early Expert" on Windows Server 2012 R2 ...

Windows Server 2012 R2 is the next major release of Windows Server … and it’s almost HERE! As announced on Brad Anderson’s In The Cloud blog, this new R2 release has already been released to manufacturing (RTM) for our OEM partners and will be generally available in just a few short weeks, on October 18th.

In this article, I provide a brief introduction to some of the new capabilities that I’m most excited about, along with the steps that you can use to download a new eBook for FREE: Introducing Windows Server 2012 R2 Preview Release.

What’s New with Windows Server 2012 R2?

LOTS! The new capabilities of Windows Server 2012 R2 provide substantial enhancements when building a Private Cloud foundation across key areas such as Virtualization, Storage, Networking, High Availability and Disaster Recovery. In particular, some of my favorite new features that “R2” adds include:
Virtualization
Generation 2 VMs, Super-fast Live VM Migrations, Online VM Storage resizing, Live cloning of running VMs, Online backup of Linux VMs … and MORE!

Software-Defined Storage
Automated storage tiering, Dynamic storage rebalancing, Storage IO Control, Online VM Deduplication … and MORE!

Robust High Availability
Simplified VM clustering, Protected network monitoring … and MORE!

Software-Defined Networking
Windows Network Virtualization Gateway, Virtual RSS, Dynamic “Flowlet” load-balancing … and MORE!

Disaster Recovery
Site-to-Site replication RPOs down to 30 seconds, Tertiary replication … and MORE!
Whew! But … there’s STILL MORE!

The above list is just a small subset of the exciting new capabilities in Windows Server 2012 R2!
Are you using Active Directory?
Supporting BYOD devices?
Running Web Sites on Windows Server?
Do you manage Printers?
Integrating with Public Cloud services?

Well, all those areas have been improved as well with new features and capabilities!

How can I quickly learn about these new features?

Great question! Mitch Tulloch and our Windows Server team recently released a new 108-page eBook, Introducing Windows Server 2012 R2 Preview Release. This eBook provides a detailed technical overview of the enhancements noted above so that you can quickly get up to speed with these capabilities and Get Ready for R2!

This eBook includes 6 detailed chapters on the following topics:
Chapter 1: Cloud OS
Chapter 2: Hyper-V
Chapter 3: Storage
Chapter 4: Clustering
Chapter 5: Networking
Chapter 6: Other enhancements
Get this 108-page eBook for FREE!

We’re making this new eBook available as a FREE resource for you to quickly learn about the new features in Windows Server 2012 R2 … Just follow the steps below to get it delivered directly to you!

Get this eBook for FREE!

Download the Windows Server 2012 R2 Evaluation Kit – Be sure to grab the VHD download with Server GUI. ( You’ll need this to build a Study Lab so that you can follow along with the eBook )

Shortly after downloading your Evaluation Kit, you’ll receive an email titled “Windows Server 2012 R2 Preview Evaluation: Start Here”

Forward the email received in Step 2 above. You'll receive an email in response with your free eBook!

While you’re waiting … Build Your Lab!

While you’re waiting to receive your FREE eBook, get started with building your lab using the evaluation kit you downloaded above. You can build your lab as a dual-boot environment on your existing PC using the downloaded VHD by following these steps:
Do It: Build Your Lab using Boot-to-VHD
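In broad strokes, the Boot-to-VHD approach comes down to attaching the downloaded VHD and adding a boot entry for it. A rough sketch from an elevated Command Prompt (the file path and the V: drive letter are illustrative assumptions):

```shell
REM Rough sketch of dual-booting from a downloaded VHD (elevated Command Prompt).
REM The VHD path and the V: drive letter are illustrative assumptions.
diskpart
REM Inside diskpart:
REM   select vdisk file=C:\VHDs\WS2012R2-eval.vhd
REM   attach vdisk
REM   exit
REM Assuming the attached VHD's Windows volume mounted as V:,
REM copy boot files and create a boot entry for it:
bcdboot V:\Windows
REM Reboot and pick the new entry in the boot menu.
```

The linked walkthrough covers the full procedure, including how to remove the boot entry later with bcdedit when you're done with the lab.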

Alternatively, if you don’t have access to any PC hardware, you can also build your lab online using our FREE Windows Azure trial program:
Do It: Build Your Server Lab in the Clouds

We'll begin releasing additional hands-on study materials in the next few weeks as next steps in the "R2" edition of the "Early Experts" program. In the meantime, when you receive your FREE eBook, be sure to read at least Chapter 1 to get prepared for the upcoming hands-on materials.

Read: Chapter 1 - Cloud OS in the FREE eBook that you'll receive.

BYOD: The new battleground for CIO value

posted Sep 20, 2013, 10:29 AM by Avesta Dayeny   [ updated Sep 20, 2013, 10:30 AM ]

Today’s business users expect instant access to all work-related services and data from their personal mobile devices, challenging IT to re-think long-held ideas about infrastructure, policies, and governance. Because bring-your-own-device (BYOD) reflects massive growth in the consumerization of IT, it has become the battleground on which Chief Information Officers must demonstrate value and relevancy.

This battle for relevancy cannot be won by compromises and diplomacy alone. No, the war for value demands CIOs who can innovate, adopt new technologies, and fully embrace a user-centered approach to IT management.

Although BYOD aggravates differences between users and IT, the underlying problem is poor communication and lack of trust rather than technology itself. For example, research (PDF) from Accenture highlights misaligned priorities and trust as an issue of critical concern between CIOs and Chief Marketing Officers. According to the study, 45 percent of CIOs report that marketing is at or near the top of their priority list, while 64 percent of CMOs believe that CIOs place marketing IT at the bottom of their priorities. This gap indicates a severe lack of communication; the two sides don’t seem to talk. Steve Mann, CMO at information products supplier LexisNexis, believes an “inherent tension” exists between users’ desire to choose their own devices and applications and IT’s charter to secure the enterprise boundary. Another CMO, Vala Afshar of networking manufacturer Enterasys, further explains the tension: “When your tools at home are better than those at work, it causes frustration. Mobility and collaboration are a lifestyle and you cannot expect employees to be chained to their desk.”

Tensions with the Business
Despite requests by users for greater BYOD network access, the CIO must still adhere to his or her corporate mandate. When responding to requests for BYOD, the University of New Hampshire’s CIO, Joanna Young, evaluates whether new “end-user technology will integrate with our systems and not break five other things or kill our OpEx.” BYOD creates user expectations that personal devices will instantly work in the enterprise just as they do at home. Young acknowledges that users “become annoyed” when they cannot perform simple activities such as scheduling meetings, accessing SharePoint sites, or interacting with others using their own device. “When IT does not provide this level of access,” she says, “it causes a lot of tension.”

The need for speed. Although BYOD heightens gaps between IT and lines of business, the underlying tensions are not new; sources of conflict include divergent operational metrics, measures, and goals.

Aside from the rise of shadow IT, this divergence has created an entire industry of cloud-based SaaS vendors, which exploit the gap by letting users buy and deploy technology without asking IT. Business users today can purchase powerful equipment and enterprise software easily and at low cost, encouraging independence and offering flexibility. Young recognizes that lines of business desire speed: “Users want to make decisions and do things quickly; IT either tries to slow them down, or cannot meet their expectations, without adding cost. Time to market is a key issue that weaves throughout the relationship between IT and business.”

Mind the Gap
When IT policies collide with business users’ expectations, misalignments of goals and interest occur; a couple of real-life examples: 

IT forces senior executive to become data verification clerk. A former employer of Steve Mann periodically required him to verify the security authorizations for the hundreds of people who worked on his team. This task required Mann to check every security permission manually, “painstakingly going line-by-line, employee-by-employee.” When IT forces senior executives to spend significant time performing lengthy clerical tasks, it marginalizes IT and further drives the CIO toward business irrelevancy.

BYOD runs amok. Joanna Young explains that the University of New Hampshire created a mobile app built for the specific platforms that are “the most prevalent on campus.” Based on the advice of their “shadow IT advisor,” some employees purchased different mobile devices without first asking IT. As a result, these employees could not use the official UNH app to access corporate systems, and IT was unable to provide the desired level of service without incurring additional overhead.

Innovate and Collaborate
BYOD causes a shift in how business users relate to technology while simultaneously raising their expectations of IT. As a result, BYOD challenges CIOs with organizational issues that go far beyond security concerns and support costs. Innovation, rather than negotiated compromises that leave both IT and business users dissatisfied, is the solution. Many CIOs use the word “no” as their standard response to users. Unfortunately, that attitude creates a negative, self-fulfilling cycle that alienates the business and devalues IT. Instead, CIOs should adopt a positive and helpful mindset, always remembering that IT’s true mandate is serving the business. For this reason, we urge CIOs to rethink their role and embrace a shift from “chief gatekeeper” to Chief Innovation Officer.

Innovation, rather than diplomacy, is the CIO’s secret weapon in the BYOD battle; collaboration and “default to yes” are the new currency of CIO relevancy.
