BrandPost: Securing Your Multi-Cloud Strategy

Historically, the transition from older technology to newer technology has been fairly straightforward. While a handful of folks using Underwood manual typewriters may have been reluctant to give them up, the majority of users were eager to switch to an electric model. Today, people line up around the block to trade in their barely broken-in smartphone for the latest model.

Last year, IDG estimated that 96 percent of organizations had adopted the cloud in one form or another, so you would think that the move to the cloud would be all but over. The problem is that “the cloud” is not a very precise term, and when you dig into the details, the landscape is a little more fluid than that number might make it seem.

The transition to the cloud is still in flux

One complicating factor is multi-cloud. According to RightScale, organizations today are using an average of 3 private clouds and 2.7 public clouds. They run applications in about 3 of them and are testing about 1.7 more.

However, according to a recent IHS Markit survey, 74 percent of organizations that moved an application into the public cloud have subsequently decided to move it back into their on-premises or private cloud infrastructure.

Of course, this doesn’t mean they reversed all of their cloud deployments. But it is a trend that not many folks are talking about. It turns out that the cloud story is far from over. Nearly half of respondents, for example, noted that they had moved cloud deployments back into their infrastructure as part of a “planned temporary” deployment to support an IT transition such as a merger or acquisition. Other factors at work include unexpected costs, poor cloud performance, new regulations, and changes in underlying technologies.

The biggest issue is security

However, by far, the biggest challenge is security. According to IDG’s 2018 Cloud Computing Survey, respondents said they plan to move a full half of the applications they currently run in the public cloud to either a private cloud or a non-cloud environment over the next two years, primarily due to security concerns. The problem is that, in their rush to adopt a cloud strategy, many CSOs misunderstood the nature of cloud security. As a result, according to Gartner, 95 percent of cloud security failures are ultimately the fault of the customer, not the cloud provider.

Even those organizations using a single cloud infrastructure still have to select, deploy, configure, and manage their security systems, and a misconfigured cloud firewall is just as vulnerable as a physical one. That is easier said than done. Most of the IT staff dedicated to security have little cloud experience, and the DevOps teams building out cloud applications and environments have little expertise when it comes to security. And organizations certainly don’t have the resources needed to manage the security of several different environments simultaneously. Let’s take a quick look at some of the complicating factors:

  • Private Cloud. Organizations have, on average, three different private cloud environments in place. That means three different hypervisors, three different infrastructures, and three different sets of resources – each with its own unique security profile – that need to be secured.
  • Public Cloud. These same organizations also have between two and three different public cloud environments functioning as a platform or infrastructure. Like private clouds, these environments often have different protocols, features, and capabilities that make them suitable for some network functions and not for others.
  • SaaS. In addition, the average employee uses at least eight different apps, with companies of between 500 and 1,000 employees utilizing over 150 different apps, and organizations with more than 1,000 employees using well over 200. Even the smallest organizations, with between 1 and 50 workers, utilize 40 different cloud applications.
  • Shadow IT. Ninety-three percent of respondents in one survey said they regularly have to deal with Shadow IT – the use of unsanctioned cloud services and apps – with half claiming that security control gaps and misconfigurations have led to data breaches and fraud. And Gartner estimates that Shadow IT comprises 30 to 40 percent of IT spending in large enterprises.

Securing each of these cloud instances is a challenge, especially for organizations with limited IT staff or that are feeling the pain of the current cybersecurity skills gap. But that is the easiest of the problems. Cloud environments, especially public clouds, come with a variety of security tools that can be selected and deployed with the usual effort associated with configuration, deployment, and ongoing management. The trick is that cloud environments are highly elastic and continually evolving, so security strategies and solutions need to be able to adapt to those changes.

The complexity of securing a hybrid cloud

The challenge is that these problems are all compounded by a hybrid cloud environment – especially one that merges a physical network with private and public cloud environments. Managing the fluidity between private and public clouds while keeping both secure is not just a difficult task; it is one that few organizations are prepared to succeed at. Ensuring consistent security for the applications, workloads, and other resources that move across and between different cloud environments – and for the data they leverage – involves a nearly impossible level of complexity when the right strategies and tools aren’t in place.

Anyone looking to maintain a secure hybrid cloud environment needs to have a master security strategy and a clearly defined operational model in place before they begin. IT staff and budget are unlikely to change, so before a single device is deployed or a single application is leveraged, organizations need a plan that allows them to scale their network footprint – and the associated attack surface – using essentially the same resources they had before they began. That requires an understanding of cloud security issues that most CSOs and their staff do not possess.

Where to begin

To begin, here are four critical concepts that need to be understood before such a plan can be developed.

  1. Not all cloud security tools are the same. Cloud security solutions come in two flavors: purpose-built security solutions that run on top of the cloud infrastructure, and cloud-native solutions that are perceived to be part of the cloud services infrastructure because they are managed by the provider. If you are looking for genuinely effective security that provides the most functionality, the preferred approach is a combination of purpose-built security tools and consistently managed cloud-native security services.
  2. You need the right tool for the job. Cloud environments are complex and require different sorts of security solutions. Agile application development, for example, requires security tools that can be integrated into code or loaded into a container and then tied into a chain of application elements. Cloud infrastructures require NGFWs, web application firewalls, intrusion prevention systems, and advanced threat protection solutions. SaaS applications require things like CASB, sandboxing, and other application security services to ensure that access to applications and data can be controlled.
  3. Security tools need to be able to see and share information across deployments. Reducing complexity means reducing the IT overhead needed to deploy, configure, update, and coordinate a highly distributed security system. The last thing an organization needs is uncontrolled vendor and solution sprawl resulting in siloed tools that can’t see or share information.

Complicating this further, solutions deployed in different cloud environments do not natively talk to each other or apply the same descriptions to similar resources, events, or policies, which can make it difficult or impossible to implement consistent security policies between environments to protect workflows and applications that move across the network. This creates security gaps that cloud-savvy cybercriminals are all too willing to exploit, and it requires security abstraction layers that can translate between the different environments to ensure consistent enforcement.

  4. Centralized control is essential. Finally, these security tools will only work without significantly raising IT overhead if they are tied together through a single-pane-of-glass management and orchestration interface – whether a single device or an integrated SOC – to extend granular visibility and consistent control across the distributed network. This includes centralized configuration management and assessment, policy and update orchestration, event and intelligence correlation, and the ability to marshal a coordinated response to detected malware and breaches.

Summing up

Cloud deployments are likely to remain in flux for the foreseeable future while organizations determine the best place to keep data, applications, and other digital resources, and while they work to figure out the serious issues of cloud security. In the meantime, IT leaders need to establish a security framework that guides the adoption and deployment of new cloud services so that digital transformation doesn’t result in their company becoming a victim of today’s determined and highly organized cybercriminal organizations.

Learn more about other major cloud trends from the IHS Markit survey commissioned by Fortinet here.

Learn more about how Fortinet’s multi-cloud solutions provide the necessary visibility and control across cloud infrastructures, enabling secure applications and connectivity from data center to cloud.

 

Side-Channel Attack against Electronic Locks

Clive Robinson • August 14, 2019 2:38 PM

@ All,

Side channels are an issue for three basic reasons,

1, They tend to be covert, thus easy to miss during testing.

2, They tend to be very difficult to design out of a system, especially consumer products.

3, Unless the design engineers, both hardware and software, have a lot of experience in this area, their solutions will tend to be expensive.

However, a fundamental reason why they exist is “bandwidth”: the more of it you have, the easier it is to find ways to build and hide covert channels, over and above those that arise due to inappropriate design. One especially bad design choice with respect to side channels is “efficiency”, primarily designing for minimum time or fastest response.
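To put a rough number on the bandwidth point, the standard Shannon–Hartley limit (a textbook result, not something the comment itself cites) ties the capacity of any channel, covert ones included, directly to its bandwidth:

    C = B \log_2\!\left(1 + \frac{S}{N}\right)

where C is the achievable rate in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise ratio. At a given signal-to-noise ratio, doubling the usable bandwidth on a power rail or in the radiated spectrum doubles what a side channel can carry.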

As I’ve noted in the past, there is the rule of thumb of “Security-v-Efficiency”: in general, the more efficient you make something, the less secure it is.

We have power supply units with efficiencies well up in the 90% range; in part they do this in two ways. The first is to have a very low impedance high voltage source, the second is to very rapidly switch this into an energy storage component such as a capacitor or inductor with as little “Effective Series Resistance” as possible (parallel resistance can be mathematically transformed into an equivalent series resistance).
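For anyone who wants that parenthetical spelled out, the usual narrow-band series–parallel transformation (a standard circuit-theory identity, not something from the comment, and valid only at a single frequency) says a resistance R_p in parallel with a reactance X_p behaves like a series combination with

    Q = \frac{R_p}{X_p}, \qquad R_s = \frac{R_p}{1 + Q^2}, \qquad X_s = X_p\,\frac{Q^2}{1 + Q^2}

so a large parallel loss resistance maps onto a small equivalent series resistance at the frequency of interest, which is why it can be lumped into the ESR figure.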

Thus you have very high speed, and therefore wide bandwidth, pulses that are directly related to the load. Such signals, due to the low source impedance, are easily seen as proportional to the power consumption of the load circuit. Which is why the press of a key on a keyboard shows up on the power supply lines. That is, the press of any key causes an interrupt, which then causes the keyboard to be scanned in the X and Y directions. To try and improve efficiency, many keyboard scan algorithms stop at the first active wire detected in both the X and Y directions.

Thus when the interrupt occurs, the microcontroller wakes up or switches into interrupt mode, which generates the equivalent of a start signal. It then scans for a pressed wire, which means there is a time delay dependent on which key is pressed. Thus there is a visible power signature related to which key is pressed. Other “regular functions” generate their own specific power signatures. In general, the more efficient the power supply, the clearer these signals are. Worse perhaps is the output of High Level Language compilers: each standard library function will in effect have its own power signature. Thus it is possible to “reverse engineer” code via the effect the compiled functions have on the power supply. It was something I was doing back in the 1980s, and I gather that back when “old iron mainframes” running “batch jobs” had clock rates below 0.5MHz, “operators” would leave a Medium Wave AM radio tuned to an appropriate frequency, listen to the code executing, and know roughly what it was doing by the type of noise.
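To make the early-exit point concrete, here is a minimal sketch (mine, not from the comment) of what such a scan routine typically looks like on a small microcontroller. The drive_column() and read_rows() helpers are hypothetical placeholders for whatever the real firmware uses to touch the pins.

#include <stdint.h>

#define ROWS 4
#define COLS 4

/* Hypothetical board-support helpers -- assumed for illustration,
   not any real part's API. */
extern void drive_column(uint8_t col);   /* pull one column line active        */
extern uint8_t read_rows(void);          /* sample the row inputs as a bitmask */

int scan_first_key(void)
{
    for (uint8_t c = 0; c < COLS; c++) {
        drive_column(c);
        uint8_t rows = read_rows();
        for (uint8_t r = 0; r < ROWS; r++) {
            if (rows & (1u << r)) {
                /* Early exit: keys reached later in the scan take longer to
                   find, so scan duration -- and the current drawn while the
                   MCU is awake -- depends on which key was pressed. */
                return (int)(r * COLS + c);
            }
        }
    }
    return -1; /* no key pressed */
}

A constant-time variant would drive every column and read every row unconditionally, decoding the result only at the end, trading a little efficiency for a much flatter timing and power profile.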

Unfortunately many people designing electronic hardware these days are not really engineers, and nor for the most part do they need to be. Component manufacturers provide “suggested circuits” that can, with fairly minimal electronics knowledge, be bolted together into a system. It is after all what we are seeing with IoT devices from “no name” design houses getting things built in China. Mostly these people find that their circuits function as desired, but often have Electromagnetic Compatibility (EMC) issues. Few of them realise that those signals causing EMC failures are “side channels” carrying the secrets from within their “knock-together” designs out into the EM spectrum for all to hear.

One sure way to tell the design engineers are not really engineers with any kind of security knowledge is when they solve their EMC issues by “jittering / whitening” the main system clock. Whilst it might get the averaged signal inside the EMC mask, it actually decreases system security, because what it does is turn the “side channel signals” into “Spread Spectrum” signals, which actually make life easier for an EM spectrum eavesdropper…

I could go on, but there are way too many people designing “security electronics” that really, really should not be doing so…

Likewise those who write the “firmware” or “application” that runs on some OS they bought in that likewise was not even remotely designed for security in the power and EM domains…

Unfortunately “security locks”, especially battery powered electronic locks, really are a “snake oil peddling” paradise… You would have thought people whose job is security would have “wised up by now”, but apparently not.

IDG Contributor Network: Thoughts from Defcon 27 – This is why I do what I do

Defcon is one of the oldest and largest continually running hacker conventions, started by The Dark Tangent. According to its own FAQ, Defcon started as a party for members of “Platinum Net,” a Fido protocol-based hacking network out of Canada. Fido was one of the protocols used to store and forward information before the Internet was pervasive and popular. People used it to create ad-hoc networks that stored and forwarded files and messages across the world.

Back in the late 1980s and early ’90s, phone companies did not offer unlimited service; they charged significant amounts of money for long distance calls. Many of the kids who grew up on Commodores, Apples, Amigas, Spectrums and PCs traded cracked/pirated games (warez), traded demos, chatted, and wanted to explore.

A number of groups came together to find weaknesses in the phone system, Alliance Teleconferencing, payphones, the nascent wide-area networks such as TYMNET and PC Pursuit, and corporate phone systems so that they could avoid having to pay for these long-distance services. This was called phone phreaking and was part of the hacker scene.

Many of these networks and their brethren – especially the main one, FIDONet, before the Internet – relied on phreakers to help facilitate cheap or free communication. Maintaining store and forward networks to relay messages, warez and files cost a lot of money at slow baud rates.

This extended to hacking, where numerous people got hold of accounts on university or corporate systems that had Internet, TYMNET, Telenet or PC Pursuit connections, and extended the scene to Internet Relay Chat (IRC), proprietary chat systems, File Transfer Protocol (FTP) sites and Internet or network BBSes. Outdials, which were connections from these systems to standard telephone modems to allow for free long-distance modem calls, were critical for college students who wanted to call BBS systems at home.

The scene members got together regularly at parties and meetings. Diversi-dial (DDial) had at least one national convention. There was HoHoCon in the winter, PumpCon (which still exists) in the autumn in Philadelphia, and SummerCon in the summer. There were numerous others, including the 2600 meetings, which still occur on the first Friday of every month. The demo scene groups still have meetings in the US and Europe. Even the people who met about their Commodores have their choice of multiple Vintage Computer Federation events.