
https://technology.blog.gov.uk/2016/07/13/why-security-says-no-wont-cut-it-anymore/

Why ‘security says no’ won't cut it anymore

Categories: Chat, Transformation

GDS poster displaying the words 'Trust. Users. Delivery.'

I spoke recently at the Business Reporter’s Data Security in the Cloud event about how security has changed to face the reality of the modern internet era. The old world of assurance, compliance and ‘security says no’ won’t cut it anymore. Security thinking has to be holistic, taking into account users, culture, context and behaviour, not just technology.

This post will summarise some of the areas I discussed in my talk, detailing these modern realities and how we manage the changing security landscape.

Thinking beyond the cloud

The GDS remit has always been about digital transformation, which our former colleague Tom Loosemore recently expressed as “applying the culture, practices, processes and technologies of the internet-era to respond to people’s raised expectations.” Note the lead-in on culture, practices and processes before we get to technologies.

All too often, when responding to changing security expectations, there’s a tendency to talk about the cloud and related IT approaches rather than considering the context of broader change that’s happening to organisations.

We need to think about what’s changing across the whole environment, rather than thinking of cloud security in isolation. For example, while adopting cloud technologies, we’ve also seen the ascendancy of continuous delivery practices, a shifting skills profile in our organisations and a move from large outsourcing contracts to a range of smaller suppliers and contracts.

Securing while trusting teams

In the fast-paced internet era, we need to move at the pace that’s expected of us, and that means devolving lots of responsibility into focused teams. Teams need to be as autonomous as possible, and effective teams need context. That starts with understanding what everyone is trying to achieve and ensuring they have the right tools at their disposal to deliver.

Securing in this type of setting can’t involve blanket lock-downs. They just won’t work: if we block the tools people want to use, we will only get more shadow IT, because people tend to circumvent controls to get their jobs done more efficiently.

Instead, security must be proactive in helping teams work at speed, and in selecting and using the most intuitive and secure tools available.

Transparency is essential 

Tom Read, who led the Cabinet Office technology transformation (and is now Group CTO at the Department for Business, Innovation and Skills), has talked about an experiment his team ran to measure the number of people using non-work devices. The team installed some Wi-Fi access points around government buildings and then kept track of how many people connected their personal devices. This let them identify people whose needs weren’t being met by their official IT, and the team could then talk to those people about how new tools would help them.

Where there are trade-offs to be made between how people want to work and what makes for secure behaviour, we can explore those with the users and find the best design. Under the old approach you may have a nominally secure system that gives you a degree of confidence, but the mass of shadow IT and the users working around your security policies mean poor visibility into the real security of your system. That’s a natural result of a blanket approach and we need to do better.

Alongside more personalised and targeted security policies, we need tighter auditing. We need to know, for example, who is spinning up virtual machines and whether someone has made changes to a server. If we know that, we have a better chance of determining whether a change is appropriate or whether it’s evidence of tampering. Previously a lot of the work we want to track was done by sysadmins, but now the majority of it can be managed through automated auditing systems. Configuration management and infrastructure automation tell you whether there is any deviation in your infrastructure that could indicate compromise, and using these systems can also vastly reduce the number of people needing direct access to a system, which can be hard to track.
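As a rough illustration of what that automated auditing can look like, here’s a minimal sketch in Python of a drift check. The baseline values and the observed state are hypothetical; in practice they would come from your configuration management tool or your provider’s inventory and audit APIs.

# Minimal sketch: compare a declared baseline against the observed state of a
# server and flag any deviation for review. All values are illustrative.

EXPECTED = {
    "ssh_port": 22,
    "password_auth": "no",
    "packages": {"nginx": "1.18.0", "openssl": "1.1.1"},
    "admin_users": {"deploy"},
}

def detect_drift(observed):
    """Return a list of human-readable deviations from the baseline."""
    findings = []
    for key in ("ssh_port", "password_auth"):
        if observed.get(key) != EXPECTED[key]:
            findings.append(f"{key}: expected {EXPECTED[key]!r}, found {observed.get(key)!r}")
    for pkg, version in EXPECTED["packages"].items():
        found = observed.get("packages", {}).get(pkg)
        if found != version:
            findings.append(f"package {pkg}: expected {version}, found {found}")
    unexpected_admins = set(observed.get("admin_users", [])) - EXPECTED["admin_users"]
    if unexpected_admins:
        findings.append(f"unexpected admin users: {sorted(unexpected_admins)}")
    return findings

if __name__ == "__main__":
    observed_state = {
        "ssh_port": 22,
        "password_auth": "yes",                    # drift: password auth re-enabled
        "packages": {"nginx": "1.18.0", "openssl": "1.1.1"},
        "admin_users": ["deploy", "temp-admin"],   # drift: extra admin account
    }
    for finding in detect_drift(observed_state):
        print("DEVIATION:", finding)

Run on a schedule, a check like this tells you when infrastructure stops matching its declared state, without anyone needing shell access to go and look.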

Auditing isn’t just for managing infrastructure; it works at the software as a service level as well. The best cloud productivity tools give us logs of activity and an understanding of who has copied which documents, who has shared what with whom, and so on. We can get useful data about what’s happening in a way that’s not intrusive to our users, and review the logs for unusual patterns.
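To show the kind of review that makes possible, here’s a minimal sketch, again in Python, assuming a hypothetical audit event format. Real productivity suites expose similar activity data through their admin and reporting APIs; the domain names and thresholds here are placeholders.

# Minimal sketch: scan SaaS audit events for two unusual patterns - documents
# shared outside the organisation, and unusually large numbers of downloads.

from collections import Counter

INTERNAL_DOMAIN = "example.gov.uk"
DOWNLOAD_THRESHOLD = 50  # flag anyone downloading more documents than this

audit_events = [
    {"actor": "alice@example.gov.uk", "action": "share", "target": "bob@example.gov.uk", "doc": "roadmap.odt"},
    {"actor": "alice@example.gov.uk", "action": "share", "target": "press@outside.example.com", "doc": "budget.ods"},
    {"actor": "carol@example.gov.uk", "action": "download", "target": None, "doc": "minutes.odt"},
] + [
    {"actor": "dave@example.gov.uk", "action": "download", "target": None, "doc": f"file{i}.pdf"}
    for i in range(60)
]

def review(events):
    # Documents shared with addresses outside the organisation.
    for e in events:
        if e["action"] == "share" and e["target"] and not e["target"].endswith(INTERNAL_DOMAIN):
            print(f"external share: {e['actor']} shared {e['doc']} with {e['target']}")
    # Users downloading an unusually large number of documents.
    downloads = Counter(e["actor"] for e in events if e["action"] == "download")
    for actor, count in downloads.items():
        if count > DOWNLOAD_THRESHOLD:
            print(f"bulk download: {actor} downloaded {count} documents")

review(audit_events)

Neither check is intrusive: both work from metadata the service already records, not from the content of anyone’s documents.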

We also need transparency from our providers. Now that teams are working across a global network rather than within carefully controlled business networks, we need to gain certain guarantees from cloud hosting providers and dig deep into their security policies. For instance, providers can give us a guarantee not to look inside our virtual machines or containers, and we can ask what encryption mechanisms they have in place to prevent them from seeing our data.

Finally, design for privacy. Public attitudes to privacy differ and it’s not clear where expectations will go. At the moment, though, the principles of good privacy design revolve around making things transparent, being clear about who owns data, giving the subjects of that data control, and minimising duplication and sharing. These are also important tools for building secure systems. If there’s one area to watch in the next few years, it’s privacy engineering.

Assessing cloud providers

When we talk to a hosting provider we don’t want to do a complete security audit ourselves. We want to know where they’ve applied industry best practices and how they can assure us of their methodology.

This means establishing the right level of relationship with providers. When entering security conversations as government, it’s all too common for us to be met with layer after layer of the provider organisation: first public sector sales, then compliance, and so on. We should instead be talking first to the actual architects and engineers, about what their systems really do and how they’re composed. Then we can be sure we’re on the right path.

Our colleagues in CESG produced these really helpful principles on cloud security, but we still need to take care in how we assess providers against them and how we apply them ourselves. The security practices for our primary hosting provider needn’t be the same as for our shared calendaring app. Think proportionality, think trust, think context - we’re still working on how we apply this thinking ourselves and we’ll blog more on this soon.

Apply the principles incrementally and proportionately. When you start, a basic sanity check is enough. For example, if you’re trialling some software as a service, you may check whether the provider has a clear privacy policy, offers obvious points of contact, requires good passwords and serves everything over strong HTTPS, but you wouldn’t want to go to the effort of understanding how every element of the system is tested or how incidents are handled. As you decide whether the tool is what you need for a given task, you’ll be able to see which areas you should probe more deeply.
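As an illustration of that first-pass check, here’s a minimal sketch using Python and the requests library, with example.com standing in for the provider’s hostname. It only covers the transport basics mentioned above and is no substitute for a fuller assessment.

# Minimal sketch: first-pass checks that a service is only offered over HTTPS.
# The hostname is a placeholder; requires the requests library to be installed.

import requests

def basic_checks(hostname):
    # Does plain HTTP redirect to HTTPS?
    resp = requests.get(f"http://{hostname}/", allow_redirects=True, timeout=10)
    if resp.url.startswith("https://"):
        print("OK: HTTP requests are redirected to HTTPS")
    else:
        print("WARN: service is reachable over plain HTTP")

    # Is HSTS set, so browsers keep using HTTPS on later visits?
    secure = requests.get(f"https://{hostname}/", timeout=10)
    if "Strict-Transport-Security" in secure.headers:
        print("OK: Strict-Transport-Security header present")
    else:
        print("WARN: no HSTS header")

if __name__ == "__main__":
    basic_checks("example.com")

The privacy policy, contact points and password requirements still need a human eye; automating the transport checks just catches obvious gaps quickly.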

Preparing for incidents

However much we prepare, there is always the possibility of an incident. In order to respond quickly, the team on the ground needs context to make decisions, clear leadership and an understanding of their communication channels.

Incident management is often perceived as managing at speed during chaos but, learning from emergency response teams in other fields, we have recognised that response teams that constantly practise, run drills and rehearse their roles are significantly more effective.

Teams should be running red team exercises and game days to rehearse incident management practices, and after each incident we recommend a blameless post-mortem to identify actions that could improve the team’s ability to respond.

All too often we think about incident management through the lens of dealing with the moment. It needs to follow through into action to address systemic issues and this needs to be done proportionally and calmly. That’s something that’s very much in scope of the new National Cyber Security Centre (NCSC).

I ended the talk by reiterating that ‘cloud’ is just one area of change affecting our organisations at the moment. When considering security, we need to think about the wider changes taking place in how we work and what we expect from our technology. We’re continually developing our thinking in this area and we’d be interested in your feedback.

We also plan to update security sections in the Service Design Manual soon on areas such as cloud and information security. In the meantime, the following resources may help:

Risk management in digital projects

Government cloud security principles

Principles for building secure digital systems

You can follow James on Twitter, sign up now for email updates from this blog or subscribe to the feed.

If this sounds like a good place to work, take a look at Working for GDS - we're usually in search of talented people to come and join the team.
