
I stumbled upon this post on the AWS subreddit: "How are you mitigating the risk of a rogue AWS engineer accessing our data or damaging the RDS instance?"

The author wants to address the CISO's concerns about this scenario.

The top responses dismissed the concern.

"This sounds like more of an issue with your leadership not understanding how AWS works[...]"

"Honestly, this sounds like more of a managerial question[...]"

Comments fall into one of three categories:

  1. It's in an audit, certification report, or AWS Artifact.
  2. Something about encryption.
  3. Maybe the CISO is on to something.

The majority of the responses included the sentiment that "the CISO doesn't get it".

I don't think this is the case. This is a realistic and legitimate concern.

It's a situation where everyone is somewhat correct.

Before we address the comments, let me rephrase the concern [1]:

AWS staff can access customer data in RDS, either via direct access to the underlying infrastructure or via the sys-level superusers of the database engines.

This concern isn't limited to AWS; it's common across all cloud service providers (CSPs).

It's certifiably not accessible

Regulations, standards, certifications, etc. tend not to prohibit user interactions with a system.

For example, there are strict access control guidelines in PCI DSS [2], the Financial Conduct Authority's (FCA) handbook [3], etc., but nothing that suggests AWS RDS - or any other service - is certifiably inaccessible to AWS staff. The controls are mostly self-imposed by AWS and the other CSPs.

The goal here is to prove that we've done our due diligence [4] for our vendor. It's compliance. We haven't really talked about the mechanisms in place to prevent this, let alone how these mechanisms are visible to the customer. An attacker cannot be stopped by calling out "but that would break policy".

An organisation mitigates third-party risk through compliance, and pragmatic due diligence stops there. Realistically, as a customer you don't have the means to enquire about the inner workings of your vendor, and that's OK. The customer will never know the mechanisms, but can trust that they are in place. As a customer, you trust your CSP that their staff don't have access to your customer data.

In any event, you can invoke the shared responsibility model, and your organisation can possibly escape accountability for the concern. You can tell your customers: "It wasn't us, and we definitely didn't have such an expectation from our due diligence."

CSPs want their customers to make this argument.

We prohibit -- and our systems are designed to prevent -- remote access by AWS personnel to customer data for any purpose, including service maintenance, unless that access is requested by you or unless access is required to prevent fraud and abuse, or to comply with law.
-- AWS documentation


Something about encryption

I've written about AWS' KMS threat model [5]. When it comes to encryption in CSPs, the pragmatic threats to accessing your data derive from IAM.

Server side encryption is customer insurance against a "black swan" event at your CSP, including the External Key Store (XKS) scheme and bringing your own key. With any server side encryption scheme, you're trusting your CSP to encrypt the data and secure the key. The customer hands over the data and points to a CSP-hosted key; the rest is up to the CSP.

There is again a level of trust in the CSP - albeit enforced by a stronger premise. For example, in XKS you trust that your CSP will delete your data key after each encrypt and decrypt operation.
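That trust boundary can be sketched in a few lines of Python. Everything here is hypothetical - the class names, the toy XOR "cipher" standing in for real encryption, and the in-memory key store standing in for an XKS proxy - but it shows where the unverifiable step sits: the provider's deletion of the data key after use.

```python
import secrets
from itertools import cycle


def toy_xor(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for a real cipher (e.g. AES-GCM). NOT actual cryptography."""
    return bytes(a ^ b for a, b in zip(data, cycle(key)))


class ExternalKeyStore:
    """Customer-side key manager, standing in for an XKS proxy."""

    def __init__(self):
        self._keys = {}

    def issue_data_key(self):
        key_id, key = secrets.token_hex(8), secrets.token_bytes(32)
        self._keys[key_id] = key
        return key_id, key

    def unwrap(self, key_id):
        return self._keys[key_id]


class Provider:
    """CSP-side logic. The trust assumption is the `del` below: the
    customer has no way to observe whether it actually happens."""

    def __init__(self, xks: ExternalKeyStore):
        self.xks = xks

    def encrypt(self, plaintext: bytes):
        key_id, key = self.xks.issue_data_key()
        ciphertext = toy_xor(key, plaintext)
        del key  # the customer must trust the provider's copy is gone
        return key_id, ciphertext

    def decrypt(self, key_id, ciphertext: bytes) -> bytes:
        key = self.xks.unwrap(key_id)
        plaintext = toy_xor(key, ciphertext)
        del key  # ...and gone again after every decrypt
        return plaintext
```

The customer controls `ExternalKeyStore`, but the `Provider` side runs on the CSP's machines - the two `del` statements are exactly the promise you cannot audit.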

(Image placeholder: Snowden slide - "decrypt and encrypt here".)

You get the idea.

Similarly, HSMs involve a level of trust that AWS staff cannot retrieve the Domain Key. This extends the trust in a different direction: a fourth party certifiably builds the HSM in such a way that no one is able to retrieve the HSM/Domain Key.

But server side encryption is meaningless for RDS in this scenario; it has a completely different threat model. In our scenario AWS staff would have direct access to the database. With server side encryption, encryption and decryption are transparent and, in RDS's case, don't require IAM permissions to KMS - unlike S3.

Client side encryption [6] is where a customer can genuinely increase the security of their system. It makes the provider's position a lot more complex when it comes to retrieving your data. Even if AWS staff can log in as a superuser ("SUPER"), they now get encrypted data; they need access to your client key as well. A case of self-imposed separation of duties for CSPs.

If your data is encrypted before being stored, the CSP has to go through more steps to retrieve a key they never had in the first place. Arguably - again under an enforced premise of trust - this is potentially impossible [7], or at least noticeable by the customer.
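A minimal sketch of that separation of duties, with a dict standing in for a hypothetical RDS table and a toy XOR "cipher" standing in for real client-side encryption (all names here are illustrative, not any real API):

```python
import secrets
from itertools import cycle


def toy_xor(key: bytes, data: bytes) -> bytes:
    """Toy stand-in for client-side encryption. NOT actual cryptography."""
    return bytes(a ^ b for a, b in zip(data, cycle(key)))


# Hypothetical table: the raw column values a DB superuser would see.
rds_table = {}

# The client key is generated and held customer-side; the CSP never sees it.
client_key = secrets.token_bytes(32)


def insert(row_id, value: bytes):
    # Encrypt BEFORE the value ever reaches the database.
    rds_table[row_id] = toy_xor(client_key, value)


def select_as_superuser(row_id) -> bytes:
    # AWS staff with sys-level access read the stored column directly...
    return rds_table[row_id]


def select_as_customer(row_id) -> bytes:
    # ...while the customer decrypts with the key the CSP never had.
    return toy_xor(client_key, rds_table[row_id])
```

A "SUPER" login still works against this table; it just yields ciphertext, which is exactly the extra step the attacker now has to clear.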

The CISO might be on to something

By now you should notice a theme: it's all about trust.

CSPs build security controls to defend against their own machinations and their own internal threat actors. By extension, the CSPs protect their customers. This isn't something that you can easily and openly talk about as a vendor, but this is the reality of cyber security.

How often do data centre operators put controls in place because they do not trust their staff?

The argument then goes that we can enable logs. CSPs provide transparent operations: by enabling logging, customers should have an unfettered view of access to their database. This is the case with AWS CloudTrail, Azure Audit Logs, GCP Cloud Audit Logs, etc.

The difference is that, unlike the other CSPs, AWS doesn't provide a separate access transparency log service.

These services make access requests to customer data visible to the customer - because a customer might actually request that CSP staff access their account and get very close to the data as part of a support ticket, and because not every database holds private data.

Following this, what makes transparency logs trustworthy, and not AWS RDS audit logs? Why trust one and not the other? Can we even verify the provenance and integrity of CSP logs?
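Tamper-evidence for logs usually boils down to some form of hash chaining or signing (CloudTrail, for instance, offers integrity validation via signed digest files). Here is a generic hash-chain sketch - not how any CSP actually implements it - showing both what it buys you and what it doesn't: the chain detects tampering only if you trust the digests you compare against, which is the same trust problem again.

```python
import hashlib
import json

GENESIS = b"\x00" * 32  # agreed starting digest for the chain


def chain(events):
    """Hash-chain log events: each record commits to the previous digest."""
    digest, records = GENESIS, []
    for event in events:
        blob = json.dumps(event, sort_keys=True).encode()
        digest = hashlib.sha256(digest + blob).digest()
        records.append((event, digest.hex()))
    return records


def verify(records) -> bool:
    """Recompute the chain; any edited, dropped, or reordered event breaks it."""
    digest = GENESIS
    for event, stored in records:
        blob = json.dumps(event, sort_keys=True).encode()
        digest = hashlib.sha256(digest + blob).digest()
        if digest.hex() != stored:
            return False
    return True
```

Note what `verify` cannot tell you: whether the party that *produced* the chain simply rebuilt it after editing the events. Provenance still rests on trusting the log producer - or anchoring the digests somewhere the producer can't rewrite.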

Ken Thompson's "Reflections on Trusting Trust" seems apropos.

The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code.
-- Ken Thompson

It's trust turtles all the way down.


[1] I purposefully use the word concern instead of risk. We're not going into formalities and likelihoods. We first want to address whether this is a legitimate concern that we should even consider, at least based on the responses to the post.


"Access to payment account data must be granted only on a business need-to-know basis. Logical access controls are technical means used to permit or deny access to data on computer systems." - PCI DSS 4.0


"Automation may reduce a firm's exposure to some 'people risks' (including by reducing human errors or controlling access rights to enable segregation of duties), but will increase its dependency on the reliability of its IT systems." - FCA handbook SYSC 13.7.|


[4] This is the definition of due diligence. As a customer, you're showing that you've fulfilled the reasonable standard of care for your industry, or at least your risk appetite. How much further you need to - or can - go depends on the vendor, your pockets, the industry, etc.


[6] Transparent Data Encryption (TDE) or similar technologies included. The data is encrypted before it's stored, preferably not using the CSP's KMS.