Until recently, if you wanted persistent storage with Fargate, your only option was S3 or some other object store — you couldn't persist data in your container. But now Fargate supports EFS mounts, which is great: you can persist files to the mount, and your containers can come and go.
As I prepared to deploy this for workloads at my employer, I encountered an issue: the containers wouldn't start because the EFS volume couldn't be mounted. The cause?
The errors made it pretty obvious: the DNS name
fs-abcd1234.efs.us-east-1.amazonaws.com could not be resolved. These AWS accounts use external Active Directory resolvers hosted in a shared account, and cross-account lookups for EFS mount targets do not work, even over Transit Gateway or VPC peering. The suggested workaround is to mount by IP rather than by file system ID, or to edit
/etc/hosts. But neither is possible with Fargate: the hosts file is maintained by AWS, and you have to provide a file system ID for mounting. To resolve this, we had both a short-term and a long-term solution.
For the short-term solution we abandoned Fargate and stuck with ECS on EC2 instances. This let us modify our own
/etc/hosts file from the user-data at boot, with a script similar to this:
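A minimal sketch of what such a user-data script could look like — not the exact script we used. The file system ID, region, and metadata/CLI lookups here are illustrative placeholders:

```shell
#!/bin/bash
# Hypothetical user-data sketch: pin the EFS mount-target IP in /etc/hosts
# so the file system's DNS name resolves even though our AD resolvers
# cannot answer for it. FS_ID and REGION are placeholders.

FS_ID="fs-abcd1234"
REGION="us-east-1"

# Build the /etc/hosts line for a given mount-target IP.
hosts_entry() {
  echo "$1 ${FS_ID}.efs.${REGION}.amazonaws.com"
}

# Find the mount target in this instance's availability zone.
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
IP=$(aws efs describe-mount-targets \
      --file-system-id "$FS_ID" --region "$REGION" \
      --query "MountTargets[?AvailabilityZoneName=='${AZ}'].IpAddress" \
      --output text)

hosts_entry "$IP" >> /etc/hosts
```

With the entry in place, the standard EFS mount helper (or NFS client) resolves the file system's DNS name locally and the mount succeeds.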
Now mounting EFS from the containers worked as expected. There are some downsides, though.
- Every EFS volume has to be added to the hosts file. With just a few services that isn't a big deal, but as we scale it becomes unmanageable.
- We have to maintain the EC2 instances, which is exactly what we wanted to avoid by preferring Fargate.
This brings me to…
The longer-term fix is to resolve the DNS resolution issue while still using our internal AD DNS servers. Once again, until fairly recently this wasn't possible, but now Route 53 Resolvers are available.
The solution was to utilize them so that all lookup traffic goes to our internal DNS resolvers except for amazonaws.com.
To accomplish this, outbound resolver endpoints and rules for
amazonaws.com were created in the shared account. Then, using RAM, the resolver rules were shared with all other accounts in our org that were part of the same network, and the VPC DHCP option sets were updated to use the internal VPC resolvers. Now we could resolve EFS endpoints, and everything else still resolved as expected. This has the added benefit of making DNS resolution more robust (see here for details).
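One plausible shape for that setup, sketched with the AWS CLI. All names, IDs, subnets, and target IPs are placeholders, and the forward-everything-else rule is my assumption about how the configuration fits together:

```shell
# 1. Outbound resolver endpoint in the shared network account.
aws route53resolver create-resolver-endpoint \
  --creator-request-id outbound-1 \
  --name internal-dns-outbound \
  --direction OUTBOUND \
  --security-group-ids sg-0123456789abcdef0 \
  --ip-addresses SubnetId=subnet-aaaa1111 SubnetId=subnet-bbbb2222

# 2. Forward general lookups to the internal AD resolvers (assumed rule).
aws route53resolver create-resolver-rule \
  --creator-request-id fwd-all-1 \
  --name forward-to-ad \
  --rule-type FORWARD \
  --domain-name . \
  --resolver-endpoint-id rslvr-out-example \
  --target-ips Ip=10.0.0.10,Port=53 Ip=10.0.0.11,Port=53

# 3. ...except amazonaws.com, which the VPC resolver answers itself.
aws route53resolver create-resolver-rule \
  --creator-request-id sys-aws-1 \
  --name amazonaws-system \
  --rule-type SYSTEM \
  --domain-name amazonaws.com

# 4. Share the rules with the rest of the org via RAM.
aws ram create-resource-share \
  --name dns-resolver-rules \
  --resource-arns arn:aws:route53resolver:us-east-1:123456789012:resolver-rule/rslvr-rr-example \
  --principals arn:aws:organizations::123456789012:organization/o-example
```

Each consuming account then associates the shared rules with its VPCs, and the DHCP option sets point instances at the VPC resolver so the rules take effect.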
Now we can keep our DNS config, use Fargate (and Lambda!) with EFS, and this will probably resolve issues with other services we have yet to discover. The only thing left to do is convert our existing ECS services to Fargate, which, given we are using cdk ecs-patterns, should be pretty straightforward. Just make sure you use version 1.4.0 of the Fargate platform.
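For an existing service, the platform version can also be switched with the CLI. A hedged example — the cluster and service names are placeholders:

```shell
# Move a running ECS service to Fargate platform version 1.4.0,
# which is required for EFS volume support.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --platform-version 1.4.0 \
  --force-new-deployment
```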