Last updated on May 10th, 2024 at 07:47 am

In this tutorial we will take a look at how to create an AWS EKS cluster and connect it to EFS using PHP. We will use the eksctl command-line tool and, at the end, deploy a sample PHP application. For pods to have EFS storage, the EFS CSI driver must be installed on the cluster. Once the deployment is complete, this sample PHP web application will be able to write data to the EFS mount. We will be using the dynamic provisioning available in EFS.

As per the EKS best practices guide, it is always good to use a managed node group, since the provisioning and lifecycle management of the EC2 nodes are automated. You can use the EKS API (via the EKS console, AWS API, AWS CLI, CloudFormation, Terraform, or eksctl) to create, scale, and upgrade managed nodes. Managed nodes run EKS-optimized Amazon Linux 2 EC2 instances in your account, and you can install custom software packages by enabling SSH access. More details are available in the EKS Best Practices guide.


  • Make sure you have eksctl installed and configured for your AWS environment
  • At least 2 subnets with public IP support (cluster access public=true)
  • We will be launching a less expensive t2.medium instance
  • Make sure to review pricing for the EKS cluster, EC2 instances, and EFS
  • This tutorial assumes that all IAM-related access/configuration is already taken care of and that the user running the eksctl command has sufficient permissions to create an EKS cluster and run EC2 instances – refer to the 1st section in this document under the section “To create an Amazon EKS cluster”
  • An IAM role already created with the driver trust policy attached for EFS – more details here (I prefer the AWS CLI method)
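If you haven't set up that driver role yet, the AWS CLI flow looks roughly like the sketch below. The role name, account ID, OIDC provider ID, and region are placeholders you must replace with your own values.

```shell
# Sketch only: replace ACCOUNT_ID, OIDC_ID, and the region with your own values.
# Trust policy allowing the EFS CSI controller's service account to assume
# the role via IAM Roles for Service Accounts (IRSA).
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::ACCOUNT_ID:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/OIDC_ID"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/OIDC_ID:sub": "system:serviceaccount:kube-system:efs-csi-controller-sa"
        }
      }
    }
  ]
}
EOF

# Create the role with the trust policy attached
aws iam create-role \
  --role-name AmazonEKS_EFS_CSI_DriverRole \
  --assume-role-policy-document file://trust-policy.json
```

The condition key pins the role to the efs-csi-controller-sa service account in kube-system, so no other workload in the cluster can assume it.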

Create EKS Cluster

Using eksctl and the YAML file below (name it mycluster.yaml), I am going to create a cluster named cluster-new-efs. In the manifest file I have defined my region along with the VPC and subnets (these are public).

I am also creating a managed node group named ng-new-2-test, launching 1 instance (you are free to launch more per your requirements; just adjust the desiredCapacity value) of type t2.medium with the keypair newkeypair. This makes sure that we can log in / SSH to the nodes using the keypair specified.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-new-efs
  region: us-east-1

vpc:
  id: "vpc-1234abcd"
  subnets:
    public:
      us-east-1a:        # adjust the availability zones to match your subnets
        id: "subnet-123abc"
      us-east-1b:
        id: "subnet-456def"

managedNodeGroups:
  - name: ng-new-2-test
    instanceType: t2.medium
    desiredCapacity: 1
    ssh:
      allow: true
      publicKeyName: newkeypair
    labels:
      nodegroup-role: worker

Run the command below to create the cluster

$ eksctl create cluster -f mycluster.yaml

Find more details on creating and managing clusters via eksctl in the eksctl documentation.


It may take a while to provision the cluster (approx. 5 minutes). After the cluster gets created, we can run this command to verify everything looks good.

$ eksctl get clusters
NAME             REGION     EKSCTL CREATED
cluster-new-efs  us-east-1  True
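You can also confirm that the managed node group came up and the worker node registered with the cluster (the exact output will vary with your environment):

```shell
# List the node groups attached to the cluster
eksctl get nodegroup --cluster cluster-new-efs

# The t2.medium node should show a Ready status
kubectl get nodes
```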

Install Amazon EFS CSI driver

Follow this document to install the EFS add-on; I am adding some of the steps I followed for your reference. As you can see, I used a private ECR registry.

Before you do this, make sure to create an IAM role and attach the required AWS managed policy to it.
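For example, assuming the role created in the prerequisites, attaching the AWS managed policy for the driver looks like this (the role name is an example):

```shell
# Attach the AWS managed policy for the EFS CSI driver
# to the IAM role created in the prerequisites.
aws iam attach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --role-name AmazonEKS_EFS_CSI_DriverRole
```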

Very important:

$ kubectl apply -f original-private-ecr-driver.yaml
serviceaccount/efs-csi-controller-sa created
serviceaccount/efs-csi-node-sa created
deployment.apps/efs-csi-controller created
daemonset.apps/efs-csi-node created

Since I am using dynamic provisioning, let's follow this document:

a] Download the storageclass.yaml file from the EFS CSI driver repository
b] Create an EFS file system (if you don't have one already) and get the file system ID
c] Edit the storageclass.yaml file and update the value of the fileSystemId key with your file system ID
d] Now run the kubectl commands

$ kubectl create -f storageclass.yaml
storageclass.storage.k8s.io/efs-sc created
$ kubectl get sc
NAME            RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
efs-sc          Delete          Immediate              false                  5s
gp2 (default)   Delete          WaitForFirstConsumer   false                  25d
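For reference, the storageclass.yaml used for dynamic provisioning looks roughly like this. The fileSystemId below is a placeholder you must replace with your own, and provisioningMode: efs-ap tells the driver to create an EFS access point per volume:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder: use your own file system ID
  directoryPerms: "700"
```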

Awesome, we have now successfully launched an EKS cluster and installed the EFS CSI driver on it.

Test automatic provisioning

Let's start by creating an index.php file as part of a ConfigMap; name it configmap_php.yaml.

apiVersion: v1
kind: ConfigMap
metadata:
  name: php-file
data:
  index.php: |
    <?php
    $myfile = fopen("/check/data.txt", "a");
    $i = 1;
    while ($i < 10) {
      $new = date("Y-M-D h:i:s A");
      echo $new."\n";
      $current = $new.PHP_EOL;
      fwrite($myfile, $current);
      sleep(1);
      $i++;
    }
    fclose($myfile);

As you can see, I am creating a file named data.txt in the /check folder and then looping 9 times to write data to it. Run:

$ kubectl create -f configmap_php.yaml

Let's create the PHP pod along with the PVC; name it php-app.yaml. As you can see, first we create a PersistentVolumeClaim and then mount the volume at /check, as used by our ConfigMap.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim-for-php
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: php-app-new
spec:
  containers:
    - name: php-app-new
      image: php:7.2-apache
      volumeMounts:
        - name: web-file
          mountPath: /var/www/html/index.php
          subPath: index.php
        - name: persistent-storage
          mountPath: /check
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim-for-php
    - name: web-file
      configMap:
        name: php-file

Once you are ready, run this:

$ kubectl create -f php-app.yaml
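Before testing, it's worth checking that dynamic provisioning worked: the claim should show as Bound (with a dynamically created PersistentVolume behind it) and the pod should reach Running once the EFS volume mounts.

```shell
# The PVC should report STATUS Bound
kubectl get pvc efs-claim-for-php

# The pod should report STATUS Running
kubectl get pod php-app-new
```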

Note: You don't really have to use a ConfigMap for the index.php file; once the pod is launched, you can also copy the index.php file to /var/www/html/, which will also do the trick. Make sure to modify the php-app.yaml file accordingly if you are doing that, and then run the command below:

 $ kubectl cp index.php php-app-new:/var/www/html


The last step is to test whether things are working fine and the pod is able to write data to EFS. When you run the curl command below, please wait for the output; it may take about 9 seconds to display.

If you would like to see the data being appended live, run the tail command in a different terminal and execute the curl against localhost in parallel.

$ kubectl exec -it php-app-new -- curl localhost
2024-Mar-Fri 09:31:12 PM
2024-Mar-Fri 09:31:13 PM
2024-Mar-Fri 09:31:14 PM
2024-Mar-Fri 09:31:15 PM
2024-Mar-Fri 09:31:16 PM
2024-Mar-Fri 09:31:17 PM
2024-Mar-Fri 09:31:18 PM
2024-Mar-Fri 09:31:19 PM
2024-Mar-Fri 09:31:20 PM

It looks good, and PHP is able to write data to the file.

To confirm this, run the exec command to tail the file:

$ kubectl exec -it php-app-new -- tail -f  /check/data.txt
2024-Mar-Fri 09:31:12 PM
2024-Mar-Fri 09:31:13 PM
2024-Mar-Fri 09:31:14 PM
2024-Mar-Fri 09:31:15 PM
2024-Mar-Fri 09:31:16 PM
2024-Mar-Fri 09:31:17 PM
2024-Mar-Fri 09:31:18 PM
2024-Mar-Fri 09:31:19 PM
2024-Mar-Fri 09:31:20 PM


If by any chance you see this error in the EFS CSI controller logs:

I0306 22:14:34.672788       1 event.go:298] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"efs-claim", UID:"5e9a97c5-3e63-402d-a71e-2c124d312350", APIVersion:"v1", ResourceVersion:"6586782", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "efs-sc": rpc error: code = Internal desc = Failed to fetch File System info: Describe File System failed: WebIdentityErr: failed to retrieve credentials
caused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity

Please make sure that the IAM role has sufficient permissions, and also verify that the service account used by the efs-csi-controller pods has the correct annotation by running the following command:

$ kubectl describe sa efs-csi-controller-sa -n kube-system
Name:                efs-csi-controller-sa
Namespace:           kube-system
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::xxxx:role/AmazonEKS_EFS_CSI_DriverRole
Image pull secrets:  <none>
Mountable secrets:   efs-csi-controller-sa-token-mtb95
Tokens:              efs-csi-controller-sa-token-mtb95
Events:              <none>

As you can see, the IAM role to be used is annotated on the efs-csi-controller service account. This goes back to our prerequisites section, where we verified that the IAM role and driver trust policy are attached correctly for the EFS add-on.
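If the annotation is missing or wrong, one way to fix it (a sketch; the account ID and role ARN are placeholders) is to annotate the service account and restart the controller so the pods pick up fresh credentials:

```shell
# Add or overwrite the IRSA annotation on the controller's service account
kubectl annotate serviceaccount efs-csi-controller-sa \
  -n kube-system --overwrite \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/AmazonEKS_EFS_CSI_DriverRole

# Restart the controller so it re-reads the projected web identity token
kubectl rollout restart deployment efs-csi-controller -n kube-system
```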
