List contents of an Amazon S3 bucket with Python

In a previous example we saw how to upload a file to Amazon S3 with Python. This time we are going to see how to list the contents of an Amazon S3 bucket with Python. The first thing to remember is that a bucket is a container, a kind of directory: it stores the keys of all the files we have uploaded to Amazon S3.

Everything you need to start working with Python, Amazon S3 and tinys3 is already set up in the example of uploading a file to Amazon S3 with Python.

To list the contents of an Amazon S3 bucket with Python we are going to use the tinys3 library, which you can find at https://github.com/smore-inc/tinys3

This is how we install tinys3:

pip install tinys3

Now we import the library into our file and define the Amazon S3 keys we are going to use:

import tinys3

# Placeholder credentials: replace them with your own access and secret keys.
S3_ACCESS_KEY = 'BAKIBAKI678H67HGA'
S3_SECRET_KEY = '+vpOpILD+E9872AialendX0Ui123CKCKCKw'

Now we connect to Amazon S3 using the Connection class:

conn = tinys3.Connection(S3_ACCESS_KEY, S3_SECRET_KEY, 'the-test', endpoint='s3-eu-west-1.amazonaws.com')

As we can see, this class receives four parameters:

  • Amazon S3 Access Key
  • Amazon S3 Secret Key
  • The bucket to use. In this case we have called it ‘the-test’
  • The Amazon S3 endpoint or region we use

The next step is to use the .list() method to obtain all the keys in the bucket.

If we want to list from the root of the bucket we write:

list = conn.list('')

And if we want to list only a specific "directory" (in S3 a directory is really just a key prefix):

list = conn.list('directory/')
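It helps to keep in mind that S3 has no real directories: a "directory" is just a shared key prefix, and the listing simply filters keys by that prefix. Here is a minimal pure-Python sketch of that prefix matching, using hypothetical key names (no tinys3 call involved):

```python
# Hypothetical keys, as they might come back from a bucket listing.
keys = [
    'report.pdf',
    'images/logo.png',
    'images/banner.jpg',
    'backups/db.sql',
]

def filter_by_prefix(keys, prefix):
    """Mimic how S3 narrows a listing: keep keys starting with the prefix."""
    return [k for k in keys if k.startswith(prefix)]

print(filter_by_prefix(keys, ''))         # every key (the "root" of the bucket)
print(filter_by_prefix(keys, 'images/'))  # only the "images" pseudo-directory
```

An empty prefix matches everything, which is why conn.list('') returns the whole bucket.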

Each element returned by .list() is a dictionary with the meta information of one key. In particular, the 'key' entry gives us the file's key name. This is how we go through all the results with a loop:

for file in list:
    print(file['key'])
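Since each item is a plain dictionary, ordinary dictionary handling applies. As an illustration, here is a sketch that extracts just the file name from each full key; the listing data below is made up for the example, with only the 'key' field (the one the loop above reads):

```python
# Hypothetical entries shaped like those yielded by a bucket listing,
# each carrying the file's key under the 'key' field.
listing = [
    {'key': 'images/logo.png'},
    {'key': 'images/banner.jpg'},
    {'key': 'report.pdf'},
]

# The part after the last '/' is the file name; keys without a slash
# are already bare file names.
filenames = [entry['key'].rsplit('/', 1)[-1] for entry in listing]
print(filenames)  # ['logo.png', 'banner.jpg', 'report.pdf']
```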

And with that, we have listed the contents of an Amazon S3 bucket with Python.