Connect Data from Facebook Ads to Amazon S3
This quick guide walks you through extracting your data from Facebook Ads and loading it into Amazon S3 for analysis.
Facebook is the world's most popular social network, equipped with a powerful advertising network on which billions of dollars are spent each year. Marketers love using Facebook Ads to reach their target audience at scale.
Amazon Simple Storage Service (popularly known as Amazon S3) provides IT teams and developers with durable, secure, and highly scalable object storage. Offered by Amazon Web Services, Amazon S3 is built to store and retrieve any amount of data from a variety of sources, such as websites and mobile applications, corporate apps, and data from IoT devices and sensors. Amazon S3 is easy to use and has a simple interface.
Before loading your data into Amazon S3, you will have to prep it first. If you don't already have a data structure in which to store the data you retrieve, you'll have to create a schema for your data tables. Then, for each value in the response, you'll need to identify a predefined datatype (INTEGER, DATETIME, etc.) and build a table that can receive them.
The Facebook Ads documentation should tell you what fields each endpoint provides, along with their corresponding datatypes. Complicating things is the fact that the records retrieved from the source may not always be "flat" – some of the fields may actually be lists. In these cases, you'll likely have to create additional tables to capture the variable number of items in each record.
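As a rough illustration of that flattening step, the sketch below splits one nested record into a parent row and child rows; the field names (`ad_id`, `actions`) are examples, not a full map of the real Insights schema.

```python
# Sketch: splitting a nested API record into "flat" rows.
# List-valued fields (e.g. "actions") become child tables that
# point back to the parent row via a foreign key.

def flatten_record(record, list_fields=("actions",)):
    """Return a parent row plus one list of child rows per list field."""
    parent = {k: v for k, v in record.items() if k not in list_fields}
    children = {}
    for field in list_fields:
        rows = []
        for item in record.get(field, []):
            row = dict(item)
            row["ad_id"] = record.get("ad_id")  # foreign key back to the parent row
            rows.append(row)
        children[field] = rows
    return parent, children
```

Each child table can then be created with its own schema and loaded alongside the parent table.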
You have three options for extracting data from Facebook:
- You can export a report that you have created and saved. Simply navigate to Ads Reporting from the Facebook Ads Manager navigation, click on the report, and then click Export.
- You can pull data programmatically through the Facebook Marketing API. This gives you the most control over which fields you retrieve, but it requires writing and maintaining a script.
- You can use a tool like Improvado. By leveraging Improvado, you can easily and quickly integrate, connect, and see all your Facebook Ads Insights data flow seamlessly into your desired visualization tool or database. Improvado does not require any technical skills to operate. You can sync your data over with just a few clicks.
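For the programmatic route, here is a minimal sketch of an Insights call against the Marketing API. The API version string, the access token, and the chosen fields are placeholders; check the Marketing API documentation for the current version and the permissions your token needs.

```python
# Sketch: requesting Insights data for an ad account via the
# Facebook Marketing API. ACCESS_TOKEN and API_VERSION are
# placeholders -- verify both against the current Marketing API docs.
import requests

API_VERSION = "v19.0"  # assumption: substitute the version you target

def build_insights_request(ad_account_id, access_token,
                           fields=("impressions", "clicks", "spend"),
                           date_preset="last_7d"):
    """Return the URL and query params for an ad-account Insights call."""
    url = f"https://graph.facebook.com/{API_VERSION}/act_{ad_account_id}/insights"
    params = {
        "fields": ",".join(fields),
        "date_preset": date_preset,
        "access_token": access_token,
    }
    return url, params

def fetch_insights(ad_account_id, access_token):
    url, params = build_insights_request(ad_account_id, access_token)
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]
```

The response's `data` array contains one record per ad (or per breakdown), which you can then flatten and load as described above.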
First, create an S3 bucket to hold your files. Sign in to the AWS Management Console, open the Amazon S3 console, and click Create Bucket.
In the Create a Bucket dialog box, type a name in the Bucket Name box. The name must be unique across all of Amazon S3, and bucket names have to comply with certain rules, so see Bucket Restrictions and Limitations in the Amazon Simple Storage Service Developer Guide. Then select a region.
Create your bucket in the same AWS region as your cluster; for example, if your cluster is in the California region, choose California. Then click Create. When Amazon S3 creates the bucket, the console displays your empty bucket in the Buckets panel. Next, create a folder: click the name of your new bucket, click the Actions button, and then click Create Folder in the drop-down list. Once you have your bucket and folder, you can add objects to the bucket. An object can be any type of file – a text file, a photo, a data file, or anything else – and you can encrypt or compress the files before loading them. Now you can upload your data files to your new S3 bucket.
To do so, click the name of your data folder and then click Upload. In the Upload – Select Files wizard, click Add Files to open a file selection dialog box. Choose the files you have downloaded and extracted, click Open, and then click Start Upload.
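The console steps above can also be scripted with boto3, the AWS SDK for Python. This is a sketch under assumptions: the bucket name, folder, and region below are placeholders you would replace with your own.

```python
# Sketch: creating a bucket and uploading files with boto3.
# Bucket names are placeholders and must be globally unique and
# follow the S3 naming rules.
import os

def s3_key_for(folder, filename):
    """S3 has no real folders -- a 'folder' is just a key prefix."""
    return f"{folder.strip('/')}/{filename}"

def upload_files(bucket_name, folder, paths, region="us-west-1"):
    import boto3  # imported here so the key helper above has no AWS dependency

    s3 = boto3.client("s3", region_name=region)
    # Regions other than us-east-1 require an explicit LocationConstraint.
    s3.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    for path in paths:
        key = s3_key_for(folder, os.path.basename(path))
        s3.upload_file(path, bucket_name, key)
```

For example, `upload_files("my-fb-ads-data", "facebook-ads", ["./ads_export.csv"])` would create the bucket and place the file under the `facebook-ads/` prefix.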
Keeping data up to date
If you've made it this far, congrats! You have probably written a program or script to extract your data and move it into Amazon S3.
Now it's time to think about how you will keep this data up to date by loading new or updated records. Of course, you could simply replicate all your data every time your records change, but that would be slow and wasteful.
Luckily there is a better way. The key is building your script so that it can sense incremental updates made to the data.
Thankfully, Facebook's API results include date fields, so you can identify records that are new since your last update (or since the most recent record you copied). Once your script accounts for new data, you can run it as a continuous loop or a cron job to pull down new data as soon as it appears.
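The incremental logic can be as simple as keeping a high-water mark (the newest record date already copied) and filtering on it. The `date_start` field name below is an assumption based on Insights output; adjust it for whichever date field your endpoint returns.

```python
# Sketch: incremental sync with a high-water mark.
# Dates are ISO "YYYY-MM-DD" strings, so plain string comparison
# orders them correctly.

def new_records(records, last_synced):
    """Keep only records newer than the last synced date."""
    return [r for r in records if r["date_start"] > last_synced]

def advance_watermark(records, last_synced):
    """Return the new high-water mark after a sync pass."""
    dates = [r["date_start"] for r in records]
    return max(dates + [last_synced])
```

A cron job would then load `new_records(...)` into S3 on each run and persist the value from `advance_watermark(...)` for the next run.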
The Easiest And Fastest Way To Do It
If all this sounds a bit overwhelming, don't worry -- there is an easier way to get this done!
Fortunately, products like Improvado were developed to move data from Facebook to Amazon S3 automatically. Our connectors quickly and efficiently scale to your business needs and promote agile workflows that require less planning and fewer resources.