Forem Creators and Builders 🌱

Chris Pluta


Implementing Video Feature

When we adopted Forem, we didn't implement every feature right away. Over time we've been investigating how to enable particular features and what we want to use.

We came across the Video Upload feature and thought it might have a place in our Forem community, Lets Build. Several members use YouTube for their tutorials. However, some are submitting regular dev logs to document their journey. Sometimes someone does something interesting and just wants to record it. So that's what prompted us to figure out this piece of functionality and see how our members adopted it. 😄

We started looking at the configuration to discover what utilities we needed. The main three settings are:

  • AWS_S3_VIDEO_ID
  • AWS_S3_VIDEO_KEY
  • AWS_S3_INPUT_BUCKET

When these are configured, they are used in s3_direct_upload.rb. The first thing you notice in the comment at the top of that file is that the team is unaware of any other community having this feature working. Well, that didn't stop us! 😄

Let's dive into what it took to get this running and how you could too.


I started taking notes after the fact, so I apologize for anything missing. Please drop a comment and I will try to fill in the gaps as best I can.

Additionally, not everything will have the best security configuration. However, being more open allows more things to work when adding something new. 👍

Required Services

As the comment leads you to believe, you will need a few AWS services. These services are: S3, Lambda, and CloudFront.

S3 is for file storage, Lambda is to transcode the file, and CloudFront is to serve the video to the browser.

Why all the hassle?

From what I gathered, AWS already offers a service to help transcode your files: Elastic Transcoder. However, it charges $0.03 per minute of video processed, and Lambda is a lot cheaper. So if cost isn't much of an issue, I'm sure you can try Elastic Transcoder instead; let me know how it goes!

Initial S3 Setup

The first thing we need to set up is our S3 buckets. We need two of them. The reason for this is that once we get to our Lambda step, our transcode function will be triggered after a file has been uploaded. When this occurs, we process the file into two other kinds of files that need to be uploaded back to a bucket. If we choose the same bucket, that upload will re-trigger the process, and at that point we might as well start this section over again. 😄

To avoid our infinite upload loop we need to make two buckets. Let's call them forem-video and forem-video-input.
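As an extra safeguard against the loop, the handler could also refuse events that didn't come from the input bucket. This is a hypothetical sketch (not part of the upstream example code), assuming the standard S3 notification event shape:

```javascript
// Hypothetical loop guard: only process uploads that landed in the input
// bucket, so files written back to forem-video can never re-trigger us.
function shouldProcess(event) {
  const record = event.Records && event.Records[0];
  if (!record) return false;
  return record.s3.bucket.name === 'forem-video-input';
}

// Example S3 notification events:
const fromInput = { Records: [{ s3: { bucket: { name: 'forem-video-input' }, object: { key: 'my-video.mp4' } } }] };
const fromOutput = { Records: [{ s3: { bucket: { name: 'forem-video' }, object: { key: 'out.m3u8' } } }] };
```

With the two-bucket setup this check is redundant, but it makes the intent explicit in code.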

To set up a bucket, start from the AWS console home screen and search for the S3 service. Once found, click it.

Click Create bucket + and let's get into the wizard to set up our bucket. We will do this process twice, once for forem-video and again for forem-video-input.

Set the name to one of the two names above. Since we are a US-based Forem, we want to pick a US region. If you're picking an Eastern region, it is important to pick Virginia and NOT Ohio. The reason is that the two regions support different signature hashes. If you dive into the s3_direct_upload dependency, you will see the signature uses SHA1; Ohio only supports SHA256.

You can leave options at default and click Next.

On Set permissions, uncheck Block all public access, since we need to allow upload and download of files. Acknowledge the warning, then click Next.

Once everything looks good click Create Bucket.

Repeat a second time for the other bucket and continue on!

Once we have our buckets, we need our security credentials. In AWS you can click your account in the top right and click My Security Credentials.

Go to Access keys and click Create New Access Key.

Now we have enough to fill out our setting variables!

The Access key will go in AWS_S3_VIDEO_ID. The Secret Access Key will go in AWS_S3_VIDEO_KEY. And finally, the bucket name forem-video-input will go in AWS_S3_INPUT_BUCKET.

Setup CloudFront

If we investigate more, we will find the service that creates the video article: article_with_video_creation_service.rb. In the first variable we see a reference to CloudFront. So that seems like a dependency we need. 😄

To do this we need to find CloudFront on AWS using the service search. Once it's found we can click Create Distribution.

Click Getting Started under Web.

The origin name will be the forem-video S3 container.

Because we like to be secure, select HTTPS Only under Viewer Protocol Policy.

Everything else should be good to go! Click Create Distribution.

This will take a moment to process but we should now have our domain name!

Now that we have the pieces of the puzzle for our code base let's get in there and wreck some house!

Make Code Changes

The code changes are relatively simple. There are two changes we need to make to article_with_video_creation_service.rb.

The first change, which we alluded to earlier, is the reference to CloudFront. We now have our own domain, so let's change it to that one!

Second, further down you will notice a reference to dev-to-input-v0. This will be replaced with our new bucket, forem-video-input.

Check this in and deploy it. Now we can try the upload feature... and we get a CORS problem. But everything should look like it's wired up properly if we check the browser's Network tab. In the OPTIONS call we should see our input bucket name, confirming our site configuration took effect.

You will see the progress bar doesn't move at all. Once we setup CORS we should at least be able to upload to our input bucket.

Setup S3 Permissions for Upload

In our forem-video-input bucket we need to allow uploads from our site. To do this go to the bucket and click Permissions.

Click CORS Configuration and paste the following:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>

This will allow any site to upload to this bucket.

Click Save on the top right of the panel.

Next click Permissions and click Everyone. This will open a small slider with some permissions. We want to allow people to write objects. Click Save.

Next let's go to the forem-video bucket. This will be where the Lambda function will be dropping the processed files. We need to make sure files can be uploaded, and the correct permissions are set.

So, go to Permissions on the forem-video bucket.

Click Everyone and make sure List objects, Write objects, and Write bucket permissions are all checked. Click Save.

Now we can go back to our site and try an upload.

If everything is configured correctly so far, we will see a progress bar happen, and see a POST request to videos in our Network tab of our browser!

We can verify this by going to our forem-video-input bucket and seeing the uploaded file, and by checking our Dashboard to see we have a pending video post.

Woo! Progress!

As I mentioned earlier, if you get past the CORS issue but hit a signature issue, double-check that the region you are using supports SHA1 signatures. If you're in US East, make sure to pick Virginia and not Ohio. I am unclear about other regions.

Setup Lambda

Now that we got through all the initial setup and the file is uploaded where we need it, we can figure out how to transcode it. This is where you're going to see my inexperience with this technology show, but it definitely works, just not the same way DEV does it.

Going back to article_with_video_creation_service.rb, you will see there are two file extensions. The transcode process must at least produce files with the extensions .png and .m3u8.

The main question is how do you generate these? There seem to be a couple of methods, but I found ffmpeg, which can be run as a process to transcode a file. So I looked into how to do this in Lambda. It turns out there is a feature called a Layer where you can load tools like this to be used in your code.

Just like our other services, find Lambda. Go to Functions and click Create Function. You will get prompted with three options. Choose Browse serverless app repository and search for ffmpeg.

There should be a result called ffmpeg-lambda-layer. Choose this and click Deploy.

Awesome! Now we have the tool ready for us to use. We could have uploaded this dependency ourselves, but I personally didn't want to redo something someone else had already done. 😄

Now that we have our layer, we can make our function. So again, let's click Create Function, but this time select Author from scratch. Set your function name to something like video-conversion and set the runtime to Node 12.x (currently the latest). Next, expand Choose or create an execution role and make sure Create a new role with basic Lambda permissions is selected. Click Create function.

Let's get ready to roll!

Setup Layer

Before we continue on with adding a layer, we need the ARN of the layer we added via the repo. To do this go to the left navigation and select Layers under Additional Resources. Copy the Version ARN for ffmpeg.

Under Configuration you will see a fun diagram. It has your function name and below it, it says Layers.

Select Layers.

Once you do this a few more options become available. Scroll down and you will see Add layer.

Click Add layer. Click Specify an ARN and paste the Version ARN we copied from the Layers in the left navigation. Click Add.

Now we are set up to use ffmpeg.

Setup Process

Checking out our layer from the repo, there is a GitHub repository with some example code to start with. I love open source. We can click our function name in the Designer to see our code editor.

Make three new files with the same file names and copy the contents from the example/src folder over to the editor. This will be our base.

Most of our work will be in index.js. Let's take this in pieces... and butcher it along the way. 👍

Generate png

So instead of using any process variables, I hard coded the snot out of everything in the function. You know, for science.

The first thing to notice is that the example is already generating a thumbnail png for us! So why not just use it?

So, let's make sure things look like the following in the beginning of the file.

const s3Util = require('./s3-util'),
    childProcessPromise = require('./child-process-promise'),
    path = require('path'),
    os = require('os'),
    EXTENSION = '.png',            // thumbnail output extension
    THUMB_WIDTH = 616,             // hard-coded thumbnail width
    OUTPUT_BUCKET = 'forem-video', // the bucket CloudFront serves from
    MIME_TYPE = 'image/png';       // content type for the thumbnail upload

Setup Permissions

Before we get to testing, we need to set up some permissions on the Lambda to give it access to our forem-video S3 bucket. To do this, go to Permissions in the Lambda function.

In the execution role click the role name for the function. This will launch a new tab.

In the spirit of "let's get this to work" we will be over-zealous with our permissions.

Make sure the permissions tab is selected and click Add Inline Policy. Choose the S3 service, All Actions. Under Resources, click Add ARN for bucket. Set the name to be forem-video/*. Click Add.

Click Review Policy. Give it a name and click Create Policy.

Make files public for viewing

In s3-util.js there is a parameter for ACL set to private. Change this to public-read. This will allow the file to be read once uploaded.

When you test the transcoding process, you may get an ACCESS DENIED error if the ACL isn't right. If you don't want to change the ACL now, you can keep it private and the transcode will still work; however, you will then need to manually mark the files as public before the site can read them.
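For reference, the upload parameters end up shaped like this after the change. The helper below is hypothetical (the example repo passes these fields directly to the AWS SDK's upload call); it only builds the parameter object so the one-line ACL change is easy to see:

```javascript
// Hypothetical helper showing the S3 upload params after the ACL change:
// 'public-read' replaces the example's 'private' so browsers can fetch files.
function buildUploadParams(bucket, key, body, contentType) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    ContentType: contentType,
    ACL: 'public-read' // was 'private' in the example source
  };
}
```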

Trial and Error

First, we need to set up a test scenario. To do this, click the dropdown next to Test. Click Configure test events and make sure Create new test event is selected. Next, we need to walk through the event to make sure it's configured.

Looking at index.js, we only need to set up a handful of elements. These are the bucket name and object.key.

When we uploaded our test video earlier, we want to grab that file's name from the forem-video-input bucket and place it in the object.key property. As for the bucket name, it should be forem-video-input, since this is where the file is read from.
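Put together, the test event only needs the S3 notification skeleton; something like the following, where my-video.mp4 is a stand-in for whatever file you uploaded:

```javascript
// Minimal S3 test event: only the fields the example index.js reads are filled in.
const testEvent = {
  Records: [
    {
      s3: {
        bucket: { name: 'forem-video-input' },
        object: { key: 'my-video.mp4' } // the file you uploaded earlier
      }
    }
  ]
};
```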

Once these are set click Save.

Make sure this value is now selected to the left of the Test button. Once confirmed click Test.

We should see a success! If not check to make sure permissions are configured properly.

Go to the forem-video bucket and you should now see a newly created .png file in there. Now, if you recall article_with_video_creation_service.rb, the site expects the thumbnail embedded in a folder with the same name as the video, with a different file name.

To fix that we need to change the upload line to change the file name to look something more like the following:

    return s3Util.downloadFileFromS3(inputBucket, key, inputFile)
        .then(() => childProcessPromise.spawn(
            '/opt/bin/ffmpeg',
            ['-loglevel', 'error', '-y', '-i', inputFile, '-vf', `thumbnail,scale=${THUMB_WIDTH}:-1`, '-frames:v', '1', outputFile],
            { env: process.env, cwd: workdir }
        ))
        .then(() => s3Util.uploadFileToS3(OUTPUT_BUCKET, key + "/" + "thumbs-" + key + "-00001.png", outputFile, MIME_TYPE))

Generate m3u8

Now that we have a pattern in place and things are looking good let's setup the .m3u8 file.

First let's set up a variable for our output file name alongside the other output-name variables:

outputVideoFile = path.join(workdir, id + '.m3u8')

Once that's setup we can run our ffmpeg command. The following should be appended after the upload of the .png file:

        .then(() => childProcessPromise.spawn(
            '/opt/bin/ffmpeg',
            ['-loglevel', 'error', '-y', '-i', inputFile, '-profile:v', 'baseline', '-level', '3.0', '-s', '640x360', '-start_number', '0', '-hls_time', '10', '-hls_list_size', '0', '-f', 'hls', outputVideoFile],
            { env: process.env, cwd: workdir }
        ))
        .then(() => s3Util.uploadFileToS3(OUTPUT_BUCKET, key + "/" + key + ".m3u8", outputVideoFile, 'application/x-mpegURL'))

I found this command from here.

This will generate and upload our .m3u8 file! Let's try this out and see how the site works now!

If we check out our site, the video won't play; it's missing a bunch of .ts files. These files were generated when the .m3u8 file was created, so they all exist in temporary storage!

Generate ts files

Since these files exist in temporary storage when the .m3u8 file is created, we just need to make sure to upload them!

So, we need to add a new require for the file system module to find the file names.

Add the following to the initialization at the top of the script.

    fs = require('fs'), 

Now that we can traverse our temporary directory, we can get a list of all the file names to be uploaded.

To make this change, we need a list of promises to be processed before the function completes.

We can replace the .then for the .m3u8 upload with the following:

        .then(() => {
            var promises = [];
            // Upload every .ts segment ffmpeg left in the temp directory.
            fs.readdirSync(workdir).forEach(file => {
                if (file.endsWith('.ts')) {
                    promises.push(s3Util.uploadFileToS3(OUTPUT_BUCKET, key + "/" + file, path.join(workdir, file), 'video/MP2T'));
                }
            });

            // Upload the playlist itself alongside the segments.
            promises.push(s3Util.uploadFileToS3(OUTPUT_BUCKET, key + "/" + key + ".m3u8", outputVideoFile, 'application/x-mpegURL'));

            return Promise.all(promises);
        })

We can do another test and we should now have everything we need!

Once it finishes, we can check out our article and we should now see our video ready to play!

Send Finalize request

Things are looking up! Now we don't want to manually mark a video as being transcoded every time.

So, let's utilize the video_states endpoint to mark it complete for us.

The first thing is that we need a secret defined for a user in the system. I don't know if there is a way to generate this from the front-end, but I achieved it by going into the database, finding our system user, and setting the secret there. This is plain text, so it can be whatever you want/need it to be.

Next up let's make our final modifications to our lambda function.

Back to security: we want to make sure we are using HTTPS. So, just like we added the file system module, we want to require https.

This will be added to the top of the file with the other requires:

https = require('https'), 

Now we need to add another .then at the end, after the generation of the .m3u8 file.

        .then(() => {
            return new Promise((resolve, reject) => {
                // NOTE: the inner field names here are an assumption based on
                // what the video_states controller parses out of Message.
                const postData = JSON.stringify({
                    Message: JSON.stringify({ input: { key: key } })
                });

                const options = {
                    hostname: '', // your Forem domain goes here
                    path: '/video_states',
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                        'Content-Length': Buffer.byteLength(postData)
                    }
                };

                const req = https.request(options, (res) => {
                    let statusCode = res.statusCode;

                    console.log(`STATUS: ${res.statusCode}`);
                    console.log(`HEADERS: ${JSON.stringify(res.headers)}`);
                    res.on('data', (chunk) => {
                        console.log(`BODY: ${chunk}`);
                    });
                    res.on('end', () => {
                        console.log('No more data in response.');
                    });

                    if (statusCode >= 400 && statusCode <= 500) {
                        reject("no dice");
                    } else {
                        resolve();
                    }
                });

                req.on('error', (e) => {
                    console.error(`problem with request: ${e.message}`);
                    reject(e);
                });

                // Write data to request body
                req.write(postData);
                req.end();
            });
        })

This solution was derived from here and modified around a promise and some error handling.

As you may notice, in the JSON body the Message is a string. In the video_states controller it is parsed and read to find the article with the video.
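The double encoding is worth spelling out: the outer body is JSON, and its Message field is itself a JSON string, which the controller parses again. The inner field names below are an assumption based on the description above, with my-video.mp4 standing in for the uploaded key:

```javascript
// Outer body: JSON. Inner Message: a JSON *string* the controller re-parses.
const innerMessage = JSON.stringify({ input: { key: 'my-video.mp4' } }); // field names assumed
const postData = JSON.stringify({ Message: innerMessage });

// The controller side effectively does the reverse:
const parsed = JSON.parse(JSON.parse(postData).Message);
```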

We can try our file one more time and see if we get an email and the transcode is marked as completed!

Setup Trigger

Now that everything is in place our last thing to do is setup our trigger!

To do this check out the Designer and in the lower left click + Add Trigger. Select S3, bucket name is forem-video-input, the event type is POST.

Make sure the trigger is enabled, and acknowledge once you guarantee the buckets won't point at each other. 😄

Click Add and let's get back to the site to try it out!

Try it out for real!

Upload your file and wait for it to transcode. You've tested the file a few times to know how long to wait, right? 😆

You should see an email come through, then you know your trigger worked and everything is good to go!

How did I figure this out?

The first thing was finding the site variables required. Once that was figured out, it was a matter of identifying the key pieces of the puzzle. Most of this was uploading a file and checking out the network traffic: first the upload itself, then how the video was loaded. Looking at the network traffic when playing a video on DEV, I know we didn't do it the same way. So there are definitely optimizations or a cleaner way to do what we did. However, this is definitely a way to get it done, and a good first step to make improvements later. 😄

Good luck and hope you can get your video upload feature running as well.

Update - Missing CloudFront Policy

Once we went live with this feature, it worked for some and not for others. What we found was a CORS error when trying to watch the video. It worked on my phone and for the person that uploaded the video, but did not work in any browser on my computer.

So what was missed?

Turns out when you configure CloudFront there are some headers that need to be whitelisted. How do we do this?

When you view your CloudFront distribution, go to your Behaviors and edit the existing behavior.

Go down to the Cache Policy and Create a new policy. Set the Name and Comment.

Next scroll down to the Headers section and choose Whitelist.
Add the following headers:

  • Origin
  • Access-Control-Request-Method
  • Access-Control-Request-Headers

Everything else can stay as-is; click Create cache policy.

Now that the policy is created you will have a new value for the Cache Policy. Pick your value then pick Yes, Edit.

Now that this is set, you need to wait 5-20 minutes for the policy to take effect before trying it out!

Thanks for re-tuning in for the update! Good luck!

Top comments (9)

lee

Hey Chris. How did you get past the CORS issue? Did you need to specify an origin request policy in CloudFront? I have all the CORS policies set up on the S3 buckets directly and I am still getting the CORS error. Cheers!


Chris Pluta

Is this on the upload or download?

Also you reminded me that I need to write an update about a policy change we needed to do when downloading the file. It was inconsistent in who it worked for.

The missing step is in CloudFront: under Behaviors you need to create a new Caching Policy. We need this to whitelist some headers for origin.

Everything can be default but you need to whitelist the following headers:

  • Origin
  • Access-Control-Request-Method
  • Access-Control-Request-Headers

Once you do that, make sure it's set as the policy and save it. It will take somewhere between 5-20 minutes until it takes effect.

If you're still having issues please reach out again!

lee

It was on upload. Great I'll give those allowed headers a shot.

Chris Pluta

If it was on upload, check your Network tab and look at the response. What we got was an error that HMAC-SHA1 was not supported.

The reason for that was the region's support. We thought we could use us-east-2, but for upload it needed to be us-east-1, since that supports SHA1. If you really want to get into the code, it's in the s3_direct_upload dependency, under the signature method, which is hardcoded to SHA1.

If your region does support SHA1, then make sure your CORS policy is set on your S3 bucket with the configuration defined under Setup S3 Permissions for Upload.

Ben Halpern

This is really great. More thorough than anything we even have documented ourselves!

lee

Hey Ben, would you recommend implementing this until an updated video solution is released?

lee

Thanks for sharing this. Really appreciated

Yash Dave

Oh wow! This is exactly what I was looking for to add video uploads to my local instance for a bug fix I am working on. Thanks for the article!

yotaphae

There is a typo in the link ""