Simple Video Processing with FFmpeg and Ruby on Rails: FFmpeg System Usage Limitations

In this article I’m going to describe the possibilities of standard gems for video processing with Ruby on Rails, plus FFmpeg performance testing and optimization tweaks.

I’ll be processing videos in the background with the sidekiq gem.

I could use almost any database for video processing in Rails, but for this project I’m going to use the NoSQL database MongoDB.

All video processing will be done with FFmpeg.

This is the first (introductory) article about video processing with Rails. My second article will cover more advanced things like video filters, reading video metadata, taking screenshots for each second of video, and real-time monitoring of video processing progress.

I plan to write about experiments with video processing filters in my third article.

Also, you can find my sample project on GitHub.

Project Setup

Let’s look at how to set up this project.

1. Install FFmpeg with all options

If you’re running macOS, then you can use homebrew for installation:

brew install ffmpeg --with-fdk-aac --with-frei0r --with-libvo-aacenc --with-libvorbis --with-libvpx --with-opencore-amr --with-openjpeg --with-opus --with-schroedinger --with-theora --with-tools

In this install I’ve included all possible options to avoid potential issues that may appear if an FFmpeg extension isn’t installed.

On Linux OSs like Ubuntu or Debian you might have to install FFmpeg from source. You can use this guide to do that.

2. Install FFmpeg thumbnailer

We’re going to use the ffmpegthumbnailer utility to create video thumbnails. You can install this utility with brew install ffmpegthumbnailer.

3. Install imagemagick

If you decide to work with generated thumbnail images or with uploaded watermark images, then you’ll need to install imagemagick.

4. Install MongoDB and Redis

You’ll need to install MongoDB (brew install mongodb) for your project's database and Redis (brew install redis) as storage for sidekiq job parameters and data.

5. Make sure you have Ruby and Rails

I assume you already have rvm (ruby version manager), Ruby, and Rails installed, so we can likely skip this part. If you don’t have Ruby and Rails installed, install them now. Keep in mind that I’m using Ruby 2.3.0 and Rails 5.0.1 for this project.


1. Install and configure gems

First, create a new Rails project without a default active record, since we’re going to use MongoDB for storage:

rails new video_manipulator --skip-active-record

Next, add all necessary gems to the Gemfile.

Video processing:

gem 'streamio-ffmpeg'

gem 'carrierwave-video', github: 'evgeniy-trebin/carrierwave-video'

gem 'carrierwave-video-thumbnailer', github: '23shortstop/carrierwave-video-thumbnailer'

gem 'carrierwave_backgrounder', github: '23shortstop/carrierwave_backgrounder'

gem 'kaminari-mongoid', '~> 0.1.0'

Data storage:

gem 'mongoid', '~> 6.1.0'

gem 'carrierwave-mongoid', require: 'carrierwave/mongoid'

Background jobs:

gem 'sidekiq'

Website styles:

gem 'materialize-sass'

The following gem will be used for JSON serialization: 

gem 'active_model_serializers', '~> 0.10.0'

As you can see, some of these gems are installed directly from GitHub rather than from RubyGems. In general this isn’t good practice, but these particular forks include fixes and recent changes, while the original gems haven’t been updated in years. You can find out more about configuring these gems here.

The materialize-sass gem is used for styles. You might use some other style bootstrap gem or even create your own HTML markup styling.

Note that due to compatibility with carrierwave_backgrounder, we’re using plain sidekiq workers rather than Rails’s modern Active Job workers.

Therefore, inside config/initializers/carrierwave_backgrounder.rb, sidekiq is configured as the backend for video processing and uploading queues:

CarrierWave::Backgrounder.configure do |c|
  c.backend :sidekiq, queue: :carrierwave
end


For the mongoid gem we just generate its generic configuration:

rails g mongoid:config

This is enough for our demo research app. Of course, for commercial apps you shouldn’t store the database configuration file in the project's repository, since it might contain database connection credentials. Also keep in mind that by default MongoDB doesn't require a password for database connections, so in production you have to set one up as well.

2. Generate a basic scaffold for videos

Since we’re just making a demo application for processing videos with Rails, we’re not going to add authorization and users.

Let’s generate a scaffold for videos with this command:

rails generate scaffold Video title:string file:string file_tmp:string file_processing:boolean watermark_image:string

In order to store and process files in the background, our model requires the following attributes: 

file_tmp:string – stores uploaded files temporarily

file_processing:boolean – lets us check whether processing has finished; updated by the carrierwave_backgrounder gem
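As a quick illustration of how the file_processing flag can be used, here’s a minimal, hypothetical helper (the method name and return symbols are my own, not part of the generated scaffold):

```ruby
# Hypothetical helper: map the state that carrierwave_backgrounder maintains
# to what the video page should render.
def video_display_state(file_processing, file_url)
  return :processing if file_processing  # background job still running
  return :missing if file_url.nil?       # nothing uploaded or stored yet
  :ready                                 # processed file is available
end
```

In the show view you would render a "processing" notice for :processing and the player for :ready.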

Generate video uploader

To generate our video uploader, run the command: rails generate uploader Video

Let's include all necessary modules from previously installed gems into our newly generated VideoUploader: 

 # Store and process video in the background

 include ::CarrierWave::Backgrounder::Delay

 # Use carrierwave-video gem's methods here

 include ::CarrierWave::Video

 # Use carrierwave-video-thumbnailer gem's methods here

 include ::CarrierWave::Video::Thumbnailer

Then we’ll define processing parameters for video encoding: 


 PROCESSED_DEFAULTS = {
   resolution:           '500x400', # desired video resolution; by default the height-based aspect ratio is preserved (preserve_aspect_ratio: :height)
   video_codec:          'libx264', # H.264/MPEG-4 AVC video codec
   constant_rate_factor: '30',      # CRF quality setting for libx264 (lower means better quality)
   frame_rate:           '25',      # frame rate
   audio_codec:          'aac',     # AAC audio codec
   audio_bitrate:        '64k',     # audio bitrate
   audio_sample_rate:    '44100'    # audio sampling frequency
 }.freeze


You can also define acceptable video file formats: 

 def extension_whitelist
   %w[mov mp4 3gp mkv webm m4v avi]
 end


Next, you have to specify where uploaded files should be stored: 

 def store_dir
   # CarrierWave generator's default location; adjust to taste
   "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
 end

The main part of VideoUploader is the encoding operation. We don’t need to keep the original file, which is why processing is forced over the original file: when processing finishes, the processed file replaces the original. This also ensures that thumbnails are generated from the processed file.

During processing we use the :watermark option from the carrierwave-video gem to overlay a user-provided image from the watermark_image field in the Video model.

Here’s the main video processing code: 

 process encode: [:mp4, PROCESSED_DEFAULTS]

 def encode(format, opts = {})
   encode_video(format, opts) do |_, params|
     if model.watermark_image.path.present?
       params[:watermark] ||= {}
       params[:watermark][:path] = model.watermark_image.path
     end
   end
 end



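Under the hood, the :watermark option translates into FFmpeg's overlay filter. As a rough illustration of the filtergraph involved (this is a conceptual sketch, not the exact string carrierwave-video generates):

```ruby
# Conceptual sketch of the overlay filtergraph FFmpeg runs for watermarking.
# The helper name and default position are illustrative only.
def overlay_filtergraph(watermark_path, x = 10, y = 10)
  # Load the watermark image as a second input stream, then composite it
  # over the main video at the given x:y offset.
  "movie=#{watermark_path} [wm]; [in][wm] overlay=#{x}:#{y} [out]"
end
```

A string like this is what ends up after FFmpeg's -vf flag when a watermark is applied.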

In our case, this code also uses the carrierwave-video-thumbnailer gem to generate a video thumbnail image from the middle of the file. Note that there’s code that ensures the proper content type for generated thumbnails, since by default the video file’s content type and extension would be used.

Thumbnails are processed as a different version of the uploaded attachment with the following version definition:  

 version :thumb do
   process thumbnail: [
     { format: 'png', quality: 10, size: 200, seek: '50%', logger: Rails.logger }
   ]

   def full_filename(for_file)
     png_name for_file, version_name
   end

   process :apply_png_content_type
 end

 def png_name(for_file, version_name)
   # Standard CarrierWave recipe: prefix the version name and force a .png extension
   %(#{version_name}_#{for_file.chomp(File.extname(for_file))}.png)
 end

 def apply_png_content_type(*)
   file.instance_variable_set(:@content_type, 'image/png')
 end

You can read more about this solution here.

Inside the Video model we have to define all necessary attributes: 

 # file_tmp is used for temporarily saving files while

 # they’re processed and stored in the background by carrierwave_backgrounder gem

 field :file_tmp, type: String

 # file_processing attribute is managed by carrierwave_backgrounder

 # it contains an indicator of uploaded file state: being processed or not

 field :file_processing, type: Boolean

We also have to define mount_uploader for the file attribute that uses VideoUploader: 

 # mount_on is specified here because without it the gem

 # would name a filename attribute as file_filename

 # In some cases this is logical, but in our case it’s strange to have

 # a file_filename attribute inside the videos table

 mount_uploader :file, ::VideoUploader, mount_on: :file

Here we specify that we want our attachment to be processed and stored in the background:

 process_in_background :file

 store_in_background :file

 validates_presence_of :file

This is especially useful when the file is processed and stored in external storage like Amazon S3. Uploading and processing can take a long time, so it's good to run them in the background; otherwise users would have to wait until these tasks finished, which might also exceed the server's request timeout.
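One common pattern built on this flag is a lightweight status endpoint the client can poll. Here’s a hypothetical payload builder (the field names are invented for this sketch; in the app you’d likely build this with active_model_serializers instead):

```ruby
# Hypothetical JSON payload for a polling endpoint: expose the processed
# file URL only once background processing has finished.
def video_status_payload(id, file_processing, file_url)
  {
    id: id,
    processing: !!file_processing,
    url: file_processing ? nil : file_url
  }
end
```

The client polls until processing is false, then switches to the returned URL.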

We also have to add an uploader mount for a watermark image: 

 # Same as above. We don’t want to have watermark_image_filename attribute

 mount_uploader :watermark_image, ::ImageUploader, mount_on: :watermark_image

Let's add the uploader for our watermark_image. To do so, run the command rails generate uploader Image. Here’s our image uploader:

class ImageUploader < CarrierWave::Uploader::Base
  storage :file

  def store_dir
    # CarrierWave generator's default location
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def extension_whitelist
    %w[jpg jpeg gif png]
  end
end

Don’t forget to add the sidekiq configuration to config/sidekiq.yml:

:verbose: true

:pidfile: ./tmp/pids/

:logfile: ./log/sidekiq.log

:concurrency: 3

:queues:

 - carrierwave

The part of this config we care about most is the queues key, since carrierwave is the queue where all video processing tasks are placed.

We’re not going to cover the HTML views part. In most cases, the generated scaffold should be enough. The only thing that you might need to add is the watermark_image attribute in the video creation form.

This is what the video uploading form might look like without any additional markup: 

<%= form_for(video) do |f| %>

 <%= f.text_field :title, placeholder: 'Title' %>

 <%= f.file_field :file, placeholder: 'Video File' %>

 <%= f.file_field :watermark_image, placeholder: 'Watermark Image' %>

 <%= f.submit %>

<% end %>

Then in the video details view you can use the video_tag Rails helper method to display an HTML5 video player: 

<%= video_tag(video.file.url, controls: true) if video.file.url.present? %>

Now you just have to run a background job: 

bundle exec sidekiq -C config/sidekiq.yml

and your server: rails s

That’s it! You have a simple video processing server that was set up in 15 minutes.

System Resource Usage Control

FFmpeg video processing is a heavy operation that usually takes almost 100% of server CPU resources. This might slow down some more important operations like web servers and databases.

There are a few solutions that can help you deal with this overload: you can limit FFmpeg CPU usage with Linux system tools and FFmpeg configuration flags, run processing inside a Docker container with limited resources, or split video processing into a service that runs on a different machine than the primary server.

Also note that in this section we’re using video and audio filters, which will be covered in the second part of this article.

1. Limit CPU usage with system tools

You can alter FFmpeg system priority with the Linux nice command during process startup, or alter the priority of the already running process with renice.

With nice, the highest priority is -20 and the lowest possible priority is 19. Setting a nice priority can help you save CPU cycles for more important processes.
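If you shell out to FFmpeg yourself, prefixing the command with nice is straightforward. A small Ruby sketch (the helper name and the niceness value of 10 are just examples):

```ruby
# Prefix an arbitrary command with nice. A positive niceness lowers the
# priority so web server and database processes are scheduled first.
def niced(command, niceness: 10)
  ['nice', '-n', niceness.to_s, *command]
end

# Usage, e.g.:
#   system(*niced(%w[ffmpeg -i in.mp4 out.mp4]))
```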

Next, you can use cpulimit when you want to ensure that a process doesn't use more than a certain portion of the CPU. Its disadvantage compared to nice is that your process can't use all of the available CPU time even when the system is idle.

Here’s my CPU load with the system idle – no FFmpeg or any other heavy operations running.

A small 15-second video clip with one video filter and one audio filter was processed on my 4-core Macbook Air:

time ffmpeg -i test_video.mp4 -vf frei0r=vertigo:0.2 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 video_with_filters.mp4

93.45s user 0.88s system 333% cpu 28.285 total – the clip was processed in 28 seconds at 100% CPU load. This is what my system usage looks like with no limitations.

With a 50% CPU limit:

time cpulimit --limit 50 ffmpeg -i test_video.mp4 -vf frei0r=vertigo:0.2 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 video_with_filters_cpulimit_50.mp4

103.44s user 1.57s system 50% cpu 3:26.60 total – this is what my system usage looked like with a 50% CPU limitation.

FFmpeg also has a -threads option that limits the number of threads (CPU cores) used. The recommended value is the total number of CPU cores minus one or two; the default is all available cores. Note that this option should always be added as the last parameter, right before the output file name.

time ffmpeg -i test_video.mp4 -vf frei0r=vertigo:0.2 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 -threads 1 video_with_filters_one_thread.mp4

60.02s user 1.48s system 99% cpu 1:02.04 total

It took one minute to complete this operation. The CPU load wasn’t as bad either, since FFmpeg was running on a single thread.

Read more about restricting Linux system resources here.

You can also use all these options simultaneously.

If you’re using the same gems as in this example app – streamio-ffmpeg and carrierwave-video – then you can specify the -threads parameter as a custom option, the same way custom options were used for filters in our application.
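Assuming the gems forward streamio-ffmpeg's custom option (an array of raw flags appended to the generated command), the merge might look like this sketch (PROCESSED_DEFAULTS is trimmed here for brevity):

```ruby
# Sketch: appending -threads via streamio-ffmpeg's `custom` passthrough.
# PROCESSED_DEFAULTS is a trimmed stand-in for the uploader's constant.
PROCESSED_DEFAULTS = {
  video_codec: 'libx264',
  audio_codec: 'aac'
}.freeze

# The `custom` array is appended verbatim to the ffmpeg command line.
opts = PROCESSED_DEFAULTS.merge(custom: %w[-threads 1])

# In the uploader this would then be used as:
#   process encode: [:mp4, opts]
```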

In order to use the nice and cpulimit commands with the streamio-ffmpeg gem, you have to alter the default FFmpeg command path inside the initializer file.

The original streamio-ffmpeg gem has a check that verifies the FFmpeg binary is executable.

This check won’t let you set the binary path to a compound command like /usr/local/bin/cpulimit --limit 50 /usr/local/bin/ffmpeg.

So there are two ways to work around this: redefine the FFMPEG.ffmpeg_binary=(bin) method to remove the check, or create an executable bash script. Since monkeypatching a gem's code is bad practice, let's create a bash script with our code:

#!/bin/bash
/usr/local/bin/cpulimit --limit 50 /usr/local/bin/ffmpeg "$@"

This "$@" parameter passes all .sh script arguments to the FFmpeg command.

Let's put this script into the project's root folder as, say, ffmpeg.sh (the name is up to you). Don’t forget to make it executable:

chmod +x ffmpeg.sh

Now you can set the path to your script at initialization (assuming you named it ffmpeg.sh):

FFMPEG.ffmpeg_binary = "#{::Rails.root}/ffmpeg.sh"

The current sidekiq configuration (config/sidekiq.yml) allows three tasks to be processed simultaneously. If you also want to limit system load, set concurrency to 1 so that only one processing task runs at a time:

:verbose: true

:pidfile: ./tmp/pids/

:logfile: ./log/sidekiq.log

:concurrency: 1

:queues:

 - carrierwave

2. FFmpeg inside Docker

Another option is to run FFmpeg commands inside Docker. This makes it possible to limit how many resources a single FFmpeg instance may consume, and it gives you control not only over CPU usage but also over memory and I/O.

Let's try it with a predefined image from GitHub.

I’ve altered the command to avoid using the frei0r filter, since this FFmpeg image doesn't include the frei0r plugin:

time docker run --rm -v `pwd`:/tmp/workdir -w="/tmp/workdir" jrottenberg/ffmpeg -i test_video.mp4 -filter_complex colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 test_video_docker.mp4

0.02s user 0.02s system 0% cpu 33.632 total – the operation completed in 33 seconds. (The near-zero user time belongs to the docker client; the actual work happens inside the container.)

With the default Docker configuration, my CPU load wasn’t at 100% across all cores while FFmpeg was running.

You can learn how this works in detail in the Docker reference documentation.

And of course, you’ll have to alter the streamio-ffmpeg gem’s command invocation to run FFmpeg through Docker if you want to use this approach with the Ruby gems mentioned above.

3. Separate video processing service

If you have enough resources to run heavy background operations on a dedicated server, then that might be the best option.

You could create a video processing service with an API and callbacks for different processing events like processing finished or failed, or with websocket processing progress notifications.
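To make the idea concrete, here’s a hypothetical shape for the callback payload such a service might POST back to the main application (all names are invented for this sketch):

```ruby
# Hypothetical callback payload from a standalone video processing service.
# Only the relevant optional fields are included in the result.
def processing_callback(video_id:, status:, output_url: nil, error: nil)
  payload = { video_id: video_id, status: status } # e.g. :finished or :failed
  payload[:output_url] = output_url if output_url  # present on success
  payload[:error] = error if error                 # present on failure
  payload
end
```

The main app would receive this at a callback endpoint and update the corresponding Video record.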

This requires more work on the developer’s side to create a discrete service and integrate it with the primary application server (which runs the business logic), but it’s much better in terms of performance and reliability than sharing one machine between the business logic server and background jobs.

Here’s a very simple example of such a background video processing app.

It can process only one task – video trimming – and it doesn't have any callbacks, but it has a very simple API.


Conclusion

FFmpeg video processing is a heavy operation that should be performed in the background. The Ruby on Rails ecosystem has all the features needed to do this:

1. Background job processors (sidekiq, resque)

2. Background file storage gems (carrierwave_backgrounder)

3. Video processing gems (streamio-ffmpeg, carrierwave-video, carrierwave-video-thumbnailer)

By default, FFmpeg takes almost all CPU power, which may slow down other critical operations on your server machine.

There are a few approaches to dealing with this problem:

1. Limit resources with Linux system tools like nice and cpulimit, which control system priority and CPU usage.

2. Use FFmpeg's -threads option to limit the number of threads that FFmpeg uses for video processing operations.

3. Isolate all video processing operations from the primary operating system into a fully controllable environment with Docker.

4. Move all video processing logic from the main application to some video processing service running on a separate machine.

FFmpeg is a very powerful tool that’s capable of recording, converting, and streaming audio and video.

In this article we’ve covered just a small part of its capabilities: video encoding, format conversion, and video watermarking (with the overlay filter).
