Experiments with FFmpeg Filters and Frei0r Plugin Effects

FFmpeg is a robust open-source framework designed for command-line-based processing of video and audio files, and widely used for format transcoding, basic editing (trimming and concatenation), video scaling, video post-production effects, and standards compliance. 
To date, it remains one of the most commonly used solutions for processing video in Ruby on Rails applications. Frei0r, in turn, is a minimalistic plugin API that packages the most common video effects into simple filters, sources, and mixers that can be controlled by parameters.

I’ve prepared an extensive guide on video processing with FFmpeg. We'll start with the basics – setting up a project – and finish with controlling system resource usage. You can find the project repository on GitHub.

Getting ready

Step 1. Install FFmpeg with all options

If you’re running macOS, you can use Homebrew when installing FFmpeg:

brew install ffmpeg --with-fdk-aac --with-frei0r --with-libvo-aacenc --with-libvorbis --with-libvpx --with-opencore-amr --with-openjpeg --with-opus --with-schroedinger --with-theora --with-tools

Not all of the additional parameters above may be needed for your current project, but I’ve included almost all possible options to avoid issues where something doesn’t work because an extension isn’t installed. Note that recent versions of Homebrew have dropped support for per-formula install options, so on a current Homebrew a plain brew install ffmpeg (or a third-party tap that bundles the extras) may be your only choice.

Step 2. Fix possible issues with the Frei0r plugin

When I installed FFmpeg with Frei0r, the Frei0r effects didn’t work at first. To check if you have such an issue, run FFmpeg and try to add Frei0r effects to a video:

ffmpeg -v debug -i 1.mp4 -vf frei0r=glow:0.5 output.mpg

When you run that command, you might see an error like this:

[Parsed_frei0r_0 @ 0x7fe7834196a0] Looking for frei0r effect in '/Users/user/.frei0r-1/lib/glow.dylib'
[Parsed_frei0r_0 @ 0x7fe7834196a0] Looking for frei0r effect in '/usr/local/lib/frei0r-1/glow.dylib'
[Parsed_frei0r_0 @ 0x7fe7834196a0] Looking for frei0r effect in '/usr/lib/frei0r-1/glow.dylib'

If you run the ls -l /usr/local/lib/frei0r-1/ command you’ll see that the plugins are installed with the .so extension.
To solve this problem on my machine (macOS 10.12.5, FFmpeg 3.3.2, frei0r 1.6.1), I copied each .so plugin to a .dylib file:

for file in /usr/local/lib/frei0r-1/*.so ; do cp "$file" "${file%.*}.dylib" ; done
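The same renaming logic can be sketched in Ruby; the helper names here are my own, not part of any gem:

```ruby
require 'fileutils'

# Map every frei0r .so plugin path to the .dylib path FFmpeg looks for.
def dylib_targets(so_paths)
  so_paths.map { |path| path.sub(/\.so\z/, '.dylib') }
end

# Copy each plugin next to itself under the new extension.
def copy_as_dylib(so_paths)
  so_paths.zip(dylib_targets(so_paths)).each do |src, dst|
    FileUtils.cp(src, dst) unless File.exist?(dst)
  end
end
```

For example, dylib_targets(Dir['/usr/local/lib/frei0r-1/*.so']) lists the .dylib paths that would be created.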

You also have to set this environment variable with the path to the folder where the .dylib files are stored:

export FREI0R_PATH=/usr/local/Cellar/frei0r/1.6.1/lib/frei0r-1

This solution is admittedly a strange hack, but it finally got Frei0r working for me.

Step 3. Install FFmpeg thumbnailer

We’re going to use the ffmpegthumbnailer utility to create video thumbnails. You can install this utility with brew install ffmpegthumbnailer.

Step 4. Install ImageMagick

If you decide to work with generated thumbnail images or with uploaded watermark images, then you’ll need to install ImageMagick (brew install imagemagick).

Step 5. Install MongoDB and Redis

You’ll need to install MongoDB (brew install mongodb) as your project's database and Redis (brew install redis) as storage for Sidekiq job parameters and data.

Step 6. Make sure you have Ruby and Rails

I assume you already have RVM (Ruby Version Manager), Ruby, and Rails installed, so we can likely skip this part. If you don’t have Ruby and Rails installed, install them now. Keep in mind that I’m using Ruby 2.3.0 and Rails 5.0.1 for this project.

Step 7. Install and configure all gems

Let’s go directly to creating our project. First, create a new Rails project without a default active record, since we’re going to use MongoDB for storage:

rails new video_manipulator --skip-active-record

Next, add all necessary gems to the Gemfile.

Video processing:

gem 'streamio-ffmpeg'
gem 'carrierwave-video', github: 'evgeniy-trebin/carrierwave-video'
gem 'carrierwave-video-thumbnailer', github: '23shortstop/carrierwave-video-thumbnailer'
gem 'carrierwave_backgrounder', github: '23shortstop/carrierwave_backgrounder'
gem 'kaminari-mongoid', '~> 0.1.0'

Data storage:

gem 'mongoid', '~> 6.1.0'
gem 'carrierwave-mongoid', require: 'carrierwave/mongoid'

Background jobs:

gem 'sidekiq'

Website styles:

gem 'materialize-sass'

The following gem will be used for JSON serialization: 

gem 'active_model_serializers', '~> 0.10.0'

As you can see, some of these gems are installed directly from GitHub rather than from RubyGems. In general this isn’t good practice, but these particular forks include fixes and recent changes, while the original gems haven’t been updated in years.

The materialize-sass gem is used for styles. You might use some other style bootstrap gem or even create your own HTML markup styling.

Note that for compatibility with carrierwave_backgrounder, we’re using plain Sidekiq workers rather than Rails' Active Job.

Therefore, inside config/initializers/carrierwave_backgrounder.rb, sidekiq is configured as the backend for video processing and uploading queues:

CarrierWave::Backgrounder.configure do |c|
  c.backend :sidekiq, queue: :carrierwave
end


For the mongoid gem we just generate its generic configuration:

rails g mongoid:config

This is enough for our demo research app. Of course, for commercial apps you shouldn’t store a database configuration file in the project's repository since it might contain some database connection credentials. The same goes for MongoDB, since by default it doesn't have a password for database connections; in production you have to set up a password for MongoDB too.

Step 8. Generate a basic scaffold for videos

Since we’re just making a demo application, we’re not going to add authorization and users.

Let’s generate a scaffold for videos with this command:

rails generate scaffold Video title:string file:string file_tmp:string file_processing:boolean watermark_image:string

In order to store and process files in the background, our model requires the following attributes: 

file_tmp:string – stores uploaded files temporarily
file_processing:boolean – lets us check when processing has finished; updated by the carrierwave_backgrounder gem

Step 9. Generate video uploader

To generate our video uploader, run: rails generate uploader Video

Let's include all necessary modules from previously installed gems into our newly generated VideoUploader: 

 # Store and process video in the background
 include ::CarrierWave::Backgrounder::Delay
 # Use the carrierwave-video gem's methods here
 include ::CarrierWave::Video
 # Use the carrierwave-video-thumbnailer gem's methods here
 include ::CarrierWave::Video::Thumbnailer

Then we’ll define processing parameters for video encoding: 


 PROCESSED_DEFAULTS = {
   resolution:           '500x400', # desired video resolution; by default the aspect ratio is preserved with preserve_aspect_ratio: :height
   video_codec:          'libx264', # H.264/MPEG-4 AVC video codec
   constant_rate_factor: '30',      # CRF quality level (lower means better quality)
   frame_rate:           '25',      # frame rate
   audio_codec:          'aac',     # AAC audio codec
   audio_bitrate:        '64k',     # audio bitrate
   audio_sample_rate:    '44100'    # audio sampling frequency
 }.freeze
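For reference, these carrierwave-video options roughly correspond to plain ffmpeg flags (-vcodec, -crf, -r, and so on). The mapping below is my own illustration of that correspondence – the gem builds the real command internally:

```ruby
# Translate PROCESSED_DEFAULTS-style options into the ffmpeg flags
# they roughly correspond to (an illustration, not the gem's code).
OPTION_FLAGS = {
  resolution:           '-s',
  video_codec:          '-vcodec',
  constant_rate_factor: '-crf',
  frame_rate:           '-r',
  audio_codec:          '-acodec',
  audio_bitrate:        '-b:a',
  audio_sample_rate:    '-ar'
}.freeze

# Flatten an options hash into an ffmpeg argument list.
def ffmpeg_args(opts)
  opts.flat_map { |key, value| [OPTION_FLAGS.fetch(key), value.to_s] }
end
```

For example, ffmpeg_args(video_codec: 'libx264', constant_rate_factor: '30') yields ["-vcodec", "libx264", "-crf", "30"].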


You can also define acceptable video file formats: 

 def extension_whitelist
   %w[mov mp4 3gp mkv webm m4v avi]
 end

Next, you have to specify where uploaded files should be stored: 

 def store_dir
   # Default path generated by CarrierWave: uploads/video/file/:id
   "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
 end
The main part of VideoUploader is the encoding operation. We don’t need to keep the original file, which is why processing overwrites it: when processing is finished, the processed file replaces the original. This also ensures that thumbnails will be generated from the processed file.

During processing, we use the :watermark option from the carrierwave-video gem to overlay a user-provided image from the watermark_image field in the Video model.

Here’s the main bit of code: 

 process encode: [:mp4, PROCESSED_DEFAULTS]

 def encode(format, opts = {})
   encode_video(format, opts) do |_, params|
     if model.watermark_image.path.present?
       params[:watermark] ||= {}
       params[:watermark][:path] = model.watermark_image.path
     end
   end
 end
In our case, this code also uses the carrierwave-video-thumbnailer gem for generating video thumbnail images from the middle of the file. Note that there’s code that ensures the proper content type for generated thumbnails, since by default the video file’s content type and extension would be used.

Thumbnails are processed as a different version of the uploaded attachment with the following version definition:  

version :thumb do
   process thumbnail: [
     { format: 'png', quality: 10, size: 200, seek: '50%', logger: Rails.logger }
   ]

   def full_filename(for_file)
     png_name for_file, version_name
   end

   process :apply_png_content_type
 end

 # Replaces the original extension with .png and prefixes the version name
 # (the standard CarrierWave recipe for renaming a version's file)
 def png_name(for_file, version_name)
   %(#{version_name}_#{for_file.chomp(File.extname(for_file))}.png)
 end

 def apply_png_content_type(*)
   file.instance_variable_set(:@content_type, 'image/png')
 end



Inside the Video model we have to define all necessary attributes: 

 # file_tmp is used for temporarily saving files while
 # they’re processed and stored in the background by the carrierwave_backgrounder gem
 field :file_tmp, type: String

 # the file_processing attribute is managed by carrierwave_backgrounder;
 # it indicates whether the uploaded file is still being processed
 field :file_processing, type: Boolean

We also have to define mount_uploader for the file attribute that uses VideoUploader: 

 # mount_on is specified here because without it the gem
 # would name the filename attribute file_filename.
 # In some cases this is logical, but in our case it’s strange to have
 # a file_filename attribute inside the videos collection
 mount_uploader :file, ::VideoUploader, mount_on: :file

Here we specify that we want our attachment to be processed and stored in the background:

 process_in_background :file
 store_in_background :file
 validates_presence_of :file

This is especially useful when files are processed and then stored in external storage like Amazon S3. Uploading and processing can take a long time, so it’s best to run them in the background; otherwise users would have to wait until these tasks finish, which might also exceed the server's request timeout.

We also have to add an uploader mount for a watermark image: 

 # Same as above. We don’t want to have watermark_image_filename attribute

 mount_uploader :watermark_image, ::ImageUploader, mount_on: :watermark_image

Let's add the uploader for our watermark_image. To do so, run the command rails generate uploader Image. Here’s our image uploader:

class ImageUploader < CarrierWave::Uploader::Base
 storage :file

 def store_dir
   # Default path generated by CarrierWave
   "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
 end

 def extension_whitelist
   %w[jpg jpeg gif png]
 end
end

Don’t forget to add the sidekiq configuration to config/sidekiq.yml:

:verbose: true
:pidfile: ./tmp/pids/sidekiq.pid
:logfile: ./log/sidekiq.log
:concurrency: 3
:queues:
  - carrierwave

The part to focus on here is the queues key: carrierwave is the queue where all upload and processing jobs are enqueued.

We’re not going to cover the HTML views part. In most cases, the generated scaffold should be enough. The only thing that you might need to add is the watermark_image attribute in the video creation form.

This is what such a form might look like without any additional markup:

<%= form_for(video) do |f| %>

 <%= f.text_field :title, placeholder: 'Title' %>

 <%= f.file_field :file, placeholder: 'Video File' %>

 <%= f.file_field :watermark_image, placeholder: 'Watermark Image' %>

 <%= f.submit %>

<% end %>

Then in the video details view you can use the video_tag Rails helper method to display an HTML5 video player:

<%= video_tag(video.file.url, controls: true) if video.file.url.present? %>

Now you just have to run a background job: 

bundle exec sidekiq -C config/sidekiq.yml

and your server: rails s

That’s it! You have a simple video processing server that was set up in 15 minutes.

FFmpeg and Frei0r filter processing examples

FFmpeg is a video and audio converter that can also process live audio and video. Moreover, you can resize video on the fly with high-quality results. You can find more detailed information about FFmpeg in the FFmpeg documentation.

For a description of all available FFmpeg filters, check the filters section of the documentation.

Video and audio effects

FFmpeg with the Frei0r plugin is used for video effects. The options -vf and -af specify simple video and audio filters. The -filter_complex option specifies complex filters.

To avoid excessive audio processing, you should copy the audio stream unchanged from your input source file with the option -c:a copy when using all video filters.

For all audio effects, we can copy the video stream unchanged with the -c:v copy option.

You can find more details about stream copy here.

The -c:v copy option isn’t applied for reverse, slow down, and speed up effects because they alter both audio and video streams.

By default, FFmpeg selects the best input stream of each kind (video, audio, subtitles, etc.). Suppose, however, that for our audio effects we want to convert all audio input streams (for example, the same audio track in different languages). You can do this with the -map 0 option.

All Frei0r plugin video filters are applied with FFmpeg’s simple filter option: -vf frei0r=filtername:param1_value|param2_value|param3_value

The official Frei0r page doesn’t document how this option should be used, so I had to do my own research. I found a helpful resource that describes Frei0r filters along with some other plugins, and it taught me about all the available filters and their parameters.
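Since the syntax is just the filter name followed by |-separated parameter values, a tiny helper (my own sketch, not part of the app) can build these -vf arguments:

```ruby
# Build a frei0r filter expression for FFmpeg's -vf option:
# frei0r=name or frei0r=name:value1|value2|...
def frei0r_filter(name, *params)
  return "frei0r=#{name}" if params.empty?

  "frei0r=#{name}:#{params.join('|')}"
end
```

For example, frei0r_filter('vertigo', 0.2) produces "frei0r=vertigo:0.2", and frei0r_filter('vignette') produces "frei0r=vignette" – matching the commands used below.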

Here’s the original video that I used for my video effects processing demonstration:

And here’s the original video I used for my audio effects demonstration:

#1 Sepia effect

The colorchannelmixer filter adjusts input frames by remixing color channels. With these coefficients, it shifts the color temperature toward sepia.

-filter_complex colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131

#2 Black and white effect

Zero saturation with the hue filter produces a black and white output video.

-vf hue=s=0

#3 Vertigo effect

The vertigo effect performs alpha blending with zoomed and rotated images. For my example, I set the phase increment parameter to 0.2 and kept the zoomrate parameter with its default value to modify the video.

-vf frei0r=vertigo:0.2

#4 Vignette effect

The vignette effect makes your film look like it was shot with a vignetting lens. I used the default settings.

-vf frei0r=vignette

#5 Sobel effect

The Sobel effect is used in image processing and computer vision, particularly within edge detection algorithms where it creates an image emphasizing edges. The Sobel effect doesn’t have any parameters you can change.

-vf frei0r=sobel

#6 Pixelizor effect

The pixelizor effect is also applied with default settings and creates a pixelated picture.

-vf frei0r=pixeliz0r

#7 Invert0r effect

The invert0r effect inverts all colors of a source image.

-vf frei0r=invert0r

#8 RGBnoise effect

The RGBnoise effect adds RGB noise to your video. It has one parameter that defines the amount of noise added. The result looks like a 90s movie played on an old VHS player.

-vf frei0r=rgbnoise:0.2

#9 Distorter effect

The distorter effect distorts the image in a weird way. I set two parameters here: the amplitude and frequency of the plasma signal.

-vf frei0r=distort0r:0.05|0.0000001

#10 IIRblur effect

IIRblur provides an Infinite Impulse Response Gaussian blur. I used this filter with its default settings.

-vf frei0r=iirblur

#11 Nervous effect

The nervous effect flushes frames in time in a nervous way; it doesn’t have any additional parameters.

-vf frei0r=nervous

#12 Glow effect

The glow effect creates a glamorous glow. I set the Blur of the glow parameter to its maximum value.

-vf frei0r=glow:1

#13 Reverse effect

The reverse effect is achieved with two FFmpeg filters – video reverse and audio reverse – running at the same time.

-vf reverse -af areverse

#14 Slow down effect

The slow down effect uses the setpts filter to double the Presentation Time Stamp (PTS) of the original film. As a result, you get a video that’s half the speed of the original.

The atempo filter is used to halve the speed of audio with the 0.5 (50% of the original tempo) setting.

-filter:v setpts=2.0*PTS -filter:a atempo=0.5

#15 Speed up effect

The speed up effect uses the same approach as the slow down effect, only in the opposite direction: it halves the Presentation Time Stamp of the original video, making it twice as fast.

-filter:v setpts=0.5*PTS -filter:a atempo=2.0
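Both effects follow one rule: the setpts multiplier is the inverse of the speed factor, while atempo takes the factor itself. A sketch of that rule (my own helper; note that FFmpeg documents a single atempo as accepting values between 0.5 and 2.0, so larger changes need chained atempo filters):

```ruby
# Build the video/audio filter pair for a speed factor:
# 2.0 => twice as fast, 0.5 => half speed.
def speed_filters(factor)
  raise ArgumentError, 'a single atempo only accepts 0.5..2.0' unless (0.5..2.0).cover?(factor)

  { video: "setpts=#{format('%g', 1.0 / factor)}*PTS",
    audio: "atempo=#{format('%g', factor)}" }
end
```

For example, speed_filters(0.5) reproduces the slow down command above, and speed_filters(2.0) the speed up command.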

#16 Echo audio effect

The aecho audio filter reflects your audio stream to create an echo effect just as if you were in the mountains.

-af aecho=0.8:0.9:1000|500:0.7|0.5

The first parameter is the input gain of the reflected signal (0.8). The second parameter is the output gain of the reflected signal (0.9). Next comes a list of delays from the original signal in milliseconds, and after that the list of decays, one for each delay. The colon : separates the top-level parameters, while the pipe | separates items within a list.
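That ordering is easy to get wrong by hand, so here's a hypothetical Ruby builder that assembles the aecho expression from named arguments:

```ruby
# aecho=in_gain:out_gain:delay1|delay2|...:decay1|decay2|...
def aecho_filter(in_gain:, out_gain:, delays:, decays:)
  "aecho=#{in_gain}:#{out_gain}:#{delays.join('|')}:#{decays.join('|')}"
end
```

aecho_filter(in_gain: 0.8, out_gain: 0.9, delays: [1000, 500], decays: [0.7, 0.5]) reproduces the -af value shown above.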

#17 Tremolo audio effect

The tremolo filter performs sinusoidal amplitude modulation.

-af tremolo=f=10.0:d=0.7

The f parameter is for modulation frequency in Hertz. The d parameter shows the depth of modulation as a percentage.

#18 Vibrato Audio effect

The vibrato filter performs sinusoidal phase modulation.

-af vibrato=f=7.0:d=0.5

The f and d parameters perform the same functions as in the tremolo filter.

#19 Chorus audio effect

The chorus filter resembles an echo effect with a short delay. The difference is that the delay in the echo filter is constant, whereas in the chorus filter it varies using sinusoidal or triangular modulation.

-af chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3

The parameters in the chorus filter are ordered as follows: input gain, output gain, delays list in ms, decays list, speeds list, depths list. Parameters are separated by the pipe and items in lists are separated by the colon.
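The same convention can be captured for the chorus filter; the builder below is my own sketch, using the parameter order from the paragraph above:

```ruby
# chorus=in_gain:out_gain:delays:decays:speeds:depths
# Lists are joined with '|', top-level parameters with ':'.
def chorus_filter(in_gain, out_gain, delays, decays, speeds, depths)
  lists = [delays, decays, speeds, depths].map { |list| list.join('|') }
  "chorus=#{([in_gain, out_gain] + lists).join(':')}"
end
```

chorus_filter(0.5, 0.9, [50, 60, 40], [0.4, 0.32, 0.3], [0.25, 0.4, 0.3], [2, 2.3, 1.3]) reproduces the -af value shown above.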

#20 Watermark

The watermark feature is implemented in the carrierwave-video gem. It uses FFmpeg’s overlay filter.

In the video below, you can see a combination of the vertigo, vignette, RGBnoise, distorter, glow, reverse, echo, tremolo, chorus, and watermark effects.

The watermark on this video is a transparent png image with the red text FFmpeg TV:

Generating thumbnails

Now we’re going to add the Thumbnail model, which is embedded in the Video model. Each Thumbnail record stores one thumbnail image:

class Thumbnail
 include Mongoid::Document
 embedded_in :video

 # Background uploading isn’t used here since this entity
 # is already created in the background
 mount_uploader :file, ::ImageUploader
end


The next step is to turn back to the Video model. We need to define the thumbnails association and a callback method to save the thumbnails:

embeds_many :thumbnails

 # Config option: generate thumbnails for each second of video
 field :needs_thumbnails, type: Boolean, default: false

 # Callback method
 def save_thumbnail_files(files_list)
   files_list.each do |file_path|
     ::File.open(file_path, 'r') do |f|
       thumbnails.create!(file: f)
     end
   end
 end




Next, let's place all thumbnail creation logic in the CarrierWave::Extensions::VideoMultiThumbnailer module. This module will be designed to be compatible with the carrierwave-video gem’s logic.

This file is too large to be fully presented within the article. To see the full file, check out this repository.

Here’s how you can describe such a module:

module CarrierWave
 module Extensions
   module VideoMultiThumbnailer
     def create_thumbnails_for_video(format, opts = {})
       prepare_thumbnailing_parameters_by(format, opts)
       # Create a temporary directory where all created thumbnails will be saved
       # ...
       run_thumbnails_transcoding
       # Run the callback for saving thumbnails
       save_thumb_files
       # Remove temporary data
       # ...
     end

     def run_thumbnails_transcoding
       with_trancoding_callbacks do
         if @progress
           @movie.screenshot(*screenshot_options) do |value|
             # ...
           end
           # It’s an ugly hack, but this operation returned
           # 0.8597883597883599 in the end, which is not 1.0
           @progress.call(1.0)
         end
       end
     end

     # Some method definitions are skipped here. Look in the repository for more details.

     # Thumbnails are sorted by their creation date
     # to put them in chronological order.
     def thumb_file_paths_list
       Dir["#{tmp_dir_path}/*.#{@options.format}"].sort_by do |filename|
         # ...
       end
     end

     def save_thumb_files
       model.send(@options.raw[:save_thumbnail_files_method], thumb_file_paths_list)
     end

     def screenshot_options
       # ...
       {
         preserve_aspect_ratio: :width,
         validate: false
       }
     end
   end
 end
end

The module I’ve described above uses the screenshot method directly from the streamio-ffmpeg gem. In our case, it generates a thumbnail for each second of the video.

Also, since this operation is local, we have to create, use, and remove a temporary directory (tmp_dir_path) for the generated thumbnails. All generated images are passed to the model's callback specified in the save_thumbnail_files_method parameter.

For some strange reason, the progress value doesn’t always reach 1.0 at the end (returning 0.8597, for example), which is why I had to manually set progress to 1.0 after the operation finishes.
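One way to guard against that in your own code is to clamp reported progress and force a final 1.0 once the operation returns. The wrapper below is a hypothetical sketch (the names are mine, not the gem's):

```ruby
# Wrap a progress callback so reported values never exceed 1.0,
# and force a final 1.0 when the wrapped operation finishes.
def with_clamped_progress(callback)
  reporter = ->(value) { callback.call([value, 1.0].min) }
  yield reporter
ensure
  callback.call(1.0)
end
```

Running with_clamped_progress(->(v) { puts v }) { |p| p.call(0.8597) } reports 0.8597 during the operation and 1.0 at the end.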

Inside VideoUploader, we include the CarrierWave::Extensions::VideoMultiThumbnailer module and update the encode method using the create_thumbnails_step:

 include ::CarrierWave::Extensions::VideoMultiThumbnailer

 def encode(format, opts = {})
   # ...
   read_video_metadata_step(format, opts)
   create_thumbnails_step('jpg', opts) if model.needs_thumbnails?
   # ...
 end

 def create_thumbnails_step(format, _opts)
   create_thumbnails_for_video(
     format,
     progress: :processing_progress,
     save_thumbnail_files_method: :save_thumbnail_files,
     resolution: '300x300',
     vframes: model.file_duration, frame_rate: '1', # create a thumb for each second of the video
     processing_metadata: { step: 'create_video_thumbnails' }
   )
 end



System Resource Usage Control

FFmpeg video processing is a heavy operation that usually consumes almost 100% of server CPU resources. This can slow down more important processes like the web server and database.

There are a few solutions that can help you deal with this overload: you can limit FFmpeg's CPU usage with Linux system tools and FFmpeg configuration flags, run processing inside a Docker container with limited resources, or move processing into a separate service that runs on a different machine than the primary server.

Also note that in this section we’re using the video and audio filters covered earlier in this article.

1. Limit CPU usage with system tools

You can alter FFmpeg system priority with the Linux nice command during process startup, or alter the priority of the already running process with renice.

With nice, the highest priority is -20 and the lowest possible priority is 19. Setting a nice priority can help you save CPU cycles for more important processes.

Next, you can use cpulimit when you want to ensure that a process doesn't use more than a certain portion of the CPU. Its disadvantage compared to nice is that the process can't use all of the available CPU time even when the system is idle.
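If you shell out to FFmpeg from Ruby, these wrappers are just command prefixes. Here's a hypothetical helper that prepends nice or cpulimit to an FFmpeg argument vector:

```ruby
# Prepend a priority (nice) and/or CPU-percentage (cpulimit)
# wrapper to a command argument vector.
def limited_command(args, nice: nil, cpu_limit: nil)
  prefix = []
  prefix += ['nice', '-n', nice.to_s] if nice
  prefix += ['cpulimit', '--limit', cpu_limit.to_s] if cpu_limit
  prefix + args
end
```

The resulting array can be passed straight to Kernel#system, which avoids shell-quoting issues.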

Here’s my CPU load without FFmpeg or any other heavy operations (system idle):

A small 15-second clip with one video and one audio filter was processed in 28 seconds on my 4-core MacBook Air at 100% CPU load:

time ffmpeg -i test_video.mp4 -vf frei0r=vertigo:0.2 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 video_with_filters.mp4

93.45s user 0.88s system 333% cpu 28.285 total – this is what my system usage looks like with no limitations.

With a 50% CPU limit:

time cpulimit --limit 50 ffmpeg -i test_video.mp4 -vf frei0r=vertigo:0.2 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 video_with_filters_cpulimit_50.mp4

103.44s user 1.57s system 50% cpu 3:26.60 total – this is what my system usage looked like with a 50% CPU limitation.

FFmpeg also has a -threads option that limits the number of threads (CPU cores) used. The recommended value is the total number of CPU cores minus one or two; the default is all available CPU cores. Note that this option should always be added as the last parameter (right before the output file name).
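The "cores minus one" recommendation can be computed at runtime; a small sketch using Ruby's Etc.nprocessors:

```ruby
require 'etc'

# Leave one core free for the web server and database,
# but never go below a single thread.
def recommended_threads(total_cores = Etc.nprocessors)
  [total_cores - 1, 1].max
end
```

For example, recommended_threads(4) returns 3, which you would then pass as -threads 3.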

time ffmpeg -i test_video.mp4 -vf frei0r=vertigo:0.2 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 -threads 1 video_with_filters_one_thread.mp4

60.02s user 1.48s system 99% cpu 1:02.04 total

It took one minute to complete this operation, and the CPU load was much lighter, since FFmpeg ran in a single thread.

Read more about restricting Linux system resources here.

You can also use all these options simultaneously.

If you’re using the same gems as in this example app – streamio-ffmpeg and carrierwave-video – then you can specify the -threads parameter as a custom option, the same way filters were passed in our application.

In order to use the nice and cpulimit commands with the streamio-ffmpeg gem, you have to alter the default FFmpeg command path inside the initializer file.

The original streamio-ffmpeg gem includes a check that verifies the FFmpeg binary is executable.

This check won’t let you set the binary to a compound command like /usr/local/bin/cpulimit --limit 50 /usr/local/bin/ffmpeg.

So there are two ways around this: redefine the FFMPEG.ffmpeg_binary=(bin) method to remove the check, or create an executable bash script. Since monkeypatching a gem's code is bad practice, let's create a bash script:

#!/bin/bash
/usr/local/bin/cpulimit --limit 50 /usr/local/bin/ffmpeg "$@"

The "$@" parameter passes all of the script's arguments through to the FFmpeg command.

Let's put this script into the project's root folder. Don’t forget to make it executable: 

chmod +x ffmpeg_cpulimit.sh

Now you can set the path to your script at initialization: 

FFMPEG.ffmpeg_binary = "#{::Rails.root}/ffmpeg_cpulimit.sh"

The current Sidekiq configuration (config/sidekiq.yml) works on three tasks simultaneously. If you also want to limit system load, set concurrency to 1 so that only one processing task runs at a time:

:verbose: true
:pidfile: ./tmp/pids/sidekiq.pid
:logfile: ./log/sidekiq.log
:concurrency: 1
:queues:
  - carrierwave

2. FFmpeg inside Docker

Another option is to run FFmpeg commands inside Docker. This makes it possible to limit how many resources a single FFmpeg instance may consume, and it lets you control not only CPU usage but also memory and I/O.

Let's try it with a predefined image from GitHub.

I’ve altered the command to avoid using the frei0r filter since this FFmpeg image doesn't have the frei0r plugin:

time docker run --rm -v `pwd`:/tmp/workdir -w="/tmp/workdir" jrottenberg/ffmpeg -i test_video.mp4 -filter_complex colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131 -af "chorus=0.5:0.9:50|60|40:0.4|0.32|0.3:0.25|0.4|0.3:2|2.3|1.3" -vcodec h264 -acodec aac -strict -2 test_video_docker.mp4

0.02s user 0.02s system 0% cpu 33.632 total – the operation completed in 33 seconds. (The near-zero user time is expected, since the actual work happens inside the container rather than in the timed process.)
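The interesting part of this approach is Docker's resource flags: --cpus caps the container's CPU share and --memory caps its RAM. Here's a hypothetical sketch that assembles such a limited command from Ruby:

```ruby
# Build a `docker run` invocation with resource limits for an FFmpeg image.
# The image name and default limits are assumptions for illustration.
def docker_ffmpeg_command(ffmpeg_args, cpus: 1, memory: '512m', image: 'jrottenberg/ffmpeg')
  ['docker', 'run', '--rm',
   '--cpus', cpus.to_s,
   '--memory', memory,
   image] + ffmpeg_args
end
```

For example, docker_ffmpeg_command(%w[-i in.mp4 out.mp4], cpus: 2) builds an argument vector you can pass to Kernel#system.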

With the default configuration, my CPU load wasn’t pegged at 100% on all cores while FFmpeg ran inside Docker.

You can learn how this works in detail in the Docker documentation.

And of course, you’d have to alter the streamio-ffmpeg gem’s code to run FFmpeg through Docker if you want to use this approach with the Ruby gems mentioned above.

3. Separate video processing service

If you have enough resources to run heavy background operations on a dedicated server, then that might be the best option.

You could create such a service with an API and callbacks for processing events (such as processing finished or failed), or with websocket progress notifications.

This requires more work on the developer’s side to create a discrete service and integrate it with the primary application server (which runs the business logic), but it’s much better in terms of performance and reliability than sharing one machine between the server's business logic and background jobs.

Here’s a very simple example of such a background-jobs application: a video processor app.

It can work on only one task – video trimming – and it doesn't have any callbacks, but it has a very simple API.

In this guide, we’ve covered the implementation of FFmpeg video and audio filters and Frei0r plugin effects and our media experiments with them. FFmpeg and Frei0r are powerful tools. The filters described here mostly add some additional distortions to video. But FFmpeg can also be used to perform video and audio enhancements such as lens correction, sound level normalization, noise and shake removal, video streaming, and much more.
