High Quality Recordings

This document is a work in progress

If you just want to record webms or make gifs for posts in the threads, refer to Recording. This page outlines how to create high quality recordings that may be used for teasers, trailers, developer commentary or other long, high quality videos that are hosted externally (e.g. on YouTube).

Required software

  • OBS - Will be used for recording the footage
  • FFmpeg - Will be used to prepare the raw footage before and after editing.
  • Audacity - Only necessary if you're recording a voice over (developer commentary/let's play/etc.)
  • Blender - To edit the footage. (Any other editor that allows you to cut, arrange, blend, etc. audio and video tracks will do.)
  • Blender Render Controller - This program allows Blender video exporting and rendering to be split up on multiple cores. The full version comes with FFmpeg included. (Needs mono on Linux)

Some general notes

Things you should know about video compression (and compression in general)

Why bother

For the most part, your computer stores images in RGB format with an 8 Bit color depth. This means you have 8 Bits each for red, green and blue, which results in 24 Bits, or 3 Bytes, per pixel. Assuming you want to record 1080p footage, this means you have 1920 * 1080 * 3 = 6220800 Bytes (just under 6 Megabytes) per frame. At 60 frames per second, this results in almost 356 Megabytes per second. For a 90 second trailer, that would result in a file of more than 31 Gigabytes for the video alone. This is obviously unacceptable.
During production you may work with uncompressed files at times though, which means you'll still need plenty of free space. It's also really slow, so you'd better have a beefy CPU or a lot of time.

The takeaway: You need to compress, period.
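The numbers above can be reproduced with a bit of shell arithmetic (a quick sketch; the byte counts are exact, the human-readable sizes in the paragraph are rounded):

```shell
# One uncompressed 1080p RGB frame at 8 bits per channel, in bytes
frame=$((1920 * 1080 * 3))
# One second of footage at 60 frames per second
second=$((frame * 60))
# A full 90 second trailer
trailer=$((second * 90))
echo "$frame $second $trailer"   # 6220800 373248000 33592320000
```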

Lossy vs lossless

There are two types of compression algorithms: lossy and lossless. The difference is simple. Lossless compression algorithms retain all information, whereas lossy ones lose part of it. How much depends on the algorithm and how aggressive the settings are. What's crucial to understand is that this is a one-way process. Once something has been compressed away, it's not coming back. There is no uncompressing an mp3 (lossy compressed) file into a wav (uncompressed) one. If you were to do that, all you would get is an inflated mp3 file. The data that was lost when compressing the original audio stream into the mp3 format doesn't come back.
Your final video will use lossy compression.

The takeaway: You can only ever reduce the quality of your footage, never increase it after the fact. Therefore your initial recordings must be of high quality, because if they look bad, you will need to rerecord. This applies to both audio and video.
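To make the lossless case concrete, here is a small demonstration with gzip (any lossless compressor behaves the same way): compressing and then decompressing gives back the input bit for bit, which is exactly what a lossy codec like mp3 cannot do.

```shell
# Round-trip a file through a lossless compressor (gzip) and verify that
# the decompressed output is bit-identical to the original.
printf 'some uncompressed sample data' > sample.txt
gzip -k sample.txt                       # writes sample.txt.gz, keeps the original
gunzip -c sample.txt.gz > restored.txt
cmp sample.txt restored.txt && echo "identical"
```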

Intra- vs Inter-frame compression

There are two approaches to compressing video. Intra-frame compression and inter-frame compression. The former compresses each image separately. Gif works that way, which explains its inefficiency. Inter-frame compression takes an image and only stores what changed. This is a lot more efficient on average. However, it does have the downside of varying efficiency depending on what's happening on the screen. Your final video will use inter-frame compression, although you will use intra-frame compression during production.

The problem is the combination of lossy and inter-frame compression (which you will be using for the most part). Lossy compression means that data is discarded, and inter-frame compression means that the efficiency depends on how much each frame changed from the last one. In short: the more stuff is going on on screen, the worse it will look. A live demonstration of this problem can be seen in this video.

The takeaway: The more small details you want to show, the higher quality you need to make your recording and final video.


  • Record in higher quality than you need to and compress at the end
  • You will need a lot of space during production. How much will vary, but you'll probably want a couple hundred Gigabytes to spare while working
  • Video encoding takes a lot of time
  • The busier your footage, the more space and time it will take to work with

Setting up OBS

We will assume you're going to record developer commentary, so a voice over will be recorded. Furthermore, we'll assume that you have already set up your scene(s) in OBS depending on how you want to record (multi-monitor, etc.). One thing that is really important to ensure when recording a voice over is that the game's audio and your microphone are recorded into separate audio streams. Once they're mixed, you can't cleanly unmix them again. Removing background noise and clicks or adjusting the volume of your commentary shouldn't affect the game audio.


  • Enable your desktop and microphone audio devices. For desktop audio the default value usually works. For the microphone, select the device explicitly.


  • Set the canvas resolution to the native resolution of the monitor you're recording from
  • Set the scaled resolution to whatever you want the final video to be in
  • The canvas resolution should be greater than or equal to the scaled resolution
  • For the most part, you'll want a 1080p video at 60 fps
  • The downscale filter should be at least bicubic. If you have CPU cycles to spare, you can also set it to Lanczos.


  • Set up hotkeys to start and stop recording (not streaming)


  • Set the output mode to Advanced
  • In the Recording tab set the type to Standard
  • It makes sense to have a dedicated folder for raw recordings on a large disk with plenty of space
  • Set the recording format to mkv. The Matroska container can be a pain to even play back, but it is comfortable to work with due to its flexibility
  • Enable as many audio streams as you need. In our case 1 for the game's audio and 2 for your voice
  • Ideally you want to use the x264 encoder. However, it uses a lot of CPU power when compared to Nvidia's NVENC and the AMD equivalent. (The rest of the article will assume you went with x264)
  • Set the rate control to CBR
  • Choose an appropriate bitrate from Google's recommendations for YouTube uploads which can be found here under the Bitrate tab (SDR) based on your scaled resolution. Those values are good baselines. If you want to be sure, you can overestimate the complexity of your footage and add 20%. The values in the table are in Megabits/s whereas the value in OBS is in Kilobits/s. Therefore you have to multiply it by 1000 in OBS (or 1200, if you want 20% more headroom)
  • We will choose the best CPU preset later. For now, you can set it to medium
  • The remaining settings in this tab can remain untouched
  • In the Audio tab set the bitrate of all audio tracks you're going to use (1 and 2 in this example) to 320 Kilobit/s
  • You can add appropriate names like "Game" and "Commentary"
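As a worked example of the unit conversion mentioned above (assuming the table recommends 12 Megabits/s for 1080p60 SDR; check the current table, since the values change over time):

```shell
# Convert a recommended bitrate from Megabits/s (Google's table) to
# Kilobits/s (what OBS expects), optionally with 20% extra headroom.
mbps=12                               # hypothetical table value for 1080p60
obs_value=$((mbps * 1000))            # plain conversion
obs_value_padded=$((mbps * 1200))     # same conversion plus 20% headroom
echo "$obs_value $obs_value_padded"   # 12000 14400
```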

Main OBS window

  • In the Mixer section you should see your two audio streams
  • You can click on the cogwheel at the right end of either of them and open up the Advanced Audio Properties
  • Click the checkboxes on the right side to ensure that your Game (Desktop Audio) only records into stream 1
  • Make sure your microphone only records into stream 2

With those settings you're almost good to go. The only thing that still needs to be taken care of is finding the correct CPU preset for the x264 encoder.

Testing your CPU preset

CPU usage graph
Image Unavailable
Reasonable CPU utilization during recording

You need a way of monitoring the current load of each core of your CPU. The Windows Task Manager can do that in newer versions. However, you have to right click on the graph and ensure that it shows all cores separately (Logical processors) instead of one summarized graph (Overall utilization). On Linux you can use top, which should come with your distribution. Simply running the top command should give you a nice overview of the information we will need.

It is extremely useful to have a second monitor for the next step. If you don't have one, you should look into how you can keep the monitoring tool in the foreground while the game is running. The Windows Task Manager has an Always on top option, which is disabled by default. On Linux the process will depend on the windowing system you're using. A tiling window manager only helps if you aren't planning to record your footage in full screen to begin with. Regardless of your platform, you might have to set your game to windowed mode, because some games have the annoying "feature" of minimizing themselves when you click inside another window. As long as your game runs at the correct (canvas) resolution it's fine, and the title bar isn't an issue for this test run.

Run your game (or some other, preferably more demanding game) and start a recording while whatever monitoring tool you use is in the foreground. Obviously your game needs to run at at least 60 fps with vertical synchronization enabled. You absolutely don't want any tearing in the footage. While recording, keep an eye on the CPU core utilization. What you're looking for is your CPU choking on the workload. Ideally, no CPU core hits the 100% mark at any point, with plenty of room to spare. If all CPU cores sit above 80% at all times, you might want to use a faster CPU preset. If your game makes one or more cores go up to 100%, that by itself isn't the issue. The problems arise when your encoder does, and you can't easily tell which one is responsible.

If you just can't make it work, target a high quality 720p video instead. Refer to the table from earlier and reduce the bitrate. If that doesn't work, make it 720p at 30 fps. If you can't hit that target, I'm sorry, but you're stuck with a toaster.

After recording

Playing the recordings

When you try to open your footage in your media player of choice after recording, you might find that it doesn't work properly. Maybe you can't activate the second audio stream, maybe you can't seek. That's to be expected with the mkv file that OBS produces while recording. If you want to open the file with VLC for example, you have to use FFmpeg to copy the streams into an mp4 container. The command syntax is as follows:

# Two streams (1 video, 1 audio)
ffmpeg -i Before.mkv -c copy After.mp4

# Three streams (1 video, 2 audio)
ffmpeg -i Before.mkv -vcodec copy -acodec copy -map 0:0 -map 0:1 -map 0:2 After.mp4
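If you want to verify that the remux kept all streams intact, ffprobe (bundled with FFmpeg) can list them. The sketch below is self-contained: instead of a real recording, it generates a small synthetic mkv with one video and two audio streams from FFmpeg's lavfi test sources, remuxes it, and lists the streams in the result (it assumes ffmpeg and ffprobe are on your PATH).

```shell
# Build a synthetic three-stream mkv in place of a real OBS recording
ffmpeg -y -v error \
       -f lavfi -i testsrc2=duration=1:size=320x240:rate=30 \
       -f lavfi -i sine=frequency=440:duration=1 \
       -f lavfi -i sine=frequency=880:duration=1 \
       -map 0:v -map 1:a -map 2:a -c:v libx264 -c:a aac Before.mkv
# Remux all streams into an mp4 without re-encoding ('-map 0' grabs everything)
ffmpeg -y -v error -i Before.mkv -c copy -map 0 After.mp4
# List the stream types; expect one video and two audio entries
ffprobe -v error -show_entries stream=codec_type -of csv=p=0 After.mp4
```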

Extracting the streams

Regardless of which tool you want to use for editing the footage, you should separate the streams into individual files. The FFmpeg commands for this are below:

# '-vn' disables video output ('video none')
# '-an' disables audio output ('audio none')

# Extract audio stream #1
ffmpeg -i File.mkv -vn -acodec copy -map 0:1 Game.m4a

# Extract audio stream #2
ffmpeg -i File.mkv -vn -acodec copy -map 0:2 Commentary.m4a

# Extract video stream
ffmpeg -i File.mkv -an -vcodec copy -map 0:0 Video.m4v

Preparing Blender for video editing

If you've never worked with Blender before, read the Beginner section of Blender Knowledge Base first. This tutorial will not go in depth into how to use its video editor, since plenty of resources on that are already available online. Instead, it will focus on setting it up properly. A dedicated tutorial on this wiki may follow at some point in the future.

Blender is more than just a 3d modeling application. In case you don't already use any other video editors, you'll get a crash course in video editing in Blender now. Before you can start using it though, it makes sense to prepare the UI for video editing use. Open up Blender and save the default file as "VideoEditTemplate.blend" somewhere safe.

Setting up the video template

Screen Layouts

  • In your template file, delete all 3d objects (A,A,X while hovering over the 3D View Editor)
  • Delete all screen layouts except for Default and Video Editing using the Info Editor
  • In the Default screen layout, replace the 3D View with the Properties Editor, then collapse all other editors except for the Info editor, so that only those two remain
  • Rename the screen layout to "Settings & Export"
  • Save the file

Video Settings

We're not rendering, therefore most of the settings can be left alone since they're not used.

  • Find the Dimensions section in the Render tab of the Properties Editor (leftmost tab)
  • Set Resolution to your final (scaled) resolution. Typically that will be 1920*1080
  • Set Framerate to your final (usually same as recording) framerate. Typically that will be 60
  • Don't use values that are higher than those of your recordings

The percentage beneath the resolution settings is a temporary override. If you want to do a quick test export during production, but exporting a full resolution video takes too long, you can lower the percentage without changing the base resolution. For your final export this needs to be set to 100%. During production you can use whatever is good enough for you to work with. Lowering this value massively reduces export times, so don't just set it to 100% and leave it there. You'll lose a lot of time. Depending on what you're doing, 30-50% should be good enough, although certain content may warrant going either lower or higher than that.

  • Find the Output section in the same tab
  • Clear the Output Path. The reason for that is to ensure you don't accidentally export two videos into the same folder, which would cause problems. You need to set this each time you use the template for a new project and use separate folders
  • Ensure that Overwrite and File Extensions are ticked
  • Set the export format to FFmpeg video, which gives us access to the audio settings
  • Scroll down to the Encoding section and locate the Audio Codec setting
  • Set it to FLAC (leave Bitrate and Volume unchanged)
  • Change the export format to PNG images with the RGB channels enabled and 8 Bit color depth
  • The compression setting is very important, since exporting to PNG images will need a lot of space. However, setting it to 100% to save the most space also greatly increases rendering time. You need to decide for yourself
  • Save the file

The file can now be used as a template. When you want to create a new video, simply open this file and save a copy of it somewhere else.

Exporting from Blender

After you're done editing your video, go back to the Properties Editor.

  • Make sure resolution, resolution percentage and frame rate are correct (Dimensions section)
  • Select an appropriate Output directory. The folder must be empty! (Output section)
  • Save the file and open it the Blender Render Controller
  • Under Options/Settings make sure the correct paths to your Blender and FFmpeg binary are set and "Delete chunks when done" is ticked
  • Start the render

This process may create dozens of Gigabytes of data! However, it will only be temporary. You probably don't want to punish an SSD with so many write operations. It may take a considerable amount of time to finish. At times it can look like the program has crashed, especially if you have set the compression to 100%. In all likelihood it is still rendering your video. To make sure, you can check whether the size of the target directory keeps growing. If it does, new images are still being written. After it finishes, you can press the Render Mixdown button to get the audio.

If everything worked, you now have a folder with PNG images for each frame and a FLAC audio file.

Encoding the final video

In order to turn the sequence of images into a video, you need FFmpeg. Since there are a lot of encoders and settings, we'll just cover creating an mp4 video with H.264. You may use vp8 or vp9 to create a WebM instead. More examples may be added below in the future.


# All examples will assume that the images are in a folder called "Output/chunks" and named
# "Video-" followed by an unpadded number and ".png"
# You can adjust the number of threads to use by specifying "-threads x" as the first option
# Use one thread per logical core for optimal performance.
# All examples will assume a quadcore CPU without SMT, so 4 threads will be used
# All examples will assume a 60 FPS video

# Creating a high quality mp4 from your images
ffmpeg -threads 4 -framerate 60 -i Output/chunks/Video-%d.png -c:v libx264 -preset slow -crf 10 videostream.m4v
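As mentioned above, you can target a WebM instead of an mp4. Below is a hedged sketch using the VP9 encoder (it assumes your FFmpeg build includes libvpx-vp9; '-b:v 0' together with '-crf' selects constant-quality mode). To stay self-contained it first generates a short PNG sequence with FFmpeg's test source as a stand-in for Blender's output; with a real render you would skip that step and point it at your own images.

```shell
# Generate a stand-in PNG sequence (replace with your real Blender output)
mkdir -p Output/chunks
ffmpeg -y -v error -f lavfi -i testsrc2=duration=1:size=320x240:rate=30 \
       Output/chunks/Video-%d.png
# Encode the image sequence as a high quality VP9 WebM
ffmpeg -y -v error -threads 4 -framerate 60 -i Output/chunks/Video-%d.png \
       -c:v libvpx-vp9 -crf 15 -b:v 0 videostream.webm
```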

Technically the images aren't needed anymore after this step. However, you might still want to keep them around until you have the final video, so you don't need to render them again in case anything goes wrong during encoding.


For an mp4 video, we're going to use the AAC audio codec. For a webm you'd want to use either Vorbis or Opus instead.

# All examples will assume the source file is called "Video.flac"

# Encode the audio stream using AAC
ffmpeg -i Video.flac -c:a aac -b:a 192k audiostream.m4a
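For the WebM route, here is a hedged sketch of the Opus alternative (it assumes your FFmpeg build includes libopus; the bitrate is an illustrative choice). To stay self-contained it first synthesizes a short FLAC file as a stand-in for Blender's mixdown.

```shell
# Generate a stand-in FLAC file (replace with your real Render Mixdown output)
ffmpeg -y -v error -f lavfi -i sine=frequency=440:duration=1 Video.flac
# Encode the audio stream using Opus instead of AAC
ffmpeg -y -v error -i Video.flac -c:a libopus -b:a 160k audiostream.opus
```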


Now that we have both streams in the required formats, it's time to merge them.

# All examples will assume that the streams are called audiostream and videostream
# The resulting file will be called final
# The extensions may vary based on which encoders were used

# Create a mp4 from an H.264 video- and AAC audiostream
ffmpeg -i videostream.m4v -i audiostream.m4a -c copy -movflags +faststart final.mp4

That's it! You should now have a video with good quality.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License