VTubing on Fedora KDE 42 (Revised twice)
Introduction
I finally have a good desktop computer for streaming, and I have been testing streaming KDE documentation work on Twitch (with occasional Silksong speedrunning) while using a bunny avatar.
It’s time to announce it: I now stream Mondays to Fridays at 8 PM UTC (5 PM Brasília time, UTC-3) on twitch.tv/herzenschein. Come say hi and ask about KDE stuff there!
Today I’m going to teach you how to make a fully working 3D VTubing setup on Fedora KDE 42. Realistically, though, this guide applies to any modern Fedora.
Things you’ll need:
- A computer with a GPU that can handle 3D models
- A webcam
- A VRM avatar
- SnekStudio or VSeeFace
- OBS Studio
- Codecs
- rpmfusion (for OBS virtual camera)
Things you’ll achieve:
- Streaming to OwnCast/YouTube/Twitch
- Optional: stream to all the above at the same time
- Using your 3D VTubing avatar as webcam
Things I’ll not detail here:
- Streaming with a 2D avatar
- Any other setup involving different software
Addendum
The next couple paragraphs were the result of a revision.
After posting this blog post I was made aware of SnekStudio which runs natively on Linux.
Originally, I was going to just do an honorary mention of the project.
However, I did some battle testing and I’ve determined that SnekStudio is good enough to replace my previous VSeeFace instructions, but I’ll keep the original instructions here for those who may happen to not have a good experience with SnekStudio for whatever reason (with VSeeFace, my model moved smoother for example).
Aside from the not-so-smooth movements, the only problems I had were actually preexisting issues in my model: twitchy hair and blinking issues when using wide eye mode. I didn’t really get any noteworthy issues with SnekStudio despite the fact that it’s alpha. It’s pretty good.
Notes about VTubing
VTubing means “virtual YouTubing”, and a VTuber is a “virtual YouTuber”, where you use a digital avatar as your online persona while making videos or streaming. There are three types of avatars you can use: static PNGs, 2D rigged avatars, and 3D rigged avatars.
Digital 3D avatars are usually in VRM format. That’s what I use.
PNGTubers actually use just still PNG images you can switch on the fly while streaming. 2D avatars usually come in Live2D format. I won’t talk about them here, but I know of two VTubers in the Linux space who use a 2D avatar:
- Asahi Lina, who worked on Asahi Linux
- Kelvin Shadewing, who made the game SuperTux Advance with a playable Konqi!
You can read more about this sort of thing in Streamlabs: VTubing for beginners.
I got my current VRM avatar on Gumroad, namely Rabbit by vr-zab, although there are many other stores where you can find one, like Etsy or VRoid Hub. Oftentimes you can also find VRM avatars as a bonus for Resonite / VRChat avatars and assets.
Once you have a VRM avatar, if you want to make it more “you”, you can pay an artist to retexture your model (not too expensive) or to use it as a base for direct model modifications / rigging (expensive). A similar thing applies to Resonite / VRChat models, actually.
VRM avatars can contain metadata detailing the terms of use the author expects you to follow; a fairly common practice is for the model to not allow usage for commercial purposes for example.
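Since a .vrm file is just a glTF binary with a JSON chunk up front, you can often peek at those terms straight from a terminal. A rough sketch (the filename and the `licenseName` field are assumptions based on the VRM 0.x metadata layout; check your model’s actual fields):

```shell
#!/bin/sh
# Print the first licenseName entry found in a VRM's embedded JSON.
# VRM 0.x stores usage terms in the "meta" object of the glTF JSON chunk.
vrm_license() {
  grep -ao '"licenseName"[^,}]*' "$1" | head -n 1
}
vrm_license "avatar.vrm" || true  # "avatar.vrm" is a placeholder path
```

This only scrapes printable text, so it is a quick check, not a substitute for opening the model in a proper VRM viewer.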
You can stream in two ways:
- Directly from something like OBS Studio to some platform like YouTube or Twitch by using the RTMPS protocol
- By replacing your webcam output with a virtual camera
Setting the system up
If you’ve just installed Fedora, the first thing you should always do is update your system:
sudo dnf update
This is non-obvious, but the first update always triggers DNF to install the necessary free codecs. The free codecs are actually sufficient for streaming and work well, so you can just install obs-studio and skip rpmfusion if you want:
sudo dnf install obs-studio
However, if you plan on using the OBS virtual camera, you will need the v4l2loopback kernel module which is only available in rpmfusion.
To set up rpmfusion, you can follow the upstream rpmfusion instructions.
You probably also want to swap to the non-free codecs as per the upstream rpmfusion instructions.
After that, make sure to install obs-studio and v4l2loopback:
sudo dnf install obs-studio v4l2loopback
Reboot and you’re done.
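Before moving on, it can be worth sanity-checking that everything landed. A small sketch (purely illustrative; it only reports, it changes nothing):

```shell
#!/bin/sh
# Report whether a probe command succeeds, without aborting the script.
check() {
  if "$@" >/dev/null 2>&1; then echo "ok: $*"; else echo "missing: $*"; fi
}
check command -v obs        # OBS Studio binary on the PATH
check modinfo v4l2loopback  # the kernel module pulled in from rpmfusion
```

If either line prints `missing`, revisit the corresponding step above before continuing.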
If you prefer, you can also use the OBS Studio flatpak; its virtual camera will work too after you have v4l2loopback installed.
Running SnekStudio natively
Enable the Flathub flatpak repository if you haven’t already:
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
Next, install SnekStudio. I recommend the flatpak (the file suffixed with _x86_64).
Download it by visiting its Download page or directly with the terminal:
wget https://github.com/ExpiredPopsicle/SnekStudio/releases/latest/download/SnekStudio_x86_64.flatpak
flatpak install SnekStudio_x86_64.flatpak
That’s it!
If you want to use microphone lip sync as well, you need to install the experimental version as mentioned in Setting up microphone lip sync.
Preparing SnekStudio for an OBS Scene
Upon installing and running SnekStudio, you will need to do a few things:
- Click on `Modules -> Mod list... -> MediaPipe Controller` to select your webcam
- In `Eye Adjustments`, enable `Link eyes blinking`
- In `Poselk`, set `Chest Yaw Rotation Scale` to either -0.3 or 0.3, whichever you prefer
  - This makes looking sideways more natural, as the chest moves towards or against the gaze!
- In `BlendShapeScalingAndOffset` you’ll find all of the settings for modifying your character model (depends on what the model creator made available)
  - The model shapes available will be shown in `MediaPipe`, the facial expressions in `VRM`
  - You’ll most likely want to alter the second slider of each shape, namely the `offset`
  - You’ll then want to use the third slider of each shape, the `smoothing`, which changes the transition animation between movements (you probably want it all the way to the max)
- Lastly, you’ll want to play with the lighting in `SceneBasic`, specifically `Ambient Light Energy` and `Directional Light Energy`
  - `Directional Light Pitch` and `Directional Light Yaw` let you control where the lighting comes from
There are also a few useful controls you’ll want to get used to:

- Whenever you open a dialog inside the program, clicking the upper right `p` pops it out into a separate window
- Use middle-click drag or right-click drag to rotate around the scene / your character
- Use Shift + middle-click drag or Shift + right-click drag to move your “in-camera” position
Lastly, you can go to Settings -> Window and enable Transparent background.
Now, KDE Plasma (the default desktop environment on Fedora KDE) has a nice feature we can use for convenient window placement: tiling.
You can toggle the tile editor with Windows key (otherwise known as Meta or Super) + T. If you’ve never played with this feature before, you will probably have a default tiling with a lean left tile, a large center tile, and a lean right tile.
On your adjacent monitor, expand the left tile a bit and delete the right tile.
If you don’t have an adjacent monitor where you can hide your SnekStudio and OBS windows from the stream, that’s fine. You can use a virtual desktop for this later.
Now, drag the SnekStudio window to where the left tile would be while also holding Shift. The SnekStudio window snaps to that space and will get the same window geometry every time you move it there. You can then click the weird X button to hide the UI, and hide the button itself with Space.
The SnekStudio setup is now complete.
Open OBS Studio, click on the plus ➕ button and select Screen Capture (Pipewire). Create a new screen capture and in the Screen Sharing: Choose what to share with OBS Studio window, select the Windows tab and find the SnekStudio window.
Right click your screen capture source and select Transform -> Edit Transform (or press Ctrl+E). You can then use the Crop left/right/top/bottom spinboxes to cut the window borders from your screen capture. The titlebar usually takes around 40 px to be removed, the left border around 6 px.
You might want to readjust your character in SnekStudio while in this mode.
If you’re using an ultrawide monitor, set your ultrawide resolution as your Base (Canvas) Resolution. While 16:9 is a more standard and accessible aspect ratio for streaming, the recommendation you’ll find online (using the Scaling/Aspect Ratio filter) is mostly outdated; platforms like Twitch letterbox ultrawide output (adding black bars above and below) so it looks fine on 16:9 screens.
Lastly, move the OBS window to the large right tile using Shift, as before.
The OBS setup is now complete.
Setting up microphone lip sync
This section was added in the second revision.
By default, SnekStudio does lip sync using your webcam. While that may suffice for you, it can often malfunction and cause your avatar’s mouth and jaw to open excessively or not at all, especially if you are under bad lighting conditions or have a bad webcam.
To address this, use microphone lip sync instead. This is currently only available as an experimental feature, which means you will need to install the SnekStudio nightly instead of the stable release.
Enable the Flathub flatpak repository if you haven’t already:
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
You can only download the experimental version from the git repository at https://github.com/ExpiredPopsicle/SnekStudio/releases/tag/nightly.
Then, in SnekStudio, go to Modules -> Mod list... and click the Add button at the bottom of the dialog. Add the LipSync module and, with it selected, click the arrow buttons to move it to after SceneBasic and before AnimationApplier.
By default, the lip sync will pick up every voice-like sound coming out of your microphone, which will be a lot, so your avatar will look like it’s talking on its own! To get more accurate lip sync, you need noise reduction so that only your voice is detected, and it needs to apply before the audio reaches SnekStudio, so an OBS filter won’t do here.
You can easily do that by installing EasyEffects from Flathub:
flatpak install com.github.wwmm.easyeffects
In EasyEffects, click on Input at the top of the window, Effects at the bottom of the window, and then on Add Effect at the left of the window.
Select Noise Reduction. Voilà! It should apply immediately. You can then go to the application’s Preferences and set it to Autostart on login.
Running VSeeFace with Bottles
You can skip this section if you’re using SnekStudio!
Enable the Flathub flatpak repository if you haven’t already:
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
Install Bottles:
flatpak install com.usebottles.bottles
Start Bottles, then click on the upper left plus ➕ button that when hovered says “Create new Bottle”.
Write the name for the new Bottle and click on the Create button on the top right, using the defaults.
Click on the newly created Bottle and click on Add Shortcuts....
Don’t actually do anything there: just check out the location bar on the top and copy the path that you see, then close the dialog window.
This is the path to the hidden Bottles directory where your Bottle installation is stored. It should look something like this:
/home/yourusername/.var/app/com.usebottles.bottles/data/bottles/bottles/TheBottleYouJustCreated/
Open that directory in your file manager, Dolphin 🐬.
Download VSeeFace and extract it in the above directory + drive_c/Program Files/, so:
/home/yourusername/.var/app/com.usebottles.bottles/data/bottles/bottles/TheBottleYouJustCreated/drive_c/Program Files/
You can use F3 for split view or Ctrl+T for a new tab in Dolphin to make it easier to move files around.
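Since that path follows a fixed pattern, a tiny sketch can spit it out for you (the bottle name below is hypothetical; substitute your own):

```shell
#!/bin/sh
# Build the Windows-side "Program Files" path for a flatpak Bottles bottle.
bottle_path() {
  printf '%s\n' "$HOME/.var/app/com.usebottles.bottles/data/bottles/bottles/$1/drive_c/Program Files"
}
bottle_path "TheBottleYouJustCreated"
```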
Now go back to Bottles, open the Bottle you made, click on Add Shortcuts... and select drive_c/Program Files/VSeeFace/VSeeFace.exe in there.
After that, you should see a play button ▶️ to the right. Upon clicking it, VSeeFace should show up! Just select the right webcam to get started, you can use the default VRM avatar for testing.
Back to Bottles real quick, you can click the hamburger menu (three dots) next to the play button ▶️ and click on “Add Desktop Entry”. You’ll get an entry in your menu for running VSeeFace directly, very handy.
Preparing VSeeFace for an OBS Scene
You can skip this section if you’re using SnekStudio!
I have a few recommendations for your VSeeFace settings:
- Use `Reset position` after positioning yourself mostly neutral to the webcam with the mouth slightly open; lip detection works better this way
- Enable `Mirror motion`
- Keep `Movement Smoothing` and `Movement Range` at 0.30
- If your VRM avatar came with faulty eye tracking, enable `Auto blink`
- Set `Default Camera Position` to `Custom`, choose a desirable angle for your character, then save

There are also a few useful controls you’ll want to get used to:

- You can rotate your character with `Alt + left mouse drag`
- You can zoom in/out with `Alt + right mouse drag`
- You can move the character around with `Alt + middle mouse drag`
- You can control the lighting with `Ctrl + left mouse drag`
  - Up moves the light source above and/or behind the character
  - Down moves the light source below and/or in front of the character
  - Left/right rotates relative to the up/down axis
  - I recommend putting the light source to the front and to the right of the character (your left), as this works well when in the bottom left corner of the screen while streaming
- You can hide the interface with the weird X button on the bottom right
- You can then hide the weird X button by pressing Space
Remember to save your camera position. The other settings should be saved upon close.
Now, KDE Plasma (the default desktop environment on Fedora KDE) has a nice feature we can use for convenient window placement: tiling.
You can toggle the tile editor with Windows key (otherwise known as Meta or Super) + T. If you’ve never played with this feature before, you will probably have a default tiling with a lean left tile, a large center tile, and a lean right tile.
On your adjacent monitor, expand the left tile a bit and delete the right tile.
If you don’t have an adjacent monitor where you can hide your VSeeFace and OBS windows from the stream, that’s fine. You can use a virtual desktop for this later.
Now, drag the VSeeFace window to where the left tile would be while also holding Shift. The VSeeFace window snaps to that space and will get the same window geometry every time you move it there. You can then click the weird X button to hide the UI, and hide the button itself with Space.
The VSeeFace setup is now complete.
Open OBS Studio, click on the plus ➕ button and select Screen Capture (Pipewire). Create a new screen capture and in the Screen Sharing: Choose what to share with OBS Studio window, select the Windows tab and find the VSeeFace window. Simple enough. You should see your VSeeFace avatar with a horrid gray background.
Left click on the screen capture Source you just created and then on Filters. Click on the plus ➕ button, add Color Key, then use Select Color -> Pick Screen Color to select the gray color from VSeeFace. Done! You should see a transparent background behind your character.
If your monitor is HDR capable, the color key will probably not work, though. That’s a known and expected issue in OBS. You’ll need to add another filter called Apply LUT and tick Passthrough Alpha to get it to work.
Now, if for some reason you prefer to capture the actual monitor output instead of the individual window, or if you can see your window borders, right click your screen capture source and select Transform -> Edit Transform (or press Ctrl+E). You can then use the Crop left/right/top/bottom spinboxes to cut the window borders from your screen capture. The titlebar usually takes around 40 px to be removed, the left border around 6 px.
You might want to readjust your character in VSeeFace while in this mode.
If you’re using an ultrawide monitor, set your ultrawide resolution as your Base (Canvas) Resolution. While 16:9 is a more standard and accessible aspect ratio for streaming, the recommendation you’ll find online (using the Scaling/Aspect Ratio filter) is mostly outdated; platforms like Twitch letterbox ultrawide output (adding black bars above and below) so it looks fine on 16:9 screens.
Lastly, move the OBS window to the large right tile using Shift, as before.
The OBS setup is now complete.
Connecting via RTMPS
On your streaming service of preference you should see a stream settings page of sort. For example, on Twitch, you navigate to Settings -> Stream, while on YouTube you navigate to ➕ Create -> Livestreaming -> Stream Settings. There, you need to copy your private Stream Key.
If you heavily dislike relying on a third-party, proprietary service for this, you can use Owncast, a FOSS, self-hosted, federated, single-user Twitch-like streaming platform. I have an extremely easy way to set it up on port 8080 as a Podman Quadlet container, and you can use your own Owncast instance on your own computer to test drive your stream, privately and securely.
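For reference, a minimal Quadlet sketch along those lines might look like this (the image tag, volume name, and file location are assumptions; adapt them to your setup):

```ini
# ~/.config/containers/systemd/owncast.container
[Container]
Image=docker.io/owncast/owncast:latest
PublishPort=8080:8080
Volume=owncast-data:/app/data

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, `systemctl --user start owncast` should bring the instance up on http://localhost:8080.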
Go to the OBS Settings -> Stream -> Service, select the service, then paste the copied key. Make sure to always keep it hidden.
Clicking on Start Streaming should, unimpressively, start streaming to the platform!
Connecting to everything at the same time via RTMPS
There is an interesting plugin for OBS called obs-multi-rtmp whose purpose is to let you stream to multiple platforms at the same time, so you can stream to Owncast, YouTube, and Twitch with the same OBS stream (if your hardware is capable and you have the bandwidth).
Unfortunately, the process is convoluted, as you’ll need to compile the plugin yourself using the instructions mentioned in one of the GitHub issues for the project.
Install obs-studio-devel:
sudo dnf install obs-studio-devel
Clone the project and enter it:
git clone https://github.com/sorayuki/obs-multi-rtmp.git
cd obs-multi-rtmp
Build it using the Ubuntu CI preset:
cmake -B build/ --install-prefix $PWD/obs-plugin-here --preset ubuntu-x86_64 --fresh
cmake --build build/ --parallel
cmake --install build/
If you have issues trying to compile the project, see Installing build dependencies to learn how to address them.
The plugin files will be installed to an obs-plugin-here/ directory right in the source directory for convenience. However, it does not follow the directory structure that OBS Studio uses for its plugins, so we need to move some files around.
Create the directory structure necessary for the plugin:
mkdir --parents obs-multi-rtmp/bin/64bit
mkdir --parents obs-multi-rtmp/data/locale
Copy the necessary files:
cp obs-plugin-here/lib/x86_64-linux-gnu/obs-plugins/obs-multi-rtmp.so obs-multi-rtmp/bin/64bit/
cp obs-plugin-here/share/obs/obs-plugins/obs-multi-rtmp/locale/* obs-multi-rtmp/data/locale/
Lastly, copy the newly-created obs-multi-rtmp/ directory to your OBS plugins installation:
mkdir --parents ~/.config/obs-studio/plugins
cp --recursive obs-multi-rtmp/ ~/.config/obs-studio/plugins/
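To double-check the layout matches what OBS expects, here’s a quick sketch (it only inspects the files in the given directory):

```shell
#!/bin/sh
# Report whether the obs-multi-rtmp plugin files sit where OBS will look.
check_layout() {
  for f in "$1/bin/64bit/obs-multi-rtmp.so" "$1/data/locale"; do
    if [ -e "$f" ]; then echo "found: $f"; else echo "missing: $f"; fi
  done
}
check_layout "$HOME/.config/obs-studio/plugins/obs-multi-rtmp"
```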
Finally, open OBS Studio, click on Docks -> Multiple Output, then attach it to whatever region of OBS Studio you prefer.
Note that you will need to add the correct RTMPS server for services like YouTube and Twitch. This is ordinarily handled by OBS, but the plugin doesn’t have access to OBS’s list of servers.
Fret not, because the list of streaming servers OBS uses is actually available directly in its source code. Just find the best one for you and manually add new streaming targets in your Multiple Output plugin.
Kudos to AntonioGomes42 for figuring out and detailing how to build the plugin.
Using a virtual camera
A virtual camera can be useful to stream to absolutely any webcam-using service or platform using your digital avatar. This includes things like Jitsi, Slack, BigBlueButton, and Discord. Anything that can use your webcam, really. You can be a cute bunny instead of a cute human with your friends, or you may just be really privacy conscious.
To use the virtual camera, as mentioned before, you’ll need v4l2loopback, which is only available in rpmfusion. If you haven’t skipped anything in Setting the system up, including rebooting after installing it, you should see the Start Virtual Camera button in your OBS.
Fedora has changed the default webcam backend in Firefox from v4l2 to PipeWire’s libcamera. It’s cool and all, but I apparently ended up uncovering a weird bug in it that prevents the OBS virtual camera from showing up in Firefox.
Thankfully, there’s a simple workaround: go to about:config, set media.webrtc.camera.allow-pipewire to false (both instances of the config), then restart PipeWire and WirePlumber:
systemctl restart --user pipewire wireplumber
Make sure to restart Firefox and OBS after that.
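If you’d rather persist the pref in a file than flip it in about:config, Firefox also reads user.js from the profile directory. A sketch (the profile path is a placeholder, and only the pref named above is set):

```shell
#!/bin/sh
# Append a boolean user_pref line to a Firefox profile's user.js.
set_pref_false() {
  mkdir -p "$1"  # only so the sketch runs; point this at your real profile
  printf 'user_pref("%s", false);\n' "$2" >> "$1/user.js"
}
# Placeholder profile path; find yours under ~/.mozilla/firefox/
set_pref_false "$HOME/.mozilla/firefox/xxxxxxxx.default-release" "media.webrtc.camera.allow-pipewire"
```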
I made a bug report about this recently, but I had already been somewhat in contact with the Fedora Firefox maintainer for a while about this, and I’ve been helping investigate within my limited capacity. Eventually we should see this fixed upstream, and no modification should be required in Firefox in the future.
Final thoughts
I had to figure out many pieces to get this working. Some things were not obvious at all, some things were much more complicated even months ago:
- There are two main ways to install software with Bottles; either running the installer, or by having the files inside the prefix (the case here)
- What was silently causing Firefox to not see the virtual camera
- The OBS Studio plugin directory structure
- The build failures that obs-multi-rtmp would get unless you used the CI preset
- I originally used the OpenSeeFace script to make VSeeFace use face tracking until I figured things out with VSeeFace
- The fact that VSeeFace only supports VRM0 instead of VRM1, so editing VRM models with Blender doesn’t quite work even with the correct plugins
- The rabbit hole I went down to edit my own VRM file (not mentioned here), not knowing the only functional way is to use a hella old Unity version
- Not knowing how camera devices are managed on Linux
- The HDR workaround was so cryptic
- I had to test Twitch letterboxing myself to even be sure it was a thing to begin with
Among other things. Some of these are not even Linux specific, they’re just obscure in general.
All because I wanted to stream as a cute femboy bnnuy online.
Hopefully with my instructions it is now not too painful to set things up, even something complicated like compiling a plugin. This blog post should be generally replicable on other distributions too. It was originally written to celebrate Fedora KDE 42 becoming an official edition alongside GNOME, but alas, I took too long and we’re already at Fedora KDE 43.
Maybe we’ll see more Linux VTubers now at least? :3c
Other resources
The next couple paragraphs were the result of a revision.
Only days after I published this blog post did I hear about a repository containing guides on how to run multiple pieces of VTubing software on Linux (mostly via WINE), and then another website with more resources as well.
This blog post was specifically designed to give you a very battle-tested, functional way to do VTubing on Linux, hence why there’s only one method highlighted (plus an optional secondary method); I’m opinionated.
But it’s always worth linking to other people’s projects so they’re easier to find and support.
Here’s an honorary mention to the community-maintained Linux-Guide-To-VTubing.
Here’s an honorary mention to the community-maintained Linux VTubing Guide.