This is an Ansible role intended to be run as the final step in the corresponding playbook, and it does what it says on the tin.
If this role is not run as part of the cosmos playbook, note that it expects Docker and Samba to already be installed and configured.
This role automatically builds out a matt-cloud Debian system as a VCR ripping device.
There is a hardware requirement of a USB RCA capture device, and this device must be configured in defaults/main.yaml.
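The capture device is selected via variables in defaults/main.yaml. A hypothetical sketch of what that might look like (the variable names and device paths here are illustrative assumptions, not the role's actual defaults):

```yaml
# Illustrative only -- check defaults/main.yaml for the real variable names.
capture_video_device: /dev/video0   # USB RCA capture stick, video side
capture_audio_device: "hw:1,0"      # ALSA device for the capture stick's audio
```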
There is an option to install a basic GUI and display the feed in a browser kiosk.
This platform requires more processing power than even a half-decent system provides; an 8-core Ryzen 5 mini PC is incapable of running the entire stack. It turns out that just the A/V-to-RTMP job takes a non-trivial amount of compute, and doing that alongside the video encoding generally takes more than a small CPU can offer. I have therefore split the implementation into a client and server model. I set up a new VM on my home server, with a static IP, that can use nearly all of the CPU cores when needed. I created two new roles that call this role with a few different variables, so the cosmos-server pipeline can be run against a dedicated server to build the default cosmos VHS capture stack. This could probably also run on an Intel i9 mini PC or a similarly beefy Ryzen. The VCR server VM now captures both the preview stream and the capture stream, and saves the capture to a network location that the website can automatically see. The client just streams to the server and displays the feed served from it.
This process uses stages of different software to make capturing a VHS tape as automatic as possible.
The various pertinent services live at these ports:
- Control panel: 8081
- Local read-only file index: 8080
- Preview livestream: 8888
- Jellyfin: 8096
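For reference, the port map above can be captured in a small helper; this is an illustrative sketch (the hostname default is an assumption, adjust for your deployment):

```python
# Port map for the services listed above; useful for smoke tests.
SERVICES = {
    "control_panel": 8081,   # PHP control site
    "file_index": 8080,      # local read-only file index
    "preview": 8888,         # MediaMTX live preview
    "jellyfin": 8096,
}

def service_url(name: str, host: str = "localhost") -> str:
    """Build the HTTP URL for a named service on a given host."""
    return f"http://{host}:{SERVICES[name]}"
```

A quick loop over `SERVICES` with `urllib.request.urlopen(service_url(name))` can then confirm that each layer came up after the playbook runs.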
Overview of layers:
- ffmpeg service to combine the video and audio of the USB capture device into an RTMP stream and an mp4 file
  - The RTMP stream is so the capture can be live-previewed
  - This feed is pointed at the MediaMTX service
- MediaMTX to monitor the current capture
  - This service is only used for its ability to view an RTMP stream live in a web browser
- Docker container with Apache + PHP for the control site
  - This runs the PHP site that controls the ffmpeg capture service
- Python API service to control the ffmpeg streaming service
  - This API is used by the PHP site to control the service
  - It also hosts the duration variable storage API
    - The duration is stored locally in a JSON file
- Small helper script to monitor the elapsed time and stop streaming after the selected time
  - This script kills the capture service after the duration has been reached
  - It does this by reading the JSON file managed by the Python API
- Playbook to mount and format additional storage when present
  - When secondary storage is detected, it will be mounted at the media storage path
  - If blank storage is attached, it will be formatted, so BE CAREFUL
  - If there is no secondary storage, the videos will be stored on the root path
- Playbook to install the GUI when certain hardware is detected
  - Right now this is just a 2nd-gen MS Surface I have
  - It is identified by the "System Info" dmidecode data
  - A variable in the defaults file holds a list of these strings
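The first layer above tees the capture device's A/V into both the RTMP preview and the mp4 file. A hedged sketch of the kind of ffmpeg invocation involved, built as an argv list; the device names, stream URL, and encoder settings are assumptions for illustration, not the role's actual command:

```python
def build_ffmpeg_argv(video_dev="/dev/video0",
                      audio_dev="hw:1,0",
                      rtmp_url="rtmp://localhost:1935/live/vhs",
                      outfile="/srv/media/capture.mp4"):
    """Assemble an ffmpeg command that reads the USB capture device and
    tees the encoded output to an RTMP preview and an mp4 file.
    All paths, URLs, and encoder choices here are illustrative only."""
    return [
        "ffmpeg",
        "-f", "v4l2", "-i", video_dev,   # composite video from the USB capture stick
        "-f", "alsa", "-i", audio_dev,   # audio from the same device
        "-c:v", "libx264", "-preset", "veryfast",
        "-c:a", "aac",
        "-map", "0:v", "-map", "1:a",
        # ffmpeg's tee muxer writes one encode to multiple outputs
        "-f", "tee", f"[f=flv]{rtmp_url}|{outfile}",
    ]
```

The tee muxer is what lets a single encode feed MediaMTX and the recording at once, which matters on small CPUs since encoding twice would double the load.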
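The duration watcher described above reads the JSON file managed by the Python API and stops the capture once the selected time has elapsed. A minimal sketch of that decision logic, assuming hypothetical key names and state-file path (the role's real ones may differ):

```python
import json
import time

# Assumed location of the JSON state file written by the control API.
STATE_FILE = "/var/lib/vhs/duration.json"

def capture_expired(state: dict, now: float) -> bool:
    """True once the elapsed time since capture start reaches the
    selected duration (both assumed to be stored by the control API)."""
    return now - state["started_at"] >= state["duration_seconds"]

def watch(poll_interval: float = 5.0) -> None:
    """Poll the state file and stop the capture service when expired.
    Stopping via systemd is an assumption about how the role wires it up."""
    while True:
        with open(STATE_FILE) as fh:
            state = json.load(fh)
        if capture_expired(state, time.time()):
            # e.g. subprocess.run(["systemctl", "stop", "vhs-capture"])
            break
        time.sleep(poll_interval)
```

Keeping the expiry check as a pure function makes the helper easy to test without a running capture service.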
Some additional notes on the storage handling task, working_folder.yaml. It mounts additional local storage when present. It expects a single additional storage device and works with an SD card or an NVMe drive. If a valid GPT ext4 volume is present it will mount it, and if the device holds no volumes at all it will create a new ext4 volume. Edge cases should simply fail, which is fine. It would be a bad idea to run this playbook on an inappropriate endpoint.
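The format-and-mount flow could be sketched roughly like the following tasks; the module choices, variable names, and mount path are assumptions for illustration, not the contents of the real working_folder.yaml:

```yaml
# Rough sketch only -- disk detection and the blank-device check are elided.
# WARNING: creating a filesystem on a blank device is destructive.
- name: Create an ext4 filesystem when the extra disk is blank
  community.general.filesystem:
    dev: "{{ extra_disk }}"
    fstype: ext4
  when: extra_disk_is_blank

- name: Mount the ext4 volume at the media storage path
  ansible.posix.mount:
    src: "{{ extra_disk }}"
    path: /srv/media
    fstype: ext4
    state: mounted
```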