Compare commits

...

89 Commits
v0.1.0 ... main

Author SHA1 Message Date
Paul van Tilburg 49728ea6dd
Bump the version to 0.5.3
Check and lint using Cargo / Check and lint (push) Successful in 2m39s Details
Release / Release (push) Successful in 1m24s Details
Release / Release crate (push) Successful in 4m13s Details
Release / Release Debian package (push) Successful in 6m34s Details
2024-02-27 13:54:20 +01:00
Paul van Tilburg 97b8a0b8bd
Update the changelog 2024-02-27 13:53:20 +01:00
Paul van Tilburg 36cfa2d0ff Bump the dependency on cached to 0.49.2
Check and lint using Cargo / Check and lint (push) Successful in 2m58s Details
2024-02-26 21:19:56 +01:00
Paul van Tilburg a9e0e2417d Cargo update; fixes several security advisories
Fixes RUSTSEC-2024-0003, RUSTSEC-2023-0072 and RUSTSEC-2023-0074.
2024-02-26 21:18:19 +01:00
Paul van Tilburg 263d8272da
Handle paging information being absent (closes: #17)
Check and lint using Cargo / Check and lint (push) Successful in 3m11s Details
2024-02-16 20:50:23 +01:00
Paul van Tilburg db2d7f3f6c
Add missing date
Check and lint using Cargo / Check and lint (push) Successful in 3m50s Details
2023-11-03 11:52:46 +01:00
Paul van Tilburg 1a8f8d67fa
Bump the version to 0.5.2
Check and lint using Cargo / Check and lint (push) Successful in 2m56s Details
Release / Release (push) Successful in 1m7s Details
Release / Release crate (push) Successful in 4m35s Details
Release / Release Debian package (push) Successful in 7m10s Details
2023-11-03 11:24:44 +01:00
Paul van Tilburg f4f9578c0e
Update the changelog 2023-11-03 11:23:38 +01:00
Paul van Tilburg 6d6895066f
Bump the dependency on cached to 0.46.0 2023-11-03 11:21:49 +01:00
Paul van Tilburg f3c4c5071f
Cargo update; fixes RUSTSEC-2020-0071
This switches to Rocket 0.5-rc.4
Also fix the usage of a deprecated method.
2023-11-03 11:20:18 +01:00
Paul van Tilburg b0cb9d984a
Bump the version to 0.5.1
Check and lint using Cargo / Check and lint (push) Successful in 3m0s Details
Release / Release (push) Successful in 1m29s Details
Release / Release crate (push) Successful in 5m14s Details
Release / Release Debian package (push) Successful in 7m16s Details
2023-08-25 22:09:32 +02:00
Paul van Tilburg 0c49df352d
Update the changelog 2023-08-25 22:08:43 +02:00
Paul van Tilburg 64ee93c553
Build and release a Debian package in a separate job
Release it to the package repository instead of attaching to the
release. Also add the relevant part of the changelog as release notes to
the release and fix some schema-related issues.
2023-08-25 22:08:00 +02:00
Paul van Tilburg 613d50bf30
Bump the dependency on youtube_dl to 0.9.0 2023-08-25 22:03:47 +02:00
Paul van Tilburg fd4a26715e
Cargo update 2023-08-25 22:02:42 +02:00
Paul van Tilburg 8850e16c4a
Cargo update
Check and lint using Cargo / Check and lint (push) Successful in 4m33s Details
2023-06-08 11:11:14 +02:00
Paul van Tilburg 06e0a5ecd5
Bump the dependency on cached to 0.44.0 2023-06-08 11:10:57 +02:00
Paul van Tilburg 29f3975d62
Use the personal Cargo token
Check and lint using Cargo / Check and lint (push) Successful in 3m23s Details
2023-06-08 10:58:50 +02:00
Paul van Tilburg a05106fecf
Bump the version to 0.5.0
Check and lint using Cargo / Check and lint (push) Successful in 4m23s Details
Release / Release (push) Successful in 8m38s Details
Release / Release crate (push) Failing after 4m36s Details
2023-06-08 10:36:38 +02:00
Paul van Tilburg c128bfea62
Update the changelog 2023-06-08 10:36:17 +02:00
Paul van Tilburg f7a5477804
Differentiate between publish and update time
Check and lint using Cargo / Check and lint (push) Successful in 6m42s Details
The `pubDate` field of an item in the feed is meant to be the time the item
was published. It should not be bumped if the item is updated in the
backend.

* Introduce a new `published_at` field on `Item`
* Update the Mixcloud and YouTube backends to fill this field
* Use the `published_at` field on `Item` for the `<pubDate>` item
  subelement
2023-06-08 10:10:34 +02:00
Paul van Tilburg 9fc9990c27
No longer configure using a sparse Cargo index for crates.io
Check and lint using Cargo / Check and lint (push) Successful in 3m50s Details
This is the default since Rust 1.70.
2023-06-06 07:46:24 +02:00
Paul van Tilburg 05f88dbb9e
Add a full release workflow
Check and lint using Cargo / Check and lint (push) Successful in 3m19s Details
2023-05-22 20:11:19 +02:00
Paul van Tilburg 409a69604e
Tweak step name 2023-05-22 20:10:45 +02:00
Paul van Tilburg b958734e92
Simplify Gitea Actions check and lint workflow
Check and lint using Cargo / Check and lint (push) Successful in 3m31s Details
2023-04-25 16:36:49 +02:00
Paul van Tilburg cb43d91b64
Bump the version to 0.4.1
Check Details
Lints Details
2023-04-11 19:41:59 +02:00
Paul van Tilburg f75fc513f9
Update the changelog 2023-04-11 19:40:38 +02:00
Paul van Tilburg 5e9486e81a
Cargo update 2023-04-11 19:38:23 +02:00
Paul van Tilburg 0ff54dbf03
Select direct HTTP audio streams only
Select the well-supported, almost always available MP4 container format
that is directly available (so no HLS or DASH). This unfortunately
does reduce the bitrate to 64 kbps.
2023-04-11 19:37:29 +02:00
Paul van Tilburg 1af19270cc
Add missing security fixes
Check Details
Lints Details
2023-03-24 19:34:50 +01:00
Paul van Tilburg 133bc0ac27
Add missing repository field 2023-03-24 19:29:21 +01:00
Paul van Tilburg 51c3874820
Bump the version to 0.4.0
Check Details
Lints Details
2023-03-24 19:25:52 +01:00
Paul van Tilburg fd4e1b00a1
Update the changelog 2023-03-24 19:24:13 +01:00
Paul van Tilburg bae34b6858
Bump dependencies on cached and youtube_dl 2023-03-24 19:20:55 +01:00
Paul van Tilburg 501bd9329c
Cargo update; fixes several security advisories
Fixes RUSTSEC-2021-0145, RUSTSEC-2020-0016, RUSTSEC-2023-0001,
RUSTSEC-2023-0005, RUSTSEC-2023-0018, RUSTSEC-2023-0022,
RUSTSEC-2023-0023 and RUSTSEC-2023-0024.
2023-03-24 19:18:08 +01:00
Paul van Tilburg 81979cd5e0
Update to Rocket 0.5.0-rc.3 2023-03-24 19:17:00 +01:00
Paul van Tilburg 8e4045572c
Add Gitea Actions workflow for cargo
Check Details
Lints Details
2023-03-21 11:54:58 +01:00
Paul van Tilburg 83d025c785
Bump the dependency on ytextract (closes: #14)
This fixes the issue where JSON cannot be serialized due to changes
in YouTube (a new player UI button in particular).
2023-01-30 19:53:25 +01:00
Paul van Tilburg 7f1120fd47
Select MP4 audio streams only (experimental)
The filter used to select the stream with the highest bitrate, but this
may result in a stream with a codec/container that is not supported by
all podcast clients, such as WEBM. Select the (almost always available)
highest stream using the MP4 container instead.
2023-01-29 14:27:47 +01:00
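The stream selection described in the commit above can be sketched as follows. The `AudioStream` type and its fields are hypothetical stand-ins for the extractor's stream metadata, not the project's actual types:

```rust
/// A hypothetical audio stream as reported by the extractor.
#[derive(Debug)]
struct AudioStream {
    container: &'static str, // e.g. "mp4" or "webm"
    bitrate: u32,            // in kbps
    url: &'static str,
}

/// Selects the highest-bitrate stream that uses the MP4 container,
/// skipping e.g. WEBM streams that some podcast clients cannot play.
fn select_mp4_stream(streams: &[AudioStream]) -> Option<&AudioStream> {
    streams
        .iter()
        .filter(|s| s.container == "mp4")
        .max_by_key(|s| s.bitrate)
}

fn main() {
    let streams = [
        AudioStream { container: "webm", bitrate: 160, url: "https://example.com/a.webm" },
        AudioStream { container: "mp4", bitrate: 128, url: "https://example.com/a.m4a" },
        AudioStream { container: "mp4", bitrate: 64, url: "https://example.com/b.m4a" },
    ];
    let best = select_mp4_stream(&streams).expect("an MP4 stream is almost always available");
    assert_eq!(best.bitrate, 128);
    println!("selected: {}", best.url);
}
```

Note that this trades the absolute highest bitrate for broad client compatibility, exactly the trade-off the commit message describes.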
Paul van Tilburg 371b758962 Strip parameters from MIME types
Some podcast clients are scared of them and they are not really
necessary either.
2022-12-31 14:32:58 +01:00
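Stripping the parameters boils down to keeping only the `type/subtype` essence of the MIME type. A minimal sketch with a hypothetical helper (not the project's actual code):

```rust
/// Strips any parameters (such as `; codecs=...`) from a MIME type,
/// keeping only the `type/subtype` part that podcast clients expect.
fn strip_mime_params(mime_type: &str) -> &str {
    mime_type.split(';').next().unwrap_or(mime_type).trim_end()
}

fn main() {
    assert_eq!(strip_mime_params("audio/mp4; codecs=mp4a.40.2"), "audio/mp4");
    assert_eq!(strip_mime_params("audio/mpeg"), "audio/mpeg");
    println!("ok");
}
```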
Paul van Tilburg 2cd756254b
Fix typo 2022-12-30 11:17:30 +01:00
Paul van Tilburg b7a923c918
Bump the version to 0.3.0 2022-12-24 13:31:27 +01:00
Paul van Tilburg 6284f6327a
Update the changelog 2022-12-24 13:31:09 +01:00
Paul van Tilburg 4244fbc6d2
Bump dependencies; cargo update 2022-12-24 13:22:54 +01:00
Paul van Tilburg bec7fa850c Merge pull request 'Implement YouTube back-end' (#12) from 5-add-youtube-backend into main
Add support for creating podcast feeds of YouTube channels and playlists.

* Add the YouTube back-end
* Update the documentation
* Use the MIME DB to determine the download URL file extensions

Reviewed-on: #12
2022-12-24 13:19:51 +01:00
Paul van Tilburg a6c9275d93
Add more channel & item metadata
This includes categories (from hashtags), descriptions and keywords.
2022-12-23 22:17:56 +01:00
Paul van Tilburg cd831a5145
Update documentation 2022-12-23 22:17:56 +01:00
Paul van Tilburg a855c98399
Always apply limit after filtering successful streams 2022-12-23 22:17:56 +01:00
Paul van Tilburg 4177e1c6f9
Set updated at timestamp for videos
Since the metadata only provides a date, set the time part to 12:00:00
(UTC).

Also fix up the deprecation warning for the creation of the initial zero
last build timestamp.
2022-12-23 22:17:56 +01:00
Paul van Tilburg 9f88f4f9a3
Bump the dependency on ytextract
This newer version is able to correctly parse the date of streamed
videos.

Also use the full `ytextract::Video` structs which should have all
the metadata.
2022-12-23 22:17:55 +01:00
Paul van Tilburg 94121c0828
Apply a default item limit of 50 2022-12-23 22:17:55 +01:00
Paul van Tilburg 8e73deb042
Mention YouTube support in the public documentation 2022-12-23 22:17:55 +01:00
Paul van Tilburg 3a3fbc96f4
Use a MIME DB to determine the download URL file extensions
* Also apply it to the default MIME type for Mixcloud posts
* Add a dependency on the `mime_db` crate
2022-12-23 22:17:55 +01:00
Paul van Tilburg 59e1f8a987
Add first version of the YouTube back-end 2022-12-23 22:17:52 +01:00
Paul van Tilburg 66452cc96d
Add more lints
Not enabling the `trivial_casts` lint, because the `uri!` macro seems to
trigger it.
2022-10-17 20:10:06 +02:00
Paul van Tilburg 32040f3b0f
Cargo update 2022-10-17 19:51:33 +02:00
Paul van Tilburg bde6135f70
Use public URL instead of URL in configuration
Change the name of the `url` config key to `public_url` to be more clear
about what it is for.
2022-10-17 19:51:24 +02:00
Paul van Tilburg fa8fc40b58
Add missing trait derives on back-end types 2022-08-15 21:07:53 +02:00
Paul van Tilburg 101df7d486
Add missing/fix cache-related comments 2022-08-15 20:24:13 +02:00
Paul van Tilburg 76f1e01657
Make channel/item image optional; change item length type
This allows more back-ends to be compatible.
2022-08-15 20:22:15 +02:00
Paul van Tilburg 49e0e47ba2
Introduce enum and enum dispatching for backends
This way handlers don't need to do case matching on backend ID strings
anymore.

* Rename `backend` to `backend_id` where we have a backend ID
* Add `get` function and `Backends` enum to the `backend` module
* Add a depend on the `enum_dispatch` crate
2022-08-15 20:21:42 +02:00
Paul van Tilburg cb40f6b192
Split off feed generation to feed module
Also rename the handler function names so they don't conflict with
(current and future) modules.
2022-08-14 10:16:05 +02:00
Paul van Tilburg bc9a9e307d
Add back-end abstraction; refactor Mixcloud back-end (closes: #10)
* Add a `backend` module `Backend` trait and necessary abstract types
* Refactor handlers to use the back-end abstraction
* Directly serialize to URLs where necessary in Mixcloud back-end
* Require `serde` feature for the url crate
2022-08-14 09:03:58 +02:00
Paul van Tilburg 218e714b03 Bump dependency on cached to 0.38.0
This fixes the unused `*_prime_cache` compile warnings.
2022-08-12 09:53:56 +02:00
Paul van Tilburg 5cb476c7e2 Cargo update 2022-08-12 09:53:29 +02:00
Paul van Tilburg 01ca8165e1 Fix string type 2022-08-12 09:52:23 +02:00
Paul van Tilburg 45cb7faed9
Cargo update 2022-07-17 16:29:36 +02:00
Paul van Tilburg 8e77e35690
Add missing error documentation; tweak messages 2022-06-05 21:58:02 +02:00
Paul van Tilburg e585a8cf59
Fix (doc)test failure 2022-06-05 20:56:17 +02:00
Paul van Tilburg 9ae7ea8eb4
Simplify launching Rocket 2022-06-05 20:54:48 +02:00
Paul van Tilburg 679a73ab63
Refactor limit handling to be more readable 2022-05-28 09:29:31 +02:00
Paul van Tilburg 45fca01e27
Fix typo in the changelog 2022-05-27 23:01:26 +02:00
Paul van Tilburg 61db7c4b76
Update the changelog 2022-05-27 22:59:29 +02:00
Paul van Tilburg 8c0bfd766a
Bump the version to 0.2.0 2022-05-27 22:59:29 +02:00
Paul van Tilburg 27a40b1a91
Increase TTL for redirect URI to 24 hours 2022-05-27 22:59:28 +02:00
Paul van Tilburg dafcdc009b Merge pull request 'Implement paging' (#9) from 2-implement-paging into main
Reviewed-on: #9
2022-05-27 22:50:51 +02:00
Paul van Tilburg 0701088fbc
Update the documentation 2022-05-27 22:47:52 +02:00
Paul van Tilburg c13ce71c69
Add feed item limit support
* The feed item limit defaults to the default page size (50) if not
  provided
* Move caching from response to URL fetch results; add helper functions
* Add a helper function to set the paging query of an URL
* Modify paging so we don't retrieve more than the feed item limit
2022-05-27 22:47:52 +02:00
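The limit-aware paging described in the bullets above can be sketched as follows; `fetch_page` is a hypothetical stand-in for the paged API call, not the project's actual code:

```rust
/// One page of results plus the offset of the next page, if any.
struct Page {
    items: Vec<u32>,
    next_offset: Option<usize>,
}

/// Hypothetical stand-in for a paged API call: serves `total` items overall,
/// `page_size` at a time, starting at `offset`.
fn fetch_page(offset: usize, page_size: usize, total: usize) -> Page {
    let end = (offset + page_size).min(total);
    let next_offset = if end < total { Some(end) } else { None };
    Page { items: (offset..end).map(|i| i as u32).collect(), next_offset }
}

/// Follows pages until either the feed item limit is reached or the API
/// reports no further page, never requesting more items than still needed.
fn fetch_limited(mut limit: usize, page_size: usize, total: usize) -> Vec<u32> {
    let mut all = Vec::new();
    let mut offset = 0;
    loop {
        let page = fetch_page(offset, page_size.min(limit), total);
        limit = limit.saturating_sub(page.items.len());
        all.extend(page.items);
        match page.next_offset {
            Some(next) if limit > 0 => offset = next,
            _ => break,
        }
    }
    all
}

fn main() {
    // 120 items requested from a 1000-item feed, in pages of at most 50.
    assert_eq!(fetch_limited(120, 50, 1000).len(), 120);
    // A short feed simply ends early.
    assert_eq!(fetch_limited(50, 50, 30).len(), 30);
    println!("ok");
}
```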
Paul van Tilburg 78fc93fedf
Retrieve all pages by following the next URL
* Deserialize the paging information
* Parse each next URL; handle URL parse errors
* Use a default page size of 50; pass offset 0 to count by item index
2022-05-27 22:47:52 +02:00
Paul van Tilburg 09ee0b9ba9
Make default_file_type functions const 2022-05-27 22:39:53 +02:00
Paul van Tilburg 10bbd9b495
Cargo update 2022-05-26 22:12:31 +02:00
Paul van Tilburg cde2a34e91
Fix documentation 2022-05-26 22:12:01 +02:00
Paul van Tilburg 11c78a6cc8
Drop the get_ prefix for functions 2022-05-26 21:25:37 +02:00
Paul van Tilburg 11b32acfb4
Use an URL parser for the URL passed to youtube-dl 2022-05-26 21:23:38 +02:00
Paul van Tilburg 451c07a09e
Implement caching (closes: #3)
* Enable the `async` feature for the `cached` crate
* Make the types that we cache implement `Clone`
* Rename the argument of `mixcloud::redirect_url` because of this issue:
  https://github.com/jaemk/cached/issues/114
2022-05-26 21:20:10 +02:00
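The project uses the `cached` crate's proc macros for this; conceptually, the caching amounts to a TTL cache like this minimal, dependency-free sketch (all names here are hypothetical):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// A minimal TTL cache sketching what the `cached` crate's macros generate;
/// the real project uses `cached` directly, so all names here are hypothetical.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    /// Returns the cached value for `key`, or computes and stores a fresh one
    /// when the entry is absent or older than the TTL.
    fn get_or_insert_with(&mut self, key: &str, compute: impl FnOnce() -> String) -> String {
        let now = Instant::now();
        let fresh = self
            .entries
            .get(key)
            .filter(|(at, _)| now.duration_since(*at) < self.ttl)
            .map(|(_, value)| value.clone());
        match fresh {
            Some(value) => value,
            None => {
                let value = compute();
                self.entries.insert(key.to_string(), (now, value.clone()));
                value
            }
        }
    }
}

fn main() {
    // Redirect URLs are cached for 24 hours.
    let mut cache = TtlCache::new(Duration::from_secs(24 * 60 * 60));
    let mut lookups = 0;
    let url = cache.get_or_insert_with("band/track", || {
        lookups += 1;
        "https://stream.example/track.m4a".to_string()
    });
    let cached = cache.get_or_insert_with("band/track", || {
        lookups += 1;
        "unused".to_string()
    });
    assert_eq!(url, cached);
    assert_eq!(lookups, 1); // The second call was served from the cache.
    println!("ok");
}
```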
Paul van Tilburg b4c0188fba
Drop some unnecessary bloat/unused crates 2022-05-26 20:40:18 +02:00
Paul van Tilburg 3ec1879932
Replace own youtube-dl impl by youtube_dl crate (refs: #8)
* Drop the depend on the `tokio` crate, because we don't need to
  run our own processes anymore.
* Remove unnecessary error variant for command failure
2022-05-26 20:38:51 +02:00
Paul van Tilburg a67df934bf
Enable some more lints; fix clippy issues 2022-05-26 20:03:44 +02:00
Paul van Tilburg b53365a293
Implement proper error logging and handling (closes: #6)
* Use the `thiserror` crate to make our own error type
* Implement Rocket's `Responder` type for the error type
* Adjust all methods to use the error type
* Small documentation tweaks
2022-05-26 19:57:24 +02:00
15 changed files with 2617 additions and 1313 deletions


@ -0,0 +1,46 @@
name: "Check and lint using Cargo"
on:
- pull_request
- push
- workflow_dispatch
jobs:
check_lint:
name: Check and lint
runs-on: debian-latest
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Install Rust stable toolchain
uses: https://github.com/actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
components: rustfmt, clippy
- name: Run cargo check
uses: https://github.com/actions-rs/cargo@v1
with:
command: check
- name: Run cargo clippy
uses: https://github.com/actions-rs/cargo@v1
with:
command: clippy
args: -- -D warnings
- name: Run cargo fmt
uses: https://github.com/actions-rs/cargo@v1
with:
command: fmt
args: --all -- --check
# TODO: Add a test suite first!
# - name: Run cargo test
# uses: https://github.com/actions-rs/cargo@v1
# with:
# command: test
# args: --all-features


@ -0,0 +1,112 @@
name: "Release"
on:
push:
tags:
- "v*"
jobs:
release:
name: "Release"
runs-on: debian-latest
steps:
- name: Checkout sources
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Determine the version of the release
run: |
VERSION=${GITHUB_REF_NAME#v}
echo "Releasing version: $VERSION"
echo "VERSION=$VERSION" >> $GITHUB_ENV
- name: Get the release notes from the changelog
run: |
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
RELEASE_NOTES=$(sed -n -e "/^## \[$VERSION\]/,/^## \[/{//"'!'"p;}" CHANGELOG.md | sed -e '1d;$d')
echo "Release notes:"
echo
echo "$RELEASE_NOTES"
echo "RELEASE_NOTES<<$EOF" >> "$GITHUB_ENV"
echo "$RELEASE_NOTES" >> "$GITHUB_ENV"
echo "$EOF" >> "$GITHUB_ENV"
- name: Install Go
uses: actions/setup-go@v4
with:
go-version: '>=1.20.1'
- name: Release to Gitea
uses: actions/release-action@main
with:
# This is available by default.
api_key: '${{ secrets.RELEASE_TOKEN }}'
files: FIXME
title: 'Release ${{ env.VERSION }}'
body: '${{ env.RELEASE_NOTES }}'
release-crate:
name: "Release crate"
runs-on: debian-latest
steps:
- name: Checkout sources
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Rust stable toolchain
uses: https://github.com/actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Use sparse Cargo index for crates.io
run: echo -e '[registries.crates-io]\nprotocol = "sparse"' >> /root/.cargo/config.toml
- name: Register the Gitea crate registry with Cargo
run: echo -e '[registries.luon]\nindex = "https://git.luon.net/paul/_cargo-index.git"' >> /root/.cargo/config.toml
- name: Run cargo publish
uses: https://github.com/actions-rs/cargo@v1
env:
# This needs to be provided for the repository; no login necessary as a result.
CARGO_REGISTRIES_LUON_TOKEN: '${{ secrets.CARGO_TOKEN }}'
with:
command: publish
args: --registry luon
release-deb:
name: "Release Debian package"
runs-on: debian-latest
steps:
- name: Checkout sources
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Install Rust stable toolchain
uses: https://github.com/actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
override: true
- name: Install cargo-deb
uses: https://github.com/brndnmtthws/rust-action-cargo-binstall@v1
with:
packages: cargo-deb
- name: Run cargo-deb
uses: https://github.com/actions-rs/cargo@v1
with:
command: deb
- name: Publish Debian package
env:
DEB_REPO_TOKEN: '${{ secrets.DEB_REPO_TOKEN }}'
run: |
curl --config <(printf "user=%s:%s" paul "${DEB_REPO_TOKEN}") \
--upload-file target/debian/podbringer*.deb \
https://git.luon.net/api/packages/paul/debian/pool/bookworm/main/upload


@ -7,9 +7,156 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [0.5.3] - 2024-02-27
### Changed
* Update dependency on `cached`
### Security
* Update dependencies, fixes security advisories:
* [RUSTSEC-2024-0003](https://rustsec.org/advisories/RUSTSEC-2024-0003)
* [RUSTSEC-2023-0072](https://rustsec.org/advisories/RUSTSEC-2023-0072)
* [RUSTSEC-2023-0074](https://rustsec.org/advisories/RUSTSEC-2023-0074)
### Fixed
* Handle paging information being absent; fixes short feeds for Mixcloud (#17)
## [0.5.2] - 2023-11-03
### Security
* Update dependencies
([RUSTSEC-2020-0071](https://rustsec.org/advisories/RUSTSEC-2020-0071.html))
### Changed
* Switch to Rocket 0.5 RC4
* Update dependency on `cached`
## [0.5.1] - 2023-08-25
### Changed
* Bump the dependency on `youtube_dl`
* Update release Gitea Actions workflow; add separate job to release Debian
package to the new repository
### Security
* Update dependencies
([RUSTSEC-2023-0034](https://rustsec.org/advisories/RUSTSEC-2023-0034),
[RUSTSEC-2023-0044](https://rustsec.org/advisories/RUSTSEC-2023-0044),
[RUSTSEC-2023-0052](https://rustsec.org/advisories/RUSTSEC-2023-0052))
## [0.5.0] - 2023-06-08
### Added
* Add full release Gitea Actions workflow
### Changed
* Simplify Gitea Actions check and lint workflow
### Fixed
* Differentiate between publish and update time for items
## [0.4.1] - 2023-04-11
### Changed
* Select only direct HTTP MP4 audio streams for the Mixcloud back-end
## [0.4.0] - 2023-03-24
### Added
* Add Gitea Actions workflow for cargo
### Changed
* Update dependencies on `cached` and `youtube_dl`
* Update to `rocket` version 0.5.0-rc.3
* Select only MP4 audio streams for the YouTube back-end (experimental)
* Remove parameters from MIME types to prevent clients tripping over them
### Fixed
* Bump the dependency on `ytextract` (#14)
* Fix typo in the documentation
### Security
* Update dependencies
([RUSTSEC-2021-0145](https://rustsec.org/advisories/RUSTSEC-2021-0145.html),
[RUSTSEC-2020-0016](https://rustsec.org/advisories/RUSTSEC-2020-0016.html),
[RUSTSEC-2023-0001](https://rustsec.org/advisories/RUSTSEC-2023-0001.html),
[RUSTSEC-2023-0005](https://rustsec.org/advisories/RUSTSEC-2023-0005.html),
[RUSTSEC-2023-0018](https://rustsec.org/advisories/RUSTSEC-2023-0018.html),
[RUSTSEC-2023-0022](https://rustsec.org/advisories/RUSTSEC-2023-0022.html),
[RUSTSEC-2023-0023](https://rustsec.org/advisories/RUSTSEC-2023-0023.html),
[RUSTSEC-2023-0024](https://rustsec.org/advisories/RUSTSEC-2023-0024.html))
## [0.3.0] - 2022-12-24
### Added
* Add abstraction that will support multiple back-ends
* Add YouTube back-end for generating feeds of YouTube channels and
playlists (#5)
### Changed
* Change the name of the `url` config key to `public_url` in the configuration
file `Rocket.toml`
* Make feed channel and item images optional
* Simplify how Rocket is launched
* Split off feed generation to a separate module
* Improve documentation
### Fixed
* Some code refactoring
### Security
* Update/bump dependencies
## [0.2.0] - 2022-05-27
### Added
* Add support for paging, i.e. retrieving more than 50 past items (#9)
* Introduce the `limit` parameter to get more/less than 50 feed items
* Add caching; all Mixcloud user, cloudcasts and download URL requests are
cached for 24 hours (#3)
### Changed
* Implemented proper error logging and handling (#6)
* Replace own youtube-dl command running implementation with the `youtube_dl`
crate (#8)
* Several code and documentation improvements & fixes
### Removed
* Drop dependencies on some unnecessary/unused crates
## [0.1.0] - 2022-05-24
Initial release.
[Unreleased]: https://git.luon.net/paul/podbringer/compare/v0.5.3...HEAD
[0.5.3]: https://git.luon.net/paul/podbringer/compare/v0.5.2..v0.5.3
[0.5.2]: https://git.luon.net/paul/podbringer/compare/v0.5.1..v0.5.2
[0.5.1]: https://git.luon.net/paul/podbringer/compare/v0.5.0..v0.5.1
[0.5.0]: https://git.luon.net/paul/podbringer/compare/v0.4.1..v0.5.0
[0.4.1]: https://git.luon.net/paul/podbringer/compare/v0.4.0..v0.4.1
[0.4.0]: https://git.luon.net/paul/podbringer/compare/v0.3.0..v0.4.0
[0.3.0]: https://git.luon.net/paul/podbringer/compare/v0.2.0..v0.3.0
[0.2.0]: https://git.luon.net/paul/podbringer/compare/v0.1.0..v0.2.0
[0.1.0]: https://git.luon.net/paul/podbringer/commits/tag/v0.1.0

Cargo.lock generated

File diff suppressed because it is too large


@ -1,22 +1,27 @@
[package]
name = "podbringer"
version = "0.1.0"
version = "0.5.3"
authors = ["Paul van Tilburg <paul@luon.net>"]
edition = "2021"
description = "Web service that provides podcasts for services that don't offer them (anymore)"
readme = "README.md"
repository = "https://git.luon.net/paul/podbringer"
license = "MIT"
[dependencies]
cached = "0.34.0"
async-trait = "0.1.57"
cached = { version = "0.49.2", features = ["async"] }
chrono = { version = "0.4.19", features = ["serde"] }
color-eyre = "0.6.1"
enum_dispatch = "0.3.8"
mime-db = "1.6.0"
reqwest = { version = "0.11.10", features = ["json"] }
rocket = { version = "0.5.0-rc.2", features = ["json"] }
rocket = { version = "0.5.0-rc.3", features = ["json"] }
rocket_dyn_templates = { version = "0.1.0-rc.2", features = ["tera"] }
rss = "2.0.1"
tempfile = "3"
tokio = { version = "1.6.1", features = ["process"] }
thiserror = "1.0.31"
url = { version = "2.2.2", features = ["serde"] }
youtube_dl = { version = "0.9.0", features = ["tokio"] }
ytextract = "0.11.2"
[package.metadata.deb]
maintainer = "Paul van Tilburg <paul@luon.net>"
@ -26,8 +31,9 @@ extended-description = """\
Podbringer is a web service that provides podcasts for services that don't
offer them (anymore). It provides a way to get the RSS feed for your podcast
client and it facilitates the downloads of the pods (enclosures).
It currently only supports [Mixcloud](https://mixcloud.com).
It currently only supports [Mixcloud](https://www.mixcloud.com) and
[YouTube](https://www.youtube.com).
Other back-ends might be added in the future.
"""
section = "net"


@ -4,7 +4,8 @@ Podbringer is a web service that provides podcasts for services that don't
offer them (anymore). It provides a way to get the RSS feed for your podcast
client and it facilitates the downloads of the pods (enclosures).
It currently only supports [Mixcloud](https://mixcloud.com).
It currently only supports [Mixcloud](https://www.mixcloud.com) and
[YouTube](https://www.youtube.com).
Other back-ends might be added in the future.
## Building & running
@ -25,8 +26,8 @@ builds when you don't add `--release`.)
### Configuration
For now, you will need to provide Rocket with configuration to tell it at which
URL Podbringer is hosted. This needs to be done even if you are not using a
reverse proxy, in which case you need to provide it with the proxied URL. You
public URL Podbringer is hosted. This needs to be done even if you are not using
a reverse proxy, in which case you need to provide it with the proxied URL. You
can also use the configuration to configure a different address and/or port.
Just create a `Rocket.toml` file that contains (or copy `Rocket.toml.example`):
@ -34,7 +35,7 @@ Just create a `Rocket.toml` file that contains (or copy `Rocket.toml.example`):
[default]
address = "0.0.0.0"
port = 7062
url = "https://my.domain.tld/podbringer"
public_url = "https://my.domain.tld/podbringer"
```
This will work independent of the type of build. For more about Rocket's
@ -44,15 +45,60 @@ configuration, see: <https://rocket.rs/v0.5-rc/guide/configuration/>.
Podbringer currently has no front-end or web interface yet that can help you
use it. Until then, you just have to enter the right service-specific RSS feed
URL in your favorite podcast client to start using it.
URL in your favorite podcast client to start using it. For example:
Given the Mixcloud URL <https://www.mixcloud.com/myfavouriteband/>, the URL you
need to use for Podbringer is comprised of the following parts:
```
```text
https://my.domain.tld/podbringer/feed/mixcloud/myfavouriteband
|------------------------------| |-------||--------------|
The Podbringer location URL Service User @ service
|------------------------------| |------| |-------------|
The Podbringer public URL Service Service ID
```
So, the URL consists of the public URL at which Podbringer is hosted, the fact
that you want a feed, the name of the service and the ID that identifies
something on that service.
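Putting the parts together, building such a feed URL is simple string composition; this helper is purely illustrative and not part of Podbringer itself:

```rust
/// Builds a Podbringer feed URL from its three parts (illustrative only).
fn feed_url(public_url: &str, service: &str, service_id: &str) -> String {
    format!("{}/feed/{}/{}", public_url.trim_end_matches('/'), service, service_id)
}

fn main() {
    let url = feed_url("https://my.domain.tld/podbringer", "mixcloud", "myfavouriteband");
    assert_eq!(url, "https://my.domain.tld/podbringer/feed/mixcloud/myfavouriteband");
    println!("{url}");
}
```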
### Feed item limit
To prevent feeds with a very large number of items, any feed that is returned
contains at most 50 items by default. If you want to have more (or less) items,
provide the limit in the URL by setting the `limit` parameter.
For example, to get up until 1000 items the URL becomes:
```text
https://my.domain.tld/podbringer/feed/mixcloud/myfavouriteband?limit=1000
```
### Service: Mixcloud
For Mixcloud, a feed can be constructed of everything that a user posted.
Given the Mixcloud URL like <https://www.mixcloud.com/myfavouriteband/>, the
`myfavouriteband` part of the URL is the Mixcloud username and can be used as
the service ID.
```text
https://my.domain.tld/podbringer/feed/mixcloud/myfavouriteband
|------------------------------| |------| |-------------|
The Podbringer public URL Service Username
```
### Service: YouTube
For YouTube, a feed can either be constructed of a channel or a playlist.
Given the YouTube channel URL like <https://www.youtube.com/c/favouritechannel>,
the `favouritechannel` part of the URL is the YouTube channel ID.
Given the YouTube playlist URL
<https://www.youtube.com/playlist?list=PLsomeplaylistidentifier>, the
`PLsomeplaylistidentifier` part of the URL is the YouTube playlist ID.
Either the channel or playlist ID can be used as the service ID.
```text
https://my.domain.tld/podbringer/feed/youtube/favouritechannel
|------------------------------| |-----| |--------------|
The Podbringer public URL Service Channel ID
https://my.domain.tld/podbringer/feed/youtube/PLsomeplaylistidentifier
|------------------------------| |-----| |----------------------|
The Podbringer public URL Service Playlist ID
```
## License


@ -1,4 +1,4 @@
[default]
address = "0.0.0.0"
port = 7062
url = "https://my.domain.tld/podbringer"
public_url = "https://my.domain.tld/podbringer"

src/backends.rs

@ -0,0 +1,133 @@
//! The supported content back-ends.
//!
//! A content back-end should provide two kinds of objects: channels and their (content) items.
//! It must provide methods to retrieve a channel and its items and a method to return the
//! redirect URL for some path that points to media within context of the back-end.
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use async_trait::async_trait;
use chrono::{DateTime, Utc};
use enum_dispatch::enum_dispatch;
use reqwest::Url;
use crate::{Error, Result};
pub(crate) mod mixcloud;
pub(crate) mod youtube;
/// Retrieves the back-end for the provided ID (if supported).
pub(crate) fn get(backend: &str) -> Result<Backends> {
match backend {
"mixcloud" => Ok(Backends::Mixcloud(mixcloud::backend())),
"youtube" => Ok(Backends::YouTube(youtube::backend())),
_ => Err(Error::UnsupportedBackend(backend.to_string())),
}
}
/// The supported back-ends.
#[enum_dispatch(Backend)]
pub(crate) enum Backends {
/// Mixcloud (<https://www.mixcloud.com>)
Mixcloud(mixcloud::Backend),
/// YouTube (<https://www.youtube.com>)
YouTube(youtube::Backend),
}
/// Functionality of a content back-end.
#[async_trait]
#[enum_dispatch]
pub(crate) trait Backend {
/// Returns the name of the backend.
fn name(&self) -> &'static str;
/// Returns the channel with its currently contained content items.
async fn channel(&self, channel_id: &str, item_limit: Option<usize>) -> Result<Channel>;
/// Returns the redirect URL for the provided download file path.
async fn redirect_url(&self, file: &Path) -> Result<String>;
}
/// The metadata of a collection of content items.
#[derive(Clone, Debug)]
pub(crate) struct Channel {
/// The title of the channel.
pub(crate) title: String,
/// The link to the channel.
pub(crate) link: Url,
/// The description of the channel.
pub(crate) description: String,
/// The author/composer/creator of the channel.
pub(crate) author: Option<String>,
/// The categories associated with the channel.
///
/// The first category is considered to be the "main" category.
pub(crate) categories: Vec<String>,
/// The URL of the image/logo/avatar of a channel.
pub(crate) image: Option<Url>,
/// The contained content items.
pub(crate) items: Vec<Item>,
}
/// A content item belonging to a channel.
#[derive(Clone, Debug)]
pub(crate) struct Item {
/// The title of the item.
pub(crate) title: String,
/// The direct link to the item.
pub(crate) link: Url,
/// The description of the item.
pub(crate) description: Option<String>,
/// The categories of the items (and their domain URLs).
pub(crate) categories: HashMap<String, Url>,
/// The enclosed media content of the item.
pub(crate) enclosure: Enclosure,
/// The duration of the media content (in seconds).
pub(crate) duration: Option<u32>,
/// The global UID of the item.
///
/// This GUID is not considered nor needs to be a permalink.
pub(crate) guid: String,
/// The keywords associated with the item.
pub(crate) keywords: Vec<String>,
/// The URL of the image of the item.
pub(crate) image: Option<Url>,
/// The timestamp the item was published.
pub(crate) published_at: DateTime<Utc>,
/// The timestamp the item was last updated.
pub(crate) updated_at: DateTime<Utc>,
}
/// The enclosed media content of an item.
#[derive(Clone, Debug)]
pub(crate) struct Enclosure {
/// The path of the download file associated with the item enclosure.
///
/// This is used as a part of the enclosure URL of the item and will be passed to
/// [`Backend::redirect_url`] later when a client wants to download the media content.
pub(crate) file: PathBuf,
/// The MIME type of the download file path associated with the item enclosure.
pub(crate) mime_type: String,
/// The length of the enclosed media content (in bytes).
pub(crate) length: u64,
}

src/backends/mixcloud.rs

@ -0,0 +1,323 @@
//! The Mixcloud back-end.
//!
//! It uses the Mixcloud API to retrieve the feed (user) and its items (cloudcasts).
//! See also: <https://www.mixcloud.com/developers/>
use std::path::{Path, PathBuf};
use async_trait::async_trait;
use cached::proc_macro::cached;
use chrono::{DateTime, Utc};
use reqwest::Url;
use rocket::serde::Deserialize;
use youtube_dl::{YoutubeDl, YoutubeDlOutput};
use super::{Channel, Enclosure, Item};
use crate::{Error, Result};
/// The base URL for the Mixcloud API.
const API_BASE_URL: &str = "https://api.mixcloud.com";
/// The base URL for downloading Mixcloud files.
const FILES_BASE_URL: &str = "https://www.mixcloud.com";
/// The default bitrate used by Mixcloud.
const DEFAULT_BITRATE: u64 = 64 * 1024;
/// The default file (MIME) type used by Mixcloud.
const DEFAULT_FILE_TYPE: &str = "audio/mp4";
/// The default page size.
const DEFAULT_PAGE_SIZE: usize = 50;
/// Creates a Mixcloud back-end.
pub(crate) fn backend() -> Backend {
Backend
}
/// The Mixcloud back-end.
pub struct Backend;
#[async_trait]
impl super::Backend for Backend {
fn name(&self) -> &'static str {
"Mixcloud"
}
async fn channel(&self, channel_id: &str, item_limit: Option<usize>) -> Result<Channel> {
// For Mixcloud a channel ID is some user name.
let mut user_url = Url::parse(API_BASE_URL).expect("URL can always be parsed");
user_url.set_path(channel_id);
println!("⏬ Retrieving user {channel_id} from {user_url}...");
let user = fetch_user(user_url).await?;
// The items of a channel are the user's cloudcasts.
let mut limit = item_limit.unwrap_or(DEFAULT_PAGE_SIZE);
let mut offset = 0;
let mut cloudcasts_url = Url::parse(API_BASE_URL).expect("URL can always be parsed");
cloudcasts_url.set_path(&format!("{channel_id}/cloudcasts/"));
println!("⏬ Retrieving cloudcasts of user {channel_id} from {cloudcasts_url}...");
set_paging_query(&mut cloudcasts_url, limit, offset);
let mut cloudcasts = Vec::with_capacity(50); // The initial limit
loop {
let cloudcasts_res: CloudcastsResponse = fetch_cloudcasts(cloudcasts_url).await?;
let count = cloudcasts_res.items.len();
cloudcasts.extend(cloudcasts_res.items);
// Check if any paging information is present.
let Some(paging) = cloudcasts_res.paging else {
break;
};
// Continue onto the next URL in the paging, if there is one and the limit was not
// reached.
limit = limit.saturating_sub(count);
offset += count;
match (limit, paging.next) {
(0, Some(_)) => break,
(_, Some(next_url)) => {
cloudcasts_url = Url::parse(&next_url)?;
set_paging_query(&mut cloudcasts_url, limit, offset);
}
(_, None) => break,
}
}
Ok(Channel::from(UserWithCloudcasts(user, cloudcasts)))
}
async fn redirect_url(&self, file: &Path) -> Result<String> {
let key = format!("/{}/", file.with_extension("").to_string_lossy());
retrieve_redirect_url(&key).await
}
}
/// A Mixcloud user with its cloudcasts.
pub(crate) struct UserWithCloudcasts(User, Vec<Cloudcast>);
/// A Mixcloud user (response).
#[derive(Clone, Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct User {
/// The name of the user.
pub(crate) name: String,
/// The bio (description) of the user.
pub(crate) biog: String,
/// The picture URLs associated with the user.
pub(crate) pictures: Pictures,
/// The original URL of the user.
pub(crate) url: Url,
}
/// A collection of different sizes/variants of a picture.
#[derive(Clone, Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Pictures {
/// The URL of a large picture of the user.
pub(crate) large: Url,
}
/// The Mixcloud cloudcasts response.
#[derive(Clone, Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct CloudcastsResponse {
/// The contained cloudcast items.
#[serde(rename = "data")]
items: Vec<Cloudcast>,
/// The paging information (if any).
paging: Option<CloudcastsPaging>,
}
/// The Mixcloud paging info.
#[derive(Clone, Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct CloudcastsPaging {
/// The API URL of the next page.
next: Option<String>,
}
/// A Mixcloud cloudcast.
#[derive(Clone, Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Cloudcast {
/// The key of the cloudcast.
pub(crate) key: String,
/// The name of the cloudcast.
pub(crate) name: String,
/// The slug of the cloudcast (used for the enclosure).
pub(crate) slug: String,
/// The picture URLs associated with the cloudcast.
pub(crate) pictures: Pictures,
/// The tags of the cloudcast.
pub(crate) tags: Vec<Tag>,
/// The time the cloudcast was created.
pub(crate) created_time: DateTime<Utc>,
/// The time the cloudcast was last updated.
pub(crate) updated_time: DateTime<Utc>,
/// The original URL of the cloudcast.
pub(crate) url: Url,
/// The length of the cloudcast (in seconds).
pub(crate) audio_length: u32,
}
/// A Mixcloud cloudcast tag.
#[derive(Clone, Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Tag {
/// The name of the tag.
pub(crate) name: String,
/// The URL of the tag.
pub(crate) url: Url,
}
impl From<UserWithCloudcasts> for Channel {
fn from(UserWithCloudcasts(user, cloudcasts): UserWithCloudcasts) -> Self {
// FIXME: Don't hardcode the category!
let categories = Vec::from([String::from("Music")]);
let items = cloudcasts.into_iter().map(From::from).collect();
Channel {
title: format!("{0} (via Mixcloud)", user.name),
link: user.url,
description: user.biog,
author: Some(user.name),
categories,
image: Some(user.pictures.large),
items,
}
}
}
impl From<Cloudcast> for Item {
fn from(cloudcast: Cloudcast) -> Self {
let mut file = PathBuf::from(cloudcast.key.trim_end_matches('/'));
let extension = mime_db::extension(DEFAULT_FILE_TYPE).expect("MIME type has extension");
file.set_extension(extension);
// FIXME: Don't hardcode the description!
let description = Some(format!("Taken from Mixcloud: {0}", cloudcast.url));
let categories = cloudcast
.tags
.iter()
.cloned()
.map(|tag| (tag.name, tag.url))
.collect();
let enclosure = Enclosure {
file,
mime_type: String::from(DEFAULT_FILE_TYPE),
length: estimated_file_size(cloudcast.audio_length),
};
let keywords = cloudcast.tags.into_iter().map(|tag| tag.name).collect();
Item {
title: cloudcast.name,
link: cloudcast.url,
description,
categories,
enclosure,
duration: Some(cloudcast.audio_length),
guid: cloudcast.slug,
keywords,
image: Some(cloudcast.pictures.large),
published_at: cloudcast.created_time,
updated_at: cloudcast.updated_time,
}
}
}
/// Returns the estimated file size in bytes for a given duration.
///
/// This uses the default bitrate (see [`DEFAULT_BITRATE`]), which is in bits per second.
fn estimated_file_size(duration: u32) -> u64 {
DEFAULT_BITRATE * duration as u64 / 8
}
/// Fetches the user from the URL.
///
/// If the result is [`Ok`], the user will be cached for 24 hours for the given URL.
#[cached(
key = "String",
convert = r#"{ url.to_string() }"#,
time = 86400,
result = true
)]
async fn fetch_user(url: Url) -> Result<User> {
let response = reqwest::get(url).await?.error_for_status()?;
let user = response.json().await?;
Ok(user)
}
/// Fetches cloudcasts from the URL.
///
/// If the result is [`Ok`], the cloudcasts will be cached for 24 hours for the given URL.
#[cached(
key = "String",
convert = r#"{ url.to_string() }"#,
time = 86400,
result = true
)]
async fn fetch_cloudcasts(url: Url) -> Result<CloudcastsResponse> {
let response = reqwest::get(url).await?.error_for_status()?;
let cloudcasts_res = response.json().await?;
Ok(cloudcasts_res)
}
/// Set paging query pairs for URL.
///
/// The limit is capped to the default page size. Another request will be necessary to retrieve
/// more.
fn set_paging_query(url: &mut Url, limit: usize, offset: usize) {
url.query_pairs_mut()
.clear()
.append_pair(
"limit",
&format!("{}", std::cmp::min(limit, DEFAULT_PAGE_SIZE)),
)
.append_pair("offset", &format!("{}", offset));
}
/// Retrieves the redirect URL for the provided Mixcloud cloudcast key.
///
/// If the result is [`Ok`], the redirect URL will be cached for 24 hours for the given cloudcast
/// key.
#[cached(
key = "String",
convert = r#"{ download_key.to_owned() }"#,
time = 86400,
result = true
)]
async fn retrieve_redirect_url(download_key: &str) -> Result<String> {
let mut url = Url::parse(FILES_BASE_URL).expect("URL can always be parsed");
url.set_path(download_key);
println!("🌍 Determining direct URL for {download_key}...");
// Select the well-supported, almost always available MP4 container format that is directly
// available (so no HLS or DASH). This unfortunately does reduce the bitrate to 64 kbps.
let output = YoutubeDl::new(url).format("http").run_async().await?;
if let YoutubeDlOutput::SingleVideo(yt_item) = output {
yt_item.url.ok_or(Error::NoRedirectUrlFound)
} else {
Err(Error::NoRedirectUrlFound)
}
}

src/backends/youtube.rs Normal file
@@ -0,0 +1,352 @@
//! The YouTube back-end.
//!
//! It uses the `ytextract` crate to retrieve the feed (channel or playlist) and items (videos).
use std::path::{Path, PathBuf};
use async_trait::async_trait;
use cached::proc_macro::cached;
use chrono::{TimeZone, Utc};
use reqwest::Url;
use rocket::futures::StreamExt;
use ytextract::playlist::video::{Error as YouTubeVideoError, Video as YouTubePlaylistVideo};
use ytextract::{
Channel as YouTubeChannel, Client, Playlist as YouTubePlaylist, Stream as YouTubeStream,
Video as YouTubeVideo,
};
use super::{Channel, Enclosure, Item};
use crate::{Error, Result};
/// The base URL for YouTube channels.
const CHANNEL_BASE_URL: &str = "https://www.youtube.com/channel";
/// The default item limit.
const DEFAULT_ITEM_LIMIT: usize = 50;
/// The base URL for YouTube playlists.
const PLAYLIST_BASE_URL: &str = "https://www.youtube.com/playlist";
/// The base URL for YouTube videos.
const VIDEO_BASE_URL: &str = "https://www.youtube.com/watch";
/// Creates a YouTube back-end.
pub(crate) fn backend() -> Backend {
Backend::new()
}
/// The YouTube back-end.
pub struct Backend {
/// The client capable of interacting with YouTube.
client: Client,
}
impl Backend {
/// Creates a new YouTube back-end.
fn new() -> Self {
let client = Client::new();
Self { client }
}
}
#[async_trait]
impl super::Backend for Backend {
fn name(&self) -> &'static str {
"YouTube"
}
async fn channel(&self, channel_id: &str, item_limit: Option<usize>) -> Result<Channel> {
// We assume it is a YouTube playlist ID if the channel ID starts with
// "PL"/"OLAK"/"RDCLAK"; it is considered to be a YouTube channel ID otherwise.
if channel_id.starts_with("PL")
|| channel_id.starts_with("OLAK")
|| channel_id.starts_with("RDCLAK")
{
let (yt_playlist, yt_videos_w_streams) =
fetch_playlist_videos(&self.client, channel_id, item_limit).await?;
Ok(Channel::from(YouTubePlaylistWithVideos(
yt_playlist,
yt_videos_w_streams,
)))
} else {
let (yt_channel, yt_videos_w_streams) =
fetch_channel_videos(&self.client, channel_id, item_limit).await?;
Ok(Channel::from(YouTubeChannelWithVideos(
yt_channel,
yt_videos_w_streams,
)))
}
}
async fn redirect_url(&self, file: &Path) -> Result<String> {
let id_part = file.with_extension("");
let video_id = id_part.to_string_lossy();
retrieve_redirect_url(&self.client, &video_id).await
}
}
/// A YouTube playlist with its videos.
#[derive(Clone, Debug)]
pub(crate) struct YouTubePlaylistWithVideos(YouTubePlaylist, Vec<YouTubeVideoWithStream>);
/// A YouTube channel with its videos.
#[derive(Clone, Debug)]
pub(crate) struct YouTubeChannelWithVideos(YouTubeChannel, Vec<YouTubeVideoWithStream>);
/// A YouTube video with its stream.
#[derive(Clone, Debug)]
struct YouTubeVideoWithStream {
/// The information of the YouTube video.
video: YouTubeVideo,
/// The metadata of the selected YouTube stream.
stream: YouTubeStream,
/// The content length of the selected YouTube stream (in bytes).
content_length: u64,
}
impl From<YouTubeChannelWithVideos> for Channel {
fn from(
YouTubeChannelWithVideos(yt_channel, yt_videos_w_streams): YouTubeChannelWithVideos,
) -> Self {
let mut link = Url::parse(CHANNEL_BASE_URL).expect("valid URL");
let title = format!("{0} (via YouTube)", yt_channel.name());
let description = yt_channel.description().to_string();
link.path_segments_mut()
.expect("valid URL")
.push(&yt_channel.id());
let author = Some(yt_channel.name().to_string());
// FIXME: Don't hardcode the category!
let categories = Vec::from([String::from("Channel")]);
let image = yt_channel
.avatar()
.max_by_key(|av| av.width * av.height)
.map(|av| av.url.clone());
let items = yt_videos_w_streams.into_iter().map(Item::from).collect();
Channel {
title,
link,
description,
author,
categories,
image,
items,
}
}
}
impl From<YouTubePlaylistWithVideos> for Channel {
fn from(
YouTubePlaylistWithVideos(yt_playlist, yt_videos_w_streams): YouTubePlaylistWithVideos,
) -> Self {
let title = format!("{0} (via YouTube)", yt_playlist.title());
let mut link = Url::parse(PLAYLIST_BASE_URL).expect("valid URL");
let description = yt_playlist.description().to_string();
link.query_pairs_mut()
.append_pair("list", &yt_playlist.id().to_string());
let author = yt_playlist.channel().map(|chan| chan.name().to_string());
// FIXME: Don't hardcode the category!
let categories = Vec::from([String::from("Playlist")]);
let image = yt_playlist
.thumbnails()
.iter()
.max_by_key(|tn| tn.width * tn.height)
.map(|tn| tn.url.clone());
let items = yt_videos_w_streams.into_iter().map(Item::from).collect();
Channel {
title,
link,
description,
author,
categories,
image,
items,
}
}
}
impl From<YouTubeVideoWithStream> for Item {
fn from(
YouTubeVideoWithStream {
video,
stream,
content_length: length,
}: YouTubeVideoWithStream,
) -> Self {
let id = video.id().to_string();
// Strip parameters from the MIME type; some clients are scared of them and they are not
// necessary.
let mut mime_type = stream.mime_type().to_string();
if let Some(sep_idx) = mime_type.find(';') {
mime_type.truncate(sep_idx);
}
let extension = mime_db::extension(&mime_type).unwrap_or_default();
let file = PathBuf::from(&id).with_extension(extension);
let enclosure = Enclosure {
file,
mime_type,
length,
};
let mut link = Url::parse(VIDEO_BASE_URL).expect("valid URL");
link.query_pairs_mut().append_pair("v", &id);
let video_description = video.description();
let description = Some(format!("{video_description}\n\nTaken from YouTube: {link}"));
let categories = video
.hashtags()
.filter(|hashtag| !hashtag.trim().is_empty())
.map(|hashtag| {
let url = Url::parse(&format!(
"https://www.youtube.com/hashtag/{}",
hashtag.trim_start_matches('#')
))
.expect("valid URL");
(hashtag.to_string(), url)
})
.collect();
let duration = Some(video.duration().as_secs() as u32);
let keywords = video.keywords().clone();
let image = video
.thumbnails()
.iter()
.max_by_key(|tn| tn.width * tn.height)
.map(|tn| tn.url.clone());
let timestamp = video
.date()
.and_hms_opt(12, 0, 0)
.expect("Invalid hour, minute and/or second");
let published_at = Utc.from_utc_datetime(&timestamp);
// There is no updated at timestamp available, really.
let updated_at = published_at;
Item {
title: video.title().to_string(),
link,
description,
categories,
enclosure,
duration,
guid: id,
keywords,
image,
published_at,
updated_at,
}
}
}
/// Fetches the YouTube playlist videos for the given ID.
///
/// If the result is [`Ok`], the playlist will be cached for 24 hours for the given playlist ID.
#[cached(
key = "(String, Option<usize>)",
convert = r#"{ (playlist_id.to_owned(), item_limit) }"#,
time = 86400,
result = true
)]
async fn fetch_playlist_videos(
client: &Client,
playlist_id: &str,
item_limit: Option<usize>,
) -> Result<(YouTubePlaylist, Vec<YouTubeVideoWithStream>)> {
let id = playlist_id.parse()?;
let limit = item_limit.unwrap_or(DEFAULT_ITEM_LIMIT);
let yt_playlist = client.playlist(id).await?;
let yt_videos_w_streams = yt_playlist
.videos()
.filter_map(fetch_stream)
.take(limit)
.collect()
.await;
Ok((yt_playlist, yt_videos_w_streams))
}
/// Fetches the YouTube channel videos for the given ID.
#[cached(
key = "(String, Option<usize>)",
convert = r#"{ (channel_id.to_owned(), item_limit) }"#,
time = 86400,
result = true
)]
async fn fetch_channel_videos(
client: &Client,
channel_id: &str,
item_limit: Option<usize>,
) -> Result<(YouTubeChannel, Vec<YouTubeVideoWithStream>)> {
let id = channel_id.parse()?;
let limit = item_limit.unwrap_or(DEFAULT_ITEM_LIMIT);
let yt_channel = client.channel(id).await?;
let yt_videos_w_streams = yt_channel
.uploads()
.await?
.filter_map(fetch_stream)
.take(limit)
.collect()
.await;
Ok((yt_channel, yt_videos_w_streams))
}
/// Fetches the stream and relevant metadata for a YouTube video result.
///
/// If there is an error retrieving the metadata or the streams, the video is discarded/ignored.
async fn fetch_stream(
yt_video: Result<YouTubePlaylistVideo, YouTubeVideoError>,
) -> Option<YouTubeVideoWithStream> {
match yt_video {
Ok(video) => {
let video = video.upgrade().await.ok()?;
let stream = video
.streams()
.await
.ok()?
// Select the well-supported, almost always available MP4 container format with
// only an audio stream and then the one with the highest bitrate.
.filter(|v| v.is_audio() && v.mime_type().contains("mp4"))
.max_by_key(|v| v.bitrate())?;
let content_length = stream.content_length().await.ok()?;
Some(YouTubeVideoWithStream {
video,
stream,
content_length,
})
}
Err(_) => None,
}
}
/// Retrieves the redirect URL for the provided YouTube video ID.
///
/// If the result is [`Ok`], the redirect URL will be cached for 24 hours for the given video ID.
#[cached(
key = "String",
convert = r#"{ video_id.to_owned() }"#,
time = 86400,
result = true
)]
async fn retrieve_redirect_url(client: &Client, video_id: &str) -> Result<String> {
let video_id = video_id.parse()?;
let video = client.video(video_id).await?;
let stream = video
.streams()
.await?
// Select the well-supported, almost always available MP4 container format with only an
// audio stream and then the one with the highest bitrate.
.filter(|v| v.is_audio() && v.mime_type().contains("mp4"))
.max_by_key(|v| v.bitrate())
.ok_or(Error::NoRedirectUrlFound)?;
Ok(stream.url().to_string())
}

src/feed.rs Normal file
@@ -0,0 +1,131 @@
//! Helper functions for constructing RSS feeds.
use std::path::PathBuf;
use chrono::{DateTime, NaiveDateTime, TimeZone, Utc};
use rocket::http::uri::Absolute;
use rocket::uri;
use rss::extension::itunes::{
ITunesCategoryBuilder, ITunesChannelExtensionBuilder, ITunesItemExtensionBuilder,
};
use rss::{
CategoryBuilder, ChannelBuilder, EnclosureBuilder, GuidBuilder, ImageBuilder, ItemBuilder,
};
use crate::backends::{Channel, Item};
use crate::Config;
/// Constructs a feed as string from a back-end channel using the `rss` crate.
///
/// It requires the backend and configuration to be able to construct download URLs.
pub(crate) fn construct(backend_id: &str, config: &Config, channel: Channel) -> rss::Channel {
let category = CategoryBuilder::default()
.name(
channel
.categories
.first()
.map(Clone::clone)
.unwrap_or_default(),
)
.build();
let unix_timestamp = NaiveDateTime::from_timestamp_opt(0, 0)
.expect("Out-of-range seconds or invalid nanoseconds");
let mut last_build = Utc.from_utc_datetime(&unix_timestamp);
let generator = String::from(concat!(
env!("CARGO_PKG_NAME"),
" ",
env!("CARGO_PKG_VERSION")
));
let image = channel
.image
.clone()
.map(|url| ImageBuilder::default().link(url.clone()).url(url).build());
let items = channel
.items
.into_iter()
.map(|item| construct_item(backend_id, config, item, &mut last_build))
.collect::<Vec<_>>();
let itunes_ext = ITunesChannelExtensionBuilder::default()
.author(channel.author)
.categories(
channel
.categories
.into_iter()
.map(|cat| ITunesCategoryBuilder::default().text(cat).build())
.collect::<Vec<_>>(),
)
.image(channel.image.map(String::from))
.explicit(Some(String::from("no")))
.summary(Some(channel.description.clone()))
.build();
ChannelBuilder::default()
.title(channel.title)
.link(channel.link)
.description(channel.description)
.category(category)
.last_build_date(Some(last_build.to_rfc2822()))
.generator(Some(generator))
.image(image)
.items(items)
.itunes_ext(Some(itunes_ext))
.build()
}
/// Constructs an RSS feed item from a back-end item using the `rss` crate.
///
/// It requires the backend and configuration to be able to construct download URLs.
/// It also bumps the last build timestamp if the last updated timestamp is later than the current
/// value.
fn construct_item(
backend_id: &str,
config: &Config,
item: Item,
last_build: &mut DateTime<Utc>,
) -> rss::Item {
let categories = item
.categories
.into_iter()
.map(|(cat_name, cat_url)| {
CategoryBuilder::default()
.name(cat_name)
.domain(Some(cat_url.to_string()))
.build()
})
.collect::<Vec<_>>();
let url = uri!(
Absolute::parse(&config.public_url).expect("valid URL"),
crate::get_download(backend_id = backend_id, file = item.enclosure.file)
);
let enclosure = EnclosureBuilder::default()
.url(url.to_string())
.length(item.enclosure.length.to_string())
.mime_type(item.enclosure.mime_type)
.build();
let guid = GuidBuilder::default()
.value(item.guid)
.permalink(false)
.build();
let keywords = item.keywords.join(", ");
let itunes_ext = ITunesItemExtensionBuilder::default()
.image(item.image.map(String::from))
.duration(item.duration.map(|dur| format!("{dur}")))
.subtitle(item.description.clone())
.keywords(Some(keywords))
.build();
if item.updated_at > *last_build {
*last_build = item.updated_at;
}
ItemBuilder::default()
.title(Some(item.title))
.link(Some(item.link.to_string()))
.description(item.description)
.categories(categories)
.enclosure(Some(enclosure))
.guid(Some(guid))
.pub_date(Some(item.published_at.to_rfc2822()))
.itunes_ext(Some(itunes_ext))
.build()
}

@@ -1,37 +1,102 @@
#![doc = include_str!("../README.md")]
#![warn(
clippy::all,
missing_copy_implementations,
missing_debug_implementations,
rust_2018_idioms,
rustdoc::broken_intra_doc_links
rustdoc::broken_intra_doc_links,
trivial_numeric_casts,
renamed_and_removed_lints,
unsafe_code,
unstable_features,
unused_import_braces,
unused_qualifications
)]
#![deny(missing_docs)]
use std::path::PathBuf;
use chrono::{DateTime, NaiveDateTime, Utc};
use rocket::fairing::AdHoc;
use rocket::http::uri::Absolute;
use rocket::http::Status;
use rocket::response::Redirect;
use rocket::serde::{Deserialize, Serialize};
use rocket::{get, routes, uri, Build, Responder, Rocket, State};
use rocket::{get, routes, Build, Request, Responder, Rocket, State};
use rocket_dyn_templates::{context, Template};
use rss::extension::itunes::{
ITunesCategoryBuilder, ITunesChannelExtensionBuilder, ITunesItemExtensionBuilder,
};
use rss::{
CategoryBuilder, ChannelBuilder, EnclosureBuilder, GuidBuilder, ImageBuilder, ItemBuilder,
};
pub(crate) mod mixcloud;
use crate::backends::Backend;
pub(crate) mod backends;
pub(crate) mod feed;
/// The possible errors that can occur.
#[derive(Debug, thiserror::Error)]
pub(crate) enum Error {
/// A standard I/O error occurred.
#[error("IO error: {0}")]
Io(#[from] std::io::Error),
/// No redirect URL found in item metadata.
#[error("No redirect URL found")]
NoRedirectUrlFound,
/// A (reqwest) HTTP error occurred.
#[error("HTTP error: {0}")]
Request(#[from] reqwest::Error),
/// Unsupported back-end encountered.
#[error("Unsupported back-end: {0}")]
UnsupportedBackend(String),
/// A URL parse error occurred.
#[error("URL parse error: {0}")]
UrlParse(#[from] url::ParseError),
/// An error occurred in youtube-dl.
#[error("Youtube-dl failed: {0}")]
YoutubeDl(#[from] youtube_dl::Error),
/// A YouTube extract error occurred.
#[error("YouTube extract error: {0}")]
YtExtract(#[from] ytextract::Error),
/// A YouTube extract ID parsing error occurred.
#[error("YouTube extract ID parsing error: {0}")]
YtExtractId0(#[from] ytextract::error::Id<0>),
/// A YouTube extract ID parsing error occurred.
#[error("YouTube extract ID parsing error: {0}")]
YtExtractId11(#[from] ytextract::error::Id<11>),
/// A YouTube extract ID parsing error occurred.
#[error("YouTube extract ID parsing error: {0}")]
YtExtractId24(#[from] ytextract::error::Id<24>),
/// A YouTube extract playlist video error occurred.
#[error("YouTube extract playlist video error: {0}")]
YtExtractPlaylistVideo(#[from] ytextract::playlist::video::Error),
}
impl<'r, 'o: 'r> rocket::response::Responder<'r, 'o> for Error {
fn respond_to(self, _request: &'r Request<'_>) -> rocket::response::Result<'o> {
eprintln!("💥 Encountered error: {}", self);
match self {
Error::NoRedirectUrlFound => Err(Status::NotFound),
_ => Err(Status::InternalServerError),
}
}
}
/// Result type that defaults to [`Error`] as the default error type.
pub(crate) type Result<T, E = Error> = std::result::Result<T, E>;
/// The extra application specific configuration.
#[derive(Debug, Deserialize, Serialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Config {
/// The URL at which the application is hosted or proxied from.
/// The public URL at which the application is hosted or proxied from.
#[serde(default)]
url: String,
public_url: String,
}
/// A Rocket responder wrapper type for RSS feeds.
@@ -39,136 +104,41 @@ pub(crate) struct Config {
#[response(content_type = "application/xml")]
struct RssFeed(String);
/// Retrieves a download using youtube-dl and redirection.
#[get("/download/<backend>/<file..>")]
pub(crate) async fn download(file: PathBuf, backend: &str) -> Option<Redirect> {
match backend {
"mixcloud" => {
let key = format!("/{}/", file.with_extension("").to_string_lossy());
/// Retrieves a download by redirecting to the URL resolved by the selected back-end.
#[get("/download/<backend_id>/<file..>")]
pub(crate) async fn get_download(file: PathBuf, backend_id: &str) -> Result<Redirect> {
let backend = backends::get(backend_id)?;
mixcloud::redirect_url(&key).await.map(Redirect::to)
}
_ => None,
}
backend.redirect_url(&file).await.map(Redirect::to)
}
/// Handler for retrieving the RSS feed of a Mixcloud user.
#[get("/feed/<backend>/<username>")]
async fn feed(backend: &str, username: &str, config: &State<Config>) -> Option<RssFeed> {
let user = mixcloud::get_user(username).await?;
let cloudcasts = mixcloud::get_cloudcasts(username).await?;
let mut last_build = DateTime::<Utc>::from_utc(NaiveDateTime::from_timestamp(0, 0), Utc);
/// Handler for retrieving the RSS feed of a channel on a certain back-end.
///
/// The limit parameter determines the maximum number of items that can be in the feed.
#[get("/feed/<backend_id>/<channel_id>?<limit>")]
async fn get_feed(
backend_id: &str,
channel_id: &str,
limit: Option<usize>,
config: &State<Config>,
) -> Result<RssFeed> {
let backend = backends::get(backend_id)?;
let channel = backend.channel(channel_id, limit).await?;
let feed = feed::construct(backend_id, config, channel);
let category = CategoryBuilder::default()
.name(String::from("Music")) // FIXME: Don't hardcode the category!
.build();
let generator = String::from(concat!(
env!("CARGO_PKG_NAME"),
" ",
env!("CARGO_PKG_VERSION")
));
let image = ImageBuilder::default()
.link(user.pictures.large.clone())
.url(user.pictures.large.clone())
.build();
let items = cloudcasts
.into_iter()
.map(|cloudcast| {
let mut file = PathBuf::from(cloudcast.key.trim_end_matches('/'));
file.set_extension("m4a"); // FIXME: Don't hardcode the extension!
let url = uri!(
Absolute::parse(&config.url).expect("valid URL"),
download(backend = backend, file = file)
);
// FIXME: Don't hardcode the description!
let description = format!("Taken from Mixcloud: {}", cloudcast.url);
let keywords = cloudcast
.tags
.iter()
.map(|tag| &tag.name)
.cloned()
.collect::<Vec<_>>()
.join(", ");
let categories = cloudcast
.tags
.into_iter()
.map(|tag| {
CategoryBuilder::default()
.name(tag.name)
.domain(Some(tag.url))
.build()
})
.collect::<Vec<_>>();
let length = mixcloud::estimated_file_size(cloudcast.audio_length);
let enclosure = EnclosureBuilder::default()
.url(url.to_string())
.length(format!("{}", length))
.mime_type(String::from(mixcloud::default_file_type()))
.build();
let guid = GuidBuilder::default()
.value(cloudcast.slug)
.permalink(false)
.build();
let itunes_ext = ITunesItemExtensionBuilder::default()
.image(Some(cloudcast.pictures.large))
.duration(Some(format!("{}", cloudcast.audio_length)))
.subtitle(Some(description.clone()))
.keywords(Some(keywords))
.build();
if cloudcast.updated_time > last_build {
last_build = cloudcast.updated_time;
}
ItemBuilder::default()
.title(Some(cloudcast.name))
.link(Some(cloudcast.url))
.description(Some(description))
.categories(categories)
.enclosure(Some(enclosure))
.guid(Some(guid))
.pub_date(Some(cloudcast.updated_time.to_rfc2822()))
.itunes_ext(Some(itunes_ext))
.build()
})
.collect::<Vec<_>>();
let itunes_ext = ITunesChannelExtensionBuilder::default()
.author(Some(user.name.clone()))
.categories(Vec::from([ITunesCategoryBuilder::default()
.text("Music")
.build()])) // FIXME: Don't hardcode the category!
.image(Some(user.pictures.large))
.explicit(Some(String::from("no")))
.summary(Some(user.biog.clone()))
.build();
let channel = ChannelBuilder::default()
.title(&format!("{} (via Mixcloud)", user.name))
.link(&user.url)
.description(&user.biog)
.category(category)
.last_build_date(Some(last_build.to_rfc2822()))
.generator(Some(generator))
.image(Some(image))
.items(items)
.itunes_ext(Some(itunes_ext))
.build();
let feed = RssFeed(channel.to_string());
Some(feed)
Ok(RssFeed(feed.to_string()))
}
/// Returns a simple index page that explains the usage.
#[get("/")]
pub(crate) async fn index(config: &State<Config>) -> Template {
Template::render("index", context! { url: &config.url })
pub(crate) async fn get_index(config: &State<Config>) -> Template {
Template::render("index", context! { url: &config.public_url })
}
/// Sets up Rocket.
pub fn setup() -> Rocket<Build> {
rocket::build()
.mount("/", routes![download, feed, index])
.mount("/", routes![get_download, get_feed, get_index])
.attach(AdHoc::config::<Config>())
.attach(Template::fairing())
}

@@ -1,21 +1,21 @@
#![doc = include_str!("../README.md")]
#![warn(
clippy::all,
missing_copy_implementations,
missing_debug_implementations,
rust_2018_idioms,
rustdoc::broken_intra_doc_links
rustdoc::broken_intra_doc_links,
trivial_numeric_casts,
renamed_and_removed_lints,
unsafe_code,
unstable_features,
unused_import_braces,
unused_qualifications
)]
#![deny(missing_docs)]
use color_eyre::Result;
/// Sets up and launches Rocket.
#[rocket::main]
async fn main() -> Result<()> {
color_eyre::install()?;
let rocket = podbringer::setup();
let _ = rocket.ignite().await?.launch().await?;
Ok(())
#[rocket::launch]
fn rocket() -> _ {
podbringer::setup()
}

@@ -1,159 +0,0 @@
//! The Mixcloud back-end.
//!
//! It uses the Mixcloud API to retrieve the feed (user) and items (cloudcasts)).
//! See also: <https://www.mixcloud.com/developers/>
use std::process::Stdio;
use chrono::{DateTime, Utc};
use reqwest::Url;
use rocket::serde::Deserialize;
use tokio::process::Command;
/// A Mixcloud user.
#[derive(Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct User {
    /// The name of the user.
    pub(crate) name: String,
    /// The bio (description) of the user.
    pub(crate) biog: String,
    /// The picture URLs associated with the user.
    pub(crate) pictures: Pictures,
    /// The original URL of the user.
    pub(crate) url: String,
}

/// A collection of different sizes/variants of a picture.
#[derive(Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Pictures {
    /// The large picture of the user.
    pub(crate) large: String,
}

/// The Mixcloud cloudcasts container.
#[derive(Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct CloudcastData {
    /// The contained cloudcasts.
    data: Vec<Cloudcast>,
}

/// A Mixcloud cloudcast.
#[derive(Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Cloudcast {
    /// The key of the cloudcast.
    pub(crate) key: String,
    /// The name of the cloudcast.
    pub(crate) name: String,
    /// The slug of the cloudcast (used for the enclosure).
    pub(crate) slug: String,
    /// The picture URLs associated with the cloudcast.
    pub(crate) pictures: Pictures,
    /// The tags of the cloudcast.
    pub(crate) tags: Vec<Tag>,
    /// The time the cloudcast was created/started.
    pub(crate) updated_time: DateTime<Utc>,
    /// The original URL of the cloudcast.
    pub(crate) url: String,
    /// The length of the cloudcast (in seconds).
    pub(crate) audio_length: u32,
}

/// A Mixcloud cloudcast tag.
#[derive(Debug, Deserialize)]
#[serde(crate = "rocket::serde")]
pub(crate) struct Tag {
    /// The name of the tag.
    pub(crate) name: String,
    /// The URL of the tag.
    pub(crate) url: String,
}
/// The base URL for the Mixcloud API.
const API_BASE_URL: &str = "https://api.mixcloud.com";

/// The base URL for downloading Mixcloud files.
const FILES_BASE_URL: &str = "https://www.mixcloud.com";

/// The default bitrate (in bits per second) used by Mixcloud.
const DEFAULT_BITRATE: u32 = 64 * 1024;

/// The default file (MIME) type.
const DEFAULT_FILE_TYPE: &str = "audio/mpeg";

/// Returns the default file type used by Mixcloud.
pub(crate) fn default_file_type() -> &'static str {
    DEFAULT_FILE_TYPE
}

/// Returns the estimated file size in bytes for a given duration (in seconds).
///
/// This uses the default bitrate (see [`DEFAULT_BITRATE`]), which is in bits
/// per second, hence the division by 8.
pub(crate) fn estimated_file_size(duration: u32) -> u32 {
    DEFAULT_BITRATE * duration / 8
}
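As a quick sanity check, a one-hour cloudcast at the default 64 kbit/s comes out to about 28 MiB. The sketch below is standalone and simply reproduces the constant and function from the listing above:

```rust
/// Default bitrate in bits per second, as in the listing above.
const DEFAULT_BITRATE: u32 = 64 * 1024;

/// Estimated file size in bytes for a duration given in seconds.
fn estimated_file_size(duration: u32) -> u32 {
    DEFAULT_BITRATE * duration / 8
}

fn main() {
    // 65_536 bit/s * 3_600 s / 8 = 29_491_200 bytes (~28 MiB).
    assert_eq!(estimated_file_size(3600), 29_491_200);
}
```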
/// Retrieves the user data using the Mixcloud API.
pub(crate) async fn get_user(username: &str) -> Option<User> {
    let mut url = Url::parse(API_BASE_URL).unwrap();
    url.set_path(username);

    println!("⏬ Retrieving user {username} from {url}...");
    let response = reqwest::get(url).await.ok()?;
    let user = match response.error_for_status() {
        Ok(res) => res.json().await.ok()?,
        Err(_err) => return None,
    };

    Some(user)
}

/// Retrieves the cloudcasts of the user using the Mixcloud API.
pub(crate) async fn get_cloudcasts(username: &str) -> Option<Vec<Cloudcast>> {
    let mut url = Url::parse(API_BASE_URL).unwrap();
    url.set_path(&format!("{username}/cloudcasts/"));

    println!("⏬ Retrieving cloudcasts of user {username} from {url}...");
    let response = reqwest::get(url).await.ok()?;
    let cloudcasts: CloudcastData = match response.error_for_status() {
        Ok(res) => res.json().await.ok()?,
        Err(_err) => return None,
    };

    Some(cloudcasts.data)
}
/// Retrieves the redirect URL for the provided Mixcloud cloudcast key.
pub(crate) async fn redirect_url(key: &str) -> Option<String> {
    let mut cmd = Command::new("youtube-dl");
    cmd.args(&["--format", "http"])
        .arg("--get-url")
        .arg(&format!("{FILES_BASE_URL}{key}"))
        .stdout(Stdio::piped());

    let output = cmd.output().await.ok()?;
    if output.status.success() {
        let direct_url = String::from_utf8_lossy(&output.stdout)
            .trim_end()
            .to_owned();
        println!("🌍 Determined direct URL for {key}: {direct_url}...");

        Some(direct_url)
    } else {
        None
    }
}


@@ -5,15 +5,21 @@
 URL in your favorite podcast client to start using it.
 </p>
 <p>
-Given the Mixcloud URL <https://www.mixcloud.com/myfavouriteband/>, the URL you
-need to use for Podbringer is comprised of the following parts:
+The URL you need to use for Podbringer is comprised of the following parts:
 <pre>
 https://my.domain.tld/podbringer/feed/mixcloud/myfavouriteband
-|------------------------------| |-------||--------------|
-  The Podbringer location URL     Service  User @ service
+|------------------------------| |------| |-------------|
+  The Podbringer public URL      Service   Service ID
 </pre>
 </p>
 <p>
-The Podbringer location URL of this instance is: {{ url }}
+Supported services are:
+<ul>
+<li>Mixcloud (service ID is Mixcloud username)</li>
+<li>YouTube (service ID is YouTube channel or playlist ID)</li>
+</ul>
 </p>
+<p>
+The Podbringer location URL of this instance is: <a href="{{ url }}">{{ url }}</a>.
+</p>
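Putting the parts together, feed URLs for the two supported services could look like the following (my.domain.tld and the service IDs are placeholder examples, not real feeds):

<pre>
https://my.domain.tld/podbringer/feed/mixcloud/myfavouriteband
https://my.domain.tld/podbringer/feed/youtube/myfavouritechannel
</pre>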