Request for Guide: synchronized music streaming with Phoenix

@doliver posted about a super interesting side project named LiveCue, for syncing music playback across devices.

The underlying technique would make an amazing guide. I think many people would love to build something like turntable.fm, and “get media from one place, play it in multiple places” is a good general-purpose Elixir clustering example that can be reapplied.

If @doliver wants to tackle this one, it’s his! If not, we want it anyway. :wink:


Oh, cool. :slight_smile: To check we’re not talking at cross purposes: as it stands, LiveCue relies on files being locally readable by all nodes. (My brother and I keep our music file collections synced.) E.g., play_track phx-click events are handled by broadcasting track info (album ID and track number) to a channel, and then all connected nodes, including the initiating person’s, handle that info by starting a task which simply runs a cmus-remote play command. (cmus does all the actual playing.)
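
Concretely, the shape of it is roughly this (module, topic, and path names here are illustrative rather than LiveCue’s actual code):

```elixir
defmodule LiveCueWeb.PlayerLive do
  use Phoenix.LiveView

  @topic "playback"

  def mount(_params, _session, socket) do
    # Every connected node's view subscribes, including the initiating person's.
    Phoenix.PubSub.subscribe(LiveCue.PubSub, @topic)
    {:ok, socket}
  end

  # The phx-click="play_track" event broadcasts the track info to the topic...
  def handle_event("play_track", %{"album" => album_id, "track" => track}, socket) do
    Phoenix.PubSub.broadcast(LiveCue.PubSub, @topic, {:play, album_id, track})
    {:noreply, socket}
  end

  # ...and each node reacts by starting a task that tells its local cmus to play.
  def handle_info({:play, album_id, track}, socket) do
    Task.start(fn ->
      System.cmd("cmus-remote", ["-f", local_path(album_id, track)])
    end)

    {:noreply, socket}
  end

  # Hypothetical helper: resolves the locally synced file for a track.
  defp local_path(album_id, track), do: "/music/#{album_id}/#{track}.mp3"
end
```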

I wonder if it’s actually streaming audio data from one node to all other nodes and playing in-browser that you have in mind? If so, I could have a look into that.

Edit: if not, and my simpler existing scenario is what you had in mind, yes, I could create a simplified repo and write an article. Thanks!

Oh I think streaming an mp3 across an Elixir cluster is more what I had in mind! I suspect it’s not very hard, but I could be wrong.

Maybe a good intermediate step would be synchronized playback of one or two music files deployed with an app. This could easily be a two-part project:

  1. Synchronize music playback with Phoenix (maybe LiveView?)
  2. Now do it with user uploaded files

I suspected so - thanks very much for creating this and clarifying! Sounds good to me, and I’ll have a look into how it might be done over the weekend if that’s okay.

If memory serves, I did start to look at going that way when starting LiveCue but backed away because it seemed beyond me; I shall have another look, though.

Having started a little research, I think that with some learning (channels, etc.) I should be able to handle this, so if you’d still like me to, I’d be very happy to take it on.

A repo with Fly.io deploy instructions and an article/blog post, as per the latency triangulation guide?

Yes do it!

Exactly. Feel free to post in progress work here and we can look at it early, if it’s helpful.

Okay, great - thanks!

Just wanted to check in to say it’s in progress. I believe I’ve now got the basic backend approach to reading a file on a remote node in place.
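
For anyone following along, the core of that step can be tiny; roughly (module and node names are illustrative):

```elixir
defmodule Streamer.RemoteFile do
  # If the file lives on this node, read it directly...
  def read(target_node, path) when target_node == node(), do: File.read(path)

  # ...otherwise run File.read/1 on the remote node via :erpc (OTP 23+).
  def read(target_node, path), do: :erpc.call(target_node, File, :read, [path])
end

# From IEx on one clustered node:
#
#     Streamer.RemoteFile.read(:"app@other-node", "/tmp/track.mp3")
#     #=> {:ok, <<73, 68, 51, ...>>}
```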

I’m thinking each “DJ” will have their mp3(s) uploaded to a Fly.io volume attached to whichever node they’re connected to, and that “listeners”, who may be connected to the same or another node, will have each mp3 served to them by their node, which will fetch it over the Elixir cluster (via the Fly.io private network) if necessary.
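
The listener-facing side might then look something like this, reusing the RemoteFile sketch above (the controller name and volume mount point are assumptions):

```elixir
defmodule StreamerWeb.TrackController do
  use StreamerWeb, :controller  # the usual generated web module

  def show(conn, %{"id" => id}) do
    # Assumed Fly.io volume mount point.
    path = "/data/tracks/#{id}.mp3"

    # Serve from the local volume when present; otherwise pull the bytes
    # from the DJ's node over the cluster.
    {:ok, bytes} =
      if File.exists?(path),
        do: File.read(path),
        else: Streamer.RemoteFile.read(owner_node(id), path)

    conn
    |> put_resp_content_type("audio/mpeg")
    |> send_resp(200, bytes)
  end

  # Hypothetical lookup of which node holds a given track; a real app might
  # keep a registry. Placeholder: just the first connected peer.
  defp owner_node(_id), do: hd(Node.list())
end
```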

Please let me know if any of this sounds problematic. :slight_smile:

That doesn’t sound problematic! But it might be bigger than necessary. If it makes things simpler, I think it would be ok to:

  1. Just put one mp3 in /tmp/ and play that
  2. Skip volumes
  3. Let DJs just upload a new mp3 at any time to replace what’s there (roughly sketched below)

I think most devs can imagine how to do more than that, and the trick with this type of guide is doing the minimum necessary to demonstrate the concept. This is useful so you make a reasonable hourly rate too. :slight_smile:
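
For point 3, LiveView’s built-in uploads would keep things minimal; a rough sketch, assuming a single-file :mp3 upload that just overwrites /tmp/current.mp3 (the form template with its live_file_input is omitted):

```elixir
defmodule StreamerWeb.DjLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    # One mp3 at a time; a new upload simply replaces the current track.
    {:ok, allow_upload(socket, :mp3, accept: ~w(.mp3), max_entries: 1)}
  end

  def handle_event("validate", _params, socket), do: {:noreply, socket}

  def handle_event("save", _params, socket) do
    consume_uploaded_entries(socket, :mp3, fn %{path: tmp_path}, _entry ->
      File.cp!(tmp_path, "/tmp/current.mp3")
      {:ok, "/tmp/current.mp3"}
    end)

    {:noreply, socket}
  end
end
```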

Okay - thanks. I would like to get some form of basic user-controlled cueing of multiple files in place if possible, but will try to keep the scope focused.

Is it okay for the audio to be delivered to the browser via <audio> elements manipulated via LiveView and JavaScript, avoiding complications involved with generating and delivering a stream? This simpler approach would be in line with turntable.fm’s approach, which you mentioned.

These are the main aspects that the guide could run through:

  • GenServers keeping track of mp3 file/io_device pids, making the music available to all nodes in the cluster;
  • the Phoenix Endpoint’s use of PubSub for sharing playlists/the chosen mp3 file and start/stop messages between connected users;
  • LiveView + a little JavaScript interop (see the sketch after this list):
    • cueing/uploading mp3s (depending on how advanced we end up making things);
    • updating the UI when playing is in progress, etc.
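
As a very rough sketch of how the LiveView + audio-element part could hang together (all names are illustrative, and the JS hook would just be a few lines that call this.el.play() when the “play” event arrives):

```elixir
defmodule StreamerWeb.ListenerLive do
  use Phoenix.LiveView

  def mount(_params, _session, socket) do
    # Every listener's LiveView, on every node, subscribes to the same topic.
    if connected?(socket), do: Phoenix.PubSub.subscribe(Streamer.PubSub, "player")
    {:ok, assign(socket, track_url: nil)}
  end

  # A DJ broadcasts {:play, url} over PubSub; each listener's LiveView then
  # pushes a "play" event down its own websocket to the browser.
  def handle_info({:play, url}, socket) do
    {:noreply,
     socket
     |> assign(track_url: url)
     |> push_event("play", %{url: url})}
  end

  def render(assigns) do
    ~H"""
    <audio id="player" src={@track_url} phx-hook="Player" />
    """
  end
end
```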

Is it okay for the audio to be delivered to the browser via <audio> elements manipulated via LiveView and JavaScript, avoiding complications involved with generating and delivering a stream?

Yeah that sounds like a great simplification.

This seems perfect.