Transmit support #124

Open
quentinmit opened this issue Dec 23, 2018 · 4 comments

Comments

@quentinmit
Contributor

I don't actually see an open issue about this. Feel free to close as a dupe if I missed it.

ShinySDR has a variety of bits of transmit support (Osmo TX driver, modulators, etc.) but none of it appears to be wired up into a functional UI yet.

I'd like to take a stab at this. Do you have any notes or thoughts about how it was intended to be implemented before I start working on it?

@quentinmit
Contributor Author

Here's what I'm thinking:

  • Receiver should probably be split into a Transceiver parent object with common cells, a Receiver child object with a demodulator, and a Transmitter child object with a modulator; I think it makes sense for a transmitter to always be linked to a receiver. This change likely affects a bunch of JavaScript.
  • Top._do_connect needs to connect Transceivers that are valid to sources' get_tx_drivers.
  • audiomux.AudioManager needs to track audio sources, ideally paired with audio sinks, though it might be simpler to use two instances of AudioManager.
  • audiomux needs to gain a VectorAudioSource for server-side audio.
  • Transceiver needs a transmitting cell that calls set_transmitting on the correct device.
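The proposed split could look something like this minimal sketch. All class shapes here are illustrative only: ShinySDR's real ExportedState/cell API is richer, and FakeDevice stands in for a real device wrapper.

```python
# Hypothetical sketch of the Transceiver/Receiver/Transmitter split proposed
# above. Names and structure are illustrative, not ShinySDR's actual API.

class Receiver:
    def __init__(self, parent):
        self.parent = parent
        self.demodulator = None   # chosen based on parent.mode

class Transmitter:
    def __init__(self, parent):
        self.parent = parent
        self.modulator = None     # chosen based on parent.mode

class Transceiver:
    """Parent object holding the cells common to RX and TX."""
    def __init__(self, freq, mode):
        self.freq = freq                      # common cell: tuned frequency
        self.mode = mode                      # common cell: selected mode
        self.receiver = Receiver(self)        # child with a demodulator
        self.transmitter = Transmitter(self)  # child with a modulator; always
                                              # linked to this transceiver
        self.transmitting = False             # cell toggling TX on the device

    def set_transmitting(self, value, device):
        self.transmitting = value
        device.set_transmitting(value)        # forward to the correct device

class FakeDevice:
    """Stand-in for a TX-capable device driver."""
    def __init__(self):
        self.transmitting = False
    def set_transmitting(self, value):
        self.transmitting = value

dev = FakeDevice()
t = Transceiver(freq=146.52e6, mode='NFM')
t.set_transmitting(True, dev)
```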

Then, to support client-side audio:

  • Top.add_audio_callback needs to return a callback for export_ws
  • export_ws.AudioStreamInner needs to accept incoming data and send it to the callback it got from Top
  • audio/ws-stream needs to be able to instantiate an audio/client-source and send it to export_ws using the agreed-upon protocol
  • The audio stream should only be sent when at least one client-sourced Transmitter is active to save on bandwidth
  • Bind a simple keystroke like SPACE or TAB to toggle TX on the selected receiver (what is the selected receiver? maybe use the letter corresponding to the receiver?)
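The callback plumbing in the first two bullets might be wired roughly like this. The class and method names (add_audio_callback, AudioStreamInner, dataReceived) echo the list above, but the bodies are purely hypothetical:

```python
# Hypothetical sketch: Top hands export_ws a callback, and an
# AudioStreamInner-like object feeds incoming client samples into it.

class Top:
    def __init__(self):
        self._client_audio = []   # samples received from clients

    def add_audio_callback(self):
        # Returns a callback that export_ws can invoke with incoming audio.
        def deliver(samples):
            self._client_audio.extend(samples)
        return deliver

class AudioStreamInner:
    """Stand-in for export_ws.AudioStreamInner: accepts incoming frames
    and forwards them to the callback it got from Top."""
    def __init__(self, callback):
        self._callback = callback

    def dataReceived(self, samples):
        self._callback(samples)

top = Top()
stream = AudioStreamInner(top.add_audio_callback())
stream.dataReceived([0.0, 0.5, -0.5])
```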

@kpreid
Owner

kpreid commented Dec 23, 2018

There's some overdue core refactoring #111 that I want to get in before adding any major features like transmit support that might lock in the current architecture more. (Basically, on the receive side, the hardcoded graph design that Top implements is going to go away in favor of receivers and similar declaring their requested inputs. Transmitting may or may not use similar structure.) I've got some prototype code going but it hasn't been put to use yet.

That said, some responses:

> Receiver should probably be split into a Transceiver parent object with common cells, a Receiver child object with a demodulator, and a Transmitter child object with a modulator; I think it makes sense for a transmitter to always be linked to a receiver.

The picture that I had in mind was that there would be an object that is less "transceiver" and more "frequency of interest" with optionally attached receiver and transmitter. Unfortunately, I don't currently remember the rationale. It might have to do with the future of frequency database interaction (it needs to be more server-side integrated than it is).
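That alternative modeling, a frequency of interest with optionally attached ends, might look like this sketch (the class name and fields are hypothetical, chosen only to contrast with the always-linked Transceiver proposal):

```python
# Hypothetical "frequency of interest" object: identified by frequency,
# with receiver and transmitter each optionally attached.

class FrequencyOfInterest:
    def __init__(self, freq, label=''):
        self.freq = freq
        self.label = label        # could later tie into the frequency database
        self.receiver = None      # optional; may be attached or not
        self.transmitter = None   # optional; independent of the receiver

foi = FrequencyOfInterest(146.52e6, label='2m calling')
foi.receiver = object()  # attach a receiver; the transmitter stays absent
```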

> Top._do_connect needs to connect Transceivers that are valid to sources' get_tx_drivers.

Because GR flow graph reconfiguration is disruptive, and transmit sample timing need not have anything to do with receive sample timing, I believe it will be best to have a flow graph for each transmitter, separate from the receivers'. (Syncing them would be relevant if one wants to, say, implement a repeater, but I'm going to call that out of scope.)
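The shape of that structure, one flow graph per transmitter, separate from the receive graph, can be sketched with stand-in classes (FlowGraph here is a toy substitute for gr.top_block; the point is only that stopping or reconfiguring a TX graph never interrupts RX):

```python
# Stand-in for gr.top_block, to illustrate independent per-transmitter
# flow graphs. No real GNU Radio calls are made here.

class FlowGraph:
    def __init__(self, name):
        self.name = name
        self.running = False
    def start(self):
        self.running = True
    def stop(self):
        self.running = False

rx_graph = FlowGraph('receive')        # shared receive graph (Top today)
tx_graph = FlowGraph('transmitter-1')  # one graph per transmitter

rx_graph.start()
tx_graph.start()
tx_graph.stop()   # reconfigure or stop TX without disturbing the RX graph
```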

> audiomux.AudioManager needs to track audio sources, ideally paired with audio sinks but possibly it would be easier to use two instances of AudioManager.

AudioManager has the very specific job of mixing and resampling audio to many destinations. Transmitting does neither mixing nor multiple destinations, at least in straightforward cases. Furthermore, AudioManager is going to go away with the #111 refactoring because the dependency graph will make its job implicit.

@quentinmit
Contributor Author

> There's some overdue core refactoring #111 that I want to get in before adding any major features like transmit support that might lock in the current architecture more. (Basically, on the receive side, the hardcoded graph design that Top implements is going to go away in favor of receivers and similar declaring their requested inputs. Transmitting may or may not use similar structure.) I've got some prototype code going but it hasn't been put to use yet.

I think the changes I proposed here don't actually lock the current architecture in further; they add a bit of wiring to Top, but in the same place as all the wiring you're already going to have to touch around receivers.

That said, some responses:

>> Receiver should probably be split into a Transceiver parent object with common cells, a Receiver child object with a demodulator, and a Transmitter child object with a modulator; I think it makes sense for a transmitter to always be linked to a receiver.

> The picture that I had in mind was that there would be an object that is less "transceiver" and more "frequency of interest" with optionally attached receiver and transmitter. Unfortunately, I don't currently remember the rationale. It might have to do with the future of frequency database interaction (it needs to be more server-side integrated than it is).

That's roughly what I was thinking: the Transceiver object would track the frequency information and the selected mode, with an optionally attached receiver and transmitter. It sounds like in your mind tuning would involve making a new Frequency object and reattaching the receiver and transmitter to it?

>> Top._do_connect needs to connect Transceivers that are valid to sources' get_tx_drivers.

> Because GR flow graph reconfiguration is disruptive, and transmit sample timing need not have anything to do with receive sample timing, I believe it will be best to have a flow graph for each transmitter, separate from the receivers'. (Syncing them would be relevant if one wants to, say, implement a repeater, but I'm going to call that out of scope.)

Oh, good point. I didn't realize Top is actually a gr.top_block. Yes, I think we would want separate flow graphs to the extent possible. But can we assume that GR drivers can actually be used from two different flow graphs at once? I don't know what the contract is for opening the transmit and receive halves of a device at the same time.

I thought about the repeater use case, and it's certainly interesting, but I think it's fine to assume that connection would be made outside GR (e.g. with a PulseAudio loopback device).

>> audiomux.AudioManager needs to track audio sources, ideally paired with audio sinks but possibly it would be easier to use two instances of AudioManager.

> AudioManager has the very specific job of mixing and resampling audio to many destinations. Transmitting does neither mixing nor multiple destinations, at least in straightforward cases. Furthermore, AudioManager is going to go away with the #111 refactoring because the dependency graph will make its job implicit.

Why does AudioManager go away with #111? You still need the moral equivalent to do mixing and resampling.

@kpreid
Owner

kpreid commented Dec 24, 2018

> It sounds like in your mind tuning would involve making a new Frequency object and reattaching the receiver and transmitter to it?

No, that would be a bad model for a possibly continuous change. Sorry, as I said, I don't remember exactly what the rationale was. In practice I'll do whatever fits in well when the refactoring is in progress.

> But can we assume that GR drivers can actually be used from two different flowgraphs at once? I don't know what the contract is on opening the transmit and receive halves of a device at the same time.

This is one of those under-specified things. Audio devices that might be attached to a transceiver don't care. gr-osmosdr when used with the HackRF will fail to switch over unless you ensure the source block is destroyed before you open the sink and vice versa, which is why the osmosdr plugin has support for doing that. But this is independent of whether the blocks are in separate flow graphs.
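The HackRF constraint described here, only one direction open at a time, with the old block destroyed before the new one is created, can be modeled with a tiny stand-in (this is not gr-osmosdr code, just an illustration of the ordering the osmosdr plugin has to enforce):

```python
# Toy model of a half-duplex device like the HackRF via gr-osmosdr:
# the source must be destroyed before the sink is opened, and vice versa.

class HalfDuplexDevice:
    def __init__(self):
        self._open = None  # None, 'source', or 'sink'

    def open_source(self):
        if self._open == 'sink':
            raise RuntimeError('destroy the sink before opening the source')
        self._open = 'source'

    def open_sink(self):
        if self._open == 'source':
            raise RuntimeError('destroy the source before opening the sink')
        self._open = 'sink'

    def close(self):
        self._open = None

dev = HalfDuplexDevice()
dev.open_source()  # receiving
dev.close()        # must tear down RX first...
dev.open_sink()    # ...before switching over to transmit
```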

> Why does AudioManager go away with #111? You still need the moral equivalent to do mixing and resampling.

Because instead of having a thing dedicated to making audio resampling connections, each audio sink('s managing wrapper) will be able to specify "I want a sum of these receivers' audio outputs at this sample rate" and the dependency engine will construct the necessary intermediate blocks based on that specification. It's not that AudioManager's job will be replaced, but it will be distributed among generic algorithms and independent units of task-specific (audio) rules.
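The declarative idea here, a sink states what it wants and a generic engine builds the chain, might be sketched as follows. AudioSinkSpec and build_chain are hypothetical names, and the "engine" only sums samples; a real one would construct actual mixer and resampler blocks:

```python
# Hypothetical post-#111 shape: each audio sink declares "a sum of these
# receivers' outputs at this sample rate" and a generic engine builds the
# intermediate processing from that declaration.

class AudioSinkSpec:
    def __init__(self, receivers, sample_rate):
        self.receivers = receivers      # list of per-receiver sample streams
        self.sample_rate = sample_rate  # desired output rate

def build_chain(spec):
    """Stand-in dependency engine: mixes (sums) the declared receivers.
    A real engine would also insert a resampler to spec.sample_rate."""
    n = len(spec.receivers[0])
    return [sum(rx[i] for rx in spec.receivers) for i in range(n)]

# Integer samples keep the example exact.
spec = AudioSinkSpec(receivers=[[1, 2], [3, 4]], sample_rate=44100)
mixed = build_chain(spec)
```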
