
MIDI 2.0 Support #625

Open · 8 of 10 tasks
derselbst opened this issue Mar 4, 2020 · 10 comments

@derselbst (Member) commented Mar 4, 2020

The MIDI 2.0 specification has been released:

https://www.midi.org/articles-old/details-about-midi-2-0-midi-ci-profiles-and-property-exchange

This issue serves as a placeholder to figure out what's new and whether / how it can be adopted for fluidsynth.

  • Total MIDI channels increased to 256 (this is already the maximum supported by fluidsynth)
  • Note velocities widened to 16 bits
    • event API needs to be integer-promoted
    • synth API needs to be integer-promoted
  • Controller/Pressure/Bend/RPN/NRPN values widened to 32 bits
    • event API needs to be integer-promoted
    • synth API needs to be integer-promoted
  • Per-note controllers
  • Per-note pitch values and pitch bends (supported via microtonal tuning?)
  • Relative changes to RPNs/NRPNs
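Since the public synth functions already take plain int arguments, the widening is mostly a matter of relaxing range checks rather than changing types. A hypothetical sketch of the kind of check involved (the names and the MIDI2 maximum are illustrative, not fluidsynth code):

```c
#include <stdint.h>

/* Hypothetical range check: plain int parameters are wide enough,
 * so integer promotion mainly means accepting larger value ranges. */
#define MIDI1_VEL_MAX 127
#define MIDI2_VEL_MAX 65535

static int check_velocity(int vel, int is_midi2)
{
    int max = is_midi2 ? MIDI2_VEL_MAX : MIDI1_VEL_MAX;
    return vel >= 0 && vel <= max;
}
```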
@ghost commented Aug 30, 2020

It looks like the more important new things are:

  • Total MIDI channels increased to 256
  • Note velocities widened to 16 bits
  • Controller/Pressure/Bend/RPN/NRPN values widened to 32 bits
  • Per-note controllers
  • Per-note pitch values and pitch bends
  • Relative changes to RPNs/NRPNs

@derselbst (Member Author)

Thanks for this summary. We will have an SOVERSION bump next release. In a first step I think it's worth going through the API to verify that our data types are wide enough. I'll do so.

@atsushieno (Contributor)

I was observing this issue for a while until I could finally start working on MIDI 2.0 support in my Android FluidsynthMidiDeviceService project for fun: atsushieno/fluidsynth-midi-service-j@74459a4 . Now I'm at the stage where I can invoke fluid_synth_noteon() etc. with MIDI 2.0 values (such as 16-bit velocity). It is based on my Kotlin library and probably not very familiar to C/C++ app developers, but I'm fairly sure that anyone can achieve something similar using the latest JUCE UMP implementation or even my tiny C library.

So what is doable now is: I can invoke the function, but it crashes because fluid_synth_noteon() does not accept velocities higher than 127, and we send values that are most likely beyond that, because when we convert a MIDI 1.0 velocity to a MIDI 2.0 velocity we multiply it by 0x200, conforming to what MIDI 2.0 UMP Specification Appendix D.3.1 suggests (the new value range is 0..65535). The argument value range of fluid_synth_noteon() needs to be relaxed, but not naively, because it will still have to handle MIDI1 note-on messages in a backward-compatible way.
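The conversion described above can be sketched like this (a minimal sketch of the ×0x200 scaling the comment mentions; note that 127 then maps to 65024, not 65535):

```c
#include <stdint.h>

/* Upscale a 7-bit MIDI 1.0 velocity to the 16-bit MIDI 2.0 range by
 * multiplying by 0x200, i.e. shifting left by 9, as described above
 * with reference to UMP Specification Appendix D.3.1. */
static uint16_t vel7_to_vel16(uint8_t vel7)
{
    return (uint16_t)((uint32_t)vel7 * 0x200);  /* 0..127 -> 0..65024 */
}
```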

What would be a good solution from here? I can think of two (there are probably more):

  • There could be new MIDI2-oriented functions, e.g. fluid_synth_noteon2(). That would be fairly straightforward, but the API would be bloated to some extent. It also leaves uncertainty about what happens when we mix calls to MIDI1 and MIDI2 functions.
  • Implement MIDI-CI (namely Set New Protocol) support, switch MIDI1/2 modes internally, and use the existing set of synth functions. All the internals would have to be converted to MIDI2 value ranges, and the function argument values converted to match. The API would stay consistent, but since the actual semantics of those argument values vary depending on the internal MIDI1/MIDI2 mode, it may confuse users (a "I'm sending 0x7F, the highest velocity, and it's still too quiet... oh, it was interpreted as 0x7F within 0..0xFFFF" kind of mistake).

@derselbst (Member Author)

Good news, thanks for the update.

What would be a good solution from here now?

My suggestion: When creating a fluid_synth_t, the user should decide between MIDI1 and MIDI2. This could be done either via fluid_settings or via an explicit API function or flag. In any case, once set, it remains effective for the entire lifetime of the synth. That is, mixing MIDI1/2 is not considered a use-case. If one really has to, one should create and use two synth instances.

Internally, we can do whatever it takes to support MIDI1 and MIDI2 via the same API that we already have. Example: we could switch fluidsynth internally to 16-bit velocities. If the synth has been set up for legacy MIDI 1.0, velocities would be converted appropriately when passed via functions like fluid_synth_noteon(). If the synth is configured as MIDI2, velocities would simply be passed through.
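The suggestion above could be modeled like this (a self-contained sketch; the type names and the way the protocol is selected are hypothetical, not fluidsynth API):

```c
#include <stdint.h>

typedef enum { PROTO_MIDI1, PROTO_MIDI2 } midi_proto_t;

/* Hypothetical synth model: the protocol is fixed at creation and
 * remains effective for the synth's entire lifetime. */
typedef struct { midi_proto_t proto; } synth_t;

/* Internally everything is 16 bit: a MIDI1-configured synth upscales
 * incoming 7-bit velocities, a MIDI2 synth passes values through. */
static uint16_t normalize_velocity(const synth_t *s, uint32_t vel)
{
    if (s->proto == PROTO_MIDI1)
        return (uint16_t)(vel * 0x200);  /* 0..127 -> 16-bit range */
    return (uint16_t)vel;                /* already 16 bit */
}
```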


P.S.: I'm still not really familiar with MIDI 2 as I'm lacking a use-case myself. So I welcome anybody to contribute to get this implemented.

@atsushieno (Contributor)

I thought more about the actual implementation (for example, fluid_synth_noteon()) regarding "mixing" MIDI1 and MIDI2 operations.

In the MIDI 2.0 UMP specification there are various "message types", including a MIDI1 channel voice message type (0x2g) and a MIDI2 channel voice message type (0x4g), where g denotes the group. For the MIDI1 message type, the UMP carries a MIDI1-compatible message (status byte with channel, MSB, and LSB) for status 9xh..Exh (x being the channel). Both message types can be sent to recipients within the same UMP stream.

That is, when we are dealing with MIDI 2.0 UMP streams, mixing MIDI1 and MIDI2 necessarily happens anyway, regardless of whether the application or fluidsynth is responsible for converting the MIDI1 messages.
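Distinguishing the two message types is a matter of decoding the first 32-bit word of a packet, along the lines of:

```c
#include <stdint.h>

/* The first 32-bit word of a UMP carries the message type in the top
 * nibble (0x2 = MIDI1 channel voice, 0x4 = MIDI2 channel voice) and
 * the group in the next nibble. */
static uint8_t ump_message_type(uint32_t word0) { return (uint8_t)(word0 >> 28); }
static uint8_t ump_group(uint32_t word0)        { return (word0 >> 24) & 0xF; }
```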

Since that will happen when we implement MIDI2-based fluid_sequencer_t equivalents, I would rather suggest having both MIDI1 and MIDI2 function entry points (like fluid_synth_noteon2()) in the public API.

I'd try to hack some proof-of-concept implementation while I'm interested in it.

@derselbst (Member Author) commented Jul 17, 2021

I'd try to hack some proof-of-concept implementation while I'm interested in it.

Sure, go ahead. But please keep in mind that those "MIDI messages" you're referring to are based on the MIDI protocol, i.e. the lowest-level representation of MIDI that is sent around between hardware devices. As such it probably contains many quirks and workarounds to retain backward compatibility with MIDI1. The synth, on the other hand, is a high-level API. And I would really appreciate keeping this API as simple as possible (especially without having to duplicate every API function for MIDI2).

Since it will happen when we implement MIDI2 based fluid_sequencer_t equivalents

What has the sequencer to do with MIDI2? It's just a class that receives and sends events around. It doesn't care about their contents. If there is a need to introduce new event types, we can talk about that. But again: Just duplicating the existing event types for MIDI2 does not seem like a viable way to go.

@atsushieno (Contributor) commented Jul 17, 2021

What has the sequencer to do with MIDI2?

Considering that a sequencer like fluid_seq processes streams of MIDI events, it is a straightforward assumption that there will be a MIDI2 SMF-like player (there is no corresponding file format for MIDI2 yet; it is said to be under development at the MMA).

Let me use actual code to illustrate how a MIDI2 sequencer that calls synth functions would look (excuse my Kotlin here!):

    private fun sendMidi2Immediate(msg: ByteArray, offset: Int, count: Int) {
        for (ump in iterateAsUmp(msg, offset, count)) {
            when (ump.messageType) {
                MidiMessageType.MIDI1 -> {
                    val channel = ump.group * 16 + ump.channelInGroup
                    when (ump.eventType) {
                        MidiChannelStatus.NOTE_OFF -> syn.noteOff(channel, ump.midi1Note)
                        MidiChannelStatus.NOTE_ON -> syn.noteOn(channel, ump.midi1Note, ump.midi1Velocity)
                        // ...
                    }
                }
                MidiMessageType.MIDI2 -> {
                    val channel = ump.group * 16 + ump.channelInGroup
                    when (ump.eventType) {
                        MidiChannelStatus.NOTE_OFF -> syn.noteOff(channel, ump.midi2Note)
                        MidiChannelStatus.NOTE_ON -> syn.noteOn(channel, ump.midi2Note, ump.midi2Velocity16)
                        // ...
                    }
                }
            }
        }
    }

(This implementation is clearly wrong at the moment, as fluid_synth_noteon() does not differentiate between MIDI1 mode and MIDI2 mode, so do not take it as a reference implementation.)

If fluid_synth_noteon() as used above still serves both MIDI1 and MIDI2, it becomes the app developer's responsibility to convert the arguments (e.g. 7-bit velocity to 16-bit velocity). That conversion is common to every app, and it makes no sense for every app developer to reinvent the same wheel.

Another point I noticed afterwards: since MIDI-CI "Set New Protocol" messages are sent as Universal SysEx messages, the specification assumes the protocol can be changed dynamically at run time while the device is connected. We do not have to be MIDI-CI compatible, but for compatibility with OS-provided MIDI 2.0 APIs (such as CoreMIDI) it will be either required or at least more straightforward to follow their presumed design. Fixing the internal MIDI protocol at instantiation time, however, makes that impossible.

Even if we don't have distinct fluid_synth_noteon() functions for MIDI1 and MIDI2 at the signature level, the behavior would still have to be documented, explaining how the functions behave differently depending on the internal MIDI protocol state, which can be changed either via settings or via Universal SysEx... I'm sure that would be unnecessarily confusing.

Just duplicating the existing event types for MIDI2 does not seem like a viable way to go.

This only partially applies; fluid_synth_noteon() does not deal with "note attributes", which for example carry per-note pitch (Pitch 7.9) support that is not replaceable by other mechanisms such as MTS. We will have to add another noteon function anyway. And that is not something app developers should each implement themselves.
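For reference, the Pitch 7.9 attribute mentioned above is a 16-bit fixed-point value; decoding it is a small sketch (assuming 7 integer bits as a note number in semitones and 9 fractional bits, per the UMP specification's attribute type 3):

```c
/* Decode a MIDI 2.0 per-note "Pitch 7.9" attribute: bits 15..9 are
 * the integer note number, bits 8..0 the fraction in 1/512 steps. */
static double pitch_7_9_to_semitones(unsigned attr)
{
    return (attr >> 9) + (attr & 0x1FF) / 512.0;
}
```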

@derselbst (Member Author)

What has the sequencer to do with MIDI2?

Considering that a sequencer like fluid_seq processes streams of MIDI events

The sequencer processes fluid_event_t. This is fluidsynth's custom event type, which is not related to MIDI (even though it has many event types that you also find in MIDI). However, its main purpose is to encapsulate calls to the synth. Therefore, dealing with the sequencer and its events should happen once the synth API is ready.

If fluid_synth_noteon() as used above still serves both MIDI1 and MIDI2, it becomes the app developer's responsibility to convert the arguments (e.g. 7-bit velocity to 16-bit velocity). That conversion is common to every app, and it makes no sense for every app developer to reinvent the same wheel.

fluidsynth could expose helper functions to mitigate this particular conversion trouble. E.g. fluid_midi_vel16() would convert a 7-bit velocity to 16 bits. The simple usage on the client side would be:

    synth = new_fluid_synth(/* as MIDI2 synth */);
    fluid_synth_noteon(synth, chan, key, is_midi2 ? vel16 : fluid_midi_vel16(vel7));
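One possible implementation of such a helper (hypothetical, this function does not exist in fluidsynth): repeat the source bits into the freed low bits so the full 16-bit range is reached, rather than a plain shift by 9, which would leave the top value at 65024.

```c
#include <stdint.h>

/* Hypothetical fluid_midi_vel16(): upscale a 7-bit velocity to
 * 16 bits by bit repetition, so 0 -> 0x0000 and 127 -> 0xFFFF. */
static uint16_t fluid_midi_vel16(uint8_t vel7)
{
    uint16_t v = (uint16_t)(vel7 & 0x7F);
    return (uint16_t)((v << 9) | (v << 2) | (v >> 5));
}
```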

Just duplicating the existing event types for MIDI2 does not seem like a viable way to go.

This only partially applies; fluid_synth_noteon() does not deal with "note attributes" which for example contains per-note pitch (Pitch 7.9) support which is not replaceable by other mechanisms such as MTS. We will have to add another noteon function

Fluidsynth doesn't know about per-note controllers. It's one of the open points in the first comment. So ofc, we need a dedicated way to support this new functionality. I was talking about existing functionality. It could very well be that we'll have to add a behaviour switch enum or something. But the conversion argument alone is not enough to justify this, IMO. Perhaps give me some time to read about MIDI-CI and Universal SysEx that you mentioned and we'll see...

@atsushieno (Contributor)

So ofc, we need a dedicated way to support this new functionality. I was talking about existing functionality. It could very well be that we'll have to add a behaviour switch enum or something. But the conversion argument alone is not enough to justify this, IMO.

Alright. Things should become clearer with a working implementation, and then we'd benefit from these discussions. So far, all the new bits can be hidden in the non-public API, so we can discuss how useful (or useless) the API signatures are once some implementation is ready.

@jimhen3ry (Contributor)

I am going to make what are likely to be naïve comments as someone who does not fully understand MIDI 2.0 and probably not even MIDI 1.0. It is not my intent to diminish anything being discussed by those who have taken the time to develop a more thorough understanding of MIDI 2.0. I am focusing on what is in:

MIDI 2.0 Specification Overview with Minimum Requirements, MIDI Association Document: M2-100-U, Document Version 1.1, Draft Date May 11, 2023, Published June 15, 2023

MIDI Capability Inquiry (MIDI-CI) Bidirectional Negotiations for MIDI Devices, MIDI Association Document: M2-101-UM, Document Version 1.2, Draft Date May 11, 2023, Published June 15, 2023

Universal MIDI Packet (UMP) Format and MIDI 2.0 Protocol With MIDI 1.0 Protocol in UMP Format, MIDI Association Document: M2-104-UM, Document Version 1.1.1, Draft Date 2023-07-19, Published 2023-07-19

To be totally transparent, I am ambivalent about MIDI 2.0. I have a nagging feeling that it is a solution in search of a problem, concocted by a committee of corporate software hotshots, and that it is too convoluted to ever capture the interest of the vast majority of working musicians who use MIDI 1.0. I sense that there are those within the MMA who have the same feeling. The Executive Summary in MIDI Capability Inquiry begins:

MIDI is a longstanding and entrenched specification. The features of MIDI 1.0 continue to work well after many years. The basic semantic language of music does not change and as a result the existing definitions of MIDI as musical control messages continue to work remarkably well.

Recognizing that the MIDI 2.0 specifications are still drafts, I would encourage the FluidSynth developers to move slowly with regard to adding MIDI 2.0 capabilities. It might be helpful to state the goals for FluidSynth with reference to the MIDI 2.0 Specification Overview, Section 5, "Minimum Compatibility Requirements of MIDI 2.0". At least one of the following is needed to claim MIDI 2.0 capability:

A. MIDI-CI to at least its minimum requirements
B. UMP Data Format to at least its minimum requirements

Depending on whether A or B is supported there are additional things required.

Bidirectional communication is an important element of MIDI 2.0. I don't think this will pose any particular difficulty for FluidSynth but it is worth keeping in mind.

What might be more important is the classification as a Receiver and/or Sender. I believe FluidSynth is primarily a Receiver. AFAIK only the MIDI File Player, when used to send MIDI messages, causes FluidSynth to act as a Sender. It might be worthwhile to consider at least conceptually separating the MIDI File Player into a distinct MIDI Device, so there are no complications from having to treat FluidSynth as a MIDI Device capable of being both a Receiver and a Sender. Perhaps Function Blocks, as described in Universal MIDI Packet Section 6, p. 28, address this issue?

I think Universal MIDI Packet Section 6.2 MIDI 1.0 Byte Stream Ports, p 30, deserves careful study. MIDI 1.0 compatibility is something the MMA has given a lot of thought. Any work done adding MIDI 2.0 capabilities to FluidSynth should follow the guidance given in the MIDI 2.0 specs for maintaining MIDI 1.0 compatibility.

Again, I apologize for rehashing things that most people participating in this thread have probably thought about and moved beyond long ago. But hopefully having to explain these things to someone slow will provide additional clarity and focus to the work you are doing.
