This repository has been archived by the owner on Dec 17, 2023. It is now read-only.

docs: clarified meaning of the legacy editions #426

Merged
merged 2 commits on Oct 21, 2021
11 changes: 6 additions & 5 deletions google/cloud/dialogflow_v2/services/sessions/async_client.py
@@ -382,11 +382,12 @@ def streaming_detect_intent(

Multiple response messages can be returned in order:

-  1. If the input was set to streaming audio, the first
-     one or more messages contain recognition_result.
-     Each recognition_result represents a more complete
-     transcript of what the user said. The last
-     recognition_result has is_final set to true.
+  1. If the StreamingDetectIntentRequest.input_audio
+     field was set, the recognition_result field is
+     populated for one or more messages. See the
+     [StreamingRecognitionResult][google.cloud.dialogflow.v2.StreamingRecognitionResult]
+     message for details about the result message
+     sequence.
2. The next message contains response_id,
query_result and optionally webhook_status if a
WebHook was called.
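To make the message ordering described in this docstring concrete, here is a minimal sketch of driving `streaming_detect_intent` and consuming both kinds of response messages. It uses the synchronous `SessionsClient` for brevity (the async client yields the same sequence), and `my-project`, `my-session`, and `audio_chunks` are hypothetical placeholders, not part of this change.

```python
from google.cloud import dialogflow_v2 as dialogflow


def stream_audio_to_dialogflow(audio_chunks):
    """Stream audio and print both kinds of response messages in order."""
    client = dialogflow.SessionsClient()
    # "my-project" and "my-session" are hypothetical placeholder IDs.
    session = client.session_path("my-project", "my-session")

    audio_config = dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )

    def requests():
        # The first request carries the session and audio configuration;
        # the input_audio field is left unset here.
        yield dialogflow.StreamingDetectIntentRequest(
            session=session,
            query_input=dialogflow.QueryInput(audio_config=audio_config),
        )
        # Subsequent requests set input_audio, which is what produces
        # the recognition_result messages described in point 1.
        for chunk in audio_chunks:
            yield dialogflow.StreamingDetectIntentRequest(input_audio=chunk)

    for response in client.streaming_detect_intent(requests=requests()):
        if response.recognition_result.transcript:
            # Point 1: one or more interim/final transcripts.
            print("transcript:", response.recognition_result.transcript)
        if response.query_result.query_text:
            # Point 2: the message with response_id and query_result.
            print("intent:", response.query_result.intent.display_name)
```

Splitting the request stream this way mirrors the two-phase response sequence: audio bytes in, recognition results back, then a single query result.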
11 changes: 6 additions & 5 deletions google/cloud/dialogflow_v2/services/sessions/client.py
@@ -592,11 +592,12 @@ def streaming_detect_intent(

Multiple response messages can be returned in order:

-  1. If the input was set to streaming audio, the first
-     one or more messages contain recognition_result.
-     Each recognition_result represents a more complete
-     transcript of what the user said. The last
-     recognition_result has is_final set to true.
+  1. If the StreamingDetectIntentRequest.input_audio
+     field was set, the recognition_result field is
+     populated for one or more messages. See the
+     [StreamingRecognitionResult][google.cloud.dialogflow.v2.StreamingRecognitionResult]
+     message for details about the result message
+     sequence.
2. The next message contains response_id,
query_result and optionally webhook_status if a
WebHook was called.
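The same docstring also describes the second kind of message (point 2). A hedged sketch of extracting it, relying only on the documented ordering: the query result arrives in the last message of the stream, so we drain the stream first. `responses` is assumed to be the iterable returned by `streaming_detect_intent`, as in the previous sketch.

```python
from typing import Iterable, Optional

from google.cloud import dialogflow_v2 as dialogflow


def final_query_result(
    responses: Iterable[dialogflow.StreamingDetectIntentResponse],
) -> Optional[dialogflow.QueryResult]:
    """Drain the stream and return the query result from the last message."""
    last = None
    for last in responses:
        pass  # earlier messages only carry recognition_result
    if last is None:
        return None  # empty stream
    print("response id:", last.response_id)
    print("query text:", last.query_result.query_text)
    print("fulfillment:", last.query_result.fulfillment_text)
    # webhook_status is populated only if a webhook was called.
    if last.webhook_status.code or last.webhook_status.message:
        print("webhook status:", last.webhook_status.message)
    return last.query_result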
73 changes: 41 additions & 32 deletions google/cloud/dialogflow_v2/types/session.py
@@ -515,11 +515,11 @@ class StreamingDetectIntentResponse(proto.Message):

Multiple response messages can be returned in order:

-    1. If the input was set to streaming audio, the first one or more
-       messages contain ``recognition_result``. Each
-       ``recognition_result`` represents a more complete transcript of
-       what the user said. The last ``recognition_result`` has
-       ``is_final`` set to ``true``.
+    1. If the ``StreamingDetectIntentRequest.input_audio`` field was
+       set, the ``recognition_result`` field is populated for one or
+       more messages. See the
+       [StreamingRecognitionResult][google.cloud.dialogflow.v2.StreamingRecognitionResult]
+       message for details about the result message sequence.

2. The next message contains ``response_id``, ``query_result`` and
optionally ``webhook_status`` if a WebHook was called.
@@ -570,33 +570,42 @@ class StreamingRecognitionResult(proto.Message):
the audio that is currently being processed or an indication that
this is the end of the single requested utterance.

-    Example:
-
-    1. transcript: "tube"
-
-    2. transcript: "to be a"
-
-    3. transcript: "to be"
-
-    4. transcript: "to be or not to be" is_final: true
-
-    5. transcript: " that's"
-
-    6. transcript: " that is"
-
-    7. message_type: ``END_OF_SINGLE_UTTERANCE``
-
-    8. transcript: " that is the question" is_final: true
-
-    Only two of the responses contain final results (#4 and #8 indicated
-    by ``is_final: true``). Concatenating these generates the full
-    transcript: "to be or not to be that is the question".
-
-    In each response we populate:
-
-    - for ``TRANSCRIPT``: ``transcript`` and possibly ``is_final``.
-
-    - for ``END_OF_SINGLE_UTTERANCE``: only ``message_type``.
+    While end-user audio is being processed, Dialogflow sends a series
+    of results. Each result may contain a ``transcript`` value. A
+    transcript represents a portion of the utterance. While the
+    recognizer is processing audio, transcript values may be interim
+    values or finalized values. Once a transcript is finalized, the
+    ``is_final`` value is set to true and processing continues for the
+    next transcript.
+
+    If
+    ``StreamingDetectIntentRequest.query_input.audio_config.single_utterance``
+    was true, and the recognizer has completed processing audio, the
+    ``message_type`` value is set to ``END_OF_SINGLE_UTTERANCE`` and the
+    following (last) result contains the last finalized transcript.
+
+    The complete end-user utterance is determined by concatenating the
+    finalized transcript values received for the series of results.
+
+    In the following example, single utterance is enabled. If single
+    utterance were not enabled, result 7 would not occur.
+
+    ::
+
+       Num | transcript              | message_type            | is_final
+       --- | ----------------------- | ----------------------- | --------
+       1   | "tube"                  | TRANSCRIPT              | false
+       2   | "to be a"               | TRANSCRIPT              | false
+       3   | "to be"                 | TRANSCRIPT              | false
+       4   | "to be or not to be"    | TRANSCRIPT              | true
+       5   | "that's"                | TRANSCRIPT              | false
+       6   | "that is"               | TRANSCRIPT              | false
+       7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
+       8   | " that is the question" | TRANSCRIPT              | true
+
+    Concatenating the finalized transcripts with ``is_final`` set to
+    true yields the complete utterance: "to be or not to be that is
+    the question".

Attributes:
message_type (google.cloud.dialogflow_v2.types.StreamingRecognitionResult.MessageType):
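As a worked companion to the table in the new docstring, here is a sketch of assembling the complete utterance from a sequence of `StreamingRecognitionResult` messages (e.g. the `recognition_result` fields of the streamed responses). Only rows 4 and 8 contribute, and the leading space kept in row 8's transcript is what makes plain concatenation correct. This is an illustrative sketch, not code from this PR.

```python
from typing import Iterable

from google.cloud import dialogflow_v2 as dialogflow


def assemble_utterance(
    results: Iterable[dialogflow.StreamingRecognitionResult],
) -> str:
    """Concatenate finalized transcripts into the complete utterance."""
    end_of_utterance = (
        dialogflow.StreamingRecognitionResult.MessageType.END_OF_SINGLE_UTTERANCE
    )
    finalized = []
    for result in results:
        if result.message_type == end_of_utterance:
            # Row 7: no transcript; one more finalized result follows.
            continue
        if result.is_final:
            # Rows 4 and 8 in the table above.
            finalized.append(result.transcript)
    # " that is the question" keeps its leading space, so a plain join
    # reproduces "to be or not to be that is the question".
    return "".join(finalized)
```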