
AzureSpeechRecognition

Demonstrates how to use the AzureSpeechRecognition plugin.

Getting Started

This project is a starting point for using the Azure Speech Recognition service.

To use this plugin you must first create an account on the Azure Cognitive Services page.

Installation

To install the package, add the latest version to the dependencies section of your pubspec.yaml:

dependencies:
  azure_speech_recognition: ^0.8.2

Usage

import 'package:azure_speech_recognition/azure_speech_recognition.dart';
import 'package:flutter/services.dart'; // for PlatformException, used in the examples below

Initialize

There are two types of initializers:

Simple initializer

Use this in every case except intent recognition. The default language is "en-US", but you can pass any supported locale.

AzureSpeechRecognition.initialize("your_subscription_key", "your_server_region", lang: "it-IT");

Intent initializer

Use this only for intent recognition; the appId is the application ID of your LUIS (Language Understanding) app. The default language is "en-US", but you can pass any supported locale.

AzureSpeechRecognition.initializeLanguageUnderstading("your_language_subscription_key", "your_language_server_region", "your_language_appId", lang: "it-IT");

Types of recognition

Simple voice recognition

The response is given at the end of the recognition.

AzureSpeechRecognition _speechAzure;
String subKey = "your_key";
String region = "your_server_region";
String lang = "it-IT";

void activateSpeechRecognizer(){
    // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(subKey, region, lang: lang);
  
  _speechAzure.setFinalTranscription((text) {
    // do what you want with your final transcription
  });

  _speechAzure.setRecognitionStartedHandler(() {
   // called at the start of recognition (it could also not be used)
  });

}

  @override
  void initState() {
    super.initState();

    _speechAzure = AzureSpeechRecognition();

    activateSpeechRecognizer();
  }



Future recognizeVoice() async {
  try {
    AzureSpeechRecognition.simpleVoiceRecognition();
  } on PlatformException catch (e) {
    print("Failed to start the recognition: '${e.message}'.");
  }
}
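
Putting it together, here is a minimal sketch of a page that runs a simple recognition and shows the result. This is an illustrative example, not part of the plugin: the RecognizerPage widget name and the placeholder key and region are assumptions.

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:azure_speech_recognition/azure_speech_recognition.dart';

class RecognizerPage extends StatefulWidget {
  @override
  _RecognizerPageState createState() => _RecognizerPageState();
}

class _RecognizerPageState extends State<RecognizerPage> {
  AzureSpeechRecognition _speechAzure;
  String _transcription = "";

  @override
  void initState() {
    super.initState();
    _speechAzure = AzureSpeechRecognition();
    // MANDATORY INITIALIZATION (placeholder credentials)
    AzureSpeechRecognition.initialize("your_key", "your_server_region", lang: "it-IT");
    _speechAzure.setFinalTranscription((text) {
      // show the final transcription once recognition completes
      setState(() => _transcription = text);
    });
  }

  Future _recognizeVoice() async {
    try {
      AzureSpeechRecognition.simpleVoiceRecognition();
    } on PlatformException catch (e) {
      print("Failed to start the recognition: '${e.message}'.");
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(child: Text(_transcription)),
      floatingActionButton: FloatingActionButton(
        onPressed: _recognizeVoice,
        child: Icon(Icons.mic),
      ),
    );
  }
}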

Voice recognition with microphone streaming

Partial phrases are delivered through setRecognitionResultHandler as they are recognized, and the final result is returned through the setFinalTranscription handler.

void activateSpeechRecognizer(){
    // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(subKey, region, lang: lang);
  
  _speechAzure.setFinalTranscription((text) {
    // do what you want with your final transcription
  });

  _speechAzure.setRecognitionResultHandler((text) {
    // called every time a word is recognized; do what you want with the partial transcription
    // if you display the text, you can call setState() here to update it with the partial result
  });

  _speechAzure.setRecognitionStartedHandler(() {
   // called at the start of recognition (it could also not be used)
  });

}


Future recognizeVoiceMicStreaming() async {
  try {
    AzureSpeechRecognition.micStream();
  } on PlatformException catch (e) {
    print("Failed to start the recognition: '${e.message}'.");
  }
}
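
Since setRecognitionResultHandler fires on every recognized word, a common pattern is to push each partial result into the widget state so the displayed text updates live. A minimal sketch, assuming it runs inside a State class set up as above (_partialText is a hypothetical field name used only for illustration):

String _partialText = "";

void wireStreamingHandlers() {
  _speechAzure.setRecognitionResultHandler((text) {
    // called on each partial hypothesis; refresh the displayed string
    setState(() => _partialText = text);
  });
  _speechAzure.setFinalTranscription((text) {
    // replace the partial text with the final transcription
    setState(() => _partialText = text);
  });
}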

Continuous voice recognition: CURRENTLY NOT WORKING

Partial phrases are delivered through the recognition result handler; when the function is called again, the final response is returned through the setFinalTranscription handler.

Voice intent recognition

Partial phrases are delivered through setRecognitionResultHandler as they are recognized, and the final result is returned through the setFinalTranscription handler.

void activateSpeechRecognizer(){
    // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initializeLanguageUnderstading(subKey, region, appId, lang: lang);
  
  _speechAzure.setFinalTranscription((text) {
    // do what you want with your final transcription
  });

  _speechAzure.setRecognitionResultHandler((text) {
    // called every time a word is recognized; do what you want with the partial transcription
    // if you display the text, you can call setState() here to update it with the partial result
  });

  _speechAzure.setRecognitionStartedHandler(() {
   // called at the start of recognition (it could also not be used)
  });

}


Future speechIntentRecognizer() async {
  try {
    AzureSpeechRecognition.intentRecognizer();
  } on PlatformException catch (e) {
    print("Failed to start the recognition: '${e.message}'.");
  }
}

Voice recognition with keyword: CURRENTLY NOT WORKING

This method requires a keywords file in the assets folder; the name of that file is the mandatory parameter. Partial phrases are delivered through setRecognitionResultHandler as they are recognized, and the final result is returned through the setFinalTranscription handler.

Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.
