
Speech to Text in Flutter Application


Speech-to-text functionality in Flutter provides a convenient and intuitive way for users to interact with applications using their voice. By converting spoken words into text, users can perform various tasks, such as dictating messages, entering text in forms, or controlling the app through voice commands. This feature enhances accessibility, allowing individuals with physical disabilities or those who prefer voice input to use the app effectively.

To utilize speech-to-text in a Flutter application, we need to follow a few steps. Firstly, we include the necessary dependencies, such as the avatar_glow package for UI enhancements and the speech_to_text package for speech recognition capabilities. These packages can be added to the pubspec.yaml file.

After setting up the dependencies, we initialize the speech recognition engine. This step ensures that the necessary resources are prepared and the device’s microphone is ready for input. By calling the speechToText.initialize() method, we can check if the device supports speech recognition and handle any potential errors.

Next, we build the user interface to provide a seamless user experience. We can create a visually appealing interface using Flutter’s widget system, incorporating components like buttons and text displays. The AvatarGlow widget is particularly useful for adding a glowing effect to the microphone button, visually indicating the app’s listening state.

Once the user interface is set up, we can implement the speech recognition functionality. By utilizing the speechToText.listen() method, we start listening for speech input. The method includes parameters such as the duration to listen for and a callback function to handle the recognized words. We can control the start and stop of speech recognition based on user interactions, such as tapping the microphone button.

As the user speaks, we update the user interface with the recognized words. By using Flutter’s state management system, we can dynamically display the recognized text in a designated area. This ensures that the user can see the converted speech in real-time, providing feedback and improving the overall user experience.

1. Add the necessary dependencies to pubspec.yaml:

speech_to_text: ^6.1.1
avatar_glow: ^2.0.2
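
In pubspec.yaml, these entries go under the dependencies section alongside the Flutter SDK entry. A minimal sketch (the version numbers are the ones used in this tutorial; newer releases may exist):

dependencies:
  flutter:
    sdk: flutter
  speech_to_text: ^6.1.1
  avatar_glow: ^2.0.2

After editing the file, run flutter pub get to download the packages.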

2. Add the necessary permissions

Android

<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.BLUETOOTH"/>
<uses-permission android:name="android.permission.BLUETOOTH_ADMIN"/>
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT"/>

Open the file android/app/build.gradle and find the line compileSdkVersion flutter.compileSdkVersion. Replace this line with compileSdkVersion 32.

Next, find the following two lines:

minSdkVersion flutter.minSdkVersion
targetSdkVersion flutter.targetSdkVersion

Update these to the versions shown in the example below:

minSdkVersion 24
targetSdkVersion 32
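
Taken together, the relevant part of android/app/build.gradle ends up looking roughly like this (a sketch; your applicationId, signing config, and other settings will differ):

android {
    compileSdkVersion 32

    defaultConfig {
        // applicationId, versionCode, versionName, etc. stay as generated
        minSdkVersion 24
        targetSdkVersion 32
    }
}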

iOS

For your application to access the microphone on your iPhone or iPad, you’ll need to grant permission to this component. Inside your Podfile, locate the line: flutter_additional_ios_build_settings(target) and below this add the following:

target.build_configurations.each do |config|
  config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
    '$(inherited)',
    # dart: PermissionGroup.microphone
    'PERMISSION_MICROPHONE=1',
  ]
end

Then inside your Info.plist, within the <dict></dict> block, add the following two lines:

<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to convert your speech to text.</string>

3. Create a StatefulWidget:

class Stt extends StatefulWidget {
  const Stt({Key? key}) : super(key: key);

  @override
  State<Stt> createState() => _SttState();
}

This code defines a StatefulWidget called Stt. The Stt class extends StatefulWidget and overrides the createState() method to create the corresponding state.

4. Create State class:

class _SttState extends State<Stt> {
  var text = "Hold the button and start speaking";
  var isListening = false;
  Color bgColor = const Color(0xff00A67E);

  SpeechToText speechToText = SpeechToText();

  @override
  void initState() {
    super.initState();
    checkMicrophoneAvailability();
  }

  // Rest of the code...
}

This code defines the state class _SttState, which extends the State class. It contains the state variables text, isListening, and bgColor. It also initializes an instance of the SpeechToText class for speech recognition functionality.

The initState() method is overridden to call the checkMicrophoneAvailability() method and initialize the microphone availability.

5. Check Microphone Availability:

void checkMicrophoneAvailability() async {
  bool available = await speechToText.initialize();
  if (available) {
    if (kDebugMode) {
      print('Microphone available: $available');
    }
  } else {
    if (kDebugMode) {
      print("The user has denied the use of speech recognition.");
    }
  }
}

This method checks whether the microphone is available for speech recognition. It initializes the speech recognition engine using speechToText.initialize() and logs whether the device supports speech recognition or the user has denied permission.

6. Gesture Detector and Speech Recognition:

GestureDetector(
  onTap: () async {
    if (!isListening) {
      var available = await speechToText.initialize();
      if (available) {
        setState(() {
          isListening = true;
        });
        speechToText.listen(
          listenFor: const Duration(days: 1),
          onResult: (result) {
            setState(() {
              text = result.recognizedWords;
            });
          },
        );
      }
    } else {
      setState(() {
        isListening = false;
      });
      speechToText.stop();
    }
  },
  child: CircleAvatar(
    // Rest of the code...
  ),
),

This code defines an onTap callback for the GestureDetector wrapped around the microphone button. When the button is tapped, it checks if the app is currently listening for speech. If not, it initializes the speech recognition using speechToText.initialize(), starts listening for speech using speechToText.listen(), and updates the UI with recognized words.

If the app is already listening, it stops the speech recognition using speechToText.stop().

Full Code:

main.dart

import 'package:flutter/material.dart';

import 'speechToText.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        fontFamily: 'Poppins',
        primarySwatch: Colors.green,
      ),
      home: const Stt(),
    );
  }
}

speechToText.dart

import 'package:avatar_glow/avatar_glow.dart';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:speech_to_text/speech_to_text.dart';

class Stt extends StatefulWidget {
  const Stt({Key? key}) : super(key: key);

  @override
  State<Stt> createState() => _SttState();
}

class _SttState extends State<Stt> {
  var text = "Hold the button and start speaking";
  var isListening = false;
  Color bgColor = const Color(0xff00A67E);

  SpeechToText speechToText = SpeechToText();

  @override
  void initState() {
    super.initState();
    checkMicrophoneAvailability();
  }

  void checkMicrophoneAvailability() async {
    bool available = await speechToText.initialize();
    if (available) {
      if (kDebugMode) {
        print('Microphone available: $available');
      }
    } else {
      if (kDebugMode) {
        print("The user has denied the use of speech recognition.");
      }
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      floatingActionButtonLocation: FloatingActionButtonLocation.centerFloat,
      floatingActionButton: AvatarGlow(
        endRadius: 75.0,
        animate: isListening,
        duration: const Duration(milliseconds: 2000),
        glowColor: bgColor,
        repeat: true,
        repeatPauseDuration: const Duration(milliseconds: 100),
        showTwoGlows: true,
        child: GestureDetector(
          onTap: () async {
            if (!isListening) {
              var available = await speechToText.initialize();
              if (available) {
                setState(() {
                  isListening = true;
                });
                speechToText.listen(
                  listenFor: const Duration(days: 1),
                  onResult: (result) {
                    setState(() {
                      text = result.recognizedWords;
                    });
                  },
                );
              }
            } else {
              setState(() {
                isListening = false;
              });
              speechToText.stop();
            }
          },
          child: CircleAvatar(
            backgroundColor: bgColor,
            radius: 30,
            child: Icon(
              isListening ? Icons.mic : Icons.mic_off,
              color: Colors.white,
            ),
          ),
        ),
      ),
      appBar: AppBar(
        leading: const Icon(Icons.sort_rounded, color: Colors.white),
        centerTitle: true,
        backgroundColor: bgColor,
        title: const Text('Speech to Text'),
      ),
      body: GestureDetector(
        behavior: HitTestBehavior.translucent,
        onTap: () {
          // Unfocus the text when the user taps outside the container.
          FocusScope.of(context).unfocus();
        },
        child: SingleChildScrollView(
          reverse: true,
          physics: const BouncingScrollPhysics(),
          child: Container(
            height: MediaQuery.of(context).size.height * 0.7,
            width: MediaQuery.of(context).size.width,
            alignment: Alignment.center,
            padding: const EdgeInsets.symmetric(horizontal: 24, vertical: 16),
            margin: const EdgeInsets.only(bottom: 150),
            child: SelectableText(
              text,
              style: TextStyle(
                fontSize: 18,
                color: isListening ? Colors.black87 : Colors.black54,
              ),
            ),
          ),
        ),
      ),
    );
  }
}
