
Face Detection in Flutter

Welcome to this comprehensive tutorial on implementing face detection in a Flutter application using the google_ml_kit and image_picker packages. Face detection is an essential feature in computer vision applications as it allows us to identify and locate human faces in images. By leveraging the capabilities of the Google ML Kit library, which provides a ready-to-use face detection API, we can easily integrate this functionality into our Flutter projects.

Follow these steps:

  1. Add the required dependencies to your pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  google_ml_kit: ^0.6.0
  image_picker: ^0.8.7+4

2. Import the necessary packages and classes in your Dart file:

import 'dart:io';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'package:google_ml_kit/google_ml_kit.dart';

3. Create a stateful widget called DetectWithImage that extends StatefulWidget and implements the UI and logic for face detection:

class DetectWithImage extends StatefulWidget {
  const DetectWithImage({Key? key}) : super(key: key);

  @override
  State<DetectWithImage> createState() => _DetectWithImageState();
}

4. Create a state class _DetectWithImageState that extends State<DetectWithImage> and contains the state variables and methods for face detection:

class _DetectWithImageState extends State<DetectWithImage> {
  final picker = ImagePicker();
  final FaceDetector faceDetector = GoogleMlKit.vision.faceDetector();
  File? _imageFile;
  List<Face> _faces = [];

  Future<void> _getImageAndDetectFaces() async {
    // pickImage replaces the deprecated getImage in image_picker 0.8.x.
    final pickedFile = await picker.pickImage(source: ImageSource.gallery);
    if (pickedFile != null) {
      setState(() {
        _imageFile = File(pickedFile.path);
      });
      try {
        final inputImage = InputImage.fromFile(_imageFile!);
        final faces = await faceDetector.processImage(inputImage);
        setState(() {
          _faces = faces;
        });
      } catch (e) {
        print('Error detecting faces: $e');
      }
    } else {
      print('No image selected.');
    }
  }

  @override
  void dispose() {
    faceDetector.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    // Build the UI here (see step 5).
  }
}
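By default the detector runs in fast mode with no landmarks or classification. If you want smiling/eyes-open probabilities or higher accuracy, you can pass options when creating the detector. The sketch below follows the google_ml_kit 0.6.x API; parameter names may differ slightly in other versions:

```dart
// A hedged sketch: tune the detector via FaceDetectorOptions.
// Parameter names follow google_ml_kit 0.6.x and may vary by version.
final FaceDetector faceDetector = GoogleMlKit.vision.faceDetector(
  FaceDetectorOptions(
    enableClassification: true,      // smiling / eyes-open probabilities
    enableLandmarks: true,           // eye, ear, nose, and mouth positions
    minFaceSize: 0.1,                // ignore faces smaller than 10% of the image
    mode: FaceDetectorMode.accurate, // trade speed for accuracy
  ),
);
```

With classification enabled, each detected Face exposes fields such as smilingProbability that you could add to the list tiles in step 5.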

5. Implement the build method of the _DetectWithImageState class to create the UI layout:

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text('Face Detection Demo'),
    ),
    body: Column(
      mainAxisAlignment: MainAxisAlignment.center,
      children: <Widget>[
        if (_imageFile != null)
          Image.file(
            _imageFile!,
            height: 300,
            width: double.infinity,
            fit: BoxFit.cover,
          ),
        SizedBox(height: 16),
        ElevatedButton(
          onPressed: _getImageAndDetectFaces,
          child: Text('Select Image'),
        ),
        SizedBox(height: 16),
        Text('Detected Faces: ${_faces.length}'),
        SizedBox(height: 16),
        Expanded(
          child: ListView.builder(
            itemCount: _faces.length,
            itemBuilder: (context, index) {
              final face = _faces[index];
              return ListTile(
                leading: Icon(Icons.face),
                title: Text('Face ${index + 1}'),
                subtitle: Text(
                  'Bounding Box: ${face.boundingBox}\n'
                  'Head Rotation (Y): ${face.headEulerAngleY}\n'
                  'Head Tilt (Z): ${face.headEulerAngleZ}',
                ),
                trailing: Icon(Icons.arrow_forward_ios),
              );
            },
          ),
        ),
      ],
    ),
  );
}
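The list view only prints coordinates. If you also want to draw the boxes over the photo, one option is to wrap the image in a Stack with a CustomPaint. The FaceBoxPainter below is a hypothetical sketch, not part of the tutorial code; it assumes the image is rendered at its original pixel size, so in practice each boundingBox must first be scaled by the ratio of the displayed size to the original image size:

```dart
// Hypothetical sketch: outlines each detected face's bounding box.
// Assumes boundingBox coordinates already match the canvas size.
class FaceBoxPainter extends CustomPainter {
  FaceBoxPainter(this.faces);
  final List<Face> faces;

  @override
  void paint(Canvas canvas, Size size) {
    final paint = Paint()
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2
      ..color = Colors.greenAccent;
    for (final face in faces) {
      canvas.drawRect(face.boundingBox, paint);
    }
  }

  @override
  bool shouldRepaint(FaceBoxPainter oldDelegate) => oldDelegate.faces != faces;
}
```

You would then stack it over the image, e.g. `CustomPaint(foregroundPainter: FaceBoxPainter(_faces), child: Image.file(_imageFile!))`.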

6. Don’t forget to add the necessary permissions to the AndroidManifest.xml file located in the android/app/src/main directory:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="your.package.name">

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <!-- Other permissions -->

    <application>
        <!-- Other application configurations -->
    </application>
</manifest>

Make sure to replace "your.package.name" with the actual package name of your Flutter app.
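A side note for iOS: image_picker also requires usage-description entries in ios/Runner/Info.plist, otherwise the app may crash when the picker opens. A minimal fragment (the description strings are placeholders you should adapt):

```xml
<key>NSPhotoLibraryUsageDescription</key>
<string>This app needs photo library access to pick images for face detection.</string>
<key>NSCameraUsageDescription</key>
<string>This app needs camera access to capture images for face detection.</string>
```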

That’s it! You’ve implemented a face detection feature using the google_ml_kit and image_picker packages in Flutter. When the user selects an image from the gallery, it will be displayed, and the face detection process will be performed on the image. The detected faces will be shown with their bounding box coordinates and head angles.

Code :

main.dart

import 'package:flutter/material.dart';
import 'faceDetectionWithImage.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      debugShowCheckedModeBanner: false,
      theme: ThemeData(
        fontFamily: 'Poppins',
        primarySwatch: Colors.blue,
      ),
      home: const DetectWithImage(),
    );
  }
}

faceDetectionWithImage.dart

import 'dart:io';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'package:google_ml_kit/google_ml_kit.dart';

class DetectWithImage extends StatefulWidget {
  const DetectWithImage({Key? key}) : super(key: key);

  @override
  State<DetectWithImage> createState() => _DetectWithImageState();
}

class _DetectWithImageState extends State<DetectWithImage> {
  final picker = ImagePicker();
  final FaceDetector faceDetector = GoogleMlKit.vision.faceDetector();
  File? _imageFile;
  List<Face> _faces = [];

  Future<void> _getImageAndDetectFaces() async {
    // pickImage replaces the deprecated getImage in image_picker 0.8.x.
    final pickedFile = await picker.pickImage(source: ImageSource.gallery);

    if (pickedFile != null) {
      setState(() {
        _imageFile = File(pickedFile.path);
      });

      try {
        final inputImage = InputImage.fromFile(_imageFile!);
        final faces = await faceDetector.processImage(inputImage);

        setState(() {
          _faces = faces;
        });
      } catch (e) {
        print('Error detecting faces: $e');
      }
    } else {
      print('No image selected.');
    }
  }

  @override
  void dispose() {
    faceDetector.close();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Face Detection Demo'),
      ),
      body: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        children: <Widget>[
          if (_imageFile != null)
            Image.file(
              _imageFile!,
              height: 300,
              width: double.infinity,
              fit: BoxFit.cover,
            ),
          SizedBox(height: 16),
          ElevatedButton(
            onPressed: _getImageAndDetectFaces,
            child: Text('Select Image'),
          ),
          SizedBox(height: 16),
          Text('Detected Faces: ${_faces.length}'),
          SizedBox(height: 16),
          Expanded(
            child: ListView.builder(
              itemCount: _faces.length,
              itemBuilder: (context, index) {
                final face = _faces[index];
                return ListTile(
                  leading: Icon(Icons.face),
                  title: Text('Face ${index + 1}'),
                  subtitle: Text(
                    'Bounding Box: ${face.boundingBox}\n'
                    'Head Rotation (Y): ${face.headEulerAngleY}\n'
                    'Head Tilt (Z): ${face.headEulerAngleZ}',
                  ),
                  trailing: Icon(Icons.arrow_forward_ios),
                );
              },
            ),
          ),
        ],
      ),
    );
  }
}
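A note on reading the results: headEulerAngleY is the yaw (how far the head is turned to the side) and headEulerAngleZ is the roll (sideways tilt), both in degrees. As a hypothetical illustration (not part of the tutorial code), a small pure-Dart helper could classify a pose from these angles; the 15° threshold is an arbitrary choice:

```dart
// Hypothetical helper: classify head pose from ML Kit Euler angles.
// eulerY = yaw (head turned sideways), eulerZ = roll (head tilted),
// both in degrees. The 15° cutoff is an assumption, not from ML Kit.
String describeHeadPose(double eulerY, double eulerZ) {
  final turned = eulerY.abs() > 15; // yaw beyond ~15° reads as "turned"
  final tilted = eulerZ.abs() > 15; // roll beyond ~15° reads as "tilted"
  if (turned && tilted) return 'turned and tilted';
  if (turned) return 'turned';
  if (tilted) return 'tilted';
  return 'frontal';
}
```

For a roughly frontal face both angles stay near zero, so `describeHeadPose(2, -3)` returns 'frontal'.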

Implementation :

[App demo screenshots]
Follow my Instagram account:

https://www.instagram.com/nature.pulse_/

Conclusion :

In this tutorial, we successfully implemented face detection in Flutter using the google_ml_kit and image_picker packages. The DetectWithImage widget lets the user select an image from the gallery, then displays it on screen. The faceDetector from the google_ml_kit package processes the image and detects the faces present in it. The detected faces are then displayed in a ListView, showing the face number, bounding box coordinates, and head angles.

By following the step-by-step guide and providing the necessary permissions in the AndroidManifest.xml file, we have created a functional face detection feature in our Flutter app. This implementation can be further enhanced by incorporating additional functionality, such as face recognition or emotion detection, using the capabilities provided by the google_ml_kit package.
