Integrating FFmpeg in Android Without Losing Your Mind
A practical guide to using FFmpeg on Android — from choosing the right library to surviving native crashes, background processing, and real-time progress tracking.
FFmpeg is the Swiss Army knife of video processing. It can compress, convert, trim, merge, extract audio, and do about a thousand other things. When I built Vixit, my video compressor app, I needed all of that on Android. What I didn't expect was how painful the integration would be.
This is what I wish someone had told me before I started.
Choosing the Right FFmpeg Library
There are several ways to use FFmpeg on Android. You can compile it yourself from source using the NDK (don't — unless you enjoy suffering). You can use mobile-ffmpeg (deprecated). Or you can use FFmpegKit, which is the maintained successor and what I'd recommend.
FFmpegKit provides pre-built binaries for multiple architectures (arm64-v8a, armeabi-v7a, x86, x86_64). You pick a package based on what codecs you need. The full package supports everything but adds about 30 MB to your APK. The min package covers H.264, AAC, and the basics at around 8 MB. For Vixit, I went with min-gpl which includes x264 encoding — the sweet spot of size versus capability.
Adding it to your project is one Gradle dependency. Getting it to actually work reliably is another story.
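Assuming a Gradle Kotlin DSL build, that one dependency looks roughly like this (the `ffmpeg-kit-min-gpl` coordinate is the published FFmpegKit artifact; the version shown is illustrative, so check the releases page for the current one):

```kotlin
// app/build.gradle.kts — pick exactly ONE package variant.
// min-gpl adds x264 encoding on top of the min package.
dependencies {
    implementation("com.arthenica:ffmpeg-kit-min-gpl:6.0-2") // version is illustrative
}
```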
The Native Crash Nightmare
My first week with FFmpegKit was smooth. Compression worked, conversion worked, everything looked great. Then I pushed a beta to a group of testers and within 24 hours, I had three crash reports that looked like this:
Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR)
That's it. No Kotlin stack trace. No helpful exception message. Just a native segfault deep inside the FFmpeg binary. Firebase Crashlytics captured the crash but the stack trace was just memory addresses inside libavcodec.so.
The crash only happened on Samsung devices running Android 12. I spent two days trying to reproduce it on my emulator (which doesn't use the same ARM architecture). The fix turned out to be a known memory alignment issue where FFmpegKit's buffer allocation didn't match Samsung's custom memory management layer. I found the solution buried in a GitHub issue from 2019 with three upvotes.
What I learned: When working with native libraries, your standard Android debugging tools are almost useless. You need to learn how to read tombstone files, use ndk-stack or addr2line to symbolicate raw memory addresses into function names, and get very comfortable searching GitHub issues with exact error signatures.
Background Processing with WorkManager
Video compression is slow. A 2-minute 1080p video takes 30-90 seconds to compress depending on the device. You absolutely cannot run this on the main thread, and you can't use a simple coroutine either — if the user leaves the app, the process gets killed.
I used WorkManager with a CoroutineWorker that calls setForeground() to show a persistent notification with progress. The setup looks straightforward in the docs, but there are gotchas. WorkManager requires the work input to be serializable: you pass it as Data objects, which only support primitives, strings, and arrays of those. That means you can't pass a complex FFmpeg command object; you serialize the command as a string array and reconstruct it inside the worker.
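A minimal sketch of that round trip, with the Android plumbing reduced to comments (KEY_ARGS and buildCompressArgs are illustrative names, not FFmpegKit or WorkManager API):

```kotlin
// WorkManager's Data supports string arrays, so the whole FFmpeg command
// travels as one. KEY_ARGS and buildCompressArgs are illustrative names.
val KEY_ARGS = "ffmpeg_args"

// Built on the caller's side before enqueueing the work request.
fun buildCompressArgs(input: String, output: String, crf: Int): Array<String> =
    arrayOf("-i", input, "-c:v", "libx264", "-crf", crf.toString(),
            "-preset", "medium", output)

// Caller:  Data.Builder().putStringArray(KEY_ARGS, buildCompressArgs(...)).build()
// Worker:  val args = inputData.getStringArray(KEY_ARGS)!!
//          FFmpegKit.executeWithArguments(args)
```

Passing the array to executeWithArguments, rather than joining it back into a single string, sidesteps re-quoting file paths that contain spaces.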
The other issue is cancellation. When a user taps "Cancel" on the notification, you need to send a cancel signal to the running FFmpeg session. FFmpegKit supports cancel(), but there's a race condition — if you cancel while FFmpeg is writing to a file, you get a corrupted output. I added a cleanup step that deletes partial output files after any cancellation or error.
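The cleanup step can be sketched as a small wrapper, shown here as pure Kotlin under the assumption that the FFmpeg session runs inside the block (runWithCleanup is an illustrative name, not the article's actual code):

```kotlin
import java.io.File

// However the session ends, a partially written output file must not
// survive a cancellation or failure. runWithCleanup is an illustrative name.
fun <T> runWithCleanup(outputPath: String, block: () -> T): T {
    try {
        return block()
    } catch (t: Throwable) {
        // Cancelled or crashed mid-write: the output file is untrustworthy.
        File(outputPath).delete()
        throw t
    }
}
```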
Real-Time Progress Tracking
FFmpeg has no progress callback of its own. Instead, it writes statistics to stderr as it runs: frame count, bitrate, speed, and the current timestamp. FFmpegKit exposes this through a StatisticsCallback that fires every time FFmpeg emits a stats line.
The trick is converting FFmpeg's progress into a percentage. You need to know the total duration of the input video first (which you get by running FFprobeKit.getMediaInformation()), then compare the current timestamp from the statistics callback against that total. I wrapped this in a Flow<Float> that emits values from 0.0 to 1.0, which feeds directly into a Compose progress indicator.
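The conversion itself is a one-liner once both numbers are in hand. A sketch, assuming the stats timestamp and total duration are both in milliseconds (progressFraction is an illustrative name; clamping matters because FFmpeg can briefly report times past the nominal duration):

```kotlin
// totalDurationMs comes from probing the input up front;
// statTimeMs is the current timestamp from the statistics callback.
fun progressFraction(statTimeMs: Double, totalDurationMs: Double): Float {
    if (totalDurationMs <= 0.0) return 0f  // unknown duration: no estimate possible
    return (statTimeMs / totalDurationMs).toFloat().coerceIn(0f, 1f)
}
```

Emitting these values through a Flow<Float> keeps the UI layer decoupled from the callback thread.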
One thing that tripped me up: the statistics callback fires on a background thread, but you can't update the WorkManager notification from just any thread. I had to collect the flow on the main dispatcher and batch updates to avoid flooding the notification system — updating at most once per second.
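The once-per-second gate can be sketched as a tiny throttle with an injectable clock so it's testable (UpdateThrottle is an illustrative name, not a WorkManager or FFmpegKit API):

```kotlin
// Returns true at most once per minIntervalMs; the first call always passes.
class UpdateThrottle(
    private val minIntervalMs: Long = 1_000,
    private val now: () -> Long = System::currentTimeMillis,
) {
    private var lastEmit: Long? = null

    fun shouldEmit(): Boolean {
        val t = now()
        val last = lastEmit
        if (last != null && t - last < minIntervalMs) return false
        lastEmit = t
        return true
    }
}
```

Dropping intermediate updates is safe here because progress is monotonic; the next emitted value supersedes everything skipped.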
Handling Different Devices
Android fragmentation is real, and it hits hardest with video processing. Some devices report video dimensions incorrectly (rotated by 90 degrees in metadata but not in pixels). Some devices have hardware encoders that produce slightly different output than software encoding. Some cheap devices have so little RAM that FFmpeg gets OOM-killed during compression of 4K video.
I built a device capability check that runs before starting any operation. It reads the available memory, checks for hardware codec support, and adjusts the FFmpeg command accordingly. On low-memory devices, I force a resolution downscale to 720p before compression. On devices without hardware encoding, I reduce the encoding preset from medium to ultrafast (worse compression ratio, but it actually finishes).
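Reduced to its decision logic, the capability check looks something like this (the threshold, the EncodePlan shape, and planFor are illustrative; real code reads ActivityManager.MemoryInfo and queries MediaCodecList for encoder support):

```kotlin
data class EncodePlan(val maxHeight: Int, val preset: String)

fun planFor(availMemMb: Int, hasHwEncoder: Boolean): EncodePlan {
    // Low-RAM devices: downscale to 720p so FFmpeg isn't OOM-killed on 4K input.
    val maxHeight = if (availMemMb < 1_024) 720 else 2_160
    // No hardware encoder: trade compression ratio for actually finishing.
    val preset = if (hasHwEncoder) "medium" else "ultrafast"
    return EncodePlan(maxHeight, preset)
}
```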
The Command Builder Pattern
FFmpeg commands are just strings, which makes them incredibly error-prone. A single wrong flag can produce silent corruption — the video looks fine until you seek to a specific timestamp and get artifacts.
I built a Kotlin DSL for constructing FFmpeg commands. Instead of writing raw strings like -i input.mp4 -vcodec libx264 -crf 28 -preset medium output.mp4, I have typed builders: VideoCommand.compress { input(uri); quality(Quality.MEDIUM); output(outputPath) }. The builder validates parameters before generating the command string, so missing input files, out-of-range CRF values, and incompatible codec combinations fail fast with a clear error instead of surfacing as silent corruption deep inside FFmpeg.
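A stripped-down sketch of the idea (the article's VideoCommand DSL is richer; the CompressBuilder class, its property-based API, and the validation shown here are illustrative, though the 0..51 CRF range is libx264's actual valid range):

```kotlin
enum class Quality(val crf: Int) { HIGH(23), MEDIUM(28), LOW(32) }

class CompressBuilder {
    var input: String? = null
    var output: String? = null
    var crf: Int = Quality.MEDIUM.crf

    fun quality(q: Quality) { crf = q.crf }

    fun build(): Array<String> {
        val inp = requireNotNull(input) { "input is required" }
        val out = requireNotNull(output) { "output is required" }
        require(crf in 0..51) { "CRF must be 0..51, got $crf" } // libx264 range
        return arrayOf("-i", inp, "-c:v", "libx264", "-crf", crf.toString(),
                       "-preset", "medium", out)
    }
}

fun compress(block: CompressBuilder.() -> Unit): Array<String> =
    CompressBuilder().apply(block).build()
```

The payoff is that a malformed command throws at build time, in Kotlin, with a message naming the bad parameter, rather than producing output that only corrupts at one timestamp.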
What I'd Do Differently
I'd start with a smaller scope. Vixit launched with compression, conversion, trimming, merging, and audio extraction. Each feature multiplied the testing matrix — five operations across four architectures across three Android versions. The initial release should have been compression only, with other features added after the core was stable.
I'd also invest in automated testing earlier. Video processing is hard to unit test (you need actual video files and they're slow), but I could have built a suite of integration tests that run on CI with known input/output pairs. Instead, I tested manually for the first three months, which meant bugs slipped through.
The bottom line: FFmpeg on Android is powerful but hostile. Budget twice as much time as you think for native crashes, device fragmentation, and background processing edge cases. When it works, it's magical. Getting it to work reliably is the hard part.