A mobile application combining computer vision and machine learning to detect pool tables, locate ball positions, and map them onto ShotStudio-style overlays.
This project builds on the pix2pockets dataset and research as a starting point for ball detection training.
Tableizer consists of three main components:

- **Native C++ Vision Engine** (`lib/`) - Core detection and image processing with ONNX Runtime
- **Flutter Mobile App** (`app/`) - iOS and Android real-time capture interface
- **Python Tooling** (`python/`) - Model training, dataset transforms, and debugging utilities
```
tableizer/
├── app/                  # Flutter mobile application
│   ├── lib/              # Dart source code
│   │   ├── controllers/  # Business logic controllers
│   │   ├── models/       # Data structures
│   │   ├── screens/      # UI screens
│   │   ├── services/     # FFI services and detection logic
│   │   └── widgets/      # Reusable UI components
│   ├── ios/              # iOS native configuration
│   ├── android/          # Android native configuration
│   └── assets/           # App assets (models, images)
├── lib/                  # C++ native library
│   ├── src/              # Source files
│   ├── include/          # Header files
│   ├── libs/             # Dependencies (OpenCV, ONNX Runtime)
│   ├── build_ios.sh      # iOS build script
│   └── build_android.sh  # Android build script
├── python/               # Python tooling
├── tableizer/            # Trained YOLO models
└── data/                 # Datasets and training images
```
- Flutter SDK >= 3.8.1
- CMake >= 3.10
- Ninja build system (for OpenCV builds)
- Xcode >= 15.0 (with command line tools)
- CocoaPods (`sudo gem install cocoapods`)
- Homebrew with `opencv` and `onnxruntime` installed
- Android Studio with NDK installed
- Android NDK r29+ (set `ANDROID_NDK_HOME` environment variable)
- Android SDK with platform 24+
- Python 3.10+
- Virtual environment recommended
The Flutter app requires pre-built native libraries for table and ball detection.
```
cd lib

# Build OpenCV and Tableizer for iOS (device + simulator)
./build_ios.sh
```

This script:
- Builds OpenCV static libraries for iOS device (arm64) and simulator
- Builds the `libtableizer_lib.dylib` shared library
- Copies libraries to `app/ios/`
Output files:

- `app/ios/libtableizer_lib.dylib` (device, default)
- `app/ios/libtableizer_lib_device.dylib`
- `app/ios/libtableizer_lib_sim.dylib`
```
# Set the NDK path (adjust version as needed)
export ANDROID_NDK_HOME=$ANDROID_SDK_ROOT/ndk/29.0.13599879

cd lib

# Build OpenCV and Tableizer for Android
./build_android.sh
```

This script:
- Builds OpenCV shared libraries for Android (arm64-v8a)
- Builds `libtableizer_lib.so`
- Copies all `.so` files to `app/android/app/src/main/jniLibs/arm64-v8a/`
Required libraries in jniLibs:
- `libtableizer_lib.so`
- `libopencv_*.so` (core, imgproc, dnn, etc.)
- `libonnxruntime.so`
```
cd app

# Get Flutter packages
flutter pub get
```

```
cd app/ios

# Install CocoaPods dependencies (includes ONNX Runtime)
pod install
```

**Important:** The iOS build uses ONNX Runtime from CocoaPods (`pod 'onnxruntime-c'`).
```
cd app
flutter run -d <ios-device-id>
```

Or build for release:

```
flutter build ios --release
```

For simulator, configure the app to use the simulator library:

- The library loader in `app/lib/native/library_loader.dart` handles device vs simulator detection
- Ensure `libtableizer_lib_sim.dylib` is available in `app/ios/`
```
flutter run -d <ios-simulator-id>
```

```
cd app
flutter run -d <android-device-id>
```

Or build APK:
```
flutter build apk --debug
# or
flutter build apk --release
```

```
cd lib
mkdir build && cd build
cmake ..
make
```

This creates:
- `tableizer_app` - CLI executable for testing
- `libtableizer_lib.a` - Static library
```
cd lib/build
ctest
```

The native library exposes these functions for Flutter FFI:
| Function | Description |
|---|---|
| `initialize_detector(modelPath)` | Load YOLO ONNX model |
| `detect_table_bgra(...)` | Detect table quadrilateral in image |
| `detect_balls_bgra(...)` | Detect balls within table region |
| `transform_points_using_quad(...)` | Transform coordinates to table space |
| `normalize_image_bgra(...)` | Apply perspective correction |
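These functions can also be driven from Python via `ctypes` (presumably how `python/tableizer_ffi.py` works). A minimal sketch, assuming a C-style ABI; the prototype declared below is a guess from the table above, so check the headers in `lib/include/` for the real signatures:

```python
import ctypes
from ctypes import c_char_p, c_float, c_int

def load_tableizer(lib_path: str) -> ctypes.CDLL:
    """Load the native library and declare argument/return types.

    Only initialize_detector is declared here, and its signature is an
    assumption -- consult the headers in lib/include/ before relying on it.
    """
    lib = ctypes.CDLL(lib_path)
    lib.initialize_detector.argtypes = [c_char_p]
    lib.initialize_detector.restype = c_int
    return lib

def pack_points(points):
    """Flatten (x, y) tuples into a contiguous ctypes float array,
    the usual layout for passing coordinate lists across an FFI boundary."""
    flat = [coord for point in points for coord in point]
    return (c_float * len(flat))(*flat)
```

For example, `pack_points([(0, 0), (1, 0), (1, 1), (0, 1)])` yields an 8-element float array suitable for a `float*` quad parameter.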
```
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

| Script | Purpose |
|---|---|
| `python/model_table.py` | Main training script - trains YOLO ball detection models |
| `python/transform_dataset.py` | Dataset transformation - perspective correction and label mapping |
| `python/detect_table.py` | Table detection testing using C++ FFI |
| `python/tableizer_ffi.py` | Python FFI bindings for the C++ library |
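`transform_dataset.py` remaps YOLO labels after perspective correction. For reference, YOLO labels are plain-text lines of `cls cx cy w h` with coordinates normalized to 0-1. A hedged sketch of converting them to pixel-space boxes (the helper name is hypothetical, not taken from the script):

```python
def parse_yolo_labels(text, img_w, img_h):
    """Parse YOLO-format label lines ("cls cx cy w h", normalized 0-1)
    into pixel-space boxes (cls, x1, y1, x2, y2).

    Illustrative helper; the actual parsing in transform_dataset.py
    may differ.
    """
    boxes = []
    for line in text.strip().splitlines():
        cls, cx, cy, w, h = line.split()
        cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
        # Centers and sizes are fractions of the image; convert to pixels.
        x1 = (cx - w / 2) * img_w
        y1 = (cy - h / 2) * img_h
        boxes.append((int(cls), x1, y1, x1 + w * img_w, y1 + h * img_h))
    return boxes
```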
Train a new YOLO model using `model_table.py`:

1. Edit the `CONFIG` dict in `model_table.py` to point to your local dataset:

   ```python
   CONFIG = {
       "srcImgDir": "data/my_dataset/images",  # Your images
       "srcLblDir": "data/my_dataset/labels",  # Your YOLO labels
       ...
   }
   ```

2. Run training:

   ```
   cd python
   python model_table.py
   ```
Export the trained model to ONNX for mobile deployment:

```
yolo export model=tableizer/expN/weights/best.pt format=onnx device=cpu imgsz=1280 simplify=True dynamic=False opset=17 half=False

# Copy to Flutter app
cp tableizer/expN/weights/best.onnx ../app/assets/detection_model.onnx
```

Trained models are stored in `tableizer/` (current production model: `combined4`).

See `python/README.md` for detailed training instructions.
```
Camera Frame (CameraAwesome)
        ↓
TableDetectionController.processImage()
        ↓
TableDetectionService → Isolate
        ↓
[C++ FFI] detect_table_bgra()
  ├── Table Detection (color analysis, contours)
  ├── Quad Analysis (orientation detection)
  └── Image Normalization (perspective transform)
        ↓
BallDetectionController.detectBalls()
        ↓
[C++ FFI] detect_balls_bgra() + YOLO inference
        ↓
TableResultsScreen (visualization)
```
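The Quad Analysis and Image Normalization steps amount to a planar homography from the detected table quad to an upright rectangle. A standalone numpy sketch of that math, independent of the C++ engine (which likely uses the OpenCV equivalents such as `cv2.getPerspectiveTransform`); this is illustrative, not the production code:

```python
import numpy as np

def homography_from_quad(quad, w, h):
    """Solve for the 3x3 homography mapping a table quad (4 corners,
    clockwise from top-left) onto a w x h rectangle, via the standard
    direct linear transform (DLT)."""
    dst = [(0, 0), (w, 0), (w, h), (0, h)]
    A = []
    for (x, y), (u, v) in zip(quad, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null-space vector of A, up to scale.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, pt):
    """Apply the homography to a single (x, y) point."""
    x, y, s = H @ np.array([pt[0], pt[1], 1.0])
    return (x / s, y / s)
```

By construction the quad's corners land on the rectangle's corners; ball centers transformed the same way end up in normalized table space.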
Flutter:
- `camerawesome` - Live camera feed
- `permission_handler` - Camera permissions
- `wakelock_plus` - Keep screen on
C++:
- OpenCV - Image processing
- ONNX Runtime - YOLO inference
- Hold phone in landscape orientation (16:9 aspect ratio) for best detection
- Ensure adequate lighting on the pool table
- Position camera to capture entire table surface
- Some modules contain hard-coded absolute paths (see `todo.md`)
- Frame rate depends on image resolution
- 45-degree camera angles may affect orientation detection
Private project - not for redistribution.