# Mask Annotator
Overlay segmentation masks on images with adjustable opacity.
## Overview

The `mask()` annotator overlays segmentation masks on detected objects. It supports both binary mask arrays and polygon coordinate formats, detecting and converting between them automatically. The `opacity` parameter controls how transparent the masks appear over the original image.
```python
pf.annotators.mask(image, detections)
```
## Function Signature

```python
def mask(
    frame: np.ndarray,
    detections: Detections,
    opacity: float = 0.5,
    colors: Optional[List[tuple]] = None,
) -> np.ndarray
```
### Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| frame | np.ndarray | required | Input image in BGR format. Modified in-place. |
| detections | Detections | required | PixelFlow detections containing mask data. |
| opacity | float | 0.5 | Mask transparency (0.0 = invisible, 1.0 = opaque). |
| colors | List[tuple] or None | None | List of BGR colors. None = use default palette. |
**Returns:** `np.ndarray` - The input image with masks overlaid (same array, modified in-place).
## Basic Usage
```python
import cv2
import pixelflow as pf
from ultralytics import YOLO

# Load image and run segmentation model
image = cv2.imread("street.jpg")
model = YOLO("yolo11n-seg.pt")  # Segmentation model
results = model.predict(image)
detections = pf.from_ultralytics(results)

# Draw segmentation masks
image = pf.annotators.mask(image, detections)
cv2.imshow("Result", image)
cv2.waitKey(0)
```
**Note:** The `mask()` annotator requires detections with mask data. Use a segmentation model (e.g., `yolo11n-seg.pt`) or ensure your detections include the `masks` attribute.
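When detection and segmentation sources are mixed in one pipeline, a small guard keeps mask-less detections away from the annotator. This is an illustrative helper, not part of the PixelFlow API; `safe_mask` and its `annotate` parameter are hypothetical names, and only the `masks` attribute documented on this page is assumed.

```python
def safe_mask(frame, detections, annotate, **kwargs):
    """Run `annotate` (e.g. pf.annotators.mask) only when mask data exists.

    Hypothetical wrapper for illustration; `annotate` is any callable with
    the (frame, detections, **kwargs) shape used by PixelFlow annotators.
    """
    if getattr(detections, "masks", None) is None:
        return frame  # no masks to draw; return the frame untouched
    return annotate(frame, detections, **kwargs)
```

Pass `pf.annotators.mask` as `annotate` to get the skip-if-missing behavior without touching the call sites.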
## Opacity Control

The `opacity` parameter controls mask transparency. This is useful for balancing visibility of the mask against the underlying image details.
| Value | Effect | Use Case |
|---|---|---|
| 0.0 | Invisible (no mask shown) | Debugging/testing |
| 0.2 - 0.3 | Subtle overlay | Preserve image details |
| 0.5 | Balanced (default) | General visualization |
| 0.7 - 0.8 | Prominent masks | Emphasize segmentation |
| 1.0 | Fully opaque | Solid color regions |
```python
# Subtle overlay - preserve background details
image = pf.annotators.mask(image, detections, opacity=0.3)

# Default semi-transparent
image = pf.annotators.mask(image, detections, opacity=0.5)

# Solid opaque masks
image = pf.annotators.mask(image, detections, opacity=1.0)
```
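Under the hood, blending at a given opacity is a per-pixel weighted average over the masked region: `out = (1 - opacity) * image + opacity * color`. A minimal NumPy sketch of the idea (PixelFlow's actual implementation uses OpenCV's `addWeighted`, per the Notes below):

```python
import numpy as np

def blend_mask(image, mask, color, opacity=0.5):
    """Blend `color` over `image` wherever `mask` is True.

    Illustrative sketch of masked alpha blending, not PixelFlow's code.
    """
    out = image.astype(np.float32)
    out[mask] = (1.0 - opacity) * out[mask] + opacity * np.asarray(color, np.float32)
    return out.astype(np.uint8)

# 2x2 black image; mask the top row and blend pure green (BGR) at 50%
img = np.zeros((2, 2, 3), dtype=np.uint8)
m = np.array([[True, True], [False, False]])
blended = blend_mask(img, m, color=(0, 255, 0), opacity=0.5)
# masked pixels become (0, 127, 0); unmasked pixels stay (0, 0, 0)
```

At `opacity=1.0` the weighted average collapses to the solid color, matching the last row of the table above.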
## Custom Colors
```python
# Override with specific colors (BGR format)
custom_colors = [
    (0, 255, 0),  # Green - first class
    (0, 0, 255),  # Red - second class
    (255, 0, 0),  # Blue - third class
]
image = pf.annotators.mask(image, detections, colors=custom_colors)
```
Colors are mapped to `class_id` values. The same class always gets the same color for visual consistency across frames.
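A stable class-to-color mapping like this can be sketched as a modulo lookup into the palette, so class IDs beyond the palette length wrap around deterministically. The palette values below are assumptions for illustration, not PixelFlow's actual defaults:

```python
# Hypothetical default palette (BGR); PixelFlow's real palette may differ
PALETTE = [(0, 255, 0), (0, 0, 255), (255, 0, 0), (0, 255, 255)]

def color_for_class(class_id, palette=PALETTE):
    """Deterministic lookup: the same class_id always yields the same color."""
    return palette[class_id % len(palette)]
```

Because the lookup depends only on `class_id`, colors stay consistent from frame to frame in video pipelines.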
## Supported Mask Formats

The `mask()` annotator automatically handles two mask formats:
| Format | Data Type | Description |
|---|---|---|
| Binary Mask | np.ndarray (bool/uint8) | 2D array matching image dimensions. True/1 = masked pixel. |
| Polygon | List[tuple] | List of (x, y) coordinates defining mask boundary. |
```python
# Binary mask format (from segmentation models)
# detection.masks = [np.array([[True, False, ...], ...])]

# Polygon format (from annotation tools)
# detection.masks = [[(x1, y1), (x2, y2), (x3, y3), ...]]

# Both formats work automatically - no conversion needed
image = pf.annotators.mask(image, detections)
```
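The polygon-to-binary conversion the annotator performs is a rasterization step. A pure-NumPy sketch using the even-odd (ray casting) rule shows the idea; a real implementation would typically use an optimized routine such as OpenCV's `fillPoly`, and this helper's name and shape are assumptions:

```python
import numpy as np

def polygon_to_mask(polygon, height, width):
    """Rasterize a polygon (list of (x, y) vertices) into a boolean mask.

    Even-odd rule: a pixel is inside if a ray to the left crosses the
    polygon boundary an odd number of times. Illustrative, not optimized.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    inside = np.zeros((height, width), dtype=bool)
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge crosses the pixel row, and the crossing lies to the right
        crosses = ((y1 > ys) != (y2 > ys)) & (
            xs < (x2 - x1) * (ys - y1) / (y2 - y1 + 1e-12) + x1
        )
        inside ^= crosses
    return inside

square = [(1, 1), (4, 1), (4, 4), (1, 4)]
mask = polygon_to_mask(square, 6, 6)  # interior pixels are True
```

The resulting boolean array plugs into the same blending path as a model-produced binary mask.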
## Common Patterns

### Full Segmentation Visualization
```python
# Complete segmentation display: mask + box + label
image = pf.annotators.mask(image, detections, opacity=0.4)
image = pf.annotators.box(image, detections)
image = pf.annotators.label(image, detections)
```
**Tip:** Apply `mask()` before `box()` and `label()` so the boxes and labels appear on top of the masks.
### Mask Only (No Boxes)
```python
# Clean segmentation without bounding boxes
image = pf.annotators.mask(image, detections, opacity=0.6)
```
### With Polygon Outlines
```python
# Filled masks with polygon outlines for clear boundaries
image = pf.annotators.mask(image, detections, opacity=0.3)
image = pf.annotators.polygon(image, detections)
```
### Video Processing
```python
import pixelflow as pf
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")
media = pf.Media("video.mp4")

for frame in media.frames:
    results = model.predict(frame, verbose=False)
    detections = pf.from_ultralytics(results)
    frame = pf.annotators.mask(frame, detections)
    frame = pf.annotators.box(frame, detections)
    pf.show_frame("Segmentation", frame)
```
## Notes

- **In-place modification:** The input image is modified directly. Use `image.copy()` if you need the original preserved.
- **Requires mask data:** Detections must have the `masks` attribute populated. Use a segmentation model or a converter that provides masks.
- **Dimension matching:** Binary masks must match the image dimensions exactly. Mismatched dimensions will raise a `ValueError`.
- **Performance:** Uses single-pass rendering and OpenCV's optimized `addWeighted` for efficient blending.
- **Skip behavior:** Detections without mask data are automatically skipped without error.
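The in-place note in practice: copy the frame before annotating when the original must survive. A pure-NumPy sketch, with the assignment standing in for any in-place annotator call:

```python
import numpy as np

frame = np.zeros((4, 4, 3), dtype=np.uint8)  # stand-in for a loaded image
annotated = frame.copy()                     # detach before drawing
annotated[1:3, 1:3] = (0, 255, 0)            # simulates in-place annotation

# `frame` stays pristine; only `annotated` carries the drawing
```

Because NumPy assignment (like the annotators) writes into the existing buffer, skipping the `copy()` would alter `frame` itself.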
## See Also

- `polygon()` - Draw polygon outlines instead of filled masks
- `box()` - Draw bounding boxes around detections
- `blur()` - Apply blur effect to detected regions
- `from_sam()` - Convert Segment Anything Model results