lavaflow package

The lavaflow package contains all of the code required for free surface velocity tracking.

lavaflow.boundary module

Functions for boundary detection and masking.

class lavaflow.boundary.BoundaryDetector(res, flow_source_geometry, flow_plate_geometry)[source]

Bases: ABC

Abstract class for boundary detection based on image masking.

This class abstracts how the flow source geometry and flow plate geometry are defined. Any specific boundary detection method should extend this class and implement the __call__ method.

The flow source is used to filter out masks that do not include the flow source, and the flow plate is used to crop the mask down to the area of interest to remove any flow outside the boundary.

Two flow source types are currently supported, circle geometry

{
  type: 'circle',
  data: {
    center: [x, y],
    radius: r,
  }
}

and edge geometry

{
  type: 'edge',
  data: {
    points: [[x, y], [x, y]],
    radius: r,
  }
}

The flow plate geometry is specified as a generic polygon only

{
  type: 'polygon',
  data: {
    points: [[x, y], [x, y], ...],
  }
}
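The geometry specifications above can be written directly as Python dict literals; a minimal sketch with illustrative coordinates:

```python
# Flow source and flow plate geometries expressed as Python dicts,
# mirroring the specification above (coordinates are illustrative).
circle_flow_source_geometry = {
    "type": "circle",
    "data": {"center": [960, 540], "radius": 40},
}

edge_flow_source_geometry = {
    "type": "edge",
    "data": {"points": [[0, 100], [0, 980]], "radius": 40},
}

flow_plate_geometry = {
    "type": "polygon",
    "data": {"points": [[0, 0], [1920, 0], [1920, 1080], [0, 1080]]},
}
```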
Parameters
  • res (tuple) – image size written as (w, h)

  • flow_source_geometry (dict) – flow source geometry

  • flow_plate_geometry (dict) – flow plate geometry

abstract __call__(img)[source]

Map an image to a boundary.

Parameters

img (np.ndarray) – image

class lavaflow.boundary.ColorMaskBoundaryDetector(res, flow_source_geometry, flow_plate_geometry, color_mask)[source]

Bases: BoundaryDetector

Class for boundary detection based on color masking.

This boundary detection method uses color masks, morphology transformations, and contour matching to find the contour of the largest colored region that contains the flow source geometry.

Note that if the flow source geometry is temporarily occluded, the boundary with the largest area will still be selected.

Parameters
  • res (tuple) – image size written as (w, h)

  • flow_source_geometry (dict) – flow source geometry

  • flow_plate_geometry (dict) – flow plate geometry

  • color_mask (ColorMask) – color mask (already configured)

__call__(img)[source]

Map an image to a boundary.

Parameters

img (np.ndarray) – image

class lavaflow.boundary.ExpandBoundary[source]

Bases: object

Class for boundary expanding.

This class takes the union of a window or sequence of boundaries by painting inside the boundaries and recomputing the boundary based on the union of the overlay.
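The union idea can be sketched in numpy using filled binary masks in place of painted contours (the real class paints inside contour arrays and recomputes the contour of the union):

```python
import numpy as np

def expand_boundary(masks):
    """Union a sequence of filled binary masks; a sketch of the
    ExpandBoundary idea (the real class paints inside contours and
    recomputes the boundary of the resulting overlay)."""
    overlay = np.zeros_like(masks[0], dtype=bool)
    for mask in masks:
        overlay |= mask.astype(bool)  # paint each boundary's interior
    return overlay

a = np.zeros((8, 8), dtype=bool); a[1:4, 1:4] = True
b = np.zeros((8, 8), dtype=bool); b[2:6, 2:6] = True
union = expand_boundary([a, b])
```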

__call__(boundaries)[source]

Apply expanding to boundary.

Parameters

boundaries (list) – list of np.ndarray boundaries for expanding

Returns

expanded boundary

Return type

boundary (np.ndarray)

class lavaflow.boundary.SmoothBoundary(intensity=None, threshold=None)[source]

Bases: object

Class for boundary smoothing.

This class smooths a window or sequence of boundaries using anti-aliasing by painting inside the boundaries with transparency and recomputing the boundary based on the overlay.

Parameters
  • intensity (float) – intensity to apply to each frame in window, generally 255 / N based on the number of boundaries in the window N

  • threshold (float) – threshold for smooth mask
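A minimal numpy sketch of the smoothing idea, again using filled binary masks in place of painted contours: each mask contributes `intensity` to an overlay, and the overlay is thresholded.

```python
import numpy as np

def smooth_boundary(masks, intensity=None, threshold=128):
    """Sketch of the SmoothBoundary idea: paint each boundary interior
    with partial intensity, then threshold the accumulated overlay."""
    if intensity is None:
        intensity = 255.0 / len(masks)  # the 255 / N default noted above
    overlay = np.zeros(masks[0].shape, dtype=float)
    for mask in masks:
        overlay += intensity * mask.astype(float)
    return overlay >= threshold  # keep pixels inside enough boundaries

a = np.zeros((6, 6), dtype=bool); a[1:5, 1:5] = True
b = np.zeros((6, 6), dtype=bool); b[2:6, 2:6] = True
smoothed = smooth_boundary([a, b])
```

With two masks and the default intensity of 127.5, only pixels covered by both masks clear the threshold, so the result is their intersection.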

__call__(boundaries)[source]

Apply smoothing to boundary.

Parameters

boundaries (list) – list of np.ndarray boundaries for smoothing

Returns

smoothed boundary

Return type

boundary_smoothed (np.ndarray)

lavaflow.log module

wxPython logging.

Defines a root logger which can be imported by scripts using this package, and configurations for color stdout formatting. Currently, modules request loggers locally, but they could import the root logger directly.

class lavaflow.log.ColoredFormatter(fmt=None, datefmt=None, style='%', validate=True)[source]

Bases: Formatter

Logger color formatter for changing color according to level name.

format(record)[source]

Format record according to level name (standard).
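A sketch of how a level-colored formatter like this is typically written with ANSI escape codes; the actual lavaflow color scheme is not documented here, so these colors are illustrative.

```python
import logging

# ANSI color codes per level name (illustrative choices).
COLORS = {
    "DEBUG": "\033[36m",     # cyan
    "INFO": "\033[32m",      # green
    "WARNING": "\033[33m",   # yellow
    "ERROR": "\033[31m",     # red
    "CRITICAL": "\033[41m",  # red background
}
RESET = "\033[0m"

class ColoredFormatter(logging.Formatter):
    def format(self, record):
        # Format normally, then wrap in the level's color code.
        message = super().format(record)
        color = COLORS.get(record.levelname, "")
        return f"{color}{message}{RESET}"

formatter = ColoredFormatter("%(levelname)s %(message)s")
record = logging.LogRecord("demo", logging.WARNING, __file__, 0,
                           "low disk", None, None)
formatted = formatter.format(record)
```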

lavaflow.masking module

Functions for image masking.

class lavaflow.masking.ColorMask(center, sensitivity, code=None, output_type='mask')[source]

Bases: object

Class for creating an image mask based on color matching.

Each pixel is converted into the desired color space using cv2.cvtColor with the given color conversion code. Each color channel is then checked to see whether its value falls within sensitivity of the center over the range [0, 255].

The LAB color space is good for color matching because it corresponds to colors as they are perceived by the human eye: L for perceptual lightness, and a* and b* for the red/green and blue/yellow opponent axes of human vision. A numerical change in LAB space corresponds to a similar perceived change in color, which makes choosing the center and sensitivity easier.
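The per-channel check can be sketched in numpy, assuming the image has already been converted to the target color space (the real class applies cv2.cvtColor first):

```python
import numpy as np

def color_mask(img, center, sensitivity):
    """Per-channel color match: a pixel is kept when every channel lies
    within `sensitivity` of `center` (values over the range [0, 255]).
    Assumes `img` is already in the desired color space."""
    diff = np.abs(img.astype(int) - np.asarray(center))
    return np.all(diff <= np.asarray(sensitivity), axis=-1)

img = np.array([[[120, 130, 140], [10, 10, 10]]], dtype=np.uint8)
mask = color_mask(img, center=[128, 128, 128], sensitivity=[15, 15, 15])
```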

Parameters
  • center (np.ndarray) – center color (same color space as the image)

  • sensitivity (np.ndarray) – relative sensitivity (separate for each color channel)

  • code (int) – color conversion code e.g. cv2.COLOR_BGR2LAB

  • output_type (str) – output type for map

__call__(img)[source]

Map image to image mask or distance map.

Parameters

img (np.ndarray) – image

Returns

image mask or distance map

Return type

obj (np.ndarray)

distance(img, channels=(0, 1, 2))[source]

Map image to distance map.

Parameters

img (np.ndarray) – image

Returns

distance map

Return type

distance (np.ndarray)

mask(img)[source]

Map image to image mask.

Parameters

img (np.ndarray) – image

Returns

image mask

Return type

mask (np.ndarray)

class lavaflow.masking.MultiColorMask(color_masks, output_type='mask')[source]

Bases: object

Class for creating an image mask based on multiple color matching.

This class combines the outputs of multiple lavaflow.masking.ColorMask instances, either by taking the union of the image masks or the minimum of the distance maps.
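The two combination modes can be sketched directly in numpy:

```python
import numpy as np

def combine_masks(masks):
    """Union of boolean image masks (the output_type='mask' case)."""
    return np.logical_or.reduce([m.astype(bool) for m in masks])

def combine_distances(distances):
    """Pixelwise minimum of distance maps (the output_type='distance' case)."""
    return np.minimum.reduce(distances)

m1 = np.array([[True, False]])
m2 = np.array([[False, False]])
d1 = np.array([[3.0, 9.0]])
d2 = np.array([[5.0, 2.0]])
```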

Parameters
  • color_masks (list) – array of color masks

  • output_type (str) – output type for map

Returns

color mask

Return type

self (ColorMask)

__call__(img)[source]

Map image to image mask or distance map.

Parameters

img (np.ndarray) – image

Returns

image mask or distance map

Return type

obj (np.ndarray)

distance(img, channels=(0, 1, 2))[source]

Map image to distance map.

Parameters

img (np.ndarray) – image

Returns

distance map

Return type

distance (np.ndarray)

mask(img)[source]

Map image to image mask.

Parameters

img (np.ndarray) – image

Returns

image mask

Return type

mask (np.ndarray)

lavaflow.pipelines module

Pipelines for streaming images for processing and modeling.

class lavaflow.pipelines.Pipeline(config)[source]

Bases: ABC

Generic pipeline class for streaming images for processing and modeling.

Parameters

config – config dictionary

abstract consume()[source]

Construct and run pipe for entire input stream and generate output specified in config.

abstract pipe()[source]

Construct and return pipe.

Returns

pipe

Return type

pipe (Pipe)

class lavaflow.pipelines.VideoStreamPipeline(config)[source]

Bases: Pipeline

Class for streaming frames of video from a file using iterables to consume less memory.

Parameters

config (dict) – dictionary config, see sparse.optical.flow.example

consume()[source]

Construct and run pipe for entire video stream and generate output specified in config.

get_video_pipe(video)[source]

Construct and return pipe with video transformation and processing only.

Parameters

video (dict) – video config

Returns

pipe with video transformation and processing only

Return type

pipe (Pipe)

pipe()[source]

Construct and return pipe.

Returns

pipe

Return type

pipe (Pipe)

preview(file, index)[source]

Construct and run pipe for a single frame of video.

Parameters
  • file (str) – video file

  • index (int) – frame index for generating preview

Returns

pipe configured to run for a single image only

Return type

pipe (Pipe)

visualizations = ['raw', 'raw_transformed', 'raw_processed']

lavaflow.processing module

Functions for image processing.

class lavaflow.processing.BrightnessAndContrastAdjuster(brightness=0, contrast=0, center=128)[source]

Bases: LinearAdjuster

Class for memory-efficient brightness and contrast adjustment.

Parameters
  • brightness (float) – brightness, range [-1, 1]

  • contrast (float) – contrast, range [-8, 8]

  • center (float) – center of contrast adjustment, range [0, 255]
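One common LUT formulation consistent with the parameter ranges above; the exact mapping lavaflow uses is not shown here, so this formula is an assumption.

```python
import numpy as np

def brightness_contrast_lut(brightness=0.0, contrast=0.0, center=128):
    """Hypothetical brightness/contrast LUT: contrast scales values
    about `center`, brightness shifts them. Matches the documented
    parameter ranges, but not necessarily the exact lavaflow formula."""
    x = np.arange(256, dtype=float)
    gain = 2.0 ** contrast  # map contrast in [-8, 8] to a positive gain
    out = gain * (x - center) + center + 255.0 * brightness
    return np.clip(out, 0, 255).astype(np.uint8)

identity = brightness_contrast_lut()  # no adjustment
```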

class lavaflow.processing.ColorConverter(code, astype=None)[source]

Bases: object

Class for memory-efficient color space conversion.

Parameters
  • code (np.ndarray) – color conversion code

  • astype (type) – output type conversion (optional)

__call__(img)[source]

Convert an image from one color space to another.

Parameters

img (np.ndarray) – image

Returns

image with color conversion applied

Return type

img (np.ndarray)

class lavaflow.processing.CurveAdjuster(slope=0, shift=0)[source]

Bases: object

Class for memory-efficient curve adjustment.

Parameters
  • slope (float) – slope factor, range [-1, 1]

  • shift (float) – shift factor, range [-1, 1]

__call__(img)[source]

Apply curve adjustment to an image.

Parameters

img (np.ndarray) – image

Returns

image with curve adjustment applied

Return type

img (np.ndarray)

class lavaflow.processing.GammaAdjuster(gamma=1, gain=1)[source]

Bases: LUTAdjuster

Class for memory-efficient gamma adjustment.

Parameters
  • gamma (float) – gamma value, range [0, inf)

  • gain (float) – gain value, range [0, inf)
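A sketch of the standard LUT-based gamma adjustment this class's parameters suggest: out = gain * (x / 255) ** gamma * 255.

```python
import numpy as np

def gamma_lut(gamma=1.0, gain=1.0):
    """Standard gamma LUT over [0, 255]; gamma=1 and gain=1 is the
    identity, gamma > 1 darkens midtones."""
    x = np.arange(256, dtype=float) / 255.0
    return np.clip(np.rint(255.0 * gain * x ** gamma), 0, 255).astype(np.uint8)

identity = gamma_lut()
darker = gamma_lut(gamma=2.0)
```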

class lavaflow.processing.KernelSharpener(s=0)[source]

Bases: object

Class for memory-efficient kernel sharpening.

Parameters

s (float) – sharpen parameter, range [0, inf)
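A hypothetical 3x3 sharpening kernel parameterized the way `s` suggests: the identity kernel at s = 0, with an increasingly strong edge boost as s grows. The kernel sums to 1 so overall brightness is preserved.

```python
import numpy as np

def sharpen_kernel(s=0.0):
    """Hypothetical kernel for `s`-controlled sharpening: subtracts the
    4-neighborhood and boosts the center so that the kernel sums to 1."""
    return np.array([
        [0.0,       -s,      0.0],
        [-s,  1 + 4 * s,      -s],
        [0.0,       -s,      0.0],
    ])

k = sharpen_kernel(s=1.0)
```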

__call__(img)[source]

Apply sharpening to an image.

Parameters

img (np.ndarray) – image

Returns

image with sharpening applied

Return type

img (np.ndarray)

class lavaflow.processing.LUTAdjuster(LUT)[source]

Bases: object

Class for memory-efficient look-up table adjustment.

Parameters

LUT (np.ndarray) – look up table

__call__(img)[source]

Apply look up table to an image.

Parameters

img (np.ndarray) – image

Returns

image with look up table applied

Return type

img (np.ndarray)

class lavaflow.processing.LensCorrector(maker, model, lens, focal_length, aperture, distance)[source]

Bases: object

Class for lens correction based on the following camera and lens parameters.

Parameters
  • maker (str) – camera maker, e.g. Canon

  • model (str) – camera model, e.g. Canon 90D

  • lens (str) – lens model, e.g. EF-S 18-135mm f/3.5-5.6 IS USM

  • focal_length (double) – focal length based on camera settings

  • aperture (double) – aperture based on camera settings

  • distance (double) – actual focal distance based on camera settings (m)

__call__(img)[source]

Apply lens correction to an image.

Parameters

img (np.ndarray) – image

Returns

image with lens correction applied

Return type

img (np.ndarray)

class lavaflow.processing.LinearAdjuster(gain=1, bias=0)[source]

Bases: LUTAdjuster

Class for memory-efficient linear adjustment.

Parameters
  • gain (float) – gain value, range (-inf, inf)

  • bias (float) – bias value, range [-255, 255]

class lavaflow.processing.NonLocalMeansDenoiser(l=0, c=0, template_radius=7, search_radius=21)[source]

Bases: object

Class for memory-efficient non-local means denoising.

Parameters
  • l (float) – lightness denoising parameter, range [0, inf)

  • c (float) – color denoising parameter, range [0, inf)

  • template_radius (float) – kernel radius, range [1, inf)

  • search_radius (float) – kernel radius, range [1, inf)

__call__(img)[source]

Apply denoising to an image.

Parameters

img (np.ndarray) – image

Returns

image with denoising applied

Return type

img (np.ndarray)

class lavaflow.processing.PerspectiveTransformer(src, dst)[source]

Bases: object

Class for perspective transformation using OpenCV.

See cv2.getPerspectiveTransform and cv2.warpPerspective.

For example,

perspective_transformer = PerspectiveTransformer(
    [[7, 37], [1517, 2], [1503, 952], [26, 919]],
    [[0, 0], [1920, 0], [1920, 1080], [0, 1080]]
)
Parameters
  • src (list|np.array) – list or array of source quadrangle points, shaped (N, 2)

  • dst (list|np.array) – list or array of destination quadrangle points, shaped (N, 2)
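Under the hood, cv2.getPerspectiveTransform solves a small linear system for the 3x3 homography from the four point correspondences. A numpy sketch of that math (direct linear transform with H[2, 2] fixed to 1):

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve for the 3x3 homography H mapping the 4 src points onto the
    4 dst points (what cv2.getPerspectiveTransform computes)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear equations in the
        # 8 unknown entries of H (H[2, 2] is fixed to 1).
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

square = [[0, 0], [1, 0], [1, 1], [0, 1]]
H = perspective_matrix(square, square)  # identity mapping
```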

__call__(img)[source]

Apply perspective transformation to an image.

Parameters

img (np.ndarray) – image

Returns

image with perspective transformation applied

Return type

img (np.ndarray)

class lavaflow.processing.Resizer(res)[source]

Bases: object

Class for memory efficient image resizing.

Parameters

res (tuple) – target resolution (w, h)

__call__(img)[source]

Resize image.

Parameters

img (np.ndarray) – image

Returns

image resized

Return type

img (np.ndarray)

class lavaflow.processing.UnmaskSharpener(s=0, radius=5, spread=5)[source]

Bases: object

Class for memory-efficient unsharp mask sharpening.

Parameters
  • s (float) – sharpen parameter, range [0, inf)

  • radius (float) – kernel radius, range [1, inf)

  • spread (float) – kernel spread, range [1, inf)

__call__(img)[source]

Apply sharpening to an image.

Parameters

img (np.ndarray) – image

Returns

image with sharpening applied

Return type

img (np.ndarray)

lavaflow.streams module

Classes for creating video streams.

class lavaflow.streams.AbstractStream[source]

Bases: ABC

Generic stream class (equivalent to an iterator with length).

abstract __iter__()[source]

Return iterator.

Returns

iterator

Return type

self (AbstractStream)

abstract __next__()[source]

Get next item.

Returns

item

Return type

img (np.ndarray)

class lavaflow.streams.VideoStream(file, res=(-1, -1), frame_seek=0, step_size=1, max_steps=-1)[source]

Bases: AbstractStream

Video stream frame iterator.

Parameters
  • file (str) – file name

  • res (tuple) – output frame size written as (w, h)

  • frame_seek (int) – frame to seek, default: 0 (start of video)

  • step_size (int) – number of frames to iterate each step

  • max_steps (int) – total number of frames to capture, default: -1 (end of video)

__iter__()[source]

Return iterator.

Returns

video frame iterator

Return type

self (VideoStream)

__next__()[source]

Get next frame.

Note that this will automatically stop iteration if the video is removed or becomes unavailable.

Returns
  • img (np.ndarray) – frame

  • frame (int) – frame index

get_output_size()[source]

Get output frame size.

Returns

frame size (w, h)

Return type

res (tuple)

get_position()[source]

Get current frame index.

Returns

current frame index

Return type

position (int)

get_source_size()[source]

Get input video size.

Returns

video size (w, h)

Return type

res (tuple)

seek(frame_seek=0, step_size=1, max_steps=-1)[source]

Reset video frame iterator back to frame_seek.

Parameters
  • frame_seek (int) – frame to seek, default: 0 (start of video)

  • step_size (int) – number of frames to iterate each step

  • max_steps (int) – total number of frames to capture, default: -1 (end of video)

lavaflow.tracking module

Classes for feature tracking and optical flow.

class lavaflow.tracking.FarnebackDenseOpticalFlow(pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)[source]

Bases: object

Class for Farneback dense optical flow.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • pyr_scale (float) – scale between layers

  • levels (int) – number of layers

  • winsize (int) – averaging window size

  • iterations (int) – number of iterations algorithm performs at each level

  • poly_n (int) – size of pixel neighbourhood used for polynomial expansion in each pixel

  • poly_sigma (float) – standard deviation of Gaussian used for smoothing derivatives

  • flags (str) – operation flags

__call__(img1, img2)[source]

Generate flow vector field from source image using Farneback dense optical flow.

Parameters
  • img1 (np.ndarray) – source image

  • img2 (np.ndarray) – target image

Returns
  • u (np.ndarray) – x component of velocity on x, y pixel meshgrid

  • v (np.ndarray) – y component of velocity on x, y pixel meshgrid

class lavaflow.tracking.FeatureDescriptorMatching(extractor, matcher, ratio, speed_lower_bound, speed_upper_bound)[source]

Bases: SparseFlow

Class for descriptor based feature matching and tracking.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • extractor (cv2.Feature2D) – any class implementing the detect and compute methods defined by cv2::Feature2D

  • matcher (cv2.DescriptorMatcher) – any class implementing the knnMatch method defined by cv2::DescriptorMatcher

  • ratio (float) – distance ratio used to filter candidates

  • speed_lower_bound (float) – speed must be above lower bound to be accepted

  • speed_upper_bound (float) – speed must be below upper bound to be accepted
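The `ratio` parameter implements Lowe's ratio test on the knnMatch output: the best match is kept only when it is clearly better than the second best. A sketch with plain distance pairs standing in for cv2.DMatch objects:

```python
def ratio_filter(knn_matches, ratio=0.75):
    """Lowe's ratio test. Each entry is a (best_distance,
    second_distance) pair standing in for the two cv2.DMatch results
    returned per query point by matcher.knnMatch(des1, des2, k=2)."""
    return [
        best for best, second in knn_matches
        if best < ratio * second  # keep only unambiguous matches
    ]

matches = [(10.0, 50.0), (30.0, 32.0), (5.0, 100.0)]
kept = ratio_filter(matches)
```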

__call__(img1, img2, boundary1=None, boundary2=None)[source]

Compute feature tracking across a pair of images.

Parameters
  • img1 (np.ndarray) – source image

  • img2 (np.ndarray) – target image

  • boundary1 (np.ndarray) – list of boundary contour points from source image

  • boundary2 (np.ndarray) – list of boundary contour points from target image

Returns
  • p1 (np.ndarray) – feature points in source image, shaped (N, 1, 2)

  • p2 (np.ndarray) – feature points in target image, shaped (N, 1, 2)

class lavaflow.tracking.LucasKanadeSparseOpticalFlow(flow_source_geometry, extractor, winSize, maxLevel, maxIters, epsilon, min_displacement, max_displacement, exclude)[source]

Bases: SparseFlow

Class for Lucas Kanade sparse optical flow based tracking.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • flow_source_geometry (dict) – flow source geometry

  • extractor (cv2.Feature2D) – any class implementing the detect and compute methods defined by cv2::Feature2D

  • winSize (int) – size of the search window

  • maxLevel (int) – number of pyramid levels

  • maxIters (int) – maximum number of iterations

  • epsilon (float) – minimum convergence required

  • min_displacement (float) – minimum displacement required

  • max_displacement (float) – maximum displacement allowed

  • exclude (list) – list of points to exclude expressed as (point center, radius)

__call__(images, boundaries)[source]

Extract and track feature points using Lucas-Kanade sparse optical flow.

Parameters
  • images (list) – sequence of images

  • boundaries (list) – sequence of boundary contour points from each image

Returns
  • p1 (np.ndarray) – feature points in first image, shaped (N, 1, 2)

  • p2 (np.ndarray) – feature points in final image, shaped (N, 1, 2)

class lavaflow.tracking.NormalBoundaryFlow(boundary_chord, boundary_point_pixel_spacing, boundary_intersection_lookahead)[source]

Bases: object

Class for computing flow along boundary normals.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • boundary_chord (int) – number of points to use for determining normal vector to boundary chord

  • boundary_point_pixel_spacing (int) – approximate pixel spacing between boundary points (used to simplify boundary)

  • boundary_intersection_lookahead (int) – number of points to look ahead to determine target boundary intersections

__call__(img1, img2, boundary1=None, boundary2=None)[source]

Compute flow between a pair of boundaries along boundary normals.

Parameters
  • img1 (np.ndarray) – source image (not used)

  • img2 (np.ndarray) – target image (not used)

  • boundary1 (np.ndarray) – list of boundary contour points from source image

  • boundary2 (np.ndarray) – list of boundary contour points from target image

Returns
  • p1 (np.ndarray) – feature points in source image, shaped (N, 1, 2)

  • p2 (np.ndarray) – feature points in target image, shaped (N, 1, 2)

class lavaflow.tracking.SparseFlow[source]

Bases: ABC

Generic class for tracking sparse flow.

abstract __call__(images, boundaries)[source]

Extract and track specific feature points from source image.

The images and boundaries lists should have the same length and should correspond to an ordered sequence of images. The resulting points p1 and p2 come from the first and last images, respectively.

Parameters
  • images (list) – sequence of images

  • boundaries (list) – sequence of boundary contour points from each image

Returns
  • p1 (np.ndarray) – feature points in first image, shaped (N, 1, 2)

  • p2 (np.ndarray) – feature points in final image, shaped (N, 1, 2)

lavaflow.velocity module

Functions for interpolating a velocity field.

class lavaflow.velocity.DenseInterpolation(x, y, kernel_size=21, sigma=20)[source]

Bases: object

Class for interpolating on a dense velocity grid.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • x (np.ndarray) – x coordinates to interpolate (any shape)

  • y (np.ndarray) – y coordinates to interpolate (any shape)

  • kernel_size (int) – kernel size for Gaussian smoothing

  • sigma (double) – sigma for Gaussian smoothing

__call__(u, v)[source]

Interpolate tracked points.

Parameters
  • u (np.ndarray) – x component of velocity on x, y pixel meshgrid

  • v (np.ndarray) – y component of velocity on x, y pixel meshgrid

Returns
  • x (np.ndarray) – x meshgrid

  • y (np.ndarray) – y meshgrid

  • u (np.ndarray) – x component of velocity on x, y meshgrid

  • v (np.ndarray) – y component of velocity on x, y meshgrid

class lavaflow.velocity.DenseInterpolationGrid(x, y, w, h, spacing, kernel_size=21, sigma=20)[source]

Bases: DenseInterpolation

Class for interpolating on a regular grid.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • x (double) – x coordinate of top left corner of grid

  • y (double) – y coordinate of top left corner of grid

  • w (double) – width of grid

  • h (double) – height of grid

  • spacing (double) – grid spacing for output velocity grid

  • kernel_size (int) – kernel size for Gaussian smoothing

  • sigma (double) – sigma for Gaussian smoothing

class lavaflow.velocity.DenseInterpolationLine(points, spacing, kernel_size=21, sigma=20)[source]

Bases: DenseInterpolation

Class for interpolating on a line.

Parameters are documented in examples on the main page (work in progress).

Parameters
  • points (np.ndarray) – line points [[x, y], [x, y]]

  • spacing (double) – point spacing for output velocity

  • kernel_size (int) – kernel size for Gaussian smoothing

  • sigma (double) – sigma for Gaussian smoothing

class lavaflow.velocity.SparseDelaunayTriangulation[source]

Bases: object

Class for finding a constrained Delaunay triangulation with an internal and an external boundary.

__call__(p1, b1, p2, b2)[source]

Interpolate tracked points.

Parameters
  • p1 (np.ndarray) – internal points in source image, shaped (N, 1, 2)

  • b1 (np.ndarray) – boundary points in source image, shaped (N, 1, 2)

  • p2 (np.ndarray) – internal points in target image, shaped (N, 1, 2)

  • b2 (np.ndarray) – boundary points in target image, shaped (N, 1, 2)

Returns
  • tri (dict) – output from triangle.triangulate

  • internal_boundary (np.ndarray) – ordered internal boundary segments

  • external_boundary (np.ndarray) – ordered external boundary segments

  • s1 (np.ndarray) – Steiner points in source image added by triangle.triangulate, shaped (N, 1, 2)

  • s2 (np.ndarray) – Steiner points in target image added by triangle.triangulate, shaped (N, 1, 2)

class lavaflow.velocity.SparseInterpolation(x, y)[source]

Bases: object

Class for interpolating on arbitrary points.

Parameters
  • x (np.ndarray) – x coordinates to interpolate (any shape)

  • y (np.ndarray) – y coordinates to interpolate (any shape)

__call__(tri, p1, b1, s1, p2, b2, s2)[source]

Interpolate tracked points.

This works by interpolating the velocities at each vertex of the Delaunay triangulation based on the barycentric coordinates of each point within its corresponding triangle.
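The barycentric step can be sketched for a single triangle: compute the barycentric coordinates of the query point, then take the weighted sum of the vertex velocities.

```python
import numpy as np

def barycentric_interpolate(tri_pts, tri_vel, query):
    """Interpolate a velocity at `query` inside one triangle via
    barycentric coordinates; a one-triangle sketch of the interpolation
    described above."""
    a, b, c = (np.asarray(p, dtype=float) for p in tri_pts)
    q = np.asarray(query, dtype=float)
    # Solve q = a + w1*(b - a) + w2*(c - a) for the weights w1, w2.
    T = np.column_stack([b - a, c - a])
    w1, w2 = np.linalg.solve(T, q - a)
    w0 = 1.0 - w1 - w2
    return (w0 * np.asarray(tri_vel[0], dtype=float)
            + w1 * np.asarray(tri_vel[1], dtype=float)
            + w2 * np.asarray(tri_vel[2], dtype=float))

pts = [[0, 0], [1, 0], [0, 1]]
vel = [[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]]
v_centroid = barycentric_interpolate(pts, vel, [1 / 3, 1 / 3])
```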

Parameters
  • tri (dict) – output from triangle.triangulate

  • p1 (np.ndarray) – internal points in source image, shaped (N, 1, 2)

  • b1 (np.ndarray) – boundary points in source image, shaped (N, 1, 2)

  • s1 (np.ndarray) – steiner points in source image added by triangle.triangulate, shaped (N, 1, 2)

  • p2 (np.ndarray) – internal points in target image, shaped (N, 1, 2)

  • b2 (np.ndarray) – boundary points in target image, shaped (N, 1, 2)

  • s2 (np.ndarray) – steiner points in target image added by triangle.triangulate, shaped (N, 1, 2)

Returns
  • x (np.ndarray) – x meshgrid

  • y (np.ndarray) – y meshgrid

  • u (np.ndarray) – x component of velocity on x, y meshgrid

  • v (np.ndarray) – y component of velocity on x, y meshgrid

class lavaflow.velocity.SparseInterpolationGrid(x, y, w, h, spacing)[source]

Bases: SparseInterpolation

Class for interpolating on a regular grid.

Parameters
  • x (double) – x coordinate of top left corner of grid

  • y (double) – y coordinate of top left corner of grid

  • w (double) – width of grid

  • h (double) – height of grid

  • spacing (double) – grid spacing for output velocity grid

class lavaflow.velocity.SparseInterpolationLine(points, spacing)[source]

Bases: SparseInterpolation

Class for interpolating on a line.

Parameters
  • points (np.ndarray) – line points [[x, y], [x, y]]

  • spacing (double) – point spacing for output velocity