lavaflow package
The lavaflow package contains all of the code required for free surface velocity tracking.
lavaflow.boundary module
Functions for boundary detection and masking.
- class lavaflow.boundary.BoundaryDetector(res, flow_source_geometry, flow_plate_geometry)[source]
Bases: ABC
Abstract class for boundary detection based on image masking.
This class abstracts how the flow source geometry and flow plate geometry are defined. Any specific boundary detection method should extend this class and implement the __call__ method. The flow source is used to filter out masks that do not include the flow source, and the flow plate is used to crop the mask down to the area of interest, removing any flow outside the boundary.
Two flow source types are currently supported: circle geometry

{ 'type': 'circle', 'data': { 'center': [x, y], 'radius': r } }

and edge geometry

{ 'type': 'edge', 'data': { 'points': [[x, y], [x, y]], 'radius': r } }

The flow plate geometry is specified as a generic polygon only:

{ 'type': 'polygon', 'data': { 'points': [[x, y], [x, y], ...] } }
- Parameters
res (tuple) – image size written as (w, h)
flow_source_geometry (dict) – flow source geometry
flow_plate_geometry (dict) – flow plate geometry
- class lavaflow.boundary.ColorMaskBoundaryDetector(res, flow_source_geometry, flow_plate_geometry, color_mask)[source]
Bases: BoundaryDetector
Class for boundary detection based on color masking.
This boundary detection method uses color masks, morphology transformations, and contour matching to find a contour that matches the boundary of the colored region with the largest area that contains the flow source geometry.
Note that if the flow source geometry is temporarily occluded, the boundary with the largest area will still be selected.
- Parameters
res (tuple) – image size written as (w, h)
flow_source_geometry (dict) – flow source geometry
flow_plate_geometry (dict) – flow plate geometry
color_mask (ColorMask) – color mask (already configured)
- class lavaflow.boundary.ExpandBoundary[source]
Bases: object
Class for boundary expansion.
This class takes the union of a window or sequence of boundaries by painting inside the boundaries and recomputing the boundary based on the union of the overlay.
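The union-by-painting idea can be sketched as follows. This is an illustrative, dependency-light sketch (the package presumably paints and re-contours with OpenCV drawing and contour calls, which are omitted here); the function name is hypothetical.

```python
import numpy as np

def union_of_masks(masks):
    """Union a window of filled boundary masks into a single mask.

    Each boundary is assumed to already be 'painted' as a filled mask;
    the expanded boundary would then be recomputed from the union.
    """
    combined = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        combined |= m.astype(bool)  # overlay each painted boundary
    return combined

# Two overlapping filled regions standing in for painted boundaries.
a = np.zeros((8, 8)); a[1:5, 1:5] = 1
b = np.zeros((8, 8)); b[3:7, 3:7] = 1
u = union_of_masks([a, b])
```

The real class would additionally extract the contour of the combined region to produce the new boundary.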
- class lavaflow.boundary.SmoothBoundary(intensity=None, threshold=None)[source]
Bases: object
Class for boundary smoothing.
This class smooths a window or sequence of boundaries using anti-aliasing by painting inside the boundaries with transparency and recomputing the boundary based on the overlay.
- Parameters
intensity (float) – intensity to apply to each frame in the window, generally 255 / N where N is the number of boundaries in the window
threshold (float) – threshold for the smoothed mask
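A minimal sketch of this accumulate-and-threshold rule, assuming each of N boundary masks contributes intensity 255 / N to an overlay (the function name and stand-in masks are illustrative, not the package's API):

```python
import numpy as np

def smooth_masks(masks, threshold):
    """Accumulate N masks at intensity 255 / N, then threshold."""
    intensity = 255.0 / len(masks)
    overlay = np.zeros(masks[0].shape, dtype=float)
    for m in masks:
        overlay += intensity * m.astype(bool)  # paint with transparency
    return overlay > threshold  # keep well-supported pixels only

m1 = np.zeros((4, 4)); m1[:, :2] = 1   # covers columns 0-1
m2 = np.zeros((4, 4)); m2[:, 1:3] = 1  # covers columns 1-2
# Pixels covered by both masks accumulate 255; single coverage gets 127.5.
smoothed = smooth_masks([m1, m2], threshold=200)
```

Pixels supported by only some of the boundaries fall below the threshold, which is what smooths out frame-to-frame jitter.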
lavaflow.log module
wxPython logging.
Defines a root logger which can be imported by scripts using this package, and configurations for color stdout formatting. Currently, modules request loggers locally, but they could import the root logger directly.
lavaflow.masking module
Functions for image masking.
- class lavaflow.masking.ColorMask(center, sensitivity, code=None, output_type='mask')[source]
Bases: object
Class for creating an image mask based on color matching.
Each pixel is converted into the desired color space using cv2.cvtColor based on the color conversion code, and each color channel is checked to see if its value falls within sensitivity of the center over the range [0, 255].
The LAB color space is good for color matching because it corresponds to colors as they are perceived by the human eye: L for perceptual lightness, and a* and b* for the four unique colors of human vision (red, green, blue, and yellow). This means a numerical change in LAB space corresponds to a similar perceived change in color, which makes it easier to choose the center and sensitivity.
- Parameters
center (np.ndarray) – center color (same color space as the image)
sensitivity (np.ndarray) – relative sensitivity (separate for each color channel)
code (int) – color conversion code, e.g. cv2.COLOR_BGR2LAB
output_type (str) – output type for map
- __call__(img)[source]
Map image to image mask or distance map.
- Parameters
img (np.ndarray) – image
- Returns
image mask or distance map
- Return type
obj (np.ndarray)
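The per-channel band check described above can be sketched as follows. This is an illustrative stand-in, not the class itself: the cv2.cvtColor color space conversion step is omitted so the example stays dependency-free, and the function name is hypothetical.

```python
import numpy as np

def color_band_mask(img, center, sensitivity):
    """Boolean mask of pixels within `sensitivity` of `center` in
    every channel, over the range [0, 255]."""
    img = img.astype(np.int16)  # avoid uint8 wraparound when subtracting
    diff = np.abs(img - np.asarray(center))
    # a pixel matches only if ALL channels fall inside the band
    return np.all(diff <= np.asarray(sensitivity), axis=-1)

img = np.array([[[120, 130, 140], [10, 10, 10]]], dtype=np.uint8)
mask = color_band_mask(img, center=[128, 128, 128], sensitivity=[15, 15, 15])
```

A per-channel sensitivity (rather than a single scalar) lets you be tolerant in lightness but strict in hue, which is the main benefit of running the check in LAB space.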
- class lavaflow.masking.MultiColorMask(color_masks, output_type='mask')[source]
Bases: object
Class for creating an image mask based on matching multiple colors.
This class combines outputs from multiple lavaflow.masking.ColorMask instances, either by taking the union of the image masks or the minimum of the distance maps.
- Parameters
color_masks (list) – array of color masks
output_type (str) – output type for map
- Returns
color mask
- Return type
self (ColorMask)
- __call__(img)[source]
Map image to image mask or distance map.
- Parameters
img (np.ndarray) – image
- Returns
image mask or distance map
- Return type
obj (np.ndarray)
lavaflow.pipelines module
Classes for streaming images for processing and modeling.
- class lavaflow.pipelines.Pipeline(config)[source]
Bases: ABC
Generic pipeline class for streaming images for processing and modeling.
- Parameters
config – config dictionary
- class lavaflow.pipelines.VideoStreamPipeline(config)[source]
Bases: Pipeline
Class for streaming frames of video from a file using iterables to consume less memory.
- Parameters
config (dict) – dictionary config, see sparse.optical.flow.example
- consume()[source]
Construct and run pipe for entire video stream and generate output specified in config.
- get_video_pipe(video)[source]
Construct and return pipe with video transformation and processing only.
- Parameters
video (dict) – video config
- Returns
pipe with video transformation and processing only
- Return type
pipe (Pipe)
- preview(file, index)[source]
Construct and run pipe for a single frame of video.
- Parameters
file (str) – video file
index (int) – frame index for generating preview
- Returns
pipe configured to run for a single image only
- Return type
pipe (Pipe)
- visualizations = ['raw', 'raw_transformed', 'raw_processed']
lavaflow.processing module
Functions for image processing.
- class lavaflow.processing.BrightnessAndContrastAdjuster(brightness=0, contrast=0, center=128)[source]
Bases: LinearAdjuster
Class for memory-efficient brightness and contrast adjustment.
- Parameters
brightness (float) – brightness, range [-1, 1]
contrast (float) – contrast, range [-8, 8]
center (float) – center of contrast adjustment, range [0, 255]
- class lavaflow.processing.ColorConverter(code, astype=None)[source]
Bases: object
Class for memory-efficient color space conversion.
- Parameters
code (np.ndarray) – color conversion code
astype (type) – output type conversion (optional)
- class lavaflow.processing.CurveAdjuster(slope=0, shift=0)[source]
Bases: object
Class for memory-efficient curve adjustment.
- Parameters
slope (float) – slope factor, range [-1, 1]
shift (float) – shift factor, range [-1, 1]
- class lavaflow.processing.GammaAdjuster(gamma=1, gain=1)[source]
Bases: LUTAdjuster
Class for memory-efficient gamma adjustment.
- Parameters
gamma (float) – gamma value, range [0, inf)
gain (float) – gain value, range [0, inf)
- class lavaflow.processing.KernelSharpener(s=0)[source]
Bases: object
Class for memory-efficient kernel sharpening.
- Parameters
s (float) – sharpen parameter, range [0, inf)
- class lavaflow.processing.LUTAdjuster(LUT)[source]
Bases: object
Class for memory-efficient look-up table adjustment.
- Parameters
LUT (np.ndarray) – look-up table
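The look-up table approach can be illustrated with a short sketch: the table is built once (here a gamma curve, as in GammaAdjuster, purely for illustration) and applied to every pixel by array indexing, which is what makes it memory- and time-efficient. This is an assumption-laden sketch, not the class's actual code.

```python
import numpy as np

# Build a 256-entry gamma LUT once: out = 255 * gain * (x / 255) ** gamma
gamma, gain = 0.5, 1.0
lut = np.clip(255.0 * gain * (np.arange(256) / 255.0) ** gamma, 0, 255)
lut = lut.astype(np.uint8)

# Apply the LUT to an image via fancy indexing (one lookup per pixel).
img = np.array([[0, 64, 255]], dtype=np.uint8)
adjusted = lut[img]
```

Because the table has only 256 entries, the per-pixel work reduces to an index lookup regardless of how expensive the underlying curve is to evaluate.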
- class lavaflow.processing.LensCorrector(maker, model, lens, focal_length, aperture, distance)[source]
Bases: object
Class for lens correction based on the following:
- metadata extracted using https://exiftool.org/
- lens profiles from https://wilson.bronger.org/lensfun_coverage.html
- Parameters
maker (str) – camera maker, e.g. Canon
model (str) – camera model, e.g. Canon 90D
lens (str) – lens model, e.g. EF-S 18-135mm f/3.5-5.6 IS USM
focal_length (double) – focal length based on camera settings
aperture (double) – aperture based on camera settings
distance (double) – actual focal distance based on camera settings (m)
- class lavaflow.processing.LinearAdjuster(gain=1, bias=0)[source]
Bases: LUTAdjuster
Class for memory-efficient linear adjustment.
- Parameters
gain (float) – gain value, range (-inf, inf)
bias (float) – bias value, range [-255, 255]
- class lavaflow.processing.NonLocalMeansDenoiser(l=0, c=0, template_radius=7, search_radius=21)[source]
Bases: object
Class for memory-efficient non-local means denoising.
- Parameters
l (float) – lightness denoising parameter, range [0, inf)
c (float) – color denoising parameter, range [0, inf)
template_radius (float) – template window radius, range [1, inf)
search_radius (float) – search window radius, range [1, inf)
- class lavaflow.processing.PerspectiveTransformer(src, dst)[source]
Bases: object
Class for perspective transformation using OpenCV.
See cv2.getPerspectiveTransform and cv2.warpPerspective. For example,

perspective_transformer = PerspectiveTransformer(
    [[7, 37], [1517, 2], [1503, 952], [26, 919]],
    [[0, 0], [1920, 0], [1920, 1080], [0, 1080]],
)
- Parameters
src (list|np.array) – list or array of source quadrangle points, shaped (N, 2)
dst (list|np.array) – list or array of destination quadrangle points, shaped (N, 2)
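For intuition, the 3x3 homography that cv2.getPerspectiveTransform returns can be recovered from four point pairs by solving an 8x8 linear system (fixing h33 = 1). This is a sketch of the underlying math, not the package's code path, and the function name is illustrative.

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst (h33 fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h11 x + h12 y + h13) / (h31 x + h32 y + 1), similarly for v,
        # cross-multiplied into two linear equations per point pair.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

# The same quadrangles as the PerspectiveTransformer example above.
src = [[7, 37], [1517, 2], [1503, 952], [26, 919]]
dst = [[0, 0], [1920, 0], [1920, 1080], [0, 1080]]
H = homography_from_4_points(src, dst)

# Mapping a source corner through H lands on the matching destination corner.
p = H @ np.array([7.0, 37.0, 1.0])
p = p[:2] / p[2]
```

In practice one would still call cv2.getPerspectiveTransform and cv2.warpPerspective; the sketch only shows what those calls compute.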
- class lavaflow.processing.Resizer(res)[source]
Bases: object
Class for memory-efficient image resizing.
- Parameters
res (tuple) – target resolution (w, h)
lavaflow.streams module
Classes for creating video streams.
- class lavaflow.streams.AbstractStream[source]
Bases: ABC
Generic stream class (equivalent to an iterator with length).
- abstract __iter__()[source]
Return iterator.
- Returns
iterator
- Return type
self (AbstractStream)
- class lavaflow.streams.VideoStream(file, res=(-1, -1), frame_seek=0, step_size=1, max_steps=-1)[source]
Bases: AbstractStream
Video stream frame iterator.
- Parameters
file (str) – file name
res (tuple) – output frame size written as (w, h)
frame_seek (int) – frame to seek, default: 0 (start of video)
step_size (int) – number of frames to iterate each step
max_steps (int) – total number of frames to capture, default: -1 (end of video)
- __iter__()[source]
Return iterator.
- Returns
video frame iterator
- Return type
self (VideoStream)
- __next__()[source]
Get next frame.
Note that this will automatically stop iteration if the video is removed or becomes unavailable.
- Returns
img (np.ndarray): frame
frame (int): frame index
- get_position()[source]
Get current frame index.
- Returns
current frame index
- Return type
position (int)
- seek(frame_seek=0, step_size=1, max_steps=-1)[source]
Reset video frame iterator back to frame_seek.
- Parameters
frame_seek (int) – frame to seek, default: 0 (start of video)
step_size (int) – number of frames to iterate each step
max_steps (int) – total number of frames to capture, default: -1 (end of video)
lavaflow.tracking module
Classes for feature tracking and optical flow.
- class lavaflow.tracking.FarnebackDenseOpticalFlow(pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)[source]
Bases: object
Class for Farneback dense optical flow.
Parameters are documented in examples on the main page (work in progress).
- Parameters
pyr_scale (float) – scale between layers
levels (int) – number of layers
winsize (int) – averaging window size
iterations (int) – number of iterations algorithm performs at each level
poly_n (int) – size of pixel neighbourhood used for polynomial expansion in each pixel
poly_sigma (float) – standard deviation of Gaussian used for smoothing derivatives
flags (str) – operation flags
- __call__(img1, img2)[source]
Generate flow vector field from source image using Farneback dense optical flow.
- Parameters
img1 (np.ndarray) – source image
img2 (np.ndarray) – target image
- Returns
u (np.ndarray): x component of velocity on the x, y pixel meshgrid
v (np.ndarray): y component of velocity on the x, y pixel meshgrid
- class lavaflow.tracking.FeatureDescriptorMatching(extractor, matcher, ratio, speed_lower_bound, speed_upper_bound)[source]
Bases: SparseFlow
Class for descriptor-based feature matching and tracking.
Parameters are documented in examples on the main page (work in progress).
- Parameters
extractor (cv2.Feature2D) – any class implementing the detect and compute methods defined by cv2::Feature2D
matcher (cv2.DescriptorMatcher) – any class implementing the knnMatch method defined by cv2::DescriptorMatcher
ratio (float) – distance ratio used to filter candidates
speed_lower_bound (float) – speed must be above lower bound to be accepted
speed_upper_bound (float) – speed must be below upper bound to be accepted
- __call__(img1, img2, boundary1=None, boundary2=None)[source]
Compute feature tracking across a pair of images.
- Parameters
img1 (np.ndarray) – source image
img2 (np.ndarray) – target image
boundary1 (np.ndarray) – list of boundary contour points from source image
boundary2 (np.ndarray) – list of boundary contour points from target image
- Returns
p1 (np.ndarray): feature points in source image, shaped (N, 1, 2)
p2 (np.ndarray): feature points in target image, shaped (N, 1, 2)
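The ratio parameter implements the standard distance-ratio filter (Lowe's ratio test) on knnMatch candidates: a match is kept only when its best distance is sufficiently smaller than the second-best. A minimal sketch with made-up distances (the function name is illustrative, not the class's API):

```python
import numpy as np

def ratio_filter(knn_distances, ratio):
    """knn_distances: (N, 2) array of best and second-best descriptor
    distances per feature. Returns indices of matches that pass."""
    keep = []
    for i, (d1, d2) in enumerate(knn_distances):
        if d1 < ratio * d2:  # best match clearly beats the runner-up
            keep.append(i)
    return keep

dists = np.array([[10.0, 50.0],   # distinctive best match -> kept
                  [30.0, 32.0]])  # ambiguous -> rejected
kept = ratio_filter(dists, ratio=0.75)
```

Rejecting ambiguous matches this way trades recall for precision, which matters here because spurious matches would corrupt the interpolated velocity field.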
- class lavaflow.tracking.LucasKanadeSparseOpticalFlow(flow_source_geometry, extractor, winSize, maxLevel, maxIters, epsilon, min_displacement, max_displacement, exclude)[source]
Bases: SparseFlow
Class for Lucas-Kanade sparse optical flow based tracking.
Parameters are documented in examples on the main page (work in progress).
- Parameters
flow_source_geometry (dict) – flow source geometry
extractor (cv2.Feature2D) – any class implementing the detect and compute methods defined by cv2::Feature2D
winSize (int) – size of the search window
maxLevel (int) – number of pyramid levels
maxIters (int) – maximum number of iterations
epsilon (float) – minimum convergence required
min_displacement (float) – minimum displacement required
max_displacement (float) – maximum displacement allowed
exclude (list) – list of points to exclude expressed as (point center, radius)
- __call__(images, boundaries)[source]
Extract and track feature points using Lucas-Kanade sparse optical flow.
- Parameters
images (list) – sequence of images
boundaries (list) – sequence of boundary contour points from each image
- Returns
p1 (np.ndarray): feature points in first image, shaped (N, 1, 2)
p2 (np.ndarray): feature points in final image, shaped (N, 1, 2)
- class lavaflow.tracking.NormalBoundaryFlow(boundary_chord, boundary_point_pixel_spacing, boundary_intersection_lookahead)[source]
Bases: object
Class for boundary-normal based flow tracking.
Parameters are documented in examples on the main page (work in progress).
- Parameters
boundary_chord (int) – number of points to use for determining normal vector to boundary chord
boundary_point_pixel_spacing (int) – approximate pixel spacing between boundary points (used to simplify boundary)
boundary_intersection_lookahead (int) – number of points to look ahead to determine target boundary intersections
- __call__(img1, img2, boundary1=None, boundary2=None)[source]
Compute flow between a pair of boundaries by casting normals from the source boundary to intersections with the target boundary.
- Parameters
img1 (np.ndarray) – source image (not used)
img2 (np.ndarray) – target image (not used)
boundary1 (np.ndarray) – list of boundary contour points from source image
boundary2 (np.ndarray) – list of boundary contour points from target image
- Returns
p1 (np.ndarray): feature points in source image, shaped (N, 1, 2)
p2 (np.ndarray): feature points in target image, shaped (N, 1, 2)
- class lavaflow.tracking.SparseFlow[source]
Bases: ABC
Generic class for tracking sparse flow.
- abstract __call__(images, boundaries)[source]
Extract and track specific feature points from source image.
The images and boundaries lists should have the same length and correspond to an ordered sequence of images. The result points p1 and p2 come from the first and last images, respectively.
- Parameters
images (list) – sequence of images
boundaries (list) – sequence of boundary contour points from each image
- Returns
p1 (np.ndarray): feature points in first image, shaped (N, 1, 2)
p2 (np.ndarray): feature points in final image, shaped (N, 1, 2)
lavaflow.velocity module
Functions for interpolating a velocity field.
- class lavaflow.velocity.DenseInterpolation(x, y, kernel_size=21, sigma=20)[source]
Bases: object
Class for interpolating on a dense velocity grid.
Parameters are documented in examples on the main page (work in progress).
- Parameters
x (np.ndarray) – x coordinates to interpolate (any shape)
y (np.ndarray) – y coordinates to interpolate (any shape)
kernel_size (int) – kernel size for Gaussian smoothing
sigma (double) – sigma for Gaussian smoothing
- __call__(u, v)[source]
Interpolate tracked points.
- Parameters
u (np.ndarray) – x component of velocity on x, y pixel meshgrid
v (np.ndarray) – y component of velocity on x, y pixel meshgrid
- Returns
x (np.ndarray): x meshgrid
y (np.ndarray): y meshgrid
u (np.ndarray): x component of velocity on the x, y meshgrid
v (np.ndarray): y component of velocity on the x, y meshgrid
- class lavaflow.velocity.DenseInterpolationGrid(x, y, w, h, spacing, kernel_size=21, sigma=20)[source]
Bases: DenseInterpolation
Class for interpolating on a regular grid.
Parameters are documented in examples on the main page (work in progress).
- Parameters
x (double) – x coordinate of top left corner of grid
y (double) – y coordinate of top left corner of grid
w (double) – width of grid
h (double) – height of grid
spacing (double) – grid spacing for output velocity grid
kernel_size (int) – kernel size for Gaussian smoothing
sigma (double) – sigma for Gaussian smoothing
- class lavaflow.velocity.DenseInterpolationLine(points, spacing, kernel_size=21, sigma=20)[source]
Bases: DenseInterpolation
Class for interpolating on a line.
Parameters are documented in examples on the main page (work in progress).
- Parameters
points (np.ndarray) – line points [[x, y], [x, y]]
spacing (double) – point spacing for output velocity
kernel_size (int) – kernel size for Gaussian smoothing
sigma (double) – sigma for Gaussian smoothing
- class lavaflow.velocity.SparseDelaunayTriangulation[source]
Bases: object
Class for finding a constrained Delaunay triangulation with an internal and an external boundary.
- __call__(p1, b1, p2, b2)[source]
Interpolate tracked points.
- Parameters
p1 (np.ndarray) – internal points in source image, shaped (N, 1, 2)
b1 (np.ndarray) – boundary points in source image, shaped (N, 1, 2)
p2 (np.ndarray) – internal points in target image, shaped (N, 1, 2)
b2 (np.ndarray) – boundary points in target image, shaped (N, 1, 2)
- Returns
tri (dict): output from triangle.triangulate
internal_boundary (np.ndarray): ordered internal boundary segments
external_boundary (np.ndarray): ordered external boundary segments
s1 (np.ndarray): Steiner points in source image added by triangle.triangulate, shaped (N, 1, 2)
s2 (np.ndarray): Steiner points in target image added by triangle.triangulate, shaped (N, 1, 2)
- class lavaflow.velocity.SparseInterpolation(x, y)[source]
Bases: object
Class for interpolating on arbitrary points.
- Parameters
x (np.ndarray) – x coordinates to interpolate (any shape)
y (np.ndarray) – y coordinates to interpolate (any shape)
- __call__(tri, p1, b1, s1, p2, b2, s2)[source]
Interpolate tracked points.
This works by interpolating the velocities at each vertex of the Delaunay triangulation based on the barycentric coordinates of each point within its corresponding triangle.
- Parameters
tri (dict) – output from triangle.triangulate
p1 (np.ndarray) – internal points in source image, shaped (N, 1, 2)
b1 (np.ndarray) – boundary points in source image, shaped (N, 1, 2)
s1 (np.ndarray) – steiner points in source image added by triangle.triangulate, shaped (N, 1, 2)
p2 (np.ndarray) – internal points in target image, shaped (N, 1, 2)
b2 (np.ndarray) – boundary points in target image, shaped (N, 1, 2)
s2 (np.ndarray) – steiner points in target image added by triangle.triangulate, shaped (N, 1, 2)
- Returns
x (np.ndarray): x meshgrid
y (np.ndarray): y meshgrid
u (np.ndarray): x component of velocity on the x, y meshgrid
v (np.ndarray): y component of velocity on the x, y meshgrid
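The barycentric step can be sketched for a single triangle: a velocity known at each vertex is blended by the query point's barycentric coordinates. This is an illustrative sketch of the math, not the class's implementation; the function name and sample values are made up.

```python
import numpy as np

def barycentric_weights(tri, p):
    """Barycentric coordinates of point p in triangle tri = (a, b, c)."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    # Solve p - a = w1 (b - a) + w2 (c - a) for the edge weights.
    T = np.column_stack([b - a, c - a])
    w1, w2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
    return np.array([1.0 - w1 - w2, w1, w2])

tri = [(0, 0), (2, 0), (0, 2)]
vertex_u = np.array([0.0, 2.0, 4.0])   # x velocity known at each vertex
w = barycentric_weights(tri, p=(0.5, 0.5))
u_interp = w @ vertex_u                # weighted blend of vertex velocities
```

Inside the triangle the weights are non-negative and sum to 1, so the interpolated velocity varies linearly across each triangle of the triangulation.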
- class lavaflow.velocity.SparseInterpolationGrid(x, y, w, h, spacing)[source]
Bases: SparseInterpolation
Class for interpolating on a regular grid.
- Parameters
x (double) – x coordinate of top left corner of grid
y (double) – y coordinate of top left corner of grid
w (double) – width of grid
h (double) – height of grid
spacing (double) – grid spacing for output velocity grid
- class lavaflow.velocity.SparseInterpolationLine(points, spacing)[source]
Bases: SparseInterpolation
Class for interpolating on a line.
- Parameters
points (np.ndarray) – line points [[x, y], [x, y]]
spacing (double) – point spacing for output velocity