captouch module

Basic usage

In a flow3r application you receive a CaptouchState object in each think() cycle. Here’s a simple example:

from st3m.application import Application  # assumed import path

class App(Application):
    def think(self, ins, delta_ms):
        petal_0_is_pressed = ins.captouch.petals[0].pressed

You cannot instantiate this object directly, but for REPL experiments there is a workaround listed below.

class CaptouchState
petals: Tuple[CaptouchPetalState]

State of individual petals.

Contains 10 elements, with the zeroth element being the petal closest to the USB port; from there, the petals follow in clockwise order.

Even indices are top petals, odd indices are bottom petals.

The top petal indices are printed in roman numerals around the flow3r display, with “X” corresponding to 0.
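For example, the even/odd split makes it easy to scan only the top or only the bottom petals (a minimal sketch, inside think()):

# (in think)
for i in range(0, 10, 2):  # even indices: top petals
    if ins.captouch.petals[i].pressed:
        print("top petal", i, "is pressed")
for i in range(1, 10, 2):  # odd indices: bottom petals
    if ins.captouch.petals[i].pressed:
        print("bottom petal", i, "is pressed")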

ticks_us: int

Timestamp of when the captouch data was requested from the backend, i.e. when the think() cycle started. Mostly useful for comparing to the same attribute of PetalLogFrame. Behaves identically to the return value of time.ticks_us() and should only be used with time.ticks_diff() to avoid overflow issues. Overflow occurs after ~10min.
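A sketch of how this could be used to measure the time between two think() cycles; self.last_ticks is our own name and is assumed to be initialized to None in on_enter():

import time

# (in think)
now = ins.captouch.ticks_us
if self.last_ticks is not None:
    # overflow-safe difference in microseconds
    elapsed_us = time.ticks_diff(now, self.last_ticks)
self.last_ticks = now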

class CaptouchPetalState
pressed: bool

True if the petal has been touched during the last think() cycle.

May be affected by captouch.Config.

pos: Optional[complex]

Coordinates where this petal is touched, or None if the petal isn’t touched or positional output is turned off via captouch.Config.

The coordinate system is rotated with the petal’s orientation: The real part corresponds to the axis going from the center of the screen to the center of this petal, the imaginary part is perpendicular to that so that it increases with clockwise motion.

Both real and imaginary parts are centered around 0 and scaled to a [-1..1] range. We try to guarantee that the output can span the full unit circle, but it may also go beyond it.

Some filtering is applied.

May be affected by captouch.Config.

See captouch.PETAL_ROTORS to align the output with the display coordinate system.
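Without any rotation applied, the two components can be read directly as petal-relative coordinates (a minimal sketch, inside think()):

# (in think)
pos = ins.captouch.petals[0].pos
if pos is not None:
    radial = pos.real       # along the screen-center-to-petal axis
    tangential = pos.imag   # perpendicular, increases with clockwise motion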

raw_pos: complex

Similar to .pos, but never None. Will probably return garbage when the petal is not pressed. It is mostly useful for interpolating data between petals. Filtering is still applied.

raw_cap: float

Returns the raw capacitance reading from the petal in arbitrary units. The value kind-of-sort-of corresponds to how much of the pad is covered. Since the footprint of a finger expands when compressed (what a sentence), this could in theory be used as a rough proxy for pressure, but the data quality just doesn’t cut it:

It’s mostly okay when compared not against fixed values but against some sort of floating average; however, it’s not really monotonic, and it doesn’t react until the finger is only a few mm away from the pad, so it’s kinda bad for proximity sensing too. It’s tempting to use it for gating away light touches, but that results in poor performance in some environmental conditions. Test carefully, and best make nothing important depend on it.

Normalized so that “1” corresponds to the upper hysteresis limit of the pressed API.

May be affected by captouch.Config.
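If you want to experiment with the floating-average approach mentioned above anyway, a rough sketch could look like this; self.cap_avg, the smoothing factor and the threshold are made-up illustration values, not tuned recommendations:

# (in think) - experimental, see caveats above
cap = ins.captouch.petals[0].raw_cap
# slow exponential moving average as a baseline
# (assumes self.cap_avg was initialized to 0.0 in on_enter())
self.cap_avg = 0.95 * self.cap_avg + 0.05 * cap
# compare against the baseline rather than against a fixed value
firm_touch = cap > self.cap_avg + 0.2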

log: Tuple[PetalLogFrame]

Raw frame output of the captouch driver. Must be enabled by captouch.Config.

Since micropython and the captouch driver are running asynchronously we’re providing a list of all raw data points collected since the last .think() call.

The lowest indices are the oldest frames, so that you could compile a complete log (or one cropped to arbitrary length) simply by appending new data:

def on_enter(self, vm):
    super().on_enter(vm)
    conf = captouch.Config.default()
    conf.petals[0].logging = True
    conf.apply()
    self.log = list()

def think(self, ins, delta_ms):
    super().think(ins, delta_ms)
    # append new frames to end of log
    self.log += ins.captouch.petals[0].log
    # crop old frames
    self.log = self.log[-100:]
PETAL_ROTORS: Tuple[complex]

Tuple of 10 constants that can be used to rotate the output of the .pos attribute of both CaptouchPetalState and PetalLogFrame to align with the display (x, y) coordinates.

# (in think)
# assumes self.display_coords was initialized to [(0, 0)] * 10 in on_enter()
for x in range(10):
    pos = ins.captouch.petals[x].pos
    if pos is None:
        continue
    pos *= 60 * captouch.PETAL_ROTORS[x]
    self.display_coords[x] = (pos.real, pos.imag)

# (in draw)
for x in range(10):
    ctx.move_to(*self.display_coords[x])
    ctx.text(str(x))
PETAL_ANGLES: Tuple[float]

Tuple of 10 constants that can be used to align with the display (x, y) coordinates. PETAL_ANGLES[x] is equivalent to cmath.phase(PETAL_ROTORS[x]) and PETAL_ROTORS[x] is equivalent to cmath.rect(1, PETAL_ANGLES[x]).
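The relationship can be checked directly (a small sketch; the values agree up to floating point rounding):

import cmath
import captouch

rotor = captouch.PETAL_ROTORS[3]
angle = captouch.PETAL_ANGLES[3]
# the same rotation expressed both ways
rotor_from_angle = cmath.rect(1, angle)
angle_from_rotor = cmath.phase(rotor)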

Speeding things up

The flow3r captouch driver is not the fastest: it takes at least 14ms to generate a full dataset with all channels running. For applications where speed is key it is possible to merge pads into fewer data channels to reduce scanning time. Each petal can be turned off entirely, most can act as a simple button or a 1D slider (all top petals can also do 2D). For example, if you turn off all petals except for 2 and 8 for a “dual joystick” mode, the scan time drops to as little as 2.3ms!

import captouch

class App(Application):
    def __init__(self, app_ctx):
        super().__init__(app_ctx)
        self.captouch_config = captouch.Config.empty()
        # bottom petals are used as buttons, top petals not at all
        # (button mode is not allowed for most top petals, see PetalConfig.mode)
        for petal in range(1, 10, 2):
            self.captouch_config.petals[petal].mode = 1

    def on_enter(self, vm):
        super().on_enter(vm)
        self.captouch_config.apply()
class Config
classmethod empty() -> Config

Initializer method that returns a config with everything disabled. Ideal for ORing the requirements of different components together.

classmethod default() -> Config

Initializer method that returns the default config, same as when entering an application.

classmethod current() -> Config

Initializer method that returns the currently active config.

apply() -> None

Apply this config to the driver.

apply_default() -> None

Convenience method to restore defaults. Same as Config.default().apply(), but mildly faster if you already have a Config around.

petals: Tuple[PetalConfig]

Config of individual petals, indexed as in the CaptouchState object.

class Config.PetalConfig
mode: int

What kind of data should be collected for this petal. Raises ValueError when set to an unallowed value.

0: No data at all. Allowed for all petals.

1: Button Mode: All pads combined, no positional output. Only allowed for bottom petals and petals 4 and 6.

2: 1D: Only radial position is provided. Only allowed for bottom petals and petals 4 and 6.

3: 2D: Full positional output. Only allowed for top petals.

Defaults to the maximum allowed value.

The integer value corresponds to the number of active chip channels. Data rate scales linearly per chip at 0.75ms per channel, plus a noisy overhead of typically 2-4ms. Bottom petals and petal 2 are connected to one chip, the remaining top petals to another.

Note: We discovered last-minute that modes 1 and 2 are not functioning properly for some top petals, so they are currently unavailable. We will try to fix them up in the future. They work fine for petals 4 and 6 due to their lower bulk capacitance, presumably a result of the speaker holes.

logging: bool

Whether or not you want to collect the raw data log. This eats some CPU time proportional to the think() cycle time; use it only when you actually do something with the data.

Default: False

set_min_mode(mode: int) -> None

If the current mode is lower than the argument, it is increased to that value if allowed. If the value is not allowed, it is set to the next-biggest allowed value or, if no such value exists, to the largest allowed value.
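For example, two hypothetical components could merge their requirements into a single empty config before applying it (a sketch; the petal numbers and modes are purely illustrative):

import captouch

conf = captouch.Config.empty()
# component A wants top petal 0 as a full 2D pad
conf.petals[0].set_min_mode(3)
# component B only needs petal 0 as a button; the already higher
# mode is kept, so both requirements are satisfied
conf.petals[0].set_min_mode(1)
# component B also wants bottom petal 1 as a button
conf.petals[1].set_min_mode(1)
conf.apply()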

Gestures

For many common applications we provide widgets that do whatever data processing is needed so you don’t have to implement everything from scratch, see st3m.ui.widgets. If whatever you want is already in there, we recommend using these widgets, as future performance improvements will then directly benefit your application.

If you do want to do your own signal processing, you will probably want to use the logging feature: the positional data is fairly imperfect already, and missing frames or having to detect duplicates doesn’t make it better; also, the general-purpose filtering on the “primitive” positional output may be an issue for fast motion detection. Using the unprocessed log doesn’t make postprocessing easy, but at least you get the best data quality the driver can offer.

In order to use the logging feature effectively we also provide a PetalLog class that implements fast time-based cropping and data processing. This could all be done in raw python too, but e.g. for linear regression the C implementation runs around 40 times faster and creates fewer intermediate objects, so that garbage collection triggers less often. This is particularly important if the captouch driver is configured to run a specific petal very fast.

class PetalLogFrame
pressed: bool

Identical to pressed of CaptouchPetalState.

pos: Optional[complex]

Identical to pos of CaptouchPetalState but without any filtering.

raw_pos: complex

Identical to raw_pos of CaptouchPetalState but without any filtering.

raw_cap: float

Identical to raw_cap of CaptouchPetalState.

mode: int

Config mode setting that was used for recording the frame (see captouch.Config).

ticks_us: int

Timestamp that reflects the approximate time at which the data was captured (to be exact, when the I2C transmission has completed). Behaves identically to the return value of time.ticks_us() and should only be used with time.ticks_diff() to avoid overflow issues. Overflow occurs after ~10min.

class PetalLog
frames: List[PetalLogFrame]

List of PetalLogFrames. May be manipulated or replaced by the user. We rely on the binary structure of the micropython list as well as of PetalLogFrame, so duck typing may result in a TypeError when the other attributes and methods of this class are used.

append(frame: PetalLogFrame):

Appends frame to .frames. There’s a performance benefit if .frames is only ever modified through this method, .crop() and .clear().

crop(index: Optional[int]) -> int

Crops the oldest elements in .frames in-place and returns the number of cropped frames. The index parameter behaves slice-like, equivalent to .frames = .frames[index:], i.e. positive values remove that number of oldest frames, negative values limit the list to at most -index frames, and None does nothing. Typically used together with index_offset_ms() to keep the length of .frames in check.

clear()

Clears .frames.

length() -> int

Returns len(.frames) but slightly faster.

length_ms(start: Optional[int] = None, stop: Optional[int] = None, /) -> float

Returns difference in timestamp between newest and oldest frame in milliseconds or 0 if .frames is empty. The optional start and stop parameters delimit which slice of .frames is used for computation, equivalent to .frames[start:stop]. Negative values behave as expected.

index_offset_ms(index: int, min_offset_ms: float, /) -> Optional[int]

Returns the index of the frame that is at least min_offset_ms newer (or older for negative min_offset_ms) than the frame at index, or None if no such frame exists. Negative index values are allowed and work as expected, e.g. index = -1 indicates the newest frame. Will raise IndexError if the index is out of range.

average(start: Optional[int] = None, stop: Optional[int] = None, /)

Returns the average position of elements in .frames. Will return None if no frames are available. The optional start and stop parameters delimit which slice of .frames is used for computation, equivalent to .frames[start:stop]. Negative values behave as expected.

slope_per_ms(start: Optional[int] = None, stop: Optional[int] = None, /)

Returns the ordinary least squares linear regression slope of the position of elements in .frames. Uses timestamp and disregards order of .frames. Will return None if less than 2 frames are available or all timestamps are equal. The optional start and stop parameters delimit which slice of .frames is used for computation, equivalent to .frames[start:stop]. Negative values behave as expected.
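Here’s a rough sketch of how these pieces can work together to estimate swipe velocity; the 250ms window is an arbitrary illustration value, and we assume PetalLog() constructs an empty log:

# (in on_enter, after enabling logging for petal 0 via captouch.Config)
self.petal_log = captouch.PetalLog()  # assumed to construct an empty log

# (in think)
for frame in ins.captouch.petals[0].log:
    self.petal_log.append(frame)
if self.petal_log.length():
    # drop everything more than ~250ms older than the newest frame
    self.petal_log.crop(self.petal_log.index_offset_ms(-1, -250))
    # positional change per millisecond (None if fewer than 2 frames)
    velocity = self.petal_log.slope_per_ms()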

The nitty gritty

The flow3r captouch setup is not as good as a smartphone touchscreen. While a typical modern touchscreen receives data from a fine grid of wire intersections, flow3r just has 2 pads per bottom petal and 3 per top petal. Here’s an illustration:

../../_images/captouch_petals.png

On a grid-type touch device you can infer rough position even with rather high noise levels, as long as a “high” and a “low” for each grid point is roughly represented. On a device like flow3r we unfortunately do not have this luxury. This leads to higher noise sensitivity and some other unexpected behaviors that limit how captouch can be used:

Liftoff artifacts

In general, the positional output depends on pressure, finger size and environmental factors. For example, if you have a USB cable connected to the USB-C port and put it in your pants pocket without connecting it to anything, your finger will result in a different excitation than another person’s finger touching a different petal. This is not a super practical scenario, but people have observed effects like this when flow3r has been on different surfaces (i.e. tables, couches). We tried our best to suppress these side effects in the .pressed and .pos outputs, but .raw_cap for example is heavily affected by them and there’s little we can do about it.

A more practical side effect is that if you release a petal, the positional output will momentarily drift. This is bad for swipe gesture recognition, as it can easily be misread as a swipe. You might think that the .raw_cap channel may help with suppressing this, but since .raw_cap also changes a lot during motion without liftoff, a trivial algorithm would suppress valid swipes. The current implementation of the Scroller widget does not use .raw_cap at all, since any math we could come up with at reasonable effort was situational at best, but typically detrimental to the feel.

These liftoff artifacts (or lifton, for the counterpart at the beginning of a gesture) are a nuisance to many widgets in different forms. In general, we found it to be the best approach to ignore 20ms of the positional data from the beginning and/or end of each touch, depending on the use case. This pattern was found to be so common that the PetalLog class has in great part been designed around facilitating the implementation of such rejection algorithms; see the sketch below.
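As a rough sketch of such a rejection, assuming you keep a PetalLog of the current touch (as shown earlier) and want to discard the first 20ms of it before averaging:

# skip frames from the first 20ms of the touch before averaging
start = self.petal_log.index_offset_ms(0, 20)
if start is not None:
    stable_pos = self.petal_log.average(start)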

Some users may instinctively use slow liftoffs in order to make sure they don’t accidentally introduce a motion, erroneously attributing these artifacts to their own performance rather than to a shortcoming of the hardware. This is unfortunate, as these slow liftoffs are much harder to detect. (We did some testing with .raw_cap but found no universally applicable pattern; there often is a visible kink in the data, but it often occurs later than the artifacts. If you investigate options like this, make sure to exclude “red herrings” - we wasted a good few hours that could’ve been prevented by plotting all the data.)

The hardware is out there in the world, the best we can do at this point is to accept its performance, explain it to the user and then be consistent - if fast liftoffs are the most consistent way to work around these issues, we should go for them, even if for some they may be counterintuitive.

Data rates

As a rule of thumb, all (even) top petals are hooked up to one chip, all (odd) bottom petals to another, except for petal 2, which is connected to the “bottom” chip. This means for example that if you disable all bottom petals, petal 2 receives data much faster than the other top petals.

Generally, each data channel that you collect (their number being the integer value of .mode for each petal) takes about 0.75ms; however, due to the asynchronous peripheral protocol we typically run a bit slower than that, so expect the full cycle to take 2-3ms on top. Higher priority tasks (audio rendering, WiFi) may make this worse. Also, if the bottom chip is fully utilized (13 datapoints: 2 from each bottom petal, 3 from petal 2), there is an additional penalty resulting in a spin time of about 14ms.

The PetalLog class (especially the index_offset_ms() method) is specifically designed to help with dealing with these different data rates. Making widgets that feel the same-ish with different driver configurations is difficult: we’re trying hard to make the provided widget library perform satisfactorily in all configurations, but it is a time consuming task. Of course, if you write an application, you only need to consider the driver configuration(s) that you actually use. It is still a good idea to ask yourself whether a given piece of data processing is supposed to occur in the time domain (for example, detecting motion in the last 100ms) or the index domain (for example, rejecting noise by averaging 4 samples).

There is one caveat: if you do “hardcode” the behavior of a widget to a specific driver configuration, you should take care to set up the driver configuration so that all petals which use that widget actually run at the same expected data rate (i.e., the same number of active channels on their chip). Most commonly this affects petal 2 due to its irregular connection. For example, the Violin application, which extracts rubbing motion from all top petals, activates bottom petal channels that it does not use in order to make sure that petal 2 runs at the same data rate as the other top petals; a sketch of this idea follows below. Feel free to not use the widget auto-configuration at all and create your own manually, or to modify the autogenerated one after it has been created. It is meant as a mere helper; you may find reasons to ignore or enhance it at times.
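A sketch of that kind of padding (the exact petals and modes are purely illustrative, not what the Violin application actually does; the goal is simply a similar number of active channels on both chips, here 12 each):

import captouch

conf = captouch.Config.empty()
# all top petals in full 2D mode
# -> top chip: petals 0, 4, 6, 8 at 3 channels each = 12 channels
for petal in range(0, 10, 2):
    conf.petals[petal].mode = 3
# pad the bottom chip with channels we don't actually read so that
# petal 2 (3 channels, bottom chip) arrives at a similar rate:
# 4 bottom petals at 2 channels + 1 at 1 channel + petal 2 = 12 channels
for petal in (1, 3, 5, 7):
    conf.petals[petal].mode = 2
conf.petals[9].mode = 1
conf.apply()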

You might feel tempted to dynamically switch configurations, for example to run petals as buttons very fast and only enable positional output once they are touched. This would be great in theory and make many applications a bit snappier; however, the chips exhibit strange undocumented glitches when configurations are changed in certain ways. Our approach to configuration changes at this point is to try to guarantee the validity of all datasets that you receive, but since these glitches are rare and difficult to track down, we are overshooting and throwing away more data than needed. Changing the configuration currently results in 3 datasets being thrown away, leaving a significant (~50ms typ.) gap in the data logs. We may be able to improve on this for some specific transition types (e.g., when the channel number remains constant), but it is unclear whether we will ever find the effort to implement this justifiable.

Miscellaneous quirks

  • The top petals have quite a large deadzone near the outer tip.

  • The top petals like to “zig zag” around the center. For 1D value input the bottom petals are plain better.

  • The bottom petals are less noisy. To compensate, the top petals use stronger filtering in the non-logged positional output, making them a bit slower.

  • Faster spin times do not only affect the log but also the built-in filters on the non-logged outputs, making especially the top petals much more responsive.

  • .raw_cap is not monotonic with respect to how much of the petal you cover: In fact, if you cover an entire top petal with multiple flat fingers, it fairly consistently outputs lower values compared to the flat of the thumb. The causes for this behavior are unknown.

Annex 1: Basics of complex numbers

You may have noticed that the positional output is complex-valued. We find that it enables very concise 2D operations, but not everyone is familiar with them. If you don’t wanna deal with that at all and would rather use traditional coordinates, you can simply convert the value to an x-y tuple like so:

# create complex number with real part 1 and imaginary part 3
pos = complex(1, 3)
# transform it into an x-y tuple
tuple_pos = (pos.real, pos.imag)

If you do want to use them directly however, here’s some basics:

Typically we think of complex numbers as vector-like objects with two common representations: above, we expressed them by their real and imaginary components, similar to how traditional coordinates would use x and y components. Alternatively, we can express them as an angle and a length, as shown in the graphic below. Much of the magic of complex coordinate systems lies in the ability to seamlessly jump between those two representations.

Illustration of an imaginary number on a cartesian plane with an arrow from origin to the number. The length of the arrow and the angle between x axis and arrow are marked.

by Kan8eDie / CC BY-SA 3.0 Unported

Above, we have created a complex number by specifying the real and imaginary component. Let’s create one in the “circular” representation instead and convert back and forth a little:

import cmath
import math

# create number with angle of 45 degrees (math.tau / 8 in radians) and length of 2:
pos = cmath.rect(2, math.tau / 8)

# as before, we can look at the x-y representation via the real and imaginary attributes:
pos_x = pos.real
pos_y = pos.imag

# we can look at the angular representation with standard library functions:
# get length, in this case 2
length = abs(pos)
# get angle, in this case math.tau / 8
angle = cmath.phase(pos)

Let’s manipulate those numbers a little. For starters, let’s look at translation and scaling. This is fairly straightforward and doesn’t rely on the “angular” representation at all:

# make another number
offset = complex(2, 4)
# alternative notation: a complex literal; appending the imaginary unit "j" makes that part imaginary:
offset = 2 + 4j

# translate by 2 in the real direction and 4 in the imaginary direction
pos += offset

# scale both real and imaginary part by 2
pos *= 2

This is not very exciting, so let’s look at a cooler trick: multiplying two complex numbers adds their angles together, which can be used for rotation. Of course, this can be combined with scaling in the same operation; the scaling factor is simply the length of the complex number you multiply with.

# create number with angle of 30 degrees (=360 / 12) and length of 0.1:
rotator = cmath.rect(0.1, math.tau / 12)

# save angle for future reference
prev_pos_angle = cmath.phase(pos)
prev_pos_length = abs(pos)

# apply the rotation and scaling
pos *= rotator

# check how much angle has changed: (angle_change % math.tau) equals* math.tau / 12
angle_change = cmath.phase(pos) - prev_pos_angle

# check how much the length has changed: length_change equals* 0.1:
length_change = abs(pos)/prev_pos_length

# *: plus minus floating point rounding errors

Division works as with reals in that it undoes multiplication: it scales by the inverse (1/length) and rotates by the same angle but in the other direction. Of course, as with reals, multiplying by 0 destroys information, so dividing by 0 is impossible.

# complex numbers are nontruthy if both real and imaginary part are 0, else truthy
if rotator:
    # we can undo rotation and scaling by dividing:
    pos /= rotator
    # this operation is slower than multiplication, but we can cache
    # the inverse to make applying it fast:
    antirotator = 1/rotator
    # pos remains unchanged plus minus floating point rounding errors:
    pos = (pos * rotator) * antirotator

For rotating around a point other than the origin, simply translate and de-translate before and after the rotation:

pos -= offset
pos *= rotator
pos += offset

As a practical example, here’s how to set a bright RGB color from a petal position:

We use a HSV representation because it is similarly circular, with hue being an angle and saturation being the distance from the (white) center. Notably, when saturation is 0, the value of hue doesn’t matter. The final parameter, value, is fixed at “1” so we always get a bright color.

Note: A Slider widget would do a better job at preventing artifacts but let’s keep things simple.

# (in app.think())
petal = ins.captouch.petals[0]
if petal.pos is not None:
    # angle of the position vector corresponds to hue
    hue = cmath.phase(petal.pos)

    # length of the position vector corresponds to saturation
    sat = abs(petal.pos)
    # length can be greater than 1 so we need to limit it
    # (but it is guaranteed to be able to reach 1 at any angle)
    sat = min(sat, 1)

    # max brightness always
    val = 1

    # transform to RGB because the LED driver uses that
    rgb_col = st3m.ui.colours.hsv_to_rgb(hue, sat, val)
    # apply to all LEDs
    leds.set_all_rgb(*rgb_col)
    leds.update()

Annex 2: REPL workaround

We’ve cornered ourselves there a little: some useful features of the captouch driver are synchronized to the think() cycle, but many early applications don’t use the provided CaptouchState and instead create their own via captouch.read() (legacy, don’t do this in new applications please). If this function were to trigger a reset of the .pressed attribute, half of the data would be thrown away and it would be easy to miss button presses. .log would drop frames in a similar manner. You could do some sort of lazy evaluation of think()’s object, but that might just result in more subtle bugs if users aren’t careful; we’d rather break loudly :D. Instead, the OS uses a special trigger function at the beginning of each think(). To construct a proper CaptouchState in the REPL we must call this function manually. Don’t ever do it in applications tho, really.

import sys_captouch # in REPL only, never in applications!
sys_captouch.refresh_events() # your app will break in subtle and annoying ways
captouch_state = sys_captouch.read()

Annex 3: Legacy API

class CaptouchPetalState
position: Tuple[int, int]

Similar to .raw_pos, but not normalized. The first element corresponds to the real part, the second to the imaginary part. For top petals it is about 35000 * .raw_pos, for bottom petals about 25000 * .raw_pos + 5000 (note that the addition only affects the real part; the imaginary part is always 0 for bottom petals).
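Based on the scaling above, an approximate conversion back to the normalized .raw_pos could look like this (a sketch; the constants are the rough values quoted here, not an exact calibration):

# sketch: approximate the normalized .raw_pos from the legacy .position
def legacy_to_raw_pos(petal_index, position):
    x, y = position
    if petal_index % 2 == 0:  # top petal
        return complex(x, y) / 35000
    # bottom petal: imaginary part is always 0
    return complex(x - 5000, y) / 25000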

pressure: int

Similar to .raw_cap, but not normalized. Depending on firmware version, roughly 8000 * .raw_cap, but may or may not be always 0 if the petal is not pressed.

read() -> CaptouchState

Reads the current captouch state from hardware and returns a snapshot in time. The .pressed and .log attributes are broken in the REPL.

Typically you’d want to use the captouch data provided by think(), so for application purposes this method has been replaced with nothing. See the workaround above for the reasoning.

What if you do need the captouch state outside of think() though? Well, chances are you don’t, it just appears convenient: we’ve seen this pattern a few times where think() requires a previous state and the first such previous state is generated in __init__(), but this is an anti-pattern. Instead, set the previous state to None in on_enter() and handle that case in think(), as sketched below. The common consequence of doing otherwise is that after exiting and re-entering an application the previous state is very stale, which can lead to unintended behavior. Dedicated “first think” functionality really is the way to go in these cases.
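A minimal sketch of that pattern; self.prev_pressed is our own name:

def on_enter(self, vm):
    super().on_enter(vm)
    # no stale data from a previous visit to this app
    self.prev_pressed = None

def think(self, ins, delta_ms):
    super().think(ins, delta_ms)
    pressed = ins.captouch.petals[0].pressed
    if self.prev_pressed is not None:
        just_pressed = pressed and not self.prev_pressed
        # ... react to just_pressed here ...
    self.prev_pressed = pressed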

Some example applications that ship with flow3r unfortunately use this pattern, and we should really clean that up, but we didn’t have time for this release yet. Apologies, IOU, will totally get around to it soon.