The default decoder would take the micro-unit value of the old
`struct sensor_value` and multiply it by INT32_MAX. This could, at
times, overflow the intermediate int64_t, which caused bugs such as
when -7952 was used (-7952000000 * INT32_MAX < INT64_MIN). Instead,
the new math converts:
- `value_u * INT32_MAX / ((1 << header->shift) * 1000000)`
into a bitmap:
- `sample.val1` consumes the upper `N` bits
- `sample.val2 * BIT(32 - N) / 1000000` consumes the lower `32-N`
bits
This both improves the accuracy and avoids the overflow, since
`shift` is guaranteed to be between 0 and 31.
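
As a concrete illustration, the split might look like the sketch
below. It is a minimal sketch (not the exact in-tree decoder code),
assuming the q31 output represents value / 2^shift and that
|val1| < 2^shift:

```c
#include <stdint.h>

/* Hedged sketch of the overflow-free conversion described above.
 *
 * Old math (could overflow int64_t):
 *   value_u * INT32_MAX / ((1 << shift) * 1000000)
 * with value_u = val1 * 1000000 + val2, e.g. value_u = -7952000000.
 */
static int32_t micro_to_q31(int32_t val1, int32_t val2, int8_t shift)
{
	/* Integer part fills the upper bits: val1 * 2^(31 - shift).
	 * Safe because |val1| < 2^shift, so the product fits in 32 bits.
	 */
	int64_t q = (int64_t)val1 * ((int64_t)1 << (31 - shift));

	/* Micro part fills the lower bits. |val2| < 1000000, so
	 * val2 * 2^(31 - shift) stays far below INT64_MAX even when
	 * shift == 0.
	 */
	q += (int64_t)val2 * ((int64_t)1 << (31 - shift)) / 1000000;

	return (int32_t)q;
}
```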
Signed-off-by: Yuval Peress <peress@google.com>
Update the sensor shell logic to use the new sensor_read() APIs, and
make triggers an option of the sensor_shell sample (this avoids a
trigger stealing the interrupt status from one-shot reads).
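
For context, a one-shot read through the new API might look roughly
like the sketch below. It is hedged: the `SENSOR_DT_READ_IODEV()`,
`RTIO_DEFINE_WITH_MEMPOOL()`, and `sensor_read()` names come from
this series, but the exact signatures and the `accel0` node label are
assumptions, not a copy of the sensor_shell code.

```c
#include <zephyr/device.h>
#include <zephyr/drivers/sensor.h>
#include <zephyr/rtio/rtio.h>

/* Hypothetical accelerometer node label; read accel X/Y/Z together. */
SENSOR_DT_READ_IODEV(accel_iodev, DT_NODELABEL(accel0),
		     SENSOR_CHAN_ACCEL_XYZ);

/* RTIO context with a small mempool backing the read buffers. */
RTIO_DEFINE_WITH_MEMPOOL(sensor_ctx, 4, 4, 8, 64, 4);

static int one_shot_read(void)
{
	/* Queue a single read; no trigger is involved, so the interrupt
	 * status stays with this request. Completions are drained by a
	 * processing loop such as the one sketched for the next commit.
	 */
	return sensor_read(&accel_iodev, &sensor_ctx, NULL);
}
```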
Signed-off-by: Yuval Peress <peress@google.com>
Add a new async API based on the RTIO subsystem. This new API provides:
1. Sampling configs created by the user (telling the sensor which
channels to sample together).
2. Asynchronous sampling, which gives greater control over the data
processing priority.
3. Full backwards compatibility: no driver changes are needed for
functionality (they are only needed to improve performance).
4. Helper functions for the processing loop (see the sketch below).
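
As a rough illustration of points 2 and 4, the processing side might
look like the following. This is a hedged sketch, not in-tree code:
`sensor_processing_with_callback()`, `sensor_get_decoder()`, and the
callback signature are assumptions based on this series, and `accel0`
/ `sensor_ctx` are the hypothetical names from the previous sketch.

```c
#include <zephyr/device.h>
#include <zephyr/drivers/sensor.h>
#include <zephyr/rtio/rtio.h>

/* Same RTIO context the reads were submitted to (repeated here so the
 * sketch stands on its own).
 */
RTIO_DEFINE_WITH_MEMPOOL(sensor_ctx, 4, 4, 8, 64, 4);

static void processing_cb(int result, uint8_t *buf, uint32_t buf_len,
			  void *userdata)
{
	const struct device *dev = DEVICE_DT_GET(DT_NODELABEL(accel0));
	const struct sensor_decoder_api *decoder;

	ARG_UNUSED(buf_len);
	ARG_UNUSED(userdata);

	if (result < 0 || sensor_get_decoder(dev, &decoder) != 0) {
		return;
	}

	/* Walk the raw buffer with the decoder's callbacks here and
	 * convert each frame to fixed-point samples for the application.
	 */
}

/* Runs on a thread whose priority the application chooses, which is
 * where the greater control over data processing priority comes from.
 */
static void processing_thread(void *a, void *b, void *c)
{
	ARG_UNUSED(a);
	ARG_UNUSED(b);
	ARG_UNUSED(c);

	while (true) {
		/* Blocks until a completed read is available, then hands
		 * the raw buffer to the callback above.
		 */
		sensor_processing_with_callback(&sensor_ctx, processing_cb);
	}
}
```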
Signed-off-by: Yuval Peress <peress@google.com>