In this tutorial, we will use machine learning to build a gesture recognition system that runs on a tiny microcontroller, the RP2040.

This tutorial has two parts. In the first, we will explore the Raspberry Pi Pico, its main components, and how to program it using MicroPython and its C/C++ SDK (Software Development Kit).

Next, we will use the Pico to capture gesture data for training a TinyML model in Edge Impulse Studio. Once developed and tested, the model will be deployed to the same device for real inference. Here is a quick view of the final project:

If you are familiar with the Pico's basic programming, please feel free to jump to Part 2, where the real fun begins!

PART 1: Exploring the Raspberry Pi Pico and its SDK

The Raspberry Pi Pico

Raspberry Pi Pico is a low-cost, high-performance microcontroller board with flexible digital interfaces. Key features include:

  • RP2040 microcontroller chip designed by Raspberry Pi in the United Kingdom
  • Dual-core Arm Cortex M0+ processor, flexible clock running up to 133 MHz
  • 264KB of SRAM, and 2MB of on-board Flash memory
  • USB 1.1 with device and host support
  • Low-power sleep and dormant modes
  • 26 × multi-function GPIO pins
  • 2 × SPI, 2 × I2C, 2 × UART, 3 × 12-bit ADC, 16 × controllable PWM channels
  • Accurate clock and timer on-chip
  • Temperature sensor
  • Accelerated floating-point libraries on-chip
  • 8 × Programmable I/O (PIO) state machines for custom peripheral support

Pico-R3-SDK11-Pinout.png

An interesting feature is drag-and-drop programming using mass storage over USB.

Although it is straightforward to upload a program to the Pico, one thing is missing: a reset push-button. Without it, the USB cable must be unplugged and replugged every time new code is uploaded, which can eventually damage the Pico's USB connector. Fortunately, pin 30 (RUN) is available and can be used for this function. Just connect a normally-open push-button between this pin and ground. Now, any time a program should be uploaded to the Pico, press both buttons at the same time.

Buttons.png

Detailed information about the RP2040 MCU, the heart of the Pico, can be found in this documentation link.

Programming the Pico

There are two ways of programming a Pico: MicroPython and C/C++.

Programming using MicroPython

For initial tests (and for beginners), running MicroPython on the Pico is extremely easy. When the Pico is plugged into your computer (via USB) for the first time with the BOOTSEL button pressed (or after pressing Reset and BOOTSEL together, once connected), a drive named RPI-RP2 should appear, as a normal mass storage device (just like a regular pen drive).

PICO-Boot.png

Clicking on INDEX.HTM takes you to a page where you can get started with MicroPython.

Load_uP.png

Follow the instructions and download the UF2 file, which easily installs the MicroPython interpreter on the Pico. Having the UF2 file, just drag it to the RPI-RP2 drive, and that is it! The Pico is ready to run Python scripts. For MicroPython, I suggest Thonny as the IDE of choice, since it is possible to write Python commands directly in the shell, as below, or to develop a script in the editor:

Thonny.png

Confirm that the interpreter is configured for the Pico by clicking on it (lower-right corner of the IDE) to see the options.

It is also possible to write full Python scripts, like the blink example below:

blink.png

With the Run button (marked in the figure above), the script is uploaded to the Pico. The internal LED (pin 25) will flash 10 times, printing the loop number on the shell.

Now try to read the internal temperature sensor, creating a log file to monitor the Pico's internal temperature. In this example, the temp.txt log file is stored on the Pico itself, so pay attention to how much memory space you will need.
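If you want to check the math behind the readings, the RP2040 datasheet gives the temperature conversion as T = 27 - (V - 0.706) / 0.001721, where V is the voltage on ADC channel 4. Here is a plain-Python sketch of that conversion (the function name is mine; on the Pico itself, the raw value would come from machine.ADC(4).read_u16()):

```python
# Conversion for the Pico's internal temperature sensor (ADC channel 4).
# Formula from the RP2040 datasheet: T = 27 - (V - 0.706) / 0.001721
def raw_u16_to_celsius(raw):
    volts = raw * 3.3 / 65535  # read_u16() scales the 12-bit ADC to 16 bits
    return 27 - (volts - 0.706) / 0.001721

# At roughly 27 °C the sensor outputs about 0.706 V:
print(round(raw_u16_to_celsius(int(0.706 / 3.3 * 65535))))  # → 27
```

Note that the sensor's slope is negative: a higher voltage means a lower temperature.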

read_temp.png

If you are new to MicroPython, the Raspberry Pi Foundation put together an excellent book, Get Started with MicroPython on Raspberry Pi Pico (free as a PDF), that will teach you all the steps of physical computing using the Pico and MicroPython.

Programming using C/C++

MicroPython is good for initial exploration and for learning electronics, but it is important to use the C/C++ language for real embedded projects. For that, it is necessary to understand the C/C++ SDK.

The RPi Foundation put together great documentation. The first document, Getting Started with the Raspberry Pi Pico, explains how to set up your hardware, IDE/environment, and how to build and debug software for the Raspberry Pi Pico (and other RP2040-based devices).

The second document, Raspberry Pi Pico C/C++ SDK, explores programming using the SDK with advanced features and gives complete API documentation.

SDK installation on Linux

Install the tools (CMake and the GCC Arm toolchain):

sudo apt update
sudo apt install git cmake gcc-arm-none-eabi libnewlib-arm-none-eabi build-essential

Create a folder where the projects will be developed:

cd ~/
mkdir pico
cd pico

Clone the SDK repository:

git clone -b master https://github.com/raspberrypi/pico-sdk.git

Go to the pico-sdk folder and update the submodules:

cd pico-sdk
git submodule update --init

Return to the pico folder:

cd ..

SDK installation on macOS

Install the toolchain:

brew install cmake
brew tap ArmMbed/homebrew-formulae
brew install arm-none-eabi-gcc

Install the SDK

cd ~/
mkdir pico
cd pico
git clone -b master https://github.com/raspberrypi/pico-sdk.git
cd pico-sdk
git submodule update --init
cd ..

At this point, you have everything set up to create an embedded project using C/C++.

Creating a Blink Project in C/C++

It is a good idea to download the examples created specifically for the Pico. They provide a good starting point for handling the hardware and libraries.

git clone -b master https://github.com/raspberrypi/pico-examples.git

The examples also include a blink code, but let's build a complete one from scratch. First, create a folder where your project will be located (under /pico/, at the same level as your /pico-sdk):

cd ~/
cd pico
mkdir blink
cd blink
mkdir build

Note that we also created a sub-folder named build. This folder will receive the final compiled code to be uploaded to the Pico. In the project folder (in this case, blink), we must always have three files:

  • blink.c (the main C code)
  • CMakeLists.txt (which tells the SDK how to turn the C file into a binary application for an RP2040-based microcontroller board)
  • pico_sdk_import.cmake (which helps locate the SDK)

Start by copying pico_sdk_import.cmake into the project folder:

cp ../pico-sdk/external/pico_sdk_import.cmake .

For blink.c (the C source file) and CMakeLists.txt, use whatever text editor you prefer, such as Eclipse, VS Code, or Geany. Let's look at CMakeLists.txt:

cmake_minimum_required(VERSION 3.12)

project(app_blink_project)

include(pico_sdk_import.cmake)
pico_sdk_init()

add_executable(app_blink
    blink.c
)

pico_add_extra_outputs(app_blink)
target_link_libraries(app_blink pico_stdlib)

Note that we will create an executable named app_blink, based on the code in blink.c. Now, the source code, blink.c:

/**
 * Pico - RP2040
 * Blink Internal LED
 */

#include "pico/stdlib.h"

const uint LED_PIN = 25;

int main() {

    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);
    gpio_put(LED_PIN, 0);
    
    while (true) {
        gpio_put(LED_PIN, 1);
        sleep_ms(250);
        gpio_put(LED_PIN, 0);
        sleep_ms(250);
    }
}

At the top of the C file, we include a header called pico/stdlib.h. This is an umbrella header that pulls in some other commonly used headers. The ones needed here are hardware/gpio.h, which is used for accessing the general-purpose IOs on RP2040 (the gpio_xxx functions here), and pico/time.h, which contains, among other things, the sleep_ms function.

Broadly speaking, a library whose name starts with pico provides high-level APIs and concepts or aggregates smaller interfaces; a name beginning with hardware indicates a thinner abstraction between your code and RP2040 on-chip hardware.

So, using mainly the hardware_gpio and pico_time libraries, this C program will blink an LED connected to GPIO25 on and off, twice per second, forever (or at least until unplugged).

Great! At this point, your project folder should contain 3 files and one sub-folder (build):

pico/
├── blink/
│   ├── blink.c
│   ├── CMakeLists.txt  
│   ├── pico_sdk_import.cmake
│   └── build/
│
├── pico-sdk/
│   ├── cmake
│   └── external
│   └── ...

Now, go to the build folder, export the environment variable, and run cmake:

cd build
export PICO_SDK_PATH=../../pico-sdk
cmake ..

The last step is to compile the project:

make -j4

In the build folder, several files are generated, including app_blink.uf2, the executable file.

Press BOOTSEL and Reset to open the RPI-RP2 window and drag the compiled project file app_blink.uf2 to that folder.

run_app.png

You can also use cp on the command line instead of dragging the file.

Although all these steps may seem complicated, once your project environment is set up, any change to the project only requires recompiling the new code with make or make -j4 (which uses all 4 CPU cores).

PART 2: TinyML – Motion Recognition Project using the Pico

The idea of this project is to use the Pico to classify some human-made gestures: “up-down,” “left-right,” and “circle.” This classification will be done 100% offline, at the MCU level. In other words, we will be doing “embedded machine learning,” also known as TinyML.

As explained in the Edge Impulse documentation (What is embedded ML, anyway?), recent advances in microprocessor architecture and algorithm design have made it possible to run sophisticated machine learning workloads on even the smallest microcontrollers (in our case, the RP2040).

Depending basically on hardware capacity and memory size, different types of MCUs and applications can be used in the TinyML arena, as shown in the chart below.

HW-Mem.png

Our Pico, based on the Arm Cortex-M0+, is more than suitable to perform sensor classification, as we will do in this project.

The Machine Learning Workflow

So far, we have defined the first phase of the project: its goal (gesture classification). The workflow below shows all the remaining phases, from data collection on our Pico to the final inference and evaluation back on the tiny device, passing through the actual model development done in Edge Impulse Studio in the cloud.

ML_worflow.png

Collect a dataset

TinyML brings machine intelligence right next to the physical world, which means close to the sensors. So, the first thing to do is capture data that characterizes those gestures. For that, we will use a simple 3-axis accelerometer.

The Sensor

The sensor used, the MMA7361L, is a three-axis analog accelerometer that requires very little power and has a g-select input that switches the accelerometer between ±1.5g and ±6g measurement ranges. Other features include a sleep mode, signal conditioning, a 1-pole low-pass filter, temperature compensation, self-test, and 0g-detect, which detects linear freefall. Zero-g offset and sensitivity are factory set and require no external devices.

acc-axis.png

Each sensor analog output (XOUT, YOUT, and ZOUT) will be connected to one of the Pico's ADC inputs (ADC0, ADC1, and ADC2). VDD is 3.3V and will also be supplied by the Pico. The GS pin selects the g-range and will be left open (±1.5g).

Wiring

wiring.png

There are several different breakout boards for three-axis analog accelerometers. In principle, any board based on Freescale's MMA7361L should work.

Sensor Measurements:

With the GS pin left open (±1.5g), the sensor sensitivity according to the spec is 800mV/g, and the 0g output ('G0') is around 1.65V. It is important to remember that the Pico's ADCs have a resolution of 12 bits (3.3V ==> 4096 steps), so to get a measurement from the raw ADC data (read_axis_raw), we apply the following conversion:

conversion_factor = 3.3V / 4096
read_axis_in_g = (read_axis_raw * conversion_factor) - G0

and for acceleration in m/s²:

CONVERT_G_TO_MS2 = 9.80665
read_axis_in_ms = read_axis_in_g  * CONVERT_G_TO_MS2
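Here is the same conversion sketched in plain Python (the names mirror the constants that appear in the C capture code further down). Strictly speaking, converting volts to g would also require dividing by the 0.8V/g sensitivity; the project skips that step, which only changes the scale of the values and does not affect the classification:

```python
# Sketch of the ADC-to-acceleration conversion used in this project
G0 = 1.65                       # sensor output at 0 g, in volts
CONVERT_G_TO_MS2 = 9.80665      # standard gravity, m/s² per g
conversion_factor = 3.3 / 4096  # 12-bit ADC with a 3.3 V reference

def raw_to_ms2(read_axis_raw):
    read_axis_in_g = (read_axis_raw * conversion_factor) - G0
    return read_axis_in_g * CONVERT_G_TO_MS2

# A mid-scale reading (2048, i.e. 1.65 V) maps to zero:
print(round(raw_to_ms2(2048), 3))  # → 0.0
```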

Preparing the project environment

The data collection project file tree structure should be:

pico/
├── accelerometer_data_capture/
│   ├── accel_mma7361l.c
│   ├── CMakeLists.txt  
│   ├── pico_sdk_import.cmake
│   └── build/
│
├── pico-sdk/
│   ├── cmake
│   └── external
│   └── ...

Below is the source code for data collection:

#include <stdio.h>
#include "pico/stdlib.h"
#include "hardware/gpio.h"
#include "hardware/adc.h"
#include "pico/binary_info.h"

#define NSAMP 10

#define G0 1.65f
#define CONVERT_G_TO_MS2    9.80665f
#define FREQUENCY_HZ        50
#define INTERVAL_MS         (1000 / (FREQUENCY_HZ + 1))

const float conversion_factor = 3.3f / (1 << 12);

float get_axis (int adc_n) {
    adc_select_input(adc_n);
    unsigned int axis_raw = 0;
    for (int i=0;i<NSAMP;i++){
        axis_raw = axis_raw + adc_read();
        sleep_ms(1);
    }
    axis_raw = axis_raw/NSAMP;
    float axis_g = (axis_raw*conversion_factor)-G0;
    return axis_g;
}

int main() {

    stdio_init_all();
    
    adc_init();

    adc_gpio_init(26);
    adc_gpio_init(27);
    adc_gpio_init(28);

    while (1) {

        printf("%f \t", (get_axis (0) * CONVERT_G_TO_MS2));
        printf("%f \t", (get_axis (1) * CONVERT_G_TO_MS2));
        printf("%f \n", (get_axis (2) * CONVERT_G_TO_MS2));
        
        sleep_ms(INTERVAL_MS);
    }
}

The code above reads the 3 ADCs (10 times each, delivering an averaged, smoothed value). The data values, one per axis, separated by tabs and converted to m/s², are sent to the Pico's USB output using printf(). The target reading frequency was set to 50Hz, but the effective rate will be lower than this because of the time spent capturing and smoothing the data.
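A back-of-the-envelope check of that effective rate (plain Python, counting only the explicit sleep_ms() calls in the capture loop; the real loop is slower still because of the ADC conversions and printf() time):

```python
# Rough sampling budget for the capture loop above
NSAMP = 10
FREQUENCY_HZ = 50
INTERVAL_MS = 1000 // (FREQUENCY_HZ + 1)  # 19 ms, as in the C code

per_axis_ms = NSAMP * 1                   # ten 1 ms sleeps inside get_axis()
loop_ms = 3 * per_axis_ms + INTERVAL_MS   # three axes plus the loop delay
effective_hz = 1000 / loop_ms

print(loop_ms, round(effective_hz, 1))  # → 49 20.4
```

So the stream the data forwarder actually sees runs at a few tens of hertz at most, which is still fine for slow human gestures.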

Below is the CMakeLists.txt. Note the line pico_enable_stdio_usb(accel_ml 1), which enables stdio over USB (Serial 0).

cmake_minimum_required(VERSION 3.13) 

include(pico_sdk_import.cmake)

project(accelerometer_mma7361l_project C CXX ASM) 
set(CMAKE_C_STANDARD 11) 
set(CMAKE_CXX_STANDARD 17) 
pico_sdk_init()

add_executable(accel_ml 
    accel_mma7361l.c
)

pico_enable_stdio_usb(accel_ml 1)

pico_add_extra_outputs(accel_ml) 
target_link_libraries(accel_ml pico_stdlib hardware_adc)

Once you have copied pico_sdk_import.cmake to your project folder, go to the build sub-folder and repeat the same procedure used with the blink project:

cd build
export PICO_SDK_PATH=../../pico-sdk
cmake ..
make -j4

The final compiled code (accel_ml.uf2) will appear in the build folder.

Press BOOTSEL and Reset to open the RPI-RP2 window and drag the compiled file accel_ml.uf2 to it, the same as you did with blink. The Pico will start capturing data from the accelerometer and sending it to USB (Serial 0).

Now you can read it on a Serial Monitor.

If you are using Linux, a good option is minicom. First, install it:

sudo apt install minicom

And open the serial monitor:

minicom -b 115200 -o -D /dev/ttyACM0

Data_colect_test.png

On macOS, an alternative is the Serial app, or even the Arduino IDE Serial Monitor and Plotter.

Arduino Serial Monitor Plotter.png

What we need to do now is collect data samples (sets of 3-axis values) for each of the human-made gestures we want to classify:

  • “up-down” (moving the Pico/sensor from a high position to a lower one)
  • “left-right” (moving the Pico/sensor from left to right and vice-versa)
  • “circle” (moving the Pico/sensor in circles, CW and CCW)
  • “resting” (leaving the Pico/sensor on the table, with no movement)

gestures.png

If you do not have an account at Edge Impulse Studio, create one now! Edge Impulse is the leading development platform for machine learning on edge devices, free for developers and trusted by enterprises. Open an account and create a new project.

My project is public and can be cloned here: Pico_Motion_Detection.

Once you have created your project, install the Edge Impulse CLI on your computer by following these instructions: CLI Installation. This only needs to be done once.

Once the project is created and the CLI is installed, the easiest way to get data from the Pico is the Edge Impulse Data Forwarder, which lets you forward data collected over a serial interface to the Studio. This method works well for sensors with lower sampling frequencies, as in our case (human gestures).

The data forwarder is used to easily relay data from any device to Edge Impulse over serial (exactly our case). Devices write sensor values over a serial connection, and the data forwarder collects the data, signs the data, and sends the data to the ingestion service.

At your terminal, run:

edge-impulse-data-forwarder

The CLI will ask for your credentials, the name of the project you are working on, and names for the data values we will capture (note that the CLI has already analyzed the serial stream and knows that 3-axis sensor data is available); finally, it will ask for a device name (optional).

data_fw_cli.png

Return to Edge Impulse Studio and go to the Data Acquisition section:

data_fw_studio.png

Your device name, together with the available sensor and capture frequency, should appear automatically. Define the data label and the sample length you want (default is 10s), and press Start Sampling. Below, 10 seconds of the up_down gesture:

data_sample.png

Note that accZ (blue line) has the highest peaks, which makes sense.

“Machine learning is a way of writing programs that process raw data and turn it into information that is meaningful at an application level”, so the more data you have, the more information you can get! Let's capture at least 60 seconds of data for each label. Try to balance your dataset, keeping roughly the same amount of data for each label (class).

Feature Engineering

Now we have all the raw data needed for training. But as you saw in the last image, the raw data is a time series, and it is not easy to create a model that understands that kind of data directly. So, the data must be pre-processed. For that, we will take a 2-second window and extract some relevant values from it, for example, the window's RMS value and its main frequency components (FFT). From each window, 33 features will be generated (11 per axis).

Feat_eng.png

It could seem complex, but the good news is that Edge Impulse will do this almost automatically for us.
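To make the idea concrete, here is a toy sketch (plain Python, standard library only, and definitely not Edge Impulse's exact implementation) of two of the kinds of per-window features a spectral block computes: the RMS value and the dominant frequency found by a naive DFT scan:

```python
import math

def rms(window):
    # root-mean-square of the samples in one window
    return math.sqrt(sum(x * x for x in window) / len(window))

def dominant_freq_hz(window, fs):
    # naive DFT magnitude scan (skip the DC bin), return the peak frequency
    n = len(window)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(window))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(window))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

# 2 s of a 3 Hz sine, amplitude 2, sampled at 50 Hz:
fs = 50
window = [2 * math.sin(2 * math.pi * 3 * i / fs) for i in range(2 * fs)]
print(round(rms(window), 2), dominant_freq_hz(window, fs))  # → 1.41 3.0
```

The sketch recovers the amplitude-over-√2 RMS and the 3Hz peak, exactly the kind of per-axis summary the Studio packs into its 11 features.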

Feature Engineering and Model definition (Impulse Design)

Let's go back a little. Once you have your raw data, go to the Impulse Design section and create your project's impulse. An impulse takes raw data, uses signal processing to extract features, and then uses a learning block to classify new data.

Impulse.png

In short, the impulse will take the raw data and split it into 2-second segments. Note that those windows slide over time, with an 80ms displacement, so more training windows are generated from the same raw data.
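A quick count shows how much the sliding window multiplies the data. With the 60 seconds per label suggested earlier:

```python
# Number of 2 s windows in 60 s of data, sliding by 80 ms
window_ms, stride_ms, total_ms = 2000, 80, 60000
n_windows = (total_ms - window_ms) // stride_ms + 1
print(n_windows)  # → 726
```

So one minute of recording yields hundreds of (highly overlapping) training windows instead of just 30 disjoint ones.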

In the Spectral Features section, it is possible to define the general parameters for feature generation. I kept the default values, and in the Generate Features tab it was possible to visually explore all 3,879 samples generated.

feature_explorer.png

Our dataset classes are very well separated, which suggests that our classification model should work fine.

Note that the orange data (left-right) lies mainly along the y-axis, and the red data (up-down) along the z-axis. Also, resting (green dots) shows almost no acceleration, which is expected (in the previous stage, the earth's acceleration (g) was filtered out of the z-axis).

Design and Train the Neural Networks (NN) Classifier

The NN Classifier model can be very simple:

model.png

The model has 33 neurons in its first layer (1 neuron for each of the features) and 4 neurons in the last layer (1 neuron for each of the 4 classes). The model has 2 hidden layers, respectively, with 20 and 10 neurons.
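As a sanity check on how tiny this network really is, here is a quick parameter count (weights plus biases) for the 33 → 20 → 10 → 4 dense architecture described above:

```python
# Parameter count for the fully-connected 33 -> 20 -> 10 -> 4 classifier
layers = [33, 20, 10, 4]
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layers, layers[1:]))
print(params)  # → 934
```

Under a thousand parameters, which is why the quantized model fits so comfortably in the Pico's memory.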

The default hyperparameters are 30 epochs (which is a lot and could be cut in half in this case) and a learning rate of 0.0005. Running the training, we ended up with an accuracy of 100%, confirmed by the F1 score. This is not typical of deep learning projects, but we had already seen that the data classes were very well separated. Only 80% of the data was used for training, leaving 20% for validation. Both sets performed well, and the model does not seem to have overfitted, as shown in the Loss vs. Epochs graph:

Loss_epochs.png

Here is the Edge Impulse Studio training result:

Training.png

This quantized model is expected to take around 1ms of inference time, using 1.5KB of RAM and 15.4KB of ROM. Very good!

Testing the model with real (new) data

In the Studio's Live Classification section, you can repeat what was done during the data capture phase. Since I kept the same type of movements, the results were excellent.

All data captured in this section is stored as test data and can be reviewed in the Data Acquisition section, under the Test Data tab.

The next phase tests the model with completely new data (stored in the Data Acquisition/Test Data section). The result was excellent again, with only some confusion between left_right and circle, which is expected.

test_model.png

Conversion & Deployment

Once the model is developed, trained, and tested, the next step on our Machine Learning Workflow is Conversion and Deployment.

In the Edge Impulse Deployment section, it is possible to deploy both the trained model and the pre-processing (signal processing) block as a C++ library.

For MCUs that work with the Arduino IDE, the Studio automatically generates the libraries and working examples that can be used as a starting point for real inference.

In the Raspberry Pi Pico's case, we will choose the C++ Library option, since this MCU does not yet work with the Arduino IDE. But we will still use one of the Arduino code examples as our starting point.

We will also enable the Edge Optimized Neural (EON™) Compiler, which lets neural networks run in 25-55% less RAM and up to 35% less flash while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers, as we can see below:

Deployment.png

Pressing the BUILD button on this studio section will download the complete package to be used on our final project.

Inference

Now it is time to really do machine learning on embedded devices! We will program our Pico to recognize the gestures fully offline, without an internet connection. This is the revolution that TinyML is bringing!

Preparing the Gesture Recognition project environment

The C/C++ package downloaded from Edge Impulse Studio has the following folders/files:

├── edge-impulse-sdk/
├── model-parameters/
├── tflite-model/
├── CMakeLists.txt

We will update the CMakeLists.txt with the specific information needed for our project and add our C++ source code (which will live under a source folder).

The gesture recognition project file tree structure should be:

pico/
├── pico_gesture_recognition_inference/
│   ├── edge-impulse-sdk/
│   ├── model-parameters/
│   ├── tflite-model/
│   ├── source/
│   ├── CMakeLists.txt  
│   ├── pico_sdk_import.cmake
│   └── build/
│
├── pico-sdk/
│   ├── cmake
│   └── external
│   └── ...

Based on the great tutorial Machine Learning Inference on Raspberry Pico 2040 by Dmitry Maslov, which inspired this project, we can create the CMakeLists.txt below. The final executable will be named app:

cmake_minimum_required(VERSION 3.13)

set(MODEL_FOLDER .)
set(EI_SDK_FOLDER edge-impulse-sdk)

include(pico_sdk_import.cmake)

project(pico_motion_detection_project C CXX ASM) 
set(CMAKE_C_STANDARD 11) 
set(CMAKE_CXX_STANDARD 17) 
pico_sdk_init()

add_executable(app 
    source/main.cpp
    source/ei_classifier_porting.cpp
)

include(${MODEL_FOLDER}/edge-impulse-sdk/cmake/utils.cmake)

pico_enable_stdio_usb(app 1)
 
target_link_libraries(app pico_stdlib hardware_adc)

add_subdirectory(${MODEL_FOLDER}/edge-impulse-sdk/cmake/zephyr)

target_include_directories(app PRIVATE
    ${MODEL_FOLDER}
    ${MODEL_FOLDER}/classifier
    ${MODEL_FOLDER}/tflite-model
    ${MODEL_FOLDER}/model-parameters
)

target_include_directories(app PRIVATE
    ${EI_SDK_FOLDER}
    ${EI_SDK_FOLDER}/third_party/ruy
    ${EI_SDK_FOLDER}/third_party/gemmlowp
    ${EI_SDK_FOLDER}/third_party/flatbuffers/include
    ${EI_SDK_FOLDER}/third_party
    ${EI_SDK_FOLDER}/tensorflow
    ${EI_SDK_FOLDER}/dsp
    ${EI_SDK_FOLDER}/classifier
    ${EI_SDK_FOLDER}/anomaly
    ${EI_SDK_FOLDER}/CMSIS/NN/Include
    ${EI_SDK_FOLDER}/CMSIS/DSP/PrivateInclude
    ${EI_SDK_FOLDER}/CMSIS/DSP/Include
    ${EI_SDK_FOLDER}/CMSIS/Core/Include
)

include_directories(${INCLUDES})

# find model source files
RECURSIVE_FIND_FILE(MODEL_FILES "${MODEL_FOLDER}/tflite-model" "*.cpp")
RECURSIVE_FIND_FILE(SOURCE_FILES "${EI_SDK_FOLDER}" "*.cpp")
RECURSIVE_FIND_FILE(CC_FILES "${EI_SDK_FOLDER}" "*.cc")
RECURSIVE_FIND_FILE(S_FILES "${EI_SDK_FOLDER}" "*.s")
RECURSIVE_FIND_FILE(C_FILES "${EI_SDK_FOLDER}" "*.c")
list(APPEND SOURCE_FILES ${S_FILES})
list(APPEND SOURCE_FILES ${C_FILES})
list(APPEND SOURCE_FILES ${CC_FILES})
list(APPEND SOURCE_FILES ${MODEL_FILES})

# add all sources to the project
target_sources(app PRIVATE ${SOURCE_FILES})

pico_add_extra_outputs(app)

Taking the Arduino example nano_ble33_sense_accelerometer.ino as a starting point and changing the instructions that are not compatible, create the file main.cpp below. The internal LED will stay lit while the data is being captured and classified:

/* Includes ---------------------------------------------------------------- */
#include <stdio.h>
#include "pico/stdlib.h"
#include "ei_run_classifier.h"
#include "hardware/gpio.h"
#include "hardware/adc.h"

/* Constant defines -------------------------------------------------------- */
#define CONVERT_G_TO_MS2    9.80665f
#define G0 1.65f
#define NSAMP 10

/* Private variables ------------------------------------------------------- */
static bool debug_nn = false; // Set this to true to see e.g. features generated from the raw signal

const float conversion_factor = 3.3f / (1 << 12);

const uint LED_PIN = 25;

float readAxisAccelation (int adc_n) {
    adc_select_input(adc_n);
    unsigned int axis_raw = 0;
    for (int i=0;i<NSAMP;i++){
        axis_raw = axis_raw + adc_read();
        sleep_ms(1);
    }
    axis_raw = axis_raw/NSAMP;
    float axis_g = (axis_raw*conversion_factor)-G0;
    return axis_g;
}

int main()
{
    stdio_init_all();
    
    gpio_init(LED_PIN);
    gpio_set_dir(LED_PIN, GPIO_OUT);
    gpio_put(LED_PIN, 0);
    
    adc_init();
    adc_gpio_init(26);
    adc_gpio_init(27);
    adc_gpio_init(28);
    
    if (EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME != 3) {
        ei_printf("ERR: EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME should be equal to 3 (the 3 sensor axes)\n");
        return 1;
    }
    
    while (true){
        
        ei_printf("\nStarting inferencing in 2 seconds...\n");
        sleep_ms(2000);
        gpio_put(LED_PIN, 1);
        ei_printf("Sampling...\n");

        // Allocate a buffer here for the values we'll read from the IMU
        float buffer[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

        for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += 3) {
            // Determine the next tick (and then sleep later)
            uint64_t next_tick = ei_read_timer_us() + (EI_CLASSIFIER_INTERVAL_MS * 1000);
            
            buffer[ix] = readAxisAccelation (0);
            buffer[ix + 1] = readAxisAccelation (1);
            buffer[ix + 2] = readAxisAccelation (2);

            buffer[ix + 0] *= CONVERT_G_TO_MS2;
            buffer[ix + 1] *= CONVERT_G_TO_MS2;
            buffer[ix + 2] *= CONVERT_G_TO_MS2;

            sleep_us(next_tick - ei_read_timer_us());
        }

        // Turn the raw buffer into a signal which we can then classify
        signal_t signal;
        int err = numpy::signal_from_buffer(buffer, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
        if (err != 0) {
            ei_printf("Failed to create signal from buffer (%d)\n", err);
            return 1;
        }

        // Run the classifier
        ei_impulse_result_t result = { 0 };

        err = run_classifier(&signal, &result, debug_nn);
        if (err != EI_IMPULSE_OK) {
            ei_printf("ERR: Failed to run classifier (%d)\n", err);
            return 1;
        }

        // print the predictions
        ei_printf("Predictions ");
        ei_printf("(DSP: %d ms., Classification: %d ms., Anomaly: %d ms.)",
            result.timing.dsp, result.timing.classification, result.timing.anomaly);
        ei_printf(": \n");
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            ei_printf("    %s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
        }
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
        ei_printf("    anomaly score: %.3f\n", result.anomaly);
    #endif
    gpio_put(LED_PIN, 0);
    }

#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_ACCELEROMETER
#error "Invalid model for current sensor"
#endif
return 0;
}

In the source folder there is also the file ei_classifier_porting.cpp, also adapted by Dmitry, which I kept as is. At this point, having all the relevant files in the project folder, go to the build sub-folder and repeat the same procedure used in the previous projects to compile the final executable:

cd build
export PICO_SDK_PATH=../../pico-sdk
cmake ..
make -j4

The final compiled code (app.uf2) will appear in the build folder. Press BOOTSEL and Reset to open the RPI-RP2 window and drag app.uf2 to that folder, the same as with the other projects. The Pico will start sampling data from the accelerometer every 2 seconds and display its predictions on the serial monitor.

inference.png

Note that the classification (inference) time is 1ms, just as predicted by Edge Impulse Studio.

Final Considerations

The next step in this project would be to detect anomalies, which is also simple to implement with Edge Impulse Studio.

For those curious to learn more about TinyML, I strongly suggest the free Coursera course Introduction to Embedded Machine Learning | Edge Impulse. Here is a glimpse of one of the projects I developed during the course, classifying operation modes and anomalies on a blender:

inference.png

You can also clone this project on Edge Impulse Studio: Blender – Motion Detection.

Conclusion

The general idea of this project was to learn how to program a Raspberry Pi Pico and build a proof of concept showing that it is possible to perform machine learning with this MCU, which is not yet officially supported by Edge Impulse and Arduino. I hope that changes soon, because it would enormously simplify the whole coding process for non-expert developers.

On my GitHub repository, you will find the latest version of the code: Pico-Motion-Recognition. As always, I hope this project can help others find their way in the exciting world of AI, electronics, and IoT!

Saludos from the south of the world!

See you at my next project!

Thank you

Marcelo

Emulating a Google Assistant on a RaspberryPi and Arduino Nano 33 BLE (TinyML)

Continue lendo...

Home Automation with Alexa

31 31-03:00 dezembro 31-03:00 2020 — Deixe um comentário

This project shows how to emulate IoT devices and control them remotely by voice using Alexa.

Continue lendo...

Exploring IA at the Edge!

19 19-03:00 agosto 19-03:00 2020 — 1 Comentário

Image Recognition, Object Detection and Pose Estimation using Tensorflow Lite on a Raspberry Pi

Continue lendo...

Sensing the Air Quality

22 22-03:00 agosto 22-03:00 2019 — 1 Comentário

A low-cost IoT Air Quality Monitor based on RaspberryPi 4

santiago_contamination

I have the privilege of living in one of the most beautiful countries in the world, but unfortunately, it’s not all roses. Chile during winter season suffers a lot with air contamination, mainly due to particulate materials as dust and smog.

Chile

Because of the cold weather, air pollution in the south comes mainly from wood-burning heaters, while in Santiago (the capital, in the center of the country) it is a mix from industry, cars, and the city's unique geographic situation between two huge mountain chains.

 

Nowadays, air pollution is a big problem all over the world, and in this article we will explore how to develop a low-cost, homemade Air Quality Station based on a Raspberry Pi.

If you are interested in understanding more about it, please visit the “World Air Quality Index” project.

Continue reading…

How safe are the streets of Santiago?

August 16, 2019 — 1 comment

Let’s answer it with Python and GeoPandas!

Costanera Center, Santiago / Benja Gremler

Some time ago I wrote an article explaining how to work with geographic maps in Python the “hard way” (mainly Shapely and Pandas): Mapping Geography Data in Python. Now it is time to do it again, but this time explaining how to do it the easy way, using GeoPandas, which can be understood as Pandas + Shapely in the same package.

GeoPandas is an open-source project that makes working with geospatial data in Python easier. It extends the datatypes used by Pandas to allow spatial operations on geometric types.
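To make “spatial operations on geometric types” concrete, here is a tiny, dependency-free sketch of the kind of geometric predicate (point-in-polygon, via ray casting) that Shapely implements under the hood and GeoPandas applies over whole columns of geometries. The function and sample polygon are illustrative only, not the GeoPandas API:

```python
def point_in_polygon(x, y, polygon):
    """Return True if point (x, y) falls inside `polygon`,
    given as a list of (x, y) vertices. Classic ray-casting test."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))   # True
print(point_in_polygon(5, 2, square))   # False
```

In GeoPandas this same test is a one-liner over an entire GeoDataFrame (e.g. a spatial join of crash points against grid cells), which is exactly what makes it so convenient for the crash-mapping question below.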

The motivation for this article was a recent project proposed by our professor Oscar Peredo and developed with my colleagues, Fran Gortari and Manuel Sacasa for the Big Data Analytics course of UDD’s (Universidad del Desarrollo) Data Science Master Degree.

The objective of that project was to explore the possibility of predicting a crash-risk score for an urban grid, taking advantage of state-of-the-art Machine Learning algorithms and public car crash data from 2013 to 2018. The purpose of this article, on the other hand, is simply to learn how to use GeoPandas on a real problem, answering the question:

“How safe are the streets in Santiago?”.

If you want to know what we did with the proposed project for our DS Master's degree, please visit its GitHub repository.

Continue reading…

When we talk about physical variables such as temperature, pressure, etc., as data scientists we usually start working from a dataset that was created somewhere else. But have you ever thought about how to capture that data yourself?

In this tutorial we will get data from several different sensors, sending it to an IoT service, ThingSpeak.com, and to a mobile app (ThingsView), where we can log and play with the data. We will explore several different ways of connecting sensors to a Raspberry Pi:

  • DHT22 – Temperature & Humidity Sensor – Digital Comm
  • DS18B20 – Temperature Sensor – 1-Wire
  • BMP180 – Temperature & Pressure Sensor – I2C
  • UV – Ultraviolet Sensor – Analog, via A/D converter and SPI bus

In short, all data will be captured, saved locally in a CSV file, and sent to an IoT service (ThingSpeak.com) via the MQTT protocol, as you can see in the block diagram below:

To complete a real Weather Station, in the final step you will also learn how to measure wind speed and direction, following Mauricio Pinto's tutorial.
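As a rough sketch of that capture-and-publish pipeline (the function names and field mapping are my own assumptions, not the tutorial's code), the readings can be appended to a CSV log and formatted into the field1=…&field2=… payload string that ThingSpeak's MQTT publish API expects; the actual network publish would then be done with an MQTT client such as paho-mqtt:

```python
import csv
import io

def thingspeak_payload(readings):
    """Format a dict of sensor readings as a ThingSpeak MQTT publish
    payload: field1=<v1>&field2=<v2>... (dict order maps to field order)."""
    return "&".join(
        f"field{i}={value}" for i, value in enumerate(readings.values(), start=1)
    )

def append_csv_row(csvfile, timestamp, readings):
    """Append one timestamped row of readings to an open CSV file object."""
    csv.writer(csvfile).writerow([timestamp] + list(readings.values()))

# Hypothetical readings from the four sensors listed above
readings = {"temp_dht": 22.4, "hum_dht": 58.1, "press_bmp": 1013.2, "uv": 0.37}

print(thingspeak_payload(readings))
# field1=22.4&field2=58.1&field3=1013.2&field4=0.37

buffer = io.StringIO()   # stands in for open("weather_log.csv", "a")
append_csv_row(buffer, "2021-03-01 10:00:00", readings)
```

The payload would then be published to the channel's MQTT topic using the channel ID and write credentials from your ThingSpeak account.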
Continue reading…

The idea of this tutorial is to capture tweets and analyze them with respect to the most-used words and hashtags, classifying them by the sentiment behind them (positive, negative, or neutral).
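A minimal, standard-library sketch of that analysis, assuming tweets are already captured as plain strings; the two word lists are toy placeholders for a real sentiment lexicon, and the whole function is illustrative rather than the tutorial's actual code:

```python
import re
from collections import Counter

# Toy lexicons for illustration only; a real analysis would use a proper
# sentiment lexicon or library.
POSITIVE = {"good", "great", "love", "happy"}
NEGATIVE = {"bad", "sad", "hate", "awful"}

def analyze(tweets):
    """Count words and hashtags across tweets, and give each tweet a
    naive lexicon-based sentiment label."""
    words, hashtags, sentiments = Counter(), Counter(), []
    for tweet in tweets:
        text = tweet.lower()
        hashtags.update(re.findall(r"#\w+", text))
        tokens = re.findall(r"[a-z']+", text)
        words.update(tokens)
        score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
        sentiments.append(
            "positive" if score > 0 else "negative" if score < 0 else "neutral"
        )
    return words, hashtags, sentiments

tweets = ["I love this #sunny day", "Awful traffic today #santiago", "Just arrived"]
words, tags, sents = analyze(tweets)
print(sents)   # ['positive', 'negative', 'neutral']
print(tags.most_common(2))
```

From the two Counters you can then plot the most frequent words and hashtags, which is the core of the word-cloud-style analysis the tutorial describes.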

Continue reading…

Learn how to install Jupyter Notebook on a Raspberry Pi and, directly from it, read and control sensors and actuators.

We all know that Jupyter Notebook is a fantastic tool, or rather, an open-source web application that allows you to create and share documents containing live code, equations, visualizations, and text.

Jupyter Notebook is widely used in the field of Data Science for cleaning and transforming data, numerical simulation, statistical modeling, data visualization, machine learning, and much more!

But what about using Jupyter Notebooks to control the GPIOs of a Raspberry Pi?

That is exactly what we will do in this tutorial. We will learn how to install Jupyter Notebook on a Raspberry Pi and, directly from it, read sensors and drive actuators.

Continue reading…

This is one of my four tutorials focused on Scratch and “Physical Computing”. On this topic, I have already published here on my blog:

Learning electronics with the Arduino and the Scratch 2.0 language

There are two more tutorials in English (one for the Arduino and another for the Raspberry Pi), where I explore version 1 of Scratch:

Here, we will explore projects using the latest version of Scratch available for the Raspberry Pi, 2.0.

In short, we will learn:

  • What Scratch 2.0 is and how it differs from Scratch 1.4
  • How to interact with LEDs, push buttons, and a buzzer
  • How to create custom blocks
  • How to handle variables
  • How to integrate animations with the physical world.

Let's go!

Continue reading…