Mug, or not Mug, that is the question!

March 18, 2022 — Leave a comment

EdgeAI made simple – Exploring Image Classification with Arduino Portenta, Edge Impulse, and OpenMV

Introduction

This tutorial explores the Arduino Portenta, a development board with two processors that can run tasks in parallel. The Portenta can efficiently run models created with TensorFlow™ Lite. For example, one core can compute a computer vision algorithm on the fly (inference) while the other handles low-level operations like controlling a motor, managing communication, or acting as a user interface.

The onboard wireless module allows managing WiFi and Bluetooth® connectivity simultaneously.


Two Parallel Cores

The H7's central processor is the dual-core STM32H747, comprising a Cortex®-M7 running at 480 MHz and a Cortex®-M4 running at 240 MHz. The two cores communicate via a Remote Procedure Call mechanism that seamlessly allows calling functions on the other processor. Both processors share all the on-chip peripherals and can run:

  • Arduino sketches on top of the Arm® Mbed™ OS
  • Native Mbed™ applications
  • MicroPython / JavaScript via an interpreter
  • TensorFlow™ Lite

Memory

Memory is crucial for embedded machine learning projects. The Portenta H7 board can host up to 64 MB of SDRAM and 128 MB of QSPI Flash. In my case, my board comes with 8 MB of SDRAM and 16 MB of QSPI Flash. But it is essential to consider that the MCU's internal SRAM is what is used for machine learning inference, and on the STM32H747 that is only 1 MB. The MCU also incorporates 2 MB of Flash, mainly for code storage.
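
Since inference runs from that internal SRAM, it is worth checking at runtime how much of it is actually free. A minimal sketch, assuming the board is running the MicroPython/OpenMV firmware:

    # Rough check of the RAM available to the interpreter, a proxy for what is
    # left for inference buffers; gc is part of MicroPython's standard library.
    import gc

    gc.collect()
    print("Free RAM: {:.0f} KB".format(gc.mem_free() / 1024))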

Vision Shield

We will add a Vision Shield to our Portenta board for vision applications; it brings industry-rated features such as Ethernet (or LoRa®) connectivity, a camera, and microphones.

  • Camera: Ultra-low-power Himax HM-01B0 monochrome camera module with 320 x 320 active pixel resolution and support for QVGA (a minimal camera sketch follows this list).
  • Microphone: 2 x MP34DT05, an ultra-compact, low-power, omnidirectional, digital MEMS microphone built with a capacitive sensing element and an IC interface.
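
To give an idea of what deployment looks like, here is a minimal OpenMV (MicroPython) sketch for image classification on the Portenta with the Vision Shield. The file names "trained.tflite" and "labels.txt" are hypothetical placeholders for a model exported from Edge Impulse, and the tf module shown is the API of the classic OpenMV firmware (newer releases expose an ml module instead):

    # Minimal OpenMV image-classification loop (a sketch, not the full project).
    # Assumes "trained.tflite" and "labels.txt" were copied to the board's flash.
    import sensor
    import tf

    sensor.reset()
    sensor.set_pixformat(sensor.GRAYSCALE)  # the HM-01B0 camera is monochrome
    sensor.set_framesize(sensor.QVGA)
    sensor.skip_frames(time=2000)           # let the camera settle

    model = tf.load("trained.tflite")
    labels = [line.strip() for line in open("labels.txt")]

    while True:
        img = sensor.snapshot()
        for obj in tf.classify(model, img):
            # Pair each label with its score and print the best guess
            best = max(zip(labels, obj.output()), key=lambda p: p[1])
            print("%s: %.2f" % best)
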
Continue reading…

Regression can be handy when classification would require a high number of classes.

Introduction

The most common TinyML projects by far involve classification. We can easily find examples in home automation (personal assistants), health (respiratory and heart diseases), animal sensing (elephant and cow behavior), industry (anomaly detection), etc.

But what happens when a project needs more than a few categories? Even trying to classify 10 or 20 different categories is not easy. I recently saw a student at our university working on an exciting project: he was trying to determine the amount of medicine (ml/cc) in a syringe using images.


Of course, his first approach was to classify different images of the same syringe, but when he ended up with dozens of categories (1cc, 2cc, 3cc… 30cc…), the model became complicated. So another idea was tried: "How about defining the range of volume inside the syringe and using discrete steps to measure it?" Well, this can be understood as a regression problem! And that is what was done, with great success.
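
To make the idea concrete, here is a minimal sketch, assuming TensorFlow/Keras and a hypothetical 96 x 96 grayscale input, showing that the only structural difference between the two approaches is the model's head: dozens of softmax neurons become a single linear neuron trained with a mean-squared-error loss:

    # Same convolutional backbone, two different heads (illustrative only).
    import tensorflow as tf

    backbone = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(96, 96, 1)),  # hypothetical input size
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation="relu"),
    ])

    # Classification: one neuron per volume step (here, 30 hypothetical classes)
    classifier = tf.keras.Sequential([
        backbone, tf.keras.layers.Dense(30, activation="softmax")])
    classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Regression: a single linear neuron predicting the volume (cc) directly
    regressor = tf.keras.Sequential([backbone, tf.keras.layers.Dense(1)])
    regressor.compile(optimizer="adam", loss="mse", metrics=["mae"])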

Aditya Mangalampalli developed a similar project, published on the Edge Impulse Blog: Estimate Weight From a Photo Using Visual Regression in Edge Impulse. There, Aditya collected 50 images for each weight in 10-gram steps up to 400 grams, totaling 2,050 images. Note that each image in the dataset was labeled with the weight it represents:

  • 41 labels: 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400.

You can learn more about using regression with Edge Impulse Studio in the tutorial Predict the Future with Regression Models.

White Wine Quality using Regression

We will use a white wine dataset, publicly available at the UCI Machine Learning Repository (Wine Quality), for this project. The repository has two datasets, for the red and white variants of the Portuguese "Vinho Verde" wine. The white wine dataset consists of a quality ranking and measured physicochemical attributes for 4,898 Vinho Verde wines from Portugal. The data was collected from May 2004 to February 2007.

Data provided by P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.

Dataset Attribute Information:

Input variables:

  • fixed acidity
  • volatile acidity
  • citric acid
  • residual sugar
  • chlorides
  • free sulfur dioxide
  • total sulfur dioxide
  • density
  • pH
  • sulphates
  • alcohol

Output variable: quality (score between 0 and 10) – Min = 3 and Max = 9
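
The project itself uses Edge Impulse Studio, but the regression setup can be sketched offline in a few lines, assuming pandas and scikit-learn are installed (the CSV URL and the ';' separator follow the UCI repository layout):

    # Train a simple regressor on the white wine data and report the MAE.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    url = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
           "wine-quality/winequality-white.csv")
    df = pd.read_csv(url, sep=";")

    X = df.drop(columns="quality")  # the 11 physicochemical inputs
    y = df["quality"]               # the 3-9 quality score
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))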

Continue reading…

Learning Image Classification on edge devices (Android)

App Inventor: EdgeML Image Classification: Fruits vs Veggies

A few weeks ago, I wrote a tutorial exploring Image Classification, one of the most popular Machine Learning applications, deployed on a tiny device, the ESP32-CAM. It was an example of a TinyML application.

When we talk about TinyML, what immediately comes to mind is squeezed machine learning models running on embedded devices and consuming very low power. The characteristic of such applications is that we are running AI (or Machine Learning) at the Edge. But power is not always a concern, so we can also find edge machine learning applications running on more complex devices such as the Raspberry Pi (see my tutorial Exploring AI at the Edge) or even smartphones. In short, TinyML can be considered a subset of EdgeML applications. The figure below illustrates this:

[Image: TinyML as a subset of EdgeML]

This project will explore an EdgeML application: classifying images on an Android device.

Developing Android (AI) Apps

Nowadays, developing Android apps using Java or Kotlin in Android Studio is not complicated, but you will need to follow some tutorials to gain domain knowledge. And if you need to develop real professional applications, Laurence Moroney teaches an excellent course, available free on Coursera: Device-based Models with TensorFlow Lite.

But if you are not a developer, do not have the time, or only need a more straightforward app that can be quickly deployed, the MIT App Inventor should be your choice.

MIT App Inventor is an intuitive, visual programming environment that allows everyone – even children – to build fully functional apps for Android phones, iPhones, and Android/iOS tablets.

Only basic AI applications are available with MIT App Inventor, such as image and sound classification, pose estimation, etc.

To start, you can optionally follow this tutorial, available at the MIT App Inventor site, to create, step by step, a general Image Classification app that will run on your Android device. In that project, a MobileNet model was pre-trained with the ImageNet dataset, whose 999 classes can be checked here. I left the project code (.aia) and the executable (.apk) of my version of this app in my GitHub.


But what we will explore in this tutorial is how we can use our own images to train a machine learning model to be deployed on an edge device, in this case an Android tablet.

Continue reading…

Learning Image Classification on embedded devices (ESP32-CAM)

ESP32-CAM: TinyML Image Classification - Fruits vs Veggies

More and more, we are facing an embedded machine learning revolution. And when we talk about Machine Learning (ML), the first thing that comes to mind is Image Classification, a kind of ML "Hello World"!

One of the most popular and affordable development boards that already integrates a camera is the ESP32-CAM, which combines an Espressif ESP32-S MCU chip with an ArduCam OV2640 camera.


The ESP32 chip is so powerful that it can even process images. It includes I2C, SPI, UART communications, and PWM and DAC outputs.
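
As a quick illustration of those peripherals, here is a minimal MicroPython sketch, assuming the generic MicroPython firmware is flashed to the ESP32-CAM. GPIO4 does drive the board's flash LED, while the I2C pin choice is hypothetical:

    # Exercise two peripherals: a PWM output (dimming the flash LED on GPIO4)
    # and an I2C bus scan on two arbitrarily chosen pins.
    from machine import Pin, PWM, SoftI2C

    led = PWM(Pin(4), freq=1000)  # GPIO4: onboard flash LED
    led.duty(64)                  # dim it (ESP32 duty range is 0-1023)

    i2c = SoftI2C(scl=Pin(15), sda=Pin(14), freq=100000)
    print("I2C devices found:", [hex(addr) for addr in i2c.scan()])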

Parameters:

  • Working voltage: 4.75-5.25V
  • Flash: default 32Mbit
  • RAM: internal 520KB + external 8MB PSRAM
  • Wi-Fi: 802.11b/g/n/e/i
  • Bluetooth: Bluetooth 4.2 BR/EDR and BLE standard
  • Supported interfaces (2Mbps): UART, SPI, I2C, PWM
  • TF card support: up to 4GB
  • IO ports: 9
  • Serial port rate: default 115200bps
  • Spectrum range: 2400~2483.5MHz
  • Antenna: onboard PCB antenna, 2dBi gain
  • Image output formats: JPEG (OV2640 only), BMP, grayscale

Below is the general board pinout:

[Image: ESP32-CAM pinout]

Note that this device does not have an integrated USB-TTL serial module, so uploading code to the ESP32-CAM requires a special adapter, as below:

[Image: FTDI Basic adapter]

Alternatively, a USB-TTL serial conversion adapter can be used.

If you want to learn about the ESP32-CAM, I strongly recommend the books and tutorials of Rui Santos.

Continue reading…

TinyML Made Easy: Gesture Recognition

February 10, 2022 — Leave a comment

The Seeed Wio Terminal, programmed using Codecraft/Edge Impulse, is a fantastic tool for beginners to start with tinyML (embedded machine learning).

TinyML

This project mixes Machine Learning (part of Artificial Intelligence) with a small device (the Wio Terminal), which is nothing more than a microcontroller plus sensors, whose main characteristics are ultra-low power consumption, a 32-bit CPU, and a few kilobytes of memory. This new field of engineering is known as Embedded Machine Learning, or tinyML.

As we know, microcontrollers (or MCUs) are very cheap electronic components, usually with just a few kilobytes of RAM, designed to use tiny amounts of energy. Nowadays, MCUs can be found embedded in almost all consumer, medical, automotive, and industrial devices. It is estimated that over 40 billion microcontrollers are sold every year, and probably hundreds of billions of them are in service nowadays. Interestingly, though, those devices don't get much attention because they're often only used to replace functionality that older electro-mechanical systems provided in cars, washing machines, or remote controls.

More recently, with the IoT (Internet of Things) era, a significant share of those MCUs has been generating quintillions of bytes of data, most of which goes unused due to the high cost and complexity (bandwidth and latency) of data transmission.

On the other side, in recent decades we have seen a lot of development of Machine Learning models (aka Artificial Intelligence) trained with tons of data on very powerful and power-hungry mainframes.

But what is happening today is that it has suddenly become possible to take noisy signals like images, audio, or accelerometer readings and extract meaning from them using neural networks. What is more important, we can run these networks on the microcontrollers and sensors themselves, using very little power, and interpret much more of the sensor data that we are currently ignoring. This is tinyML, a new area that enables machine intelligence right next to the physical world.

This novel area of tinyML can help bring good to our society.

The Wio Terminal

The Wio Terminal, a very affordable $36 device, uses an ATSAMD51P19 microcontroller with an ARM Cortex-M4F running at 120 MHz (with a boost up to 200 MHz), 4 MB of external flash memory, and 192 KB of RAM. It offers wireless connectivity through a Realtek RTL8720DN, supporting both Bluetooth and Wi-Fi, which provides a solid foundation for IoT and tinyML projects, and it is compatible with Arduino and MicroPython. The Wio Terminal has a 2.4-inch LCD screen, an onboard IMU (LIS3DHTR), a microphone, a buzzer, a microSD card slot, a light sensor, and an IR emitter (940 nm). Most importantly, there are two multi-functional Grove ports onboard for the Grove ecosystem and Raspberry Pi-compatible 40-pin GPIO headers for additional add-on support.

Continue reading…

“Listening Temperature” with TinyML

February 10, 2022 — Leave a comment

Can we "hear" a difference between pouring hot and cold water? An amazing proof of concept, with a quick real deployment using Edge Impulse Studio.

"Listening Temperature" with TinyML

Introduction

A few months ago, my dear friend Dr. Marco Zennaro from ICTP, Italy, asked me if I had heard that we humans can distinguish between hot and cold water only by listening to it. At first, I thought I was the one who had not heard him well! 😉 But when he sent me this paper: Why can you hear a difference between pouring hot and cold water? An investigation of temperature dependence in psychoacoustics, I realized that Dr. Marco was quite serious about it!

The first thing that caught my attention while reading the paper was the mention of this YouTube video (please watch the video and try it yourself before going on with the reading). Wow! I could easily differentiate between the two temperatures only by the sound of the water splashing in the cup (could you?). But why does this happen? The video mentions that the change in the water splashing changes the sound it makes because of "various complex fluid dynamic reasons," without much further explanation. Others say that the viscosity changes with the temperature, or that it must be something about hot liquid tending to be more "bubbling." Anyway, according to the paper's researchers, all of this is only speculation.

Beyond the scientific investigation itself (which should be very interesting), the question that came to us was: is this ability of "listening to temperature" something replicable using Artificial Neural Networks? We did not know, so let's create a simple experiment using TinyML (Machine Learning applied to embedded devices) and find the answer!


The Experiment


First, this is a simple proof of concept, so let us reduce the variables. Two similar glasses were used (the same goes for the plastic container where the water was collected). The water temperatures were very different, with a 50°C difference between them (11°C and 61°C).

Each sample lasted around the time the glass took to be filled (3 to 5 seconds).

Note that we were interested in capturing the sound of the water only during the pouring process.

The sound was captured by the same digital microphone (sampling frequency: 16 kHz; bit depth: 16 bits) and stored as .wav files in three different folders:

  • Cold Water sound (“Cool”)
  • Hot Water sound (“Hot”)
  • No water sound (“Noise”).

The cold-water label would be better defined as "cold" instead of "cool," but since this is not a scientific paper, it was cool to leave it as "cool" 😉

With the dataset captured, we uploaded it to Edge Impulse Studio, where the data were preprocessed and the Neural Network (NN) model was trained, tested, and deployed to an MCU for a real physical test (an iPhone was also used for live classification).
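
For readers who want to see what the Studio automates, the preprocess-then-classify pipeline can be sketched offline in Python, assuming librosa and TensorFlow are installed and that the .wav clips sit in folders named after the three labels:

    # Offline sketch of the pipeline: MFCC features + a tiny dense classifier.
    import glob

    import librosa
    import numpy as np
    import tensorflow as tf

    LABELS = ["cool", "hot", "noise"]

    def mfcc_features(path, sr=16000, n_mfcc=13):
        # Load a clip at 16 kHz and average its MFCCs into one feature vector
        y, _ = librosa.load(path, sr=sr, mono=True)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    X, y = [], []
    for idx, label in enumerate(LABELS):
        for wav in glob.glob(label + "/*.wav"):
            X.append(mfcc_features(wav))
            y.append(idx)
    X, y = np.array(X), np.array(y)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(13,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(len(LABELS), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=50, validation_split=0.2)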

Continue reading…

In this tutorial, we will use machine learning to build a gesture recognition system that runs on a tiny microcontroller, the RP2040.

This tutorial has two parts. The first explores the Raspberry Pi Pico, its main components, and how to program it using MicroPython and its C/C++ SDK (Software Development Kit).

Next, we will use the Pico to capture "gesture data" to be used in training a TinyML model with Edge Impulse Studio. Once developed and tested, the model will be deployed and used for real inference on the same device. Here is a quick view of the final project:

If you are familiar with the Pico's basic programming, please feel free to jump to Part 2, where the real fun begins!

PART 1: Exploring the Raspberry Pi Pico and its SDK

The Raspberry Pi Pico

Raspberry Pi Pico is a low-cost, high-performance microcontroller board with flexible digital interfaces. Key features include:

  • RP2040 microcontroller chip designed by Raspberry Pi Foundation
  • Dual-core Arm Cortex M0+ processor, flexible clock running up to 133 MHz
  • 264KB of SRAM, and 2MB of on-board Flash memory
  • USB 1.1 with device and host support
  • Low-power sleep and dormant modes
  • 26 × multi-function GPIO pins
  • 2 × SPI, 2 × I2C, 2 × UART, 3 × 12-bit ADC, 16 × controllable PWM channels
  • Accurate clock and timer on-chip
  • Temperature sensor
  • Accelerated floating-point libraries on-chip
  • 8 × Programmable I/O (PIO) state machines for custom peripheral support
[Image: Raspberry Pi Pico pinout]

An interesting characteristic is its support for drag-and-drop programming using mass storage over USB.

Although it is straightforward to "upload" a program to the Pico, what is missing is a reset push-button: without one, the USB cable must be unplugged and replugged every time new code is uploaded, which can damage the Pico's USB connector. Fortunately, pin 30 (RUN) is available and can be used for this function. Just wire a normally-open push-button between this pin and ground. Now, any time a program should be uploaded to the Pico, press both buttons at the same time.

[Image: BOOTSEL and reset push-buttons on the Pico]

In this documentation link, it is possible to find detailed information about the RP2040 MCU, the heart of the Pico.
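
As a first taste of Part 1, here is a minimal MicroPython sketch touching two items from the feature list above: the onboard LED (GPIO 25 on the original Pico) and the on-chip temperature sensor on ADC channel 4; the conversion formula comes from the RP2040 datasheet:

    # Blink the onboard LED and print the RP2040's internal temperature.
    import time
    from machine import ADC, Pin

    led = Pin(25, Pin.OUT)  # onboard LED on the original Pico
    sensor = ADC(4)         # internal temperature sensor channel

    while True:
        led.toggle()
        volts = sensor.read_u16() * 3.3 / 65535
        temp_c = 27 - (volts - 0.706) / 0.001721  # RP2040 datasheet formula
        print("Temperature: {:.1f} C".format(temp_c))
        time.sleep(1)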

Continue reading…

Emulating a Google Assistant on a Raspberry Pi and Arduino Nano 33 BLE (TinyML)

Continue reading…

Home Automation with Alexa

December 31, 2020 — 2 Comments

This project shows how to emulate IoT devices and control them remotely by voice using Alexa.

Continue reading…

Exploring AI at the Edge!

August 19, 2020 — 1 Comment

Image Recognition, Object Detection, and Pose Estimation using TensorFlow Lite on a Raspberry Pi

Continue reading…