
Regression can come in handy when a classification task involves a high number of classes.

Introduction

The most common TinyML projects by far involve classification. We can easily find examples in home automation (personal assistants), health (respiratory and heart diseases), animal sensing (elephant and cow behavior), industry (anomaly detection), etc.

But what happens when more than a few categories are necessary for a project? Even trying to classify 10 or 20 different categories is not easy. I recently saw a student at our university working on an exciting project: he was trying to find the amount of medicine (ml/cc) in a syringe using images.

image.png

Of course, his first approach was to classify different images of the same syringe, but when he ended up with dozens of categories (1 cc, 2 cc, 3 cc… 30 cc…), the model started to become complicated. So another idea was tried: "How about defining the range of volume inside the syringe and using discrete steps to measure it?" Well, this can be understood as a regression problem! And that is what was done, with great success.

Aditya Mangalampalli developed a similar project, published on the Edge Impulse Blog: Estimate Weight From a Photo Using Visual Regression in Edge Impulse. There, Aditya collected 50 images for every 10 grams, from 0 up to 400 grams, totaling 2,050 images. Note that each image in the dataset was labeled with the weight it represents:

  • 41 labels: 0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400.
image.png
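To make the regression framing concrete, below is a minimal, hypothetical Keras sketch (not the actual Edge Impulse pipeline used in the blog post): instead of a softmax over 41 weight classes, the network ends in a single linear output trained with a regression loss, so the predicted weight is a continuous value in grams. The input size and layer sizes are illustrative only.

```python
# Conceptual sketch only (not Aditya's Edge Impulse pipeline):
# a tiny CNN whose single linear output predicts a continuous weight in grams.
import tensorflow as tf

def build_visual_regression_model(input_shape=(96, 96, 3)):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1)  # linear output: predicted weight in grams
    ])
    # Mean squared error treats "off by 10 g" as a small error, unlike
    # categorical cross-entropy, which sees every wrong class as equally wrong.
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

model = build_visual_regression_model()
model.summary()
```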

You can learn more about using regression with Edge Impulse Studio in the tutorial Predict the Future with Regression Models.

White Wine Quality using Regression

We will use a white wine dataset, publicly available at the UCI Machine Learning Repository: Wine Quality, for this project. The repository has two datasets, related to the red and white variants of the Portuguese "Vinho Verde" wine. The white wine dataset consists of a quality ranking and measured physicochemical attributes for 4,898 Vinho Verde wines from Portugal. The data was collected from May 2004 to February 2007.

Data provided by P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.

Dataset Attribute Information:

Input variables:

image.png

Output variable: quality (score between 0 and 10) – Min = 3 and Max = 9
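Before moving to Edge Impulse Studio, a quick local sketch helps to see the regression framing (this assumes scikit-learn and the winequality-white.csv file as distributed by UCI; it is not this project's workflow, just an illustration): the eleven physicochemical inputs are used to predict the quality score as a continuous value.

```python
# Minimal sketch, assuming the UCI "winequality-white.csv" file (semicolon-separated,
# last column "quality") and a plain scikit-learn workflow.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("winequality-white.csv", sep=";")

X = df.drop(columns=["quality"])   # 11 physicochemical inputs (acidity, sugar, pH, alcohol, ...)
y = df["quality"]                  # quality score (3 to 9 in this dataset)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE on the test set: {mean_absolute_error(y_test, pred):.2f}")
```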

image.png
Continue reading…

Learning Image Classification on edge devices (Android)

App Inventor: EdgeML Image Classification: Fruits vs Veggies

A few weeks ago, I wrote a tutorial exploring Image Classification, one of the most popular Machine Learning applications, deployed on a tiny device, the ESP32-CAM. It was an example of a TinyML application.

When we talk about TinyML, what immediately comes to mind are squeezed machine learning models running on embedded devices and consuming very little power. The defining characteristic of such applications is that we are running AI (or Machine Learning) at the Edge. But power is not always a concern, so we can also find edge machine learning applications running on more complex devices such as the Raspberry Pi (see my tutorial Exploring AI at the Edge) or even smartphones. In short, TinyML can be considered a subset of EdgeML applications. The figure below illustrates this statement:

hw.png

This project will explore an Edge ML application: classifying images on an Android device.

Developing Android (AI) Apps

Nowadays, developing Android apps using Java or Kotlin in Android Studio is not complicated, but you will need to take some tutorials to gain proficiency. However, if you need to develop real professional applications, Laurence Moroney teaches an excellent course, available free on Coursera: Device-based Models with TensorFlow Lite.

But if you are not a developer, do not have the time, or only need a more straightforward app that can be quickly deployed, MIT App Inventor should be your choice.

MIT App Inventor is an intuitive, visual programming environment that allows everyone – even children – to build fully functional apps for Android phones, iPhones, and Android/iOS tablets.

Only basic AI applications are available with MIT App Inventor, such as image and sound classification, pose estimation, etc.

To start, you can optionally follow this tutorial, available at the MIT App Inventor site, which goes step by step through creating a general Image Classification app that will run on your Android device. In that project, the MobileNet model was pre-trained with the ImageNet dataset, whose 999 classes can be checked here. I left the project code (.aia) and the executable (.apk) of my version of this app on my GitHub.

img__class_app.-png.png
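As a side note, here is a hedged Python illustration of what the pre-trained MobileNet behind that app does conceptually (this is not the App Inventor extension's code, and the image file name is just a placeholder): load MobileNet trained on ImageNet and print its top predictions for a local image.

```python
# Illustration only: what a MobileNet pre-trained on ImageNet does with an image.
import numpy as np
from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = MobileNet(weights="imagenet")          # pre-trained ImageNet classifier

img = image.load_img("fruit.jpg", target_size=(224, 224))  # placeholder file name
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
# Show the top-3 ImageNet classes with their probabilities
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {prob:.2%}")
```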

But what we will explore in this tutorial is how we can use our own images to train a machine learning model and deploy it on an edge device, in this case, an Android tablet.

Continue reading…

Learning Image Classification on embedded devices (ESP32-CAM)

ESP32-CAM: TinyML Image Classification - Fruits vs Veggies

More and more, we are facing an embedded machine learning revolution. And when we talk about Machine Learning (ML), the first thing that comes to mind is Image Classification, a kind of ML "Hello World"!

One of the most popular and affordable development boards that already integrates a camera is the ESP32-CAM, which combines an Espressif ESP32-S MCU chip with an ArduCam OV2640 camera.

image.png

The ESP32 chip is so powerful that it can even process images. It includes I2C, SPI, UART communications, and PWM and DAC outputs.

Parameters:

  • Working voltage: 4.75–5.25 V
  • Flash: 32 Mbit by default
  • RAM: 520 KB internal + 8 MB external PSRAM
  • Wi-Fi: 802.11 b/g/n/e/i
  • Bluetooth: Bluetooth 4.2 BR/EDR and BLE
  • Supported interfaces (2 Mbps): UART, SPI, I2C, PWM
  • TF card support: up to 4 GB
  • IO ports: 9
  • Serial port rate: 115200 bps by default
  • Spectrum range: 2400–2483.5 MHz
  • Antenna: onboard PCB antenna, 2 dBi gain
  • Image output formats: JPEG (OV2640 only), BMP, GRAYSCALE
ESP32-S.jpeg

Below is the general board pinout:

image.png

Note that this device does not have an integrated USB-TTL serial module, so to upload code to the ESP32-CAM, a special adapter is necessary, as below:

FTDI Basic.png

Or a USB-TTL Serial Conversion Adapter as below:

If you want to learn more about the ESP32-CAM, I strongly recommend the books and tutorials of Rui Santos.

Continue reading…

TinyML Made Easy: Gesture Recognition

February 10, 2022 — Leave a comment

The Seeed Wio Terminal, programmed using Codecraft/Edge Impulse, is a fantastic tool for beginners to get started with tinyML (Embedded Machine Learning).

TinyML

This project mixes Machine Learning (which is a part of Artificial Intelligence) with a small device (the Wio Terminal), which is nothing more than a microcontroller and sensors, whose main characteristics are ultra-low power consumption, a 32-bit CPU, and a few kilobytes of memory. This new field of engineering is known as Embedded Machine Learning, or tinyML.

As we know, Microcontrollers (or MCUs) are very cheap electronic components, usually with just a few kilobytes of RAM, designed to use tiny amounts of energy. Nowadays, MCUs can be found embedded in almost any consumer, medical, automotive, and industrial devices. It is estimated that over 40 billion microcontrollers are sold every year, and probably hundreds of billions of them are in service nowadays. But, interestingly, those devices don’t get much attention because they’re often only used to replace functionality that older electro-mechanical systems could do in cars, washing machines, or remote controls.

More recently, with the IoT (Internet of Things) era, a significant portion of those MCUs is generating "quintillions" of bytes of data that, for the most part, is not used due to the high cost and complexity (bandwidth and latency) of data transmission.

On the other hand, in recent decades, we have seen a lot of development of Machine Learning models (aka Artificial Intelligence) trained with tons of data on very powerful and power-hungry mainframes.

But what is happening today is that, suddenly, it became possible to take noisy signals like images, audio, or accelerometer readings and extract meaning from them using neural networks. And what is more important is that we can run these networks on the microcontrollers and sensors themselves, using little power and interpreting much more of the sensor data that we are currently ignoring. This is tinyML, a new area that enables machine intelligence right next to the physical world.

This novel area of tinyML can help bring good to our society.

The Wio Terminal

The Wio Terminal, a very affordable $36 device, uses an ATSAMD51P19 microcontroller with an ARM Cortex-M4F core running at 120 MHz (boost up to 200 MHz), 4 MB of external flash memory, and 192 KB of RAM. Wireless connectivity is provided by a Realtek RTL8720DN, supporting both Bluetooth and Wi-Fi, which gives a solid foundation for IoT and tinyML projects. It is compatible with Arduino and MicroPython. The Wio Terminal has a 2.4-inch LCD screen, an onboard IMU (LIS3DHTR), a microphone, a buzzer, a microSD card slot, a light sensor, and an IR emitter (940 nm). Most importantly, there are two multi-functional Grove ports onboard for the Grove ecosystem and Raspberry Pi-compatible 40-pin GPIO headers for additional add-on support.

Continue reading…

Emulating a Google Assistant on a RaspberryPi and Arduino Nano 33 BLE (TinyML)

Continue reading...

Exploring AI at the Edge!

August 19, 2020 — 1 Comment

Image Recognition, Object Detection, and Pose Estimation using TensorFlow Lite on a Raspberry Pi

Continue reading...