DroneAid

DroneAid is currently being transferred to The Linux Foundation.

DroneAid uses machine learning to detect calls for help on the ground placed by those in need. At the heart of DroneAid is a Symbol Language that is used to train a visual recognition model. That model analyzes video from a drone to detect and count specific images. A dashboard can be used to plot those locations on a map and initiate a response.

An aerial scout for first responders

DroneAid consists of several components:

  1. The DroneAid Symbol Language that represents need and quantities
  2. A mechanism for rendering the symbols in virtual reality to train a model
  3. The trained model that can be applied to drone livestream video
  4. A dashboard that renders the location of needs captured by a drone
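To make the data flow between steps 3 and 4 concrete, here is a minimal sketch of how detections from the recognition model could be aggregated for the dashboard. The names (`Detection`, `count_needs`) and the symbol classes are illustrative assumptions, not taken from the DroneAid codebase.

```python
# Hedged sketch: aggregating model detections for a dashboard.
# `Detection`, `count_needs`, and the symbol names are hypothetical,
# not part of the actual DroneAid implementation.
from collections import Counter
from typing import List, NamedTuple

class Detection(NamedTuple):
    symbol: str   # a DroneAid Symbol Language class, e.g. "SOS" or "water"
    lat: float    # latitude reported by the drone at detection time
    lon: float    # longitude reported by the drone at detection time

def count_needs(detections: List[Detection]) -> Counter:
    """Count how many times each symbol was seen, for plotting on a map."""
    return Counter(d.symbol for d in detections)

detections = [
    Detection("water", 18.22, -66.59),
    Detection("water", 18.23, -66.58),
    Detection("SOS",   18.25, -66.60),
]
print(count_needs(detections))  # → Counter({'water': 2, 'SOS': 1})
```

Each `Detection` keeps its coordinates so the dashboard can both plot individual locations and show per-symbol totals.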

The current implementation can be extended beyond a single drone to additional drones, airplanes, and satellites. The Symbol Language can likewise be used to train other visual recognition implementations.

The original version of DroneAid was created by Pedro Cruz in August 2018. A refactored version was released as a Code and Response™ open source project with The Linux Foundation in October 2019.

See the README for the specifics.
