The aim of this project is to develop a proof of concept for a system built on an edge computing architecture, enabling the deployment of artificial intelligence without overloading the central server. The project addresses each layer of the architecture, with the main objectives being the implementation of:
- Training and fine-tuning of robust deep learning models on high-performance computers;
- Inference systems for artificial intelligence models on embedded devices (see the sketch after this list);
- Nodes for queuing and pre-processing data sent by embedded devices;
- Cloud server architecture, capable of storing the data received in high-performance databases and feeding business intelligence/dashboard systems;
- A simplified dashboard proposal for visualizing the data collected.
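As a rough illustration of the edge layer, the sketch below shows an inference loop that a microprocessor-class device (e.g. a Raspberry Pi) could run: it loads a converted model with the TensorFlow Lite runtime, classifies one pre-processed camera frame, and forwards the result to a queuing node. The model file name, device identifier, and QUEUE_NODE_URL are illustrative assumptions, not part of this specification.

```python
import numpy as np
import requests
from tflite_runtime.interpreter import Interpreter

QUEUE_NODE_URL = "http://queue-node.local:8080/ingest"  # hypothetical queuing-node endpoint

# Load a model previously converted to TensorFlow Lite format (file name is illustrative).
interpreter = Interpreter(model_path="edge_model.tflite")
interpreter.allocate_tensors()
input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]


def classify_and_forward(frame: np.ndarray) -> None:
    """Run on-device inference on one pre-processed camera frame and forward the result."""
    # The model is assumed to expect a single float32 image batch of shape (1, H, W, C).
    interpreter.set_tensor(input_info["index"], frame.astype(np.float32)[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_info["index"])[0]
    payload = {
        "device_id": "rpi-01",  # illustrative device identifier
        "class_id": int(scores.argmax()),
        "score": float(scores.max()),
    }
    # Send the inference result to the queuing/pre-processing node over plain HTTP.
    requests.post(QUEUE_NODE_URL, json=payload, timeout=2)
```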
APPLICATION EXAMPLES:
- Smart Traffic Monitoring: In smart cities, camera systems and sensors installed at traffic lights and intersections can process data in real time to optimize traffic flow. By analyzing images and traffic data locally, the system can adjust signal timing immediately to reduce congestion and improve road safety;
- Remote Medical Assistance: Edge computing devices can be used for remote patient monitoring, allowing real-time analysis of health data such as vital signs. This enables a faster response in critical situations and reduces the load on centralized healthcare systems.
MANDATORY INFRASTRUCTURE RESOURCES:
Carrying out the proposed proof of concept requires hardware, software, and data resources. The required hardware resources are:
- High-performance computers with large GPU (video) memory for training deep learning models;
- Embedded devices built on microprocessor-based SoCs (e.g. Raspberry Pi);
- Embedded devices based on resource-constrained microcontrollers; and
- Sensors for measuring the relevant data (e.g. cameras for computer vision, microphones for speech recognition).
The software resources required are:
- Cloud server system (e.g. Microsoft Azure, AWS, GCP);
- Libraries for training artificial intelligence models (e.g. PyTorch);
- Libraries for inference of artificial intelligence models on embedded devices (e.g. TensorFlow Lite for Microcontrollers, a common TinyML runtime);
- NoSQL database system compatible with mutable objects (e.g. MongoDB);
- System for scalable orchestration of edge device processing nodes (e.g. KubeEdge);
- System for processing complex events with high throughput (e.g. Apache NiFi, Apache Kafka, Redis); and
- Tools for building dashboards (e.g. Kibana, Grafana). A sketch of how the queuing, storage, and dashboard layers could connect is given after this list.
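To indicate how the queuing and storage layers could fit together, the following minimal sketch assumes Apache Kafka and MongoDB are chosen from the options above: it consumes inference results from a Kafka topic fed by the queuing nodes and writes them to a MongoDB collection that a Grafana or Kibana dashboard could then query. The topic, broker address, database, and collection names are placeholders.

```python
import json

from kafka import KafkaConsumer   # kafka-python client
from pymongo import MongoClient

# Placeholder connection settings for the cloud layer.
consumer = KafkaConsumer(
    "edge-inferences",                       # hypothetical topic fed by the queuing nodes
    bootstrap_servers="kafka:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
collection = MongoClient("mongodb://mongo:27017")["edge_poc"]["inferences"]

# Persist each event so the business intelligence/dashboard layer can query it.
for message in consumer:
    collection.insert_one(message.value)
```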
The data resources required are:
- A pre-processed and cleaned dataset for training/fine-tuning the artificial intelligence models that will be implemented at the edge (a minimal fine-tuning sketch is given below).
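For the training layer, a minimal fine-tuning sketch in PyTorch follows. It assumes the cleaned dataset has already been saved to disk as tensors (the file names, class count, and backbone choice are illustrative) and ends by exporting the model to ONNX so it can later be converted for the embedded inference runtime.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Hypothetical pre-processed tensors saved by an earlier cleaning step (file names are placeholders).
features = torch.load("preprocessed_features.pt")   # e.g. shape (N, 3, 224, 224)
labels = torch.load("preprocessed_labels.pt")       # e.g. shape (N,)
loader = DataLoader(TensorDataset(features, labels), batch_size=32, shuffle=True)

# Start from a small pretrained backbone and fine-tune only the classification head.
model = models.mobilenet_v3_small(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)  # 2 illustrative classes

optimizer = torch.optim.Adam(model.classifier[-1].parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Export to ONNX so the model can later be converted for the embedded inference runtime.
torch.onnx.export(model.eval(), torch.randn(1, 3, 224, 224), "edge_model.onnx")
```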
COMPLETION AND DELIVERY OF THE PROJECT:
All prototypes generated during the project will be delivered at the end of the 10th week.