Vision-Based Adaptive Traffic Control System, a project by Team AbruTech, won Gold at NBQSA 2020 (National ICT Awards), organized by the British Computer Society, in the Tertiary Student Category, and has been nominated for the APICTA (Asia Pacific) Awards to be held in Malaysia.
A patent for our system is currently under review at NIPO, and the system is being further developed into a product, with funding from the World Bank via AHEAD, by a multidisciplinary team of engineers through the Enterprise (Business Linkage Cell) of the University of Moratuwa, together with the RDA and SD&CC.
We are forever indebted to the families of Tehara and Chinthana for hosting our team for weeks and months during strikes and study breaks, allowing us to work together. In addition, we thank our supervisors, Prof. Rohan and Prof. Saman Bandara, for their support and assistance.
Problem Statement
In most countries, traffic flow is controlled by traffic lights with pre-set timers. In Sri Lanka, this often causes congestion during peak hours because the system is not sensitive to the traffic level in each lane of an intersection. To work around this, traffic police usually turn off the lights and control the traffic manually during peak hours.
Solution
Team: Abarajithan, Rukshan, Tehara, Chinthana
We present a system-on-chip design that:
- Processes the video feed locally at the edge using YOLOv2 (a 23-layer convolutional neural network for single-shot object detection)
- Deduces traffic flow in each phase
- Suggests green times to the traffic lights
Results
- The above videos demonstrate object detection (with YOLOv2) and tracking (a custom-built algorithm that can be implemented in C without any libraries) on test data (the CNN was trained on images from a different road)
- The tracking algorithm is lightweight enough to run at thousands of FPS. The minimum frame rate needed to reliably track vehicles travelling at 70 km/h (the speed limit) is 3 FPS (as shown)
- Only vehicles coming towards the camera are considered. The blue box marks the vehicle being counted.
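As a rough sanity check on the 3 FPS figure above, the arithmetic can be sketched as follows (the 25 m detection-zone length is an assumed value for illustration, not a figure from the project):

```python
# How far does a vehicle at the 70 km/h speed limit travel between frames at
# 3 FPS, and roughly how many frames does it stay in the camera's view?
SPEED_KMH = 70.0
FPS = 3.0
ZONE_M = 25.0  # assumed length of road visible to the camera

speed_ms = SPEED_KMH / 3.6        # ~19.44 m/s
step_m = speed_ms / FPS           # ~6.48 m travelled per frame
frames_in_view = ZONE_M / step_m  # ~3.9 frames available to form a track

print(round(step_m, 2), round(frames_in_view, 1))
```

With three to four sightings per vehicle, a tracker has enough observations to associate detections into a track; at a lower frame rate the per-frame displacement grows and associations become ambiguous.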
IoT dashboard for the demonstration at the SLIOT competition, running on an NVIDIA Jetson Nano and logging data through MQTT
Project Tasks
The following tasks have been completed or are in progress.
1. Machine Learning
- Built four remotely powered, wireless data-collection devices
- Collected, annotated, and augmented traffic images to create a Sri Lankan traffic dataset (1,500 images)
- Built a numpy-based inference framework (keras-like) from scratch as the testbench
- Optimized the architecture of YOLOv2 object detection neural network for hardware implementation
- Trained YOLOv2 and TinyYOLO
2. FPGA Implementation
- Designed a resource-efficient hardware architecture for a CNN acceleration engine to implement YOLOv2 on FPGA
- Designed memory pipelines for high throughput data feeding
- Implementing and debugging the acceleration engine & memory pipeline
3. Object Tracking and Traffic Sensing
- Built a standalone (no libraries used) vehicle tracking algorithm
- Built vehicle counting and green-time allocation
- Finalizing the algorithms and rewriting them in C (bare-metal on the ARM side of the ZYNQ FPGA)
4. Traffic Simulation and Testing
- Built a simulation model of Piliyandala bypass junction in VISSIM (industry grade traffic simulation software used by civil engineers to design intersections) to test traffic control algorithms
5. IOT Implementation
- Logging data to a central server through MQTT while demonstrating the project on an NVIDIA Jetson Nano
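For illustration, a data-logging payload of the kind published over MQTT might be built like this (the topic name and JSON schema here are assumptions, not the project's actual format; a real client such as paho-mqtt would handle the transport):

```python
import json
import time

def make_payload(junction_id, counts, green_times):
    # Hypothetical schema: per-phase vehicle counts and allocated green times,
    # timestamped for the central server's dashboard.
    return json.dumps({
        "junction": junction_id,
        "ts": int(time.time()),
        "counts": counts,
        "green_s": green_times,
    })

# With an MQTT client, publishing might look like (not run here):
# client.publish("traffic/junction01",
#                make_payload("junction01", [12, 4, 7, 3], [40, 15, 25, 10]))
```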
Methodology
1. Data Collection
Building and fixing the data collection device
2. Modifications to YOLOv2
- Fused batch normalization into convolution by modifying the weights and biases accordingly.
- Interchanged conv => leaky-relu => max-pool to conv => max-pool => leaky-relu to reduce power (valid because leaky ReLU is monotonically increasing, so it commutes with max-pooling; pooling first leaves fewer activations to compute)
- Changed the output layer from 80 classes to 5 classes, by reusing weights of appropriate classes.
- Changed grid size from (13 x 13) to (12 x 8) and designed the sensing algorithm accordingly
- Trained with custom Sri Lankan Traffic Dataset
- Built a numpy-based inference framework and tested custom floating-point arithmetic (two variants of float8), float16, and integer quantization.
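The batch-norm fusion and the pool/activation swap above can be sketched in numpy (a minimal illustration, not the project's actual framework code; the tensor layout and the 0.1 leaky slope are assumptions):

```python
import numpy as np

def fuse_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer into the preceding convolution.

    W: conv weights, shape (out_ch, in_ch, k, k); b and all BN parameters
    have shape (out_ch,). Returns adjusted weights and biases such that
    conv(x, W_f, b_f) == bn(conv(x, W, b)).
    """
    scale = gamma / np.sqrt(var + eps)
    W_f = W * scale[:, None, None, None]  # scale each output channel
    b_f = (b - mean) * scale + beta
    return W_f, b_f

def leaky(t, slope=0.1):
    """Leaky ReLU; monotonically increasing, so it commutes with max-pooling:
    max(leaky(x)) == leaky(max(x))."""
    return np.where(t > 0, t, slope * t)
```

Because fusion only rescales weights and shifts biases, it costs nothing at inference time; and since `leaky` is monotonic, applying it after pooling produces identical outputs while evaluating the activation on a quarter as many values (for 2x2 pooling).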
3. CNN Accelerator Design
- Accelerator core v1.0 was designed to perform twelve 3x3 convolutions in 9 clock cycles, using 24 9-input muxes, 48 3-input muxes, 144 16-bit registers, 3 multipliers, and 3 accumulators.
- This was redesigned as core v2.0, which is 4 times faster and uses five times fewer 3-input muxes, no 9-input muxes, and about 20 times fewer registers (for the same speed), with 100% utilization of all multipliers and adders.
- Currently building the caches and memory pipes to feed the AXI-Stream (AXIS) cores without stalling.
4. Object Tracking
- Built a custom lightweight tracking algorithm that can be implemented in C without any libraries, so it can run bare-metal (standalone) on the ZYNQ PS side with minimal memory bandwidth (leaving maximum bandwidth for the ZYNQ PL)
- Near-97% vehicle-counting accuracy in daytime, and 85% at night and in rain, on test data (a road the CNN had never seen before)
- Hoping to reach near-100% counting accuracy in day, night, and rain conditions through further improvements.
- NOTE: The object detector (YOLOv2) itself is less accurate, but the tracking algorithm is designed to recover near-100% accuracy in vehicle counting and identification.
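A minimal sketch of the kind of library-free, nearest-centroid tracking and count-line logic described above (the class name, distance threshold, and counting rule are illustrative assumptions, not the project's algorithm; it is written in plain Python precisely so it would port to bare-metal C):

```python
class Tracker:
    """Associate per-frame detections by nearest centroid; count each track
    once when it crosses a horizontal line moving toward the camera."""

    def __init__(self, max_dist=50.0, count_line_y=100.0):
        self.tracks = {}      # track id -> last (cx, cy) centroid
        self.counted = set()  # track ids already counted
        self.next_id = 0
        self.max_dist = max_dist
        self.line = count_line_y
        self.count = 0

    def update(self, boxes):
        # boxes: iterable of (x1, y1, x2, y2); one detection per track
        # per frame is assumed in this sketch.
        for x1, y1, x2, y2 in boxes:
            cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
            # find the nearest existing track within max_dist
            best, best_d = None, self.max_dist
            for tid, (px, py) in self.tracks.items():
                d = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
                if d < best_d:
                    best, best_d = tid, d
            if best is None:
                best = self.next_id  # no match: start a new track
                self.next_id += 1
            else:
                py = self.tracks[best][1]
                # count once when the track crosses the line downward
                if py < self.line <= cy and best not in self.counted:
                    self.counted.add(best)
                    self.count += 1
            self.tracks[best] = (cx, cy)
```

Counting at a line crossing, rather than per detection, is what lets a tracker recover high counting accuracy from an imperfect detector: a vehicle missed in one frame is simply re-associated in the next.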
5. Traffic Control Algorithm
- Designed and tested 8 algorithms based on density, bounding-box count, flow, etc.
- Currently working on eliminating traffic snake formation
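As one illustrative baseline from the count-based family mentioned above (the exact formula is an assumption, not necessarily one of the project's 8 algorithms): give each phase a minimum green time, then split the remaining cycle in proportion to the vehicle count sensed in each phase.

```python
def allocate_green(counts, cycle_s=120.0, min_green_s=10.0):
    """Split a fixed signal cycle among phases: a floor per phase, with the
    remainder shared proportionally to each phase's sensed vehicle count."""
    n = len(counts)
    total = sum(counts)
    spare = cycle_s - min_green_s * n
    if total == 0:
        return [cycle_s / n] * n  # empty junction: share the cycle evenly
    return [min_green_s + spare * c / total for c in counts]

# e.g. allocate_green([30, 10, 5, 5]) -> [58.0, 26.0, 18.0, 18.0]
```

The floor keeps lightly loaded phases from starving; more sophisticated variants would also react to queue length and flow rate, which is where issues like traffic snake formation come in.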
Modeling the Piliyandala bypass junction in VISSIM - An industry-grade traffic simulation software