Fast mmWave Beamforming Dataset with Camera Images
Download our datasets: Please use the links below to download the datasets:
Dataset #1: Images taken from the testbed with the associated best beam pair for the wooden obstacle
Dataset #2: Images taken from the testbed with the associated best beam pair for the cardboard box obstacle
These datasets were used for the paper "Machine Learning on Camera Images for Fast mmWave Beamforming", IEEE MASS 2020. Any use of this dataset that results in an academic publication, or any other publication that includes a bibliography, should contain a citation to our paper. Here is the reference for the work:
B. Salehi, M. Belgiovine, S. Garcia Sanchez, J. Dy, S. Ioannidis, K. Chowdhury, "Machine Learning on Camera Images for Fast mmWave Beamforming" IEEE MASS, 10-13 December 2020, Delhi NCR, India.
Description: The current mmWave standard relies on a beam sweeping procedure for beam alignment that consumes tens of milliseconds up to several seconds. We propose leveraging visual information as a potential solution to mitigate this beam training overhead. Consider a scenario in which images taken by cameras guide the transmitter and receiver in finding the best beam configuration. To that end, we designed a two-stage deep CNN architecture to 1) locate the transmitter and receiver in the image and 2) map them to the best beam configuration, i.e., the one that maximizes the SNR at the receiver. Fig. 1 shows a schematic of our proposed method. Our results reveal that our ML approach reduces beamforming-related exploration time by 93% under different ambient lighting conditions, with an error of less than 1% compared to the time-intensive deterministic method defined by the current standards. To evaluate the performance of our proposed method, we created two datasets containing images from our testbed and the associated best beam pairs in the presence of two different types of obstacles.
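The two-stage flow above can be sketched as follows. This is a minimal illustration of the interfaces only: the function names are ours, the stage bodies are stubs standing in for the two CNNs, and the row-major decomposition of the flat beam-pair index over the two 13-element codebooks is an assumption about index ordering.

```python
import numpy as np

N_BEAMS = 13  # codebook size at each of Tx and Rx (13 x 13 = 169 pairs)

def detect_endpoints(image):
    """Stage 1 (stub): locate transmitter and receiver in the image.

    The real pipeline uses a CNN detector; here we return fixed pixel
    coordinates just to illustrate the interface.
    """
    return (40, 120), (40, 480)  # (row, col) of Tx and Rx

def select_beam(tx_pos, rx_pos):
    """Stage 2 (stub): map detected positions to a flat beam-pair index.

    The real pipeline uses a second CNN trained to maximize receiver
    SNR; here we return a hard-coded index in [0, 169).
    """
    return 5 * N_BEAMS + 7

def predict_beam_pair(image):
    tx_pos, rx_pos = detect_endpoints(image)
    pair_index = select_beam(tx_pos, rx_pos)
    # Decompose the flat index into (tx_beam, rx_beam), assuming
    # row-major ordering over the two codebooks.
    return divmod(pair_index, N_BEAMS)

image = np.zeros((480, 640, 3), dtype=np.uint8)
print(predict_beam_pair(image))  # (5, 7)
```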
Experimental Setup: We used the mmWave transceiver system from National Instruments, which supports real-time over-the-air mmWave communication, along with SiBeam RF heads for data collection. The transmitter and receiver are mounted on two sliders that move horizontally. Two GoPro Hero 4 cameras monitor the movements in the room. An obstacle is located between the sliders, blocking the LOS path between the transmitter and receiver in certain directions. We collected the dataset for two types of obstacles: wood and cardboard box. The obstacles are rectangular, with dimensions 33 cm × 88 cm × 3 cm for the wood and 33 cm × 88 cm × 10 cm for the cardboard box, causing 30 dB and 4 dB of attenuation, respectively, when blocking the LOS path. Fig. 2 shows the testbed setup from the first camera angle with the wooden obstacle, and Fig. 3 shows a diagram of the experiment setting. For more detailed information, please see Section III of the paper. We discretize the slider length by a factor of 5. At each location, we record:
- The link quality of different beam pairs.
- Two pictures taken from two camera angles deployed in our testbed.
Fig. 2: Testbed setup from the first angle
Fig. 3: A diagram representing experiment parameters
Dataset Description: We are releasing two datasets: a) Dataset #1: images taken from the testbed with the associated best beam pair for the wooden obstacle; b) Dataset #2: images taken from the testbed with the associated best beam pair for the cardboard box obstacle. The structure of both datasets is as follows:
- Raw measurements: A folder containing the raw link-quality measurements. For each beam pair, we record 50 samples of the received SNR. The transmitter and receiver each have a 13-element codebook, leading to 169 possible beam-pair configurations. In each CSV file, the first column shows the measured received SNR and the second column shows the beam configuration at the Tx and Rx. While moving from one configuration to the next, we write rows of zeros to the CSV files.
- Process.m: A MATLAB script that implements equation (3) to determine the best beam configuration. It takes the raw measurements as input and generates the best beam configuration indexes at the transmitter and receiver.
- Tx-rx-Best-Beam-indexes.csv: Each location on the slider is identified by a case ID (see Section III). This file maps the case IDs to the associated best beam pairs (the labels). The labels are also included in the hdf5 files below.
- Cardbox_angle1.hdf5: This hdf5 file is the main file and includes the dataset itself. Fig. 5 shows the hierarchical structure used to generate the hdf5 files.
- Cropped_Samples_per_class: This group includes samples of Antenna1, Antenna2, and Background. To balance the dataset, augmented versions of the Antenna1 and Antenna2 classes are generated by applying different lighting effects. The train/test/validation sets for stage 1 (detection, see Section IV) can be generated by sweeping these images with a W × W window. In our experiments, we used W = 50; if you change the window size, you may need to generate more or fewer samples to keep the dataset balanced.
- Entire_Image_original_angle_1: This group contains the entire images taken from angle 1 and the associated beam pairs. Each sample consists of two main entries in the hdf5 file, named as follows:
'Image-case-i-j': The image taken from case (i,j) from angle 1, the input to our pipeline.
'label-case-i-j': The label associated with case (i,j), the output of our pipeline.
- Cardbox_angle2.hdf5: The same structure as Cardbox_angle1.hdf5, for images taken from angle 2.
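The raw-measurement layout and the role of Process.m can be sketched in Python. The toy CSV contents below and the selection criterion of maximizing the mean SNR are illustrative assumptions (the release uses equation (3) of the paper; consult Process.m for the exact rule):

```python
import csv
import io
from collections import defaultdict

# Toy raw-measurement contents (illustrative only): first column is the
# received SNR, second is the flat beam-pair index; an all-zero row
# separates consecutive beam configurations. The real files hold 50
# samples per beam pair over 169 pairs.
raw_csv = """\
12.5,72
11.8,72
0,0
7.2,73
6.9,73
"""

# Group SNR samples by beam pair, skipping the zero separator rows.
samples = defaultdict(list)
for snr, pair in csv.reader(io.StringIO(raw_csv)):
    if float(snr) == 0 and int(pair) == 0:
        continue
    samples[int(pair)].append(float(snr))

# Pick the beam pair with the highest mean SNR (our stand-in for
# equation (3)), then decompose the flat index into (Tx, Rx) beam
# indexes, assuming row-major ordering over the 13-element codebooks.
best_pair = max(samples, key=lambda p: sum(samples[p]) / len(samples[p]))
tx_beam, rx_beam = divmod(best_pair, 13)
print(best_pair, tx_beam, rx_beam)  # 72 5 7
```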
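The W × W sweep used to build the stage-1 samples from Cropped_Samples_per_class can be implemented with a simple strided loop; the image size, stride, and function name below are illustrative, not part of the release.

```python
import numpy as np

def sweep_windows(image, w=50, stride=50):
    """Yield w x w patches by sliding a window over a 2-D image."""
    rows, cols = image.shape[:2]
    for r in range(0, rows - w + 1, stride):
        for c in range(0, cols - w + 1, stride):
            yield image[r:r + w, c:c + w]

# Example: a 200 x 300 image swept with W = 50 and stride 50 yields
# 4 window rows x 6 window columns = 24 patches.
image = np.zeros((200, 300), dtype=np.uint8)
patches = list(sweep_windows(image, w=50, stride=50))
print(len(patches))  # 24
```

A smaller stride produces overlapping patches, which yields more (correlated) samples per image; as noted above, changing W may require regenerating samples to keep the classes balanced.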
Visualizer: Suppose you want to find the image and label of case (1,5) for the wooden obstacle, with the image taken from angle 1. You can use the code below to visualize the image and extract the associated label.
import h5py
import numpy as np
from PIL import Image

# Open the hdf5 file and navigate to the group for angle 1
f = h5py.File('Wood_angle1.hdf5', 'r')
Angle1 = f.get('Angle1')
Entire_Image_original_angle_1 = Angle1.get('Entire_Image_original_angle_1')

# Fetch the image and label datasets for case (1,5)
Img_case_1_5 = Entire_Image_original_angle_1['Image-case-1-5']
label_case_1_5 = Entire_Image_original_angle_1['label-case-1-5']

# Save the image to a JPG file
Image_case_1_5 = Image.fromarray(np.array(Img_case_1_5))
Image_case_1_5.save('case_1_5.JPG')
print('Image of case (1,5) saved')

# Extract and print the label
Label_case_1_5 = np.array(label_case_1_5)
print('The label of case 1-5 is:', Label_case_1_5)