Using Data in Video

Dissertation on using data in video designs

For my final-year dissertation at Rose Bruford College I wanted to expand on my earlier use of real-time data in the DotCal project.

Working with real-time data, I looked at different ways of presenting it and at how to create visualisations of the information using real-time tools.

The data source I chose was the Environment Agency flood and river level data from its real-time data API (Beta), published under the Open Government Licence v3.0. This granted me access to readings from individual water measurement stations.
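
To give a concrete sense of what that access looks like, the sketch below pulls recent readings for a single station using the API's readings endpoint. The station reference, the reading limit and the use of browser-style fetch are illustrative choices rather than the project's exact setup.

```javascript
// Minimal sketch of pulling recent readings for one measurement station
// from the Environment Agency real-time API. The station reference and
// limit are examples, not the exact values used in the project.
const BASE = "https://environment.data.gov.uk/flood-monitoring";
const STATION = "1491TH"; // example station reference

async function fetchReadings(station, limit = 96) {
  const url = `${BASE}/id/stations/${station}/readings?_sorted&_limit=${limit}`;
  const response = await fetch(url);
  if (!response.ok) throw new Error(`API request failed: ${response.status}`);
  const data = await response.json();
  // Each item in the response carries a dateTime and a value for one measure.
  return data.items.map(item => ({ time: item.dateTime, value: item.value }));
}

fetchReadings(STATION).then(readings => console.log(readings));
```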

In doing so I created three different versions. These were intended to be presented in a three-sided, installation-like space, but due to the COVID-19 pandemic at the time I instead streamed a virtual version in 360°, keeping the data real-time and the interactivity possible.

Versions

Below are recordings of each version. Using fullscreen is recommended.

1

The first version was basic and set the foundations for the ones that followed. Here I constructed the first versions of the two graphs, wrote the JavaScript to call the API and tuned the 360° camera output.

2

The second version focused on interactivity. In the real-world implementation this would have been achieved by tracking the hands of users, with their height detected automatically.

This design also expanded the contextual information shown about the data, although this was limited by time constraints.

3

Taking a more artistic approach, this version visualised the data without any visual scales or other context.

The front visualisation shows the 30-day averages seen in the line graphs of the previous versions, with time flowing towards the user.

The right visualisation shows the four averages seen in the previous versions, with distance from the centre representing the value.
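
As a rough illustration of that mapping, the sketch below places four averages around a centre point, with each point's distance from the centre driven by its value. The number of averages, the radius and the value range are placeholder assumptions rather than the project's calibration.

```javascript
// Illustrative mapping for the right-hand visualisation: each average sits
// at its own angle, with distance from the centre encoding the value.
const averages = [0.42, 0.55, 0.61, 0.48]; // placeholder values for the four averages
const maxRadius = 400;                      // px, placeholder canvas scale

const positions = averages.map((value, i) => {
  const angle = (i / averages.length) * 2 * Math.PI; // spread evenly around the centre
  const radius = Math.min(Math.max(value, 0), 1) * maxRadius; // value drives the distance
  return { x: Math.cos(angle) * radius, y: Math.sin(angle) * radius };
});

console.log(positions);
```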

Data Fetching and Processing

The flowchart below shows how data is downloaded and processed.
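
As a rough companion to the flowchart, the sketch below shows one plausible shape for the processing step: grouping readings into daily means and reducing them to a 30-day average. It assumes readings in the { time, value } shape from the earlier fetch sketch; the project's actual grouping may have differed.

```javascript
// Sketch of the processing step: raw readings -> daily means -> 30-day average.
function dailyMeans(readings) {
  const byDay = new Map();
  for (const { time, value } of readings) {
    const day = time.slice(0, 10); // "YYYY-MM-DD" from the ISO dateTime
    if (!byDay.has(day)) byDay.set(day, []);
    byDay.get(day).push(value);
  }
  return [...byDay.entries()].map(([day, values]) => ({
    day,
    mean: values.reduce((sum, v) => sum + v, 0) / values.length,
  }));
}

function thirtyDayAverage(readings) {
  const days = dailyMeans(readings).slice(-30); // most recent 30 days
  return days.reduce((sum, d) => sum + d.mean, 0) / days.length;
}
```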

Graphs

Versions 1 and 2 both contained graphs. The workings behind each were the same between versions, with only visual changes made.
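
As a sketch of those shared workings, the snippet below shows the kind of value-to-pixel scaling that stays the same while the visual styling changes between versions. The ranges, canvas sizes and sample values are placeholders, not the project's actual figures.

```javascript
// Shared mapping: scale a reading value into a pixel size, clamped to range.
function scale(value, minValue, maxValue, maxPixels) {
  const t = (value - minValue) / (maxValue - minValue);
  return Math.max(0, Math.min(1, t)) * maxPixels;
}

// Bar graph: one bar height per average value (values are illustrative).
const averages = [0.42, 0.55, 0.61, 0.48];
const barHeights = averages.map(v => scale(v, 0, 1, 300));

// Line graph: daily means become evenly spaced points, y inverted so that
// higher values sit higher on screen.
const means = [0.40, 0.45, 0.52, 0.49, 0.58];
const linePoints = means.map((v, i) => ({
  x: i * (600 / (means.length - 1)),
  y: 300 - scale(v, 0, 1, 300),
}));
```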

Bar Graph

Line Graph

Interactivity

The second version contained interactive elements. These would have been controlled by motion-tracking the hands of users, but in the remote 360° environment they were controlled via TouchOSC by the stream operator, who took input from a voice call with the participant.
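
As an illustration of how that operator input could be received, the sketch below listens for TouchOSC messages using the node-osc package in a small Node.js helper. The port, the OSC address and the use of node-osc at all are assumptions; the project may have handled OSC directly inside the real-time media tool.

```javascript
// Rough sketch of receiving TouchOSC input in Node.js via node-osc.
// The port and address are placeholders, not the project's actual layout.
const { Server } = require("node-osc");

const oscServer = new Server(9000, "0.0.0.0"); // TouchOSC sends to this port

oscServer.on("message", ([address, ...args]) => {
  // e.g. a fader at /interaction/height sending a 0..1 value
  if (address === "/interaction/height") {
    const height = args[0];
    console.log("operator set height to", height);
    // ...update the interactive elements with this value
  }
});
```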