Making of: Interactive Video for HPE

  • Sonar Motion Tracking, Camera Motion Tracking, Generative Video, Sound Design

An in-depth look at how we created the interactive technology for HPE’s Open NFV stand at Mobile World Congress 2016.

Begin with the concept, the rest will write itself…

In November 2015 we were approached by Matt and the team at Ivory. They had been tasked by their client HPE (Hewlett Packard Enterprise) to create an exhibition stand for Mobile World Congress 2016 (MWC2016) in Barcelona. The show is huge in the world of telecoms, tech and networking, and the place for the industry to meet. HPE is the cloud computing side of Hewlett Packard, and the stand we were to work on was to showcase their Open NFV Ecosystem. We had to ask what that meant too! NFV stands for Network Function Virtualisation and, in almost plain English, it is an initiative whose goal is to take network functions off dedicated hardware such as routers and firewalls (the stuff you find in your server cabinet at work) and virtualise them onto virtual machines. OK, now we’re clear on that, let’s move on to the good bit…

In short, the stand was about huge networks. The concept was to visualise this: data moving backwards and forwards across the LED screens that made up the structure, tied together by interactive content we were to create. Each of the nine “LED ribbons” that wrapped up, across and over the stand represented one of the nine NFV developers showing their products in the demo area, while the top of the stand represented the HPE platform that makes it all possible. Oh, and they wanted it to be interactive, to represent the very nature of the Open NFV Ecosystem! Our task was not only to create video content for the screens but to develop an interactive system to drive them. With our new knowledge of networking we set to work on the R&D.

We get to play with a lot of professional equipment at Event Projection; we have a warehouse full of it. Unfortunately, we quickly came to the conclusion that we didn’t have, and couldn’t just go out and buy, what we needed for this project. Thankfully our lab rat, AKA Steve, was confident that something could be built. After much research we decided that the most reliable form of sensor for the stand was ultrasonic. These are typically found on the back of a modern car in the reversing system: they’re the things that make the beeping noise to stop you hitting the lamppost. They’re cheap, and the detailed specifications meant we could predict exactly how they would behave. We set off with the sensor research, trawling through spec sheets until we had found what we were looking for. The polar pattern (sensitivity range) was critical. We had decided to use two sensors per LED ribbon; each pair would be able to triangulate the position, velocity and direction of a person passing. The outcome of this R&D stage was signed off after a successful proof of concept was delivered, and work could really begin.
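The geometry behind a sensor pair is simple circle intersection: two sensors a known distance apart each report a range, and the target must sit where the two range circles meet. A minimal sketch of the idea (the spacing, function names and sample timing are our own illustrative assumptions, not the production code):

```python
import math

def locate(r1, r2, d):
    """Intersect two range circles: sensors sit at (0, 0) and (d, 0).

    r1, r2 -- distances (m) reported by each ultrasonic sensor
    d      -- known spacing (m) between the sensor pair
    Returns (x, y) of the target in front of the sensor line,
    or None if the readings are inconsistent (noise, no target).
    """
    x = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)
    y_sq = r1 ** 2 - x ** 2
    if y_sq < 0:
        return None
    return x, math.sqrt(y_sq)

def velocity(p0, p1, dt):
    """Velocity vector between two successive position fixes, dt seconds apart."""
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

# Sensors 1 m apart, target equidistant from both at ~0.707 m:
print(locate(0.7071, 0.7071, 1.0))  # roughly (0.5, 0.5)
```

Feeding successive fixes through `velocity` gives speed and direction, which is all the content engines need to react to a person walking past a ribbon.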

The sensors would not work on their own without an interface. With a total of 18 sonar sensors, we needed to build both the hardware and software to make it all talk. The interface is effectively a tool that takes the 18 data streams from the sensors and outputs meaningful data for our computers to work with. With no off-the-shelf solutions, this was to be known as Steve’s Box of Tears… it wasn’t so easy to build! The rest of the system consisted of two data-crunching computers and two graphics-generating computers.
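To illustrate what “meaningful data” means here, the translation layer might look something like the sketch below: raw per-sensor readings come in, and per-ribbon groups come out. The `id:distance` wire format and the sensor-to-ribbon numbering are invented for illustration; they are not Steve’s actual protocol.

```python
def parse_reading(line):
    """Parse one raw reading like '7:142' into (sensor_id, distance_cm).
    The 'id:distance' text format is a stand-in for the real wire protocol."""
    sensor, dist = line.strip().split(":")
    return int(sensor), int(dist)

def group_by_ribbon(readings, sensors_per_ribbon=2):
    """Map 18 sensor readings onto 9 ribbons (sensors 0-1 -> ribbon 0, etc.)."""
    ribbons = {}
    for sensor_id, dist in readings:
        ribbons.setdefault(sensor_id // sensors_per_ribbon, {})[sensor_id] = dist
    return ribbons

raw = ["0:120", "1:118", "2:300", "3:310"]
print(group_by_ribbon(parse_reading(l) for l in raw))
# ribbon 0 gets sensors 0 and 1; ribbon 1 gets sensors 2 and 3
```

Once readings are grouped per ribbon, each pair can be handed to the triangulation step and onwards to the graphics machines.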

With any project that deals with video, content is king, and without good content the concept falls apart. Thankfully, with such a strong concept the ideas were plentiful. We needed to illustrate the movement of data between the user and the network. We liked the idea of rain: data falling with gravity effects down the LED ribbons; “Data Birds” flying around, mapped onto all nine pillars; and effects similar to those in The Matrix, where a person entering the stand would cause “Data Ripples” in graphics flowing up and down the pillars. Many of these ideas were, as is to be expected, thrown out by the client. We also learnt that for almost every different sequence, a different “graphics engine” had to be written. At this point in the project, it was also decided that the screens were to be made from relatively coarse, 12mm pixel pitch LED screen tiles. These limited the content further; any curves in the content would look jagged on the low-resolution screens. It was decided that the screens would be covered with diffusion fabric to soften the pixels too.

In January, a case of LED screens arrived at our warehouse. We could now see the content as it would look on the lower 3m section of a ribbon, which was a great help in content creation. Over the course of the next month, the concepts for the various content loops were debated and refined. Custom printed circuit boards for the hardware interface were designed, and gradually the whole system began to come together. The shipping deadline was looming, though, and the last thing we needed was problems.

We had a problem. To our embarrassment, we discovered that the sonar sensors would create a faint clicking noise on a mobile phone held too close to them! After discussions, it was determined that a second system was needed in case of problems in Barcelona. We had originally shelved the idea of using cameras as the motion tracking sensors for two reasons: we could not be sure of factors outside of our control in Barcelona, such as lights from neighbouring stands, and because the cameras would be inside what is effectively a giant light box, even dramatic changes in our own content could make the system unworkable. Thankfully, by this stage the content had become subtle enough that camera tracking was viable, so we set about creating a system using nine cameras, one for each pillar. The cameras would use “frame differencing” to track the motion of people on the stand.
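Frame differencing is the simplest form of motion detection: subtract consecutive frames and count how many pixels changed more than some threshold. A toy version using NumPy arrays as stand-in greyscale frames (the threshold, frame size and function name are illustrative, not the values we used on the stand):

```python
import numpy as np

def motion_amount(prev, curr, threshold=25):
    """Fraction of pixels whose brightness changed by more than `threshold`.

    prev, curr -- greyscale frames as uint8 arrays of the same shape.
    Cast to int16 first so the subtraction cannot wrap around.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return float(np.count_nonzero(diff > threshold)) / diff.size

# Two tiny 4x4 "frames": one pixel jumps from 0 to 255
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 255
print(motion_amount(prev, curr))  # 1 of 16 pixels changed -> 0.0625
```

This is also why subtle content mattered: the differencing step cannot distinguish a person moving in front of a pillar from the pillar’s own graphics changing rapidly behind the diffusion fabric.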

In the final days before leaving the UK, the custom-built enclosures arrived from Germany. With the additional workload of developing the camera system, time was short, so it was all hands on deck to fit the sensors. The rack had to be completed too, but thankfully the additional equipment required for the cameras was squeezed in!

The technical design and build of the stand itself was crafted by Berlin-based Minga Network, and we arrived bang on where the build schedule put us. The stand was a technical marvel, made from an extruded aluminium frame that housed the LED panels to perfection. It proved to be too perfect at the show, when visitors had to be stopped from leaning on the beautifully disguised screens! Our build had been 75% completed back in London, so we just had to wheel our rack of precious kit into the tech room and fit and wire up the sonar sensors and cameras.

The set-up was smooth and the show was a great success. There were the inevitable and foreseen tweaks during the installation but the system proved to be rock solid and the back-ups were never used.

Thank you to all the team at Ivory for putting your faith in us; HPE for trusting the creative vision; Minga for your stunning skills; PRG for your LED and wonderful technician, Lenn; and F1 Networks for sending Noe with his knowledge of fine Spanish wine.