Emma Alexander is a computer scientist and PhD student at the University of Hull. She works in the Hull Immersive Visualisation Environment (HIVE) team, which focuses on state-of-the-art visualisation, virtual reality and high performance computing.
For the record, I don’t dress like this all the time.
On getting into science
I used to spend a lot of time watching things like Tomorrow’s World and The Sky at Night and that pretty much did it for me. Watching people do that stuff, talk about it excitedly, just really ignited the passion in me, that that was really cool, it was like playing with toys all the time and you get to see and do really cool stuff.
And I then turned into a bit of a gamer for a few years and enjoyed playing games, was interested in how they worked, how they did it and, yeah, I kept my foot in with playing with consoles and stuff whilst I was doing the general education thing at the same time and managed to sort of interweave the two.
On working as a computer scientist
Part of what I do, as well as being a computer scientist, is I facilitate the communication of science and technology. So the research centre that I look after at the University of Hull is the Hull Immersive Visualisation Environment and it really specialises in helping people look at their work differently, to consider how they can use visualisation and simulation as part of that work to either aid the research itself or to aid the communication of what they do.
At the university institution, what happens is researchers, students, and businesses outside the institution can come along and use the kind of technology that we have but they can also seek support and guidance from the research visualisation and simulation experts as to how they actually want to do it because quite often when a user comes into the centre they don’t know what they want to do. They know they’ve got a problem with their data, whether visualising, simulating it or doing something with it, or communicating it, they don’t always know the best way of coming up with a solution for that so we can help guide them and say, ‘This is what you can visualise, this is how you can do it and this is how you can interact with it as well’.
I suppose it is about trying to nurture multidisciplinary collaboration. So, for example, we’ve had archaeologists and historians come into the centre and say, ‘Yeah, we’d like to be able to do some good visualisations for our work, to help excite, enthuse people about it and to help teach it too’. So an archaeologist might be used to going out in the field, wellies on, standing there, looking around a site, talking to 50 students about what used to be in this field. And what they can now do is take a tablet PC, hold it up in the middle of the field, look around and talk about the crop marks that are on the ground and what they used to be, and then see visualisations of what the landscape may or may not have looked like in past times. And then you can translate that back into the centre. We have a large 5 x 3 metre immersive work wall, where you stand in front of it, you look at it, it’s 3D, everything’s popping out of the screen and that’s all very nice, but what’s more exciting is when you can stand in front of the screen, look at the data, and start to use your hands to experience it and to interact with it. And when you feel like you’re engaging with the data you end up with quite an emotional relationship with it and you start to learn more and better and you start to ask questions more about what it is you’re looking at and why.
On ‘Minority Report’ interface
In many respects we’re already there. It’s just different levels of it coming out for different people to use. I mean, one of the new devices that’s come out in the last few months is the new head mounted display called the Oculus Rift which is very consumer oriented technology but it’s really good stuff: it’s got a good field of view, it’s got some great reviews, and that’s just one lovely example of the kind of…what used to be very sophisticated technology that we’d refer to as cutting edge is now feeding down into the mainstream, which is traditionally how all this works anyway.
On motion capture
I’ve come here [to Cheltenham Science Festival] with one of my colleagues from chemistry, Dr Mark Lorch, and we’re doing a session called ‘When the monkey met the iPad’ and it’s actually all about the technology of the iPad, how the liquid crystals work inside them, what do these devices do and how do they do it? And then we’re forming connections with that, with the iPads, and the kind of technology that I’m wearing now, which is a gyroscope-based motion capture system. So we start off the session talking about the science and the chemistry of it and we finish up talking about the motion capture, the gyroscopes, the accelerometers, the magnetometers in this kind of technology and talk to kids about the idea that this is all the same stuff that’s in the phones that they’re carrying in their pockets. And they quite like that connection between the multi-million pound movie and game industry and the technology that they use and how they’ve got a little bit of that in their pocket and it gives them a little bit of connection to that industry I think. It’s mostly about trying to enthuse and excite young people into science, whether it’s chemistry or computer science, but also showing them how all these technologies, how these disciplines are interrelated as well. That you shouldn’t just look at one against the other, look at them together.
OK, so in the suit that I’m wearing now there are about sixteen different sensors all over the body, um, these small little systems…this is off-the-shelf kit. This isn’t anything we’ve researched and designed. But this is quite nice because it’s comfortable to wear, when you’re not wearing it for about six hours a day, and you don’t have the whole issue of marker occlusion you would have if you used an optical-based system, whereby if you were suited up with little reflective markers, you’ve then got some issues with marker occlusion if you start moving or doing fight sequences next to other people and stuff. So there are various benefits and drawbacks with the different kinds of technology. So inside these sensors are the gyroscopes, accelerometers and magnetometers, and you align the suit by facing magnetic north and standing in a very still position to actually get your calibration first of all, and then, as you move, the character you’re looking at on the screen will move as well. We’re just using some off-the-shelf software called MotionBuilder that brings in a character that’s already got a skeleton inside it. You bring the device into the software, the two are magically connected through the powers of software and I become a monkey. Which this week has been great from the perspective of the children understanding what we’re doing, but I have been known as the monkey all week so I may live to regret this.
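[Editor’s note: each of those sensor pods combines a gyroscope, accelerometer and magnetometer into an inertial measurement unit, and software fuses their readings into an orientation. As a rough illustration of the idea — this is a generic complementary filter sketch in Python, not the suit’s actual algorithm, and all names and numbers here are made up for the example — the gyroscope gives a smooth but drifting estimate, while the accelerometer gives a jittery but drift-free one, and blending the two gives a stable angle:]

```python
import math

def accel_tilt_deg(ax, az):
    """Absolute tilt angle (degrees) from two accelerometer axes:
    noisy, but it never drifts because gravity is the reference."""
    return math.degrees(math.atan2(ax, az))

def complementary_step(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One filter update: mostly trust the integrated gyro rate
    (smooth, drifts over time), and nudge toward the accelerometer's
    absolute angle (jittery, drift-free) by a small factor (1 - alpha)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Illustrative run: a sensor held steady at a 30-degree tilt, with the
# estimate starting from zero. The gyro reports no rotation, so the
# accelerometer term alone pulls the estimate toward the true angle.
true_tilt = accel_tilt_deg(0.5, math.sqrt(0.75))  # exactly 30 degrees
angle = 0.0
for _ in range(300):
    angle = complementary_step(angle, gyro_rate=0.0,
                               accel_angle=true_tilt, dt=0.01)
# After 300 steps the estimate has converged close to 30 degrees.
```

[A real suit does the full three-dimensional version of this, typically with a Kalman-style filter, and uses the magnetometer the same way for heading — which is why the calibration pose she describes involves facing a known compass direction and standing still.]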
So probably in the last ten years or so, this has become off the shelf. If you think of how quickly, especially, sort of, the cartoon industry are picking up this sort of kit now and the vast majority of animations that you see on TV for kids are all…there’s some motion capture in there somewhere. And I think that’s been in the last ten years. Motion capture’s been around for longer than that and it’s changing every few months. There’s different ways of doing this. You can do this marker-less with Kinect and stuff so there’s all sorts of different ways you can do some good body capture nowadays.