Nick Barnes – Project Lead on VibroMat

Nick Barnes from Data61 and Bionic Vision Australia is the project lead on VibroMat, an innovative device which allows the user to see an image by feeling vibrations from a wearable mat worn on the back. Using vibration motors similar to those in your smartphone, and working closely with bionic vision teams around Australia, Nick's team at Data61 is aiming to make this device a commercial product in the near future, changing the lives of thousands around Australia.

How do Data61 and the VibroMat team make ideas happen?

Making ideas happen really is the agenda of Data61. Going from fundamental research and invention to something new and useful is really the goal of the organisation. The idea is that you are looking for things which are useful to people. The main difference between us and a traditional university is that a university's main outcome is fundamental research; for us, getting ideas and taking them out into industry is all part of the brief.

How did your team go about getting VibroMat started?

VibroMat has had a long journey, and it's been tied quite closely to Bionic Vision Australia. For that project, you're looking at a device which is surgically implanted, which continues to get higher resolution as the technology develops, but which is also quite expensive. For VibroMat, we started with the idea of something which didn't require surgery and could treat a wider group of patients, including those who are blind. VibroMat would target more basic vision, such as orientation and mobility. I was interested in what we could do with low-vision technologies, and the Bionic Eye Consortium came along, which was a really convenient way of getting funding for that. We'd already been working on it a bit ourselves prior to that. At the time I was working for NICTA, which is now Data61, and what came out of that was the consortium and $50 million for the retinal implant project.

Given the restricted vision from a bionic eye or VibroMat, our basic idea was that computer vision could take a scene and turn it into something that a person could recognise. We looked at the specific problems people had – what sort of things computer vision would need to do; what sort of objects and tasks we'd need to assist with. If you've got 20/200 vision (legally blind in Australia), you tend not to have too much difficulty with mobility, so we're more interested in people who have really profound blindness (near to no vision at all) and need help with more basic mobility. We talked to a lot of blind people, and are working with people at the Centre for Eye Research Australia, who specialise in working with blind patients. With them, we got an understanding of the sorts of visual cues that people with little vision use, and we began working on how to translate that into something like VibroMat. It was really about understanding the restrictions of the device – low resolution and low dynamic range – understanding what people need from it, and working out what computer vision algorithms could do for that.

Australia has a lot of experience with biomedical devices (e.g. Cochlear), so we have the expertise. The 2020 Summit convened by the Rudd government in 2008 eventually led to VibroMat being endorsed by that summit, as well as by a couple of other groups.

Where are you guys at now with VibroMat?

We're in two separate places. With Bionic Vision Australia we worked with a retinal prosthesis which has 20 electrodes, and they ran a series of orientation and mobility trials, which had really encouraging results. Those were proof of concept – showing that we could actually do something which blind people could perceive as being useful to their mobility. For VibroMat itself, we're looking to test with blind participants this year. So far we've evaluated it with normally sighted people, since that's really easy to do – you just strap it on. The only difficulty is that blind people are actually much better at orientation and mobility than people who are blindfolded – I'm quite terrible at it! The test was walking down a straight corridor with hanging obstacles. We took blind participants who have been blind for 20 years, and they never deviated from that straight line, since they're so used to it. Whereas I'd try to walk in a straight line and turn into a wall within 10 metres! But with the sighted volunteer trials we've had very encouraging results, so we're ready to trial with some blind patients, which will get started very soon.

What does a typical day look like?

It varies a lot! There's a degree of administrative work which gets done, but some part of it is sitting down with a problem – with pen and paper, or at a keyboard writing programs – and figuring it out. I'm also affiliated with ANU, so I work with and supervise university students to solve those problems. In general, most of the researchers here at Data61 have an affiliation with a university and will supervise students for them, sometimes on site with the equipment we have here.

What role have mentors played for you?

Mentors are important on specific questions. There's sometimes the right mentor in general, or the right mentor for a particular set of questions. I've been continually talking to someone who has expertise in biomedical devices and taking them to market, and they've been invaluable for understanding the process and steps needed to do that. What's written in books doesn't really give you the level of understanding you need. I can look up the US FDA website and figure out what I legally need to get a biomedical device to market, but knowing to look at that website, and what the steps are to get investment along the way, are questions you really can't answer any other way.

So what are the steps for taking a biomedical device to market?

It really depends on what level of investment you can get. There are many research steps, but the first major commercial step is to develop a product and do scientific trials with patients: getting end users to trial the device in relevant scenarios, and hopefully finding that they like it and get a lot out of it. You're really looking to have the trial patients say, "Yes, this is a device I would gladly pay for." Proof-of-concept trials like that are really the first stage. From there, it's more intensive trials, with more take-home emphasis and continual improvements.

What about funding for such projects? Is it different to startup funding?

Not necessarily. It's expensive…it's a very expensive business, because eventually you've got to do those patient trials. The main goal is the CE Mark, which is the European equivalent of FDA approval. To get that, you've got to trial for six months with around 10–15 people. Second Sight, a US bionic eye company, recently got preliminary FDA approval after a 30-person trial over two years. What they showed was that the device was safe and that patients liked it, and now they're selling it as a commercial product. Getting to those types of trials – implanting, or trying out with patients – needs a lot of time and money, so you need investment at some point. This can be grant based or, at some stage, probably venture capital based. For VibroMat, we're not quite at that stage, but we're aiming to do those preliminary trials this year, which we'll conduct with Data61 funding. From there, we'll seek outside investment. Bionic Vision Australia is currently in discussion with investors.

Data61 has its various commercial people who know how to bring investors together and how to get those proposals together. As a researcher, it's good to have in mind what things you need – what parts of the puzzle – but it's very time consuming to go down the full commercial path, so it's more efficient to have people dedicated to that and others dedicated to the research side. That's really fundamental to the structure of Data61, which works with university researchers and makes those connections for them.

Who else is doing really cool stuff in your field; nationally and internationally?

In Australia there was a set of people involved with Bionic Vision Australia: the Centre for Eye Research Australia (clinicians and surgeons), who've been routinely involved with the implant; the Bionics Institute, an offshoot of the University of Melbourne; and another group at UNSW who have also developed an implant which is kind of the next stage. It was a big consortium of these partners coming together to make the trials possible. There are a lot of specialised domains, and it wouldn't have been possible without them – I don't know anything about surgery! Beyond that there's the Monash Vision Group, who are doing a cortical implant (at the back of the brain), which is different to what we're doing at Bionic Vision Australia, which is a retinal implant.

Internationally, judging by larger-scale patient trials, there are a couple of other groups. Second Sight (US) and Retina Implant AG (Germany) both have retinal designs, and have overall achieved similar results to one another in terms of acuity, which is a measure of how well you can distinguish between two points. There's another group in Germany, and one at Stanford, which are also trying different things. So there's a lot going on.

What kind of impact can this technology have on social change?

It's really all about reducing the health burden: reducing health costs and helping a person integrate better into society – being able to work, having better mobility and independence. That's the real goal.

How will VibroMat look at the end of the process as a commercial product?

The device itself has several components: a set of pager motors (similar to what's inside your phone) sitting on a mat strapped to someone's back, with the motors vibrating in accordance with what's seen by a camera, and in between, the software turning that image into vibration. For the hardware itself, we're aiming at a very small, mobile-phone-like device and a head-mounted camera, probably on a set of glasses. That's the kind of look we're aiming for. Apart from the glasses, the rest would be invisible, so you wouldn't notice someone using it. In terms of the commercial path, because VibroMat is not an implantable device, it's not necessarily the case that you have to go through the FDA route. So you could just commercialise it as wearable tech, but there are advantages to the regulatory route, since it gives a very clear path to market by showing the device is really making a difference to someone's life, and the product could be covered by insurance and Medicare. Eventually there may be a GP consultation, or going through Vision Australia or Guide Dogs, which show blind patients what options they have. In that way it's kind of different to the bionic eye, since you can try before you buy with VibroMat.

How do you go from an image to a physical vibration on your back?

In general, it's interesting that perception through any device involves a similar set of concepts, such as the "just noticeable difference". I can put a vibrating motor on your back, make it vibrate at a particular frequency, move the frequency up or down a step, and ask if you can feel the difference. Eventually there's a threshold. With the bionic eye, this is the same as the smallest amount of current that leads you to actually see something. In general, for vision, if there's a grey background and a slightly less grey spot on it, there's a measurable difference in "grey-ness" that tells you whether you'll reliably be able to see the spot. This is a continuous scale, so you can imagine a set of stairs going up in intensity. For VibroMat, the important question is how many different levels someone can perceive. In vision, the palette of levels a person can distinguish is quite small. A visual image has 256 levels of brightness and maybe a million or more separate pixels, so we need to convert that information into vibrations on the VibroMat, which has 96 motors and probably around 4–10 levels a person can notice (in terms of vibration).
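The reduction described above – millions of pixels with 256 brightness levels down to 96 motors with only a handful of distinguishable intensities – is essentially downsampling plus quantisation. Here is a minimal sketch of that idea in Python. The 12×8 grid (96 motors) matches the article, but the five vibration levels and the pooling strategy are illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def image_to_vibration(image, grid_shape=(12, 8), levels=5):
    """Reduce a grayscale image to one intensity per motor.

    grid_shape: (rows, cols) of the motor mat, 12 x 8 = 96 motors.
    levels: how many vibration intensities a wearer can tell apart
            (an assumption; the article says roughly 4-10).
    """
    h, w = image.shape
    gh, gw = grid_shape
    # Crop so the image divides evenly into one cell per motor.
    cropped = image[:h - h % gh, :w - w % gw]
    # Average-pool each cell: one mean brightness per motor.
    pooled = cropped.reshape(gh, cropped.shape[0] // gh,
                             gw, cropped.shape[1] // gw).mean(axis=(1, 3))
    # Quantise 0-255 brightness down to the few noticeable levels.
    quantised = np.floor(pooled / 256.0 * levels).astype(int)
    return np.clip(quantised, 0, levels - 1)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
motor_levels = image_to_vibration(frame)
print(motor_levels.shape)  # (12, 8) -> one value per motor
```

In a real system the quantisation steps would be calibrated per wearer against their just-noticeable differences, rather than spaced uniformly as here.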

We can take an image, look at the depth of the scene, and find where the ground and walls are using image interpretation – looking for obstacles, including overhanging ones. What's important isn't the floor, but the boundaries (the walls) and objects like a door. So we take those things and make them a noticeable difference in the final image. You give more emphasis to closer objects, of course, and that's the basic idea. That's really the main focus of the work I personally do on VibroMat.
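The "more emphasis to closer objects" idea above can be sketched as a simple mapping from a depth map to vibration levels. This is only an illustration of the principle: the `max_range` cut-off and the level count are hypothetical parameters, not figures from the project:

```python
import numpy as np

def depth_to_emphasis(depth_m, max_range=5.0, levels=5):
    """Map a depth map (in metres) to vibration levels so that
    nearer obstacles vibrate more strongly.

    max_range: distance beyond which obstacles are ignored (an
               assumed value for illustration only).
    levels: number of distinguishable vibration intensities.
    """
    # Invert depth into a 0..1 proximity score; anything at or
    # beyond max_range scores 0 (silent motor).
    proximity = np.clip(1.0 - depth_m / max_range, 0.0, 1.0)
    # Scale to discrete levels: closest -> strongest vibration.
    return np.round(proximity * (levels - 1)).astype(int)

# A wall at 1 m vibrates strongly; clutter at 6 m stays silent.
print(depth_to_emphasis(np.array([1.0, 2.5, 6.0])))
```

A fuller pipeline would first segment out the ground plane and walls, as described above, so that only genuine obstacles (not the floor itself) receive emphasis.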

How do you see the Bionic Eye and VibroMat projects progressing in parallel with each other – competing or complementary?

There's no way of telling until something commercial happens. For something like the Bionic Eye project, you're really looking at treating particular disease groups, such as macular degeneration, which affect the retina. We [the consortium] are really looking at working with engineers and groups in biomedical engineering to make the technology better over time. The vision processing question is the same, though: making objects visible against their background. With VibroMat there is a limit – your tactile sensitivity on your back is restricted. You're more sensitive on other parts of your body, but they're not really adequate for a large number of tactors. Both devices have the same basic problem of reducing an image, and in that way they're very complementary – we're the same team using the same code! But the devices themselves treat different groups of blindness. I think they're complementary, but eventually we're hoping to talk to investors on both sides, so it'll be down to them whether they're complementary or competing products.


Practically, what would this device mean for someone using it in everyday life?

Obstacle avoidance and mobility, which don't need a lot of acuity. You need to know there's an obstacle in front of you, but not necessarily what it is (whether it's a cat or a brick). It's probably not going to be so great for, say, reading. You could definitely use it for recognising letters and numbers, though – for example, I could be walking around, see a sign and wonder what it actually says. I was talking to a blind patient with extremely narrow vision who uses a flatbed scanner to read quite comfortably on a flat-screen TV – but you can't carry a flatbed scanner around on the street! In that sort of scenario, VibroMat could be really useful. We also think it could be useful in a situation where, say, there's an operator on a construction site who can see reasonably well in front of them, but there's a danger of things coming from beside, behind or underneath. VibroMat in this instance could be used as an augmentation, with a camera on the back of the head, or a series of cameras on a vehicle, giving the person a 360-degree map of their surroundings.


More from Adrian Hindes
