Dr Geoff V. Merrett
www.geoffmerrett.co.uk

DejaView

A wearable memory aid

The DejaView Concept

Human memory is not perfect. Dementia, injury, and the natural decline of mental abilities with age all further affect our ability to remember. Memory impairment has a huge impact on quality of life and, with an ageing western population, is becoming ever more of a problem. Current coping strategies range from simple aids such as post-it notes and calendars to, more recently, assistive devices which attempt to provide reminders at appropriate times, capture details of important events, or aid with performing complex tasks. Promising findings in the use of wearable camera-based memory aids such as the SenseCam have been widely reported. However, to date, there has been relatively little consideration of the potential for offering memory help in real-time during daily living. We suggest that such assistance, in the form of proactive visual prompts, could help people with memory problems to immediately orientate themselves in a situation - supplying details of where they are, or who they are with. Providing this form of immediate and 'in-the-moment' contextual feedback to the user represents the philosophy of the DejaView system.

The DejaView system is built on a three-tier architecture, comprising:
  1. a low-power wearable sensing device, the DejaView device. The device autonomously captures photos based on inputs to its onboard sensors, and transmits them (and collected sensor data) wirelessly via a Bluetooth interface to a smartphone;
  2. an application running on a smartphone which receives data from the DejaView device, appends additional sensor data, and transmits this information to a remote web service using its Internet connection. The application also receives contextual feedback from the web service and presents it to the user;
  3. a web service, which determines context from the uploaded data using the wealth of data and processing it has access to (for example, algorithms such as face or object recognition, and connections to the user's social networks, calendar, online photo albums, etc.).
In its simplest form, the architecture allows similar functionality to that of the SenseCam - photographs can be autonomously captured and stored for later review. A distinction here is that, instead of being stored on the device itself, the images are immediately transmitted (via the smartphone) to the Internet where they are stored; this means that other use-cases can be conceived, for example where real-time monitoring of a wearer is possible. However, the real flexibility of the DejaView system comes when the Internet service instantly analyses uploaded photos and sensor data, and feeds back relevant contextual information to the user via their smartphone. In the currently-implemented example, photos captured by the wearable device are compared against a database of faces stored on the remote computer. The user subsequently receives information about people around them via their smartphone. More generally, the architecture permits a wide range of intelligent methods for selecting useful cues, based on the user's environment, to be integrated into the system, facilitating the provision of real-time help for memory problems.
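The three-tier flow described above can be sketched end-to-end. This is a minimal illustrative sketch, not the project's actual code: every class and method name, and the toy 'face database' lookup standing in for real face recognition, are assumptions.

```python
# Illustrative sketch of the three-tier DejaView data flow.
# All names here are invented; recognition is a toy dictionary lookup.
from dataclasses import dataclass


@dataclass
class Capture:
    """A photo plus the sensor readings that triggered it."""
    image: bytes
    sensors: dict


class WebService:
    """Tier 3: determines context from uploaded data."""
    def __init__(self, known_faces):
        self.known_faces = known_faces  # e.g. {image_bytes: "Alice (colleague)"}

    def analyse(self, capture):
        # Stand-in for real face recognition: look the image up directly.
        return self.known_faces.get(capture.image)


class SmartphoneApp:
    """Tier 2: appends phone-side sensor data, relays to the service, returns feedback."""
    def __init__(self, service):
        self.service = service

    def upload(self, capture):
        capture.sensors["gps"] = (50.93, -1.40)  # illustrative location fix
        return self.service.analyse(capture)


class WearableDevice:
    """Tier 1: captures photos when its sensors fire, and sends them to the phone."""
    def __init__(self, app):
        self.app = app

    def on_sensor_trigger(self, image, sensors):
        return self.app.upload(Capture(image, sensors))


service = WebService({b"photo-of-alice": "Alice (colleague)"})
device = WearableDevice(SmartphoneApp(service))
feedback = device.on_sensor_trigger(b"photo-of-alice", {"light": 0.8})
print(feedback)  # "Alice (colleague)" - shown (or spoken) to the wearer
```

The point of the layering is that each tier only does what its power budget allows: the device senses and captures, the phone relays, and the heavy analysis stays on the server.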

Below is a crude video showing the operation (as of March 2012) of the DejaView system, including the device, mobile phone/Android application, and web service/interface:


We are currently working to improve the operation and functionality of the system, and collaborating with memory experts to evaluate its clinical benefit. We are always happy to hear from interested parties with comments or potential collaborations by emailing gvm@ecs.soton.ac.uk. Further information can also be found at http://www.dejaview.ecs.soton.ac.uk.


Recent DejaView News

I Begin to trial DejaView!
17th July 2012
Two photos from DejaView: being interviewed, and caught and recognised shaving in the mirror!
Today I got my own DejaView device to start to wear, play with, improve (and fix the bugs)! While we've had working devices and a working system for many months now, up until now any devices that we have had have been used by the rest of the research team and our clinical collaborators in London. ... [more]

Alex Presents DejaView Research at IET WSS
19th June 2012
Alex presenting his research at IET WSS 2012
Today Alex presented his research at the IET conference on Wireless Sensing Systems (WSS). His paper was on the topic of "Adaptive Sampling in Context-Aware Systems: A Machine Learning Approach", and is a method he is researching to attempt to maximise the usefulness of images captured by the DejaV... [more]

Best Presentation Award at SenseCam 2012
4th April 2012
Alex Wood being awarded the Best Presentation prize
Our presentation entitled "DejaView: help with memory, when you need it" was awarded the best presentation prize at the SenseCam 2012 Symposium. The prize was awarded by Steve Hodges from Microsoft Research. Abstract: Promising findings in the use of wearable memory aids such as SenseCam have bee... [more]

DejaView Research Team

Co-Investigators:
Professor Dame Wendy Hall
Professor Nigel Shadbolt
Professor Bashir Al-Hashimi
Professor Paul Lewis
Dr Kieron O'Hara

Researchers (Current):

Researchers (Previous):
Dr Ash Smith (Research Fellow, 2011-12)
Dr Dirk de Jager (Research Fellow, 2011-12)
Dr Dirk de Jager (Senior Research Assistant, 2010-11)
Dr Alex Wood (PhD Student, 2010-15)
Norbert ****** [hidden] (Intern Student, 2013-14)
Martin Ulrich (Intern Student, 2009-10)
Jan Bollenbacher (Intern Student, 2009-10)

Publications and Resources

Below are our most recent and relevant publications on the DejaView system.

External Data Source

This list of publications was automatically generated from the University of Southampton EPrints repository. This feed can be found on the repository's DejaView 'shelf'. EPrints is free software developed by the University of Southampton to facilitate Open Access to research.


History of DejaView

We have been working towards the current incarnation of DejaView for a number of years, with our work into a new kind of memory aid starting in 2009. The various versions and developments made along the way are depicted below (we suggest reading this in chronological order, i.e. from the bottom up!):

DejaView v3.3 (2012)
The v3.3 device made minor modifications to improve the v3.2 casing (making the capture button easier to press, and incorporating a lanyard attached at either side of the device to reduce the likelihood of it rotating), and also made the Bluetooth communications more robust and error-free.
DejaView v3.2 (2012)
Version 3.2 of the DejaView device was the first 'fully-functional' device able to form part of the whole system, incorporating the newly developed interface software and the v3.1 device. The casing was also completely redesigned from the v2.6, forming a smaller, lighter, and more aesthetically pleasing item. This device formed the basis for our presentation at the SenseCam 2012 symposium (see the 'Publications and Resources' section for more information), where we won the 'best presentation' award.
Updated Interface Software (2012)
Alongside development of the DejaView device, significant work had been ongoing on the interface. The new interface software, designed for use with the v3 device, saw a complete redesign of both the Android interface and the web service. The Android application now provides an easy-to-use interface showing the photos taken and, when a face has been recognised by the system, highlights the face in the image and displays the name and relationship of the person identified. There is also the option for the phone to speak the person's name (useful if the wearer is wearing headphones). The web service was redesigned to be more robust, to allow multiple users to use the service simultaneously, and to allow new people to be trained and previous face matches to be confirmed or rejected to improve recognition.
DejaView v3.1 (2011)
Following a considerable redesign via the v3.0 device, the first operational version of the 'fully-functional' device was developed: DejaView v3.1. This device provided a 5MP camera with native JPEG compression (the v1 device had produced large BMP files which took significant time to transmit), an improved compass/accelerometer, smaller light and PIR sensors (allowing for a more compact and user-friendly casing, as well as more reliable operation), and revised firmware which, among other improvements, allowed configuration of the device's rules and parameters via a USB interface.
DejaView v3.0 (2011)
While progress was being made on the 'cut-down' version of the DejaView device (which resulted in the v2.6 device), development continued on the fully-functional version. The first stage in this was the development of a test board which, among other features, allowed the team to evaluate different camera modules and configurations for use in the device. The test board allowed the design and device firmware to be fine-tuned, and a number of the technical issues present in the v1 device to be overcome.
DejaView v2.6 (2011)
This was the first fully-operational version of the DejaView system, incorporating the previously developed interface software and the v2.5 device, all contained within a wearable casing. This device and the results of using it (with particular emphasis on energy and latency) were published in our first DejaView paper at the MobileHealth 2011 workshop (see the 'Publications and Resources' section for more information).
First Interface Software (2011)
Version 2 of the DejaView device also saw the first functional version of the interface software. This incorporated both an Android application operating on the mobile phone and a web service. The Android application receives images and sensor data from the wearable DejaView device, sends this over the internet to the remote server, and provides feedback to the wearer on who they are looking at. The web service performs primitive face detection on captured images, and allows review of previously captured images (including other sensor data, for example the time and geographic location where the photo was captured).
DejaView v2.5 (2010)
The second version of the 'cut-down' device saw improvements including a larger battery alongside redesigned embedded software offering more energy-efficient operation. The combination of both of these enabled the device to operate for at least a day.
DejaView v2.0 (2010)
While development continued on the v1 device, it was quickly realised that there were a number of non-trivial technical issues that needed to be resolved. To ensure that progress on the project was not hindered, a 'cut-down' version, the v2 device, was developed alongside it. This version had reduced functionality, including slower and more energy-hungry operation and a physically larger but lower-resolution camera.
DejaView v1 (2010)
Following on from the two early prototypes (EPICam and SUcam), DejaView was born on top of four basic principles, namely that the system should: 1) provide 'in-the-moment' support for memory, 2) leverage the power of the web to provide rich contextual information, 3) be formed from a hierarchy of components (a wearable device, a mobile phone, and the internet) to perform powerful and efficient operation, and 4) minimise the number of photos captured (and hence energy expended) using a suite of on-board sensors and intelligent algorithms. The wearable device (pictured right) featured a 3MP camera, a Bluetooth transceiver, an ARM Cortex-M3 microcontroller, and an array of sensors.
SUcam (2010)
Following the work on EPICam, two interns were hired to address one of the research challenges that had been raised: instead of just providing 'after-the-moment' support for memory, is it possible to provide useful 'in-the-moment' prompts? One such prompt considered was real-time face recognition, with the system telling the wearer who they are talking to. The resultant prototype (named 'SUcam'), formed from a webcam, a PDA, and an array of sensors, showed substantial promise for this concept. Furthermore, it also utilised a hierarchical hardware architecture, using an ultra-low-power TI MSP430 microcontroller to monitor an array of sensors, only waking the high-power PDA when a photo and image processing were required.
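The hierarchical wake-up idea can be sketched as follows. This is an illustrative sketch, not SUcam's firmware: the sensor names, thresholds, and the counter standing in for the woken PDA are all invented.

```python
# Sketch of a hierarchical power architecture: an always-on low-power loop
# watches cheap sensors and wakes the power-hungry stage only when needed.
# Thresholds and sensor names are illustrative, not taken from SUcam.

class LowPowerMonitor:
    """Models the always-on microcontroller loop."""
    def __init__(self, wake_callback, pir_threshold=1, light_threshold=0.2):
        self.wake = wake_callback
        self.pir_threshold = pir_threshold
        self.light_threshold = light_threshold

    def poll(self, pir, light):
        # Wake the high-power stage only for motion in adequate light.
        if pir >= self.pir_threshold and light >= self.light_threshold:
            self.wake()
            return True
        return False  # high-power stage stays asleep; energy is saved


wake_count = 0

def wake_pda():
    """Stand-in for powering up the PDA and running image processing."""
    global wake_count
    wake_count += 1


monitor = LowPowerMonitor(wake_pda)
monitor.poll(pir=0, light=0.9)   # no motion -> stays asleep
monitor.poll(pir=1, light=0.05)  # motion but too dark -> stays asleep
monitor.poll(pir=1, light=0.9)   # motion in good light -> PDA woken once
print(wake_count)  # 1
```

The design choice is the classic duty-cycling trade-off: the cheap tier runs continuously at microwatt-scale cost, so the expensive tier's on-time tracks how often interesting events actually occur.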
EPICam (2009)
Our research into wearable memory aids began as an undergraduate 'group design project' for a team of four final-year MEng Electronic Engineering students. The team observed that a potential issue with the Microsoft SenseCam was that it took too many photographs, many of which were unusable or irrelevant. To resolve this, they created a new system (termed 'EPICam') which took fewer photos through a combination of additional sensors, an eye-level camera, and a flexible rule-based system for triggering image capture. While the developed system was a briefcase-sized prototype, the project raised a variety of research questions that we subsequently began to pursue. For more information on EPICam, you can view a video summarising the students' project.
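A flexible rule-based trigger of the kind EPICam used can be sketched as a list of predicates over sensor readings, where a photo is taken only when some rule matches. The rule names and thresholds below are invented for illustration, not taken from the project.

```python
# Sketch of rule-based image-capture triggering: each rule is a named
# predicate over a dictionary of sensor readings. All rules and thresholds
# here are hypothetical.

RULES = [
    ("motion detected", lambda s: s.get("pir", 0) > 0),
    ("light change",    lambda s: abs(s.get("light_delta", 0.0)) > 0.3),
    ("loud sound",      lambda s: s.get("audio_level", 0.0) > 0.7),
]

def should_capture(sensors):
    """Return the name of the first matching rule, or None to skip the photo."""
    for name, predicate in RULES:
        if predicate(sensors):
            return name
    return None


print(should_capture({"pir": 1}))             # motion detected
print(should_capture({"light_delta": -0.5}))  # light change
print(should_capture({"audio_level": 0.1}))   # None -> no photo taken
```

Keeping the rules as data rather than hard-coded logic is what makes the scheme "flexible": rules can be added, removed, or re-tuned without touching the capture loop.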

DejaView News

Below are some of the successes from our research on DejaView.

I Begin to trial DejaView!
17th July 2012

Two photos from DejaView: being interviewed, and caught and recognised shaving in the mirror!
Today I got my own DejaView device to start to wear, play with, improve (and fix the bugs)! While we've had working devices and a working system for many months now, up until now any devices that we have had have been used by the rest of the research team and our clinical collaborators in London.

Some of the photos from my first two days of wearing DejaView can be seen in the image:

The top photo was captured while Paul Lewis and I were interviewed and filmed by a team from Sweden making a documentary about lifelogging (the documentary should appear later in the year!).

The bottom photo was captured while I was getting ready for work in the morning, where the reflection of me shaving in the mirror was captured by DejaView (which proceeded to recognise me and announce that I was looking at myself!)

 

Best Presentation Award at SenseCam 2012
4th April 2012

Alex Wood being awarded the Best Presentation prize
Our presentation entitled "DejaView: help with memory, when you need it" was awarded the best presentation prize at the SenseCam 2012 Symposium. The prize was awarded by Steve Hodges from Microsoft Research.

Abstract: Promising findings in the use of wearable memory aids such as SenseCam have been widely reported. However, to date, there has been relatively little consideration of the potential for offering memory help in real-time during daily living. Such assistance, in the form of proactive visual prompts comprising the four reported types of cue (people, places, objects, and actions), could help people with memory problems to immediately orientate themselves in a situation -- supplying details of where they are, or who they are with. This paper reports on the three-tier DejaView system, designed to provide such help.

DejaView works across a wearable device, a smartphone, and a remote computer, simultaneously recording a lifelog, finding appropriate cues from past experiences, and feeding relevant information back to the user. The real-time nature of this system required the design of a new wearable device, similar to SenseCam but more customisable and additionally capable of transmitting data over Bluetooth. Fitting this into the three-tier architecture allows for complex processing in the system without limiting the battery lifetime of the portable and wearable parts.

In the currently-implemented example, photos captured by the wearable device are compared against a database of faces stored on the remote computer. The user subsequently receives information about people around them via their smartphone. More generally, the architecture permits a wide range of intelligent methods for selecting useful cues, based on the user's environment, to be integrated into the system, facilitating the provision of real-time help for memory problems.

 

Alex Presents Research at SenseCam 2012
3rd April 2012

Alex presents his research, and also chairs a session
Alex Wood presented our research on 'DejaView' at the recent 2012 SenseCam Symposium. DejaView is a system designed to help sufferers of memory loss, in particular by allowing them to receive relevant real-time feedback. In the current demonstrator, this feedback gives the wearer information on who they are looking at - for example their name, relationship, when they last saw them etc. Alex was also awarded a bursary by the conference organisers to attend the symposium, and chaired one of the sessions.

 

Alex Presents DejaView Research at IET WSS
19th June 2012

Alex presenting his research at IET WSS 2012
Today Alex presented his research at the IET conference on Wireless Sensing Systems (WSS). His paper was on the topic of "Adaptive Sampling in Context-Aware Systems: A Machine Learning Approach", and is a method he is researching to attempt to maximise the usefulness of images captured by the DejaView system, while minimising energy consumption.
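The usefulness-versus-energy trade-off that adaptive sampling targets can be illustrated with a deliberately simplified heuristic. To be clear, the paper's actual approach is machine-learning based; the intervals and exponential back-off rule below are invented purely to show the idea of sampling faster when context changes and backing off when it doesn't.

```python
# A hypothetical (not the paper's) adaptive-sampling controller: sample
# quickly while the wearer's context is changing, back off exponentially
# while it is static, so energy spent tracks how eventful the day is.

class AdaptiveSampler:
    def __init__(self, min_s=5, max_s=120):
        self.min_s, self.max_s = min_s, max_s
        self.interval = max_s  # start conservatively (slow sampling)

    def update(self, context_changed):
        if context_changed:
            self.interval = self.min_s  # activity: sample every few seconds
        else:
            # quiet period: double the interval, capped at the maximum
            self.interval = min(self.interval * 2, self.max_s)
        return self.interval


sampler = AdaptiveSampler()
print(sampler.update(True))   # 5  -> context changed, sample fast
print(sampler.update(False))  # 10 -> quiet, back off
print(sampler.update(False))  # 20 -> still quiet, back off further
```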

Alex's presentation was recorded; his slides and a video of the presentation can be watched on the IET website.



 

A Photo of DejaView, from DejaView
3rd April 2012

A DejaView presentation - captured by DejaView!
Alex's presentation on DejaView at the SenseCam Symposium 2012 in Oxford saw a milestone for us - the first time that the DejaView device (alongside the entire system, including the mobile phone and web server) has captured a DejaView presentation!

The photo on the left shows this seemingly trivial event - the first slide from Alex's presentation (Alex is just about visible on the left hand side of the photo)!
 

First DejaView PCBs
3rd June 2010

(from left) Dirk de Jager, Bashir Al-Hashimi, Wendy Hall, and myself with the PCBs
Today we received the PCBs for the DejaView device (version 1)! Nearly a year has passed since we first began to think about how technology can better assist sufferers of memory loss, and we are well on the way to having our first new devices.

The device, forming part of the DejaView system architecture (alongside a mobile phone and web service), includes a three-megapixel camera, a Bluetooth radio module, an ARM Cortex-M3 microcontroller, and an array of different sensors.

 
a GM webdesign This page was last updated on 7th August 2012, and has been successfully validated as HTML 4.0 Transitional and CSS level 2.1
Website tested with Internet Explorer 7, Mozilla Firefox 3, Apple Safari 3, Google Chrome 1 and Opera 9.
For comments and suggestions, please email webmaster@geoffmerrett.co.uk.

© Geoff Merrett 2017