IDEA/VISION In today’s age of ubiquitous advertising, we are constantly bombarded by bits of information vying for our attention. The “noise” of our environment has reached a fever pitch for almost all of our senses. While our sensory modalities efficiently filter out much of the sensory data that reaches us and focus only on the relevant information, we believe that combining this filtering process with technology, in a human-machine symbiotic intervention, can augment our ability to focus – and, in turn, help us kick the bad habit of constantly diverting our attention to technology. Our intervention is a pair of eyewear designed to block the user’s view whenever he or she is distracted by a mobile phone. The eyewear recognizes when the user looks at a phone screen and actively shuts its lenses.
MOTIVATION + BACKGROUND WORK While technology and gadgets like mobile phones assist in efficient task management and ceaseless connectivity, the downside of this pervasive technology is evident in our daily lives. Our phones pose a constant distraction in contexts like driving, social gatherings, romantic bonding, or family get-togethers. This distraction not only has adverse repercussions in our social lives but can also result in a habitual lack of focus and a shortened attention span.
Our goal is to use technology to augment user focus by blocking distractions. Reality is already suffused with information, so our aim is to clarify it, rather than complicate it.
Another motivation driving this project is to assist children with ADHD. According to one study, over the eight-year period from 2003–2004 to 2011–2012, two million more children in the United States were diagnosed with attention-deficit/hyperactivity disorder (ADHD) and one million more U.S. children were taking medication for it. In 2011–2012, 11 percent of U.S. children 4–17 years of age had been diagnosed with ADHD. Nearly one in five high school boys and one in eleven high school girls in the United States were reported by their parents as having been diagnosed with ADHD by a healthcare provider. This device can be used to enable focused study hours, especially for children who struggle to concentrate on a given task.
In addition to being a focus-enabling device, we envisioned this gear as an artifact that carries a social message in a technology-saturated society. The act of lens shutting was deliberately made performative by coupling it with red lights that flash to demand attention both from the user (for being distracted) and from people nearby (to make them realize that the user is trying to focus).
RELATED WORK One of the inspirations that motivated our work was Nicolas Damiens’ Tokyo No Ads. In this work, Damiens photoshops all of the visible advertisements out of typical Tokyo street scenes, and presents them as before-and-after animations. In doing so, he provokes the viewer to consider: What would life be like without ads? What if we could visibly “tune them out”?
The concept of a device that enforces concentration by isolating surrounding sensory noise was creatively applied in Hugo Gernsback’s The Isolator. This multimodal work from 1925 addressed both hearing and vision: it rendered the user deaf and restricted vision to tiny apertures, while oxygen was piped in via a tube.
We also were inspired by the design used in Cyrus Kabiru’s sculptures. Kabiru creates these works from trash that he finds in his hometown of Nairobi. His work is part art, part performance, part stress-relieving humor-therapy. On top of exemplifying human-machine symbiosis, we see our eye gear as an art piece that makes a socially relevant statement of awareness about pervasive technology.
DESIGN AND IMPLEMENTATION The system diagram is simple. The decision of what to filter (in this case, cell phones) is offloaded to a CPU. A webcam captures what the user is seeing; that video is passed to the CPU, which determines whether or not a cell phone is present; and this ON/OFF decision is sent via an Arduino to two servo motors that raise and lower lenses at the front of the armature. The device is thus a filtering interface that recognizes distractions and blocks the user from diverting attention away from the object of focus.
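As a rough illustration of the ON/OFF decision loop described above, the sketch below models one frame of the pipeline in Python. This is not the project’s actual code: the one-byte serial commands (‘C’ to close the lenses, ‘O’ to reopen them) and the stand-in detector are assumptions for illustration.

```python
# Sketch of the per-frame decision loop (hypothetical, not the device's
# real code). The CPU classifies each frame and, only on a state change,
# emits a one-byte command for the Arduino-driven servos.

def lens_command(phone_detected, lenses_closed):
    """'C' closes the fan lenses (and lights the LEDs), 'O' reopens them,
    and '' means no change, so nothing is sent over serial."""
    if phone_detected and not lenses_closed:
        return "C"
    if not phone_detected and lenses_closed:
        return "O"
    return ""

def run_frame(detect, frame, state):
    """One iteration: classify the frame, emit a command, update state."""
    cmd = lens_command(detect(frame), state["closed"])
    if cmd:
        state["closed"] = (cmd == "C")
    return cmd

# Stand-in for the webcam plus the CPU-side phone classifier.
detect = lambda frame: frame == "phone"

state = {"closed": False}
commands = [run_frame(detect, f, state) for f in ["book", "phone", "phone", "book"]]
assert commands == ["", "C", "", "O"]  # close on first sighting, reopen on look-away
```

Emitting a command only on state changes keeps the serial link quiet and the servos from chattering while the phone stays in view.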
Our early prototypes of this used a pair of sunglasses as the armature, with paper shapes for lenses.
The final version was made from lasercut plexiglass and designed such that the webcam and servo motors (and accompanying wires) were properly attached to the armature.
The lenses were cut from cardstock and folded like a fan. This way, they could be discreetly folded up into the armature when closed, and expanded out to fill the lens space when open.
In addition, we chose to make the design of the eyewear very visually “loud.” We did this to provoke discussion around the idea of public accountability – do we break habits faster when the world can monitor our progress? To heighten this, we included two red LEDs that glow upon activation of the mechanism, illuminating the armature.
USAGE SCENARIO In our usage scenario, we have a student that is trying to study for her final exam. She grows tired of this, however, and attempts to look at her phone for entertainment. The eyewear notices the phone in her vision and promptly shuts the lenses. Only when she looks away (and back at her book) do the lenses retract.
CONCLUSION AND FUTURE WORK We have provided a provocation-of-concept in the form of eyewear that transforms to publicly block the wearer’s vision when he or she looks at a phone. This project is part of a larger vision: by cognitively offloading our filtering ability to machines, we can actively tune out what we consider to be “noise” in our lives, and enjoy the augmented quiet that results.
This framework (offloading the decision to filter to a machine capable of acting on it) can easily be extended within vision as well as to other modalities. In addition, with smarter filters (such as neural-network classifiers) added to the system, more robust filtering rules could be described. We provide the following scenarios as imaginative extensions:
“I don’t want to see anything other than my book while I’m studying.”
“I want to focus on the road while I’m driving.”
“I only want to hear positive thoughts today.”
In addition to addressing the adverse effects of ceaseless connective technology, we also aim to use this device to create interpersonal empathetic connection, by compelling people to focus on each other rather than on their distractions. In today’s digital age, as we become more and more shielded from direct confrontation with alternative opinions, we become more and more critical of them, entering an egocentric cycle that erodes empathy. But perhaps, instead of hindering our ability to be humane and sensitive, this gear can augment it.
CITATIONS Collins, Franklin M. 2014. “The Relationship between Social Media and Empathy.” http://digitalcommons.georgiasouthern.edu/etd/1150/.
The memory palace is an ancient Greek technique that can be used to memorize (almost) anything. It is used for tasks such as remembering full decks of cards, historical dates, long lists of words, or foreign-language vocabulary. Many memory contest champions, including eight-time world memory champion Dominic O’Brien, claim to use this method (Jusczyk, 1980).
The memory palace is a mnemonic technique that involves populating an imaginary scene with mental images that help us remember the content we intend to memorize. However, getting started with the memory palace can be a demanding cognitive task. For subjects who are not trained as spatial thinkers, vividly imagining a scene can prove challenging. To address this problem, we propose to make the memory palace real: that is, to offload part of the cognitive task of imagining scenes onto reality, using the tangible architectural spaces that surround us. We take advantage of the fact that we all naturally know how to navigate space. We propose to combine the memory palace technique with augmented reality technology to create a study tool that will help anyone memorize more effectively.
The memory palace technique
The memory palace technique works as follows. First, take a concept you want to memorize and create a visual mental symbol to help you remember it. For example, if you want to remember that the Dallas Cowboys won the Super Bowl in 1972, you could imagine a cowboy at a rodeo. Second, take the image you just created and link it to an architectural scene. It can be a place you know or an imaginary scene: you might, for example, imagine the cowboy in front of your office. Finally, to recall the memory, imagine the scene you just created. The Dallas Cowboys will naturally emerge.
To remember more than one concept, simply attach an adjacent mental scene and link a new mental image to it. When recalling the full sequence, you revisit those scenes mentally in the order in which you placed the concepts you intended to remember. This might seem a counter-intuitive strategy, since you must remember both a place and the actual content, but it is in fact the opposite: neurological research has shown that spatial navigation and memory both rely on the same part of the brain, the hippocampus (O’Keefe, 1978). Brain scans of “superior memorizers”, 90% of whom use the method of loci, show activation of regions of the brain involved in spatial awareness, such as the medial parietal cortex, retrosplenial cortex, and the right posterior hippocampus (Maguire, 2002). The technique takes advantage of this fact to facilitate encoding, storing, and retrieving information.
We have taken a step towards understanding the role spatial navigation plays in memory augmentation by developing NEVERMIND, a learning interface that is in line with how memories are stored internally. This system is divided into two main parts. The first part is an iPhone application dedicated to user interaction. It has three different modes: Encode, Store, and Retrieve.
In Encode mode, users create routes and pair them with images. In Store mode, once the content is set, users train their memory by physically visiting their memory palaces. In Retrieve mode, users can recover previously created knowledge lists and link them to a specific route. Additionally, the system allows users to share knowledge content playlists with other users, who can pair them with their own memory palaces.
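The route-and-image pairing at the heart of the app can be pictured with a toy data model like the Python sketch below. The class and field names are hypothetical, not the actual NEVERMIND schema, and Store mode (physically walking the route) has no software analogue here.

```python
# Hypothetical data model for Encode/Retrieve; names are assumptions,
# not the actual NEVERMIND schema.
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    location: str   # a physical spot along the user's route
    image: str      # mnemonic image shown when that spot is reached

@dataclass
class Route:
    name: str
    waypoints: list = field(default_factory=list)

    def encode(self, location, image):
        """Encode mode: pair a place on the route with a mnemonic image."""
        self.waypoints.append(Waypoint(location, image))

    def retrieve(self):
        """Retrieve mode: replay the place-image pairings in route order."""
        return [(w.location, w.image) for w in self.waypoints]

route = Route("campus walk")
route.encode("library steps", "cowboy at a rodeo")  # Dallas Cowboys, 1972
route.encode("fountain", "airplane")                # New York Jets
assert route.retrieve() == [("library steps", "cowboy at a rodeo"),
                            ("fountain", "airplane")]
```

Because a route is just an ordered list of place-image pairs, a shared “knowledge playlist” only needs to ship the images; each recipient can bind them to their own waypoints.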
The second part of the interface is dedicated to displaying images and runs on the Epson Moverio BT-300 augmented reality glasses. We developed a program in the Unity3D video game engine that receives the images the iPhone sends and displays them on the glasses.
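One plausible shape for the phone-to-glasses handoff is a length-prefixed byte stream; the sketch below demonstrates such a protocol over a loopback socket. This wire format is an assumption for illustration: the actual system runs its receiver inside Unity3D on the Moverio, and its transport may differ.

```python
# Illustrative wire format for the image handoff: a 4-byte big-endian
# length prefix followed by the image bytes. Assumed protocol shape,
# not the system's actual transport.
import socket, struct, threading

def recv_exact(sock, n):
    """Read exactly n bytes, or raise if the peer closes early."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def send_image(sock, data):
    sock.sendall(struct.pack(">I", len(data)) + data)

def recv_image(sock):
    (n,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, n)

# Loopback demo: the sender stands in for the iPhone app, the receiver
# for the glasses-side display program.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

received = []
def receiver():
    conn, _ = server.accept()
    received.append(recv_image(conn))
    conn.close()

t = threading.Thread(target=receiver)
t.start()
client = socket.socket()
client.connect(server.getsockname())
send_image(client, b"fake-image-bytes")
client.close()
t.join()
server.close()
assert received == [b"fake-image-bytes"]
```

The length prefix matters because TCP is a byte stream: without it, the receiver cannot tell where one image ends and the next begins.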
We have tested the NEVERMIND interface on five subjects with favorable results. The subjects were asked to memorize two similar lists of 10 items, one using NEVERMIND and the other using a printed list. The tasks were not time-constrained. All users were able to remember both lists immediately after the test. Twenty-four hours later, however, all five subjects were still able to remember the content of the NEVERMIND list, while none was able to accurately recall the items on the printed list. When asked about the two study methods, users said that studying with the NEVERMIND interface was more enjoyable and nearly effortless compared to traditional study methods.
We tested the NEVERMIND interface on a list of 10 Super Bowl winners from 1967 to 1976, verifying first that the subjects had no previous knowledge of the content. For the experiments, we predefined the mental imagery for the users: for example, a picture of a man on a horse represented the Dallas Cowboys, and a picture of an airplane represented the New York Jets. In all cases, we used routes the users were familiar with.
Reusing the palace
We conducted preliminary studies on the reusability of the palace, using a location from the previous memory palace to remember 15 digits of Pi. Our intuition is that the most demanding part of the technique is loading the palace into your head for the first time. Once the palace is loaded, its scenes can be reused to store other content effectively.
Our motivation is to change the way students memorize. We spend a long time memorizing by repetition. Our experiments suggest that there are more effective methods, methods in line with the way our brain stores information. We propose an experiential way of learning, where the retrieval process is the act of mentally aligning ourselves with a location.
We see potential uses in education, as a technique to bootstrap knowledge as a starting point before tracing the associations and inferences characteristic of higher levels of understanding. This interface could be used, for example, to help biology students study. Other uses include public speaking: speeches, toasts, presentations, and so on.
Mixed reality: We are planning to implement a mixed reality version of NEVERMIND. At the moment, the graphical content supplied by the interface is not anchored to a specific spatial location; when the user approaches the target location, the image simply appears, which means the AR images move with the user’s head motion. Our intuition is that anchoring images accurately to reality will lead to more memorable results.
Chunking the palace: Controlling image placement accurately would also open up new features of the interface, such as adding hierarchy to the palace. With this feature, the user could control the amount of detail he or she wants to remember, recalling a set of concepts at different levels of the hierarchy. The essential content could be recalled for a 30-second elevator pitch to an investor, more detail added for a seven-minute PechaKucha presentation, or the full content recalled for a 20-minute presentation of an idea.
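The chunking idea can be sketched as recall with a depth cutoff: each scene stores content at nested detail levels, and recall stops at the depth the occasion needs. The level names and data layout below are illustrative assumptions, not interface features.

```python
# Toy hierarchical palace: each scene holds a headline plus deeper
# levels of detail. Level names ("pitch", "talk", "deep") are assumed
# for illustration.
palace = [
    {"pitch": "the problem", "talk": ["market size"], "deep": ["competitor analysis"]},
    {"pitch": "our solution", "talk": ["prototype demo"], "deep": ["system architecture"]},
]

def recall(palace, depth):
    """depth 0: elevator pitch only; 1: adds talk-level points;
    2: adds the deep detail for a full presentation."""
    levels = ["talk", "deep"]
    out = []
    for scene in palace:
        out.append(scene["pitch"])       # the headline is always recalled
        for level in levels[:depth]:     # descend only as far as requested
            out.extend(scene[level])
    return out

assert recall(palace, 0) == ["the problem", "our solution"]
assert recall(palace, 2) == ["the problem", "market size", "competitor analysis",
                             "our solution", "prototype demo", "system architecture"]
```

The same walk through the palace thus yields the pitch, the PechaKucha, or the full talk, depending only on the chosen depth.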
Video review: We are planning to add a set of features that will allow revising or studying the content of the palace without being physically present. A first implementation will include a recording of the routes as the user sees them, with overlays of the images to remember. The resulting video could be played at 10x speed, slowing down when the content appears, resembling the memory consolidation that occurs during the REM phases of sleep. The video should be easy to play forward and backward to aid the memorization process.
Knowledge playlists: Each user’s memory palace is personal, but the content to remember can be shared.
We propose a social platform for sharing and downloading knowledge with friends or classmates. Each user can use their own palaces and populate them automatically with content downloaded from the web. Predefined graphic associations could be built in, and the user could alter the content. This would build a database of concept-image pairings sourced from the community of users. Potential applications include bootstrapping content into a student’s memory before a class.
Is memory obsolete?
Conducting this study also raised several questions about the relationship between memory and technology. Why memorize? What role does memory play in the learning process? Is memory still relevant in the age of Google? We are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found. However, we believe that memories are among our most precious possessions; they grow with our experience and vanish when we die. As Ebbinghaus’s work on forgetting suggests, retention fades without reinforcement; mnemonic techniques such as the memory palace can increase our ability to retain, and thus preserve, the personal experiences stored in memory.
Previous interfaces dedicated to augmenting memory include the Remembrance Agent (Rhodes, 1997), which proactively surfaced information relevant to what the user needed to remember. Other studies include the use of virtual reality for memory rehabilitation with patients suffering from Alzheimer’s (Brooks, 2003). There have been previous attempts to recreate a virtual memory palace; most involve either a computer simulation model or virtual reality (Legge, 2012). Another example pairs an early head-mounted display with an interpretation of the memory palace technique (Ikei, 2008). However, the hardware, software, and interpretation of the memory palace technique in those interfaces differ substantially from the design described here.
We designed a learning interface to make memorization more durable and enjoyable. We have designed a model to help users master memory based on the coupling of space and memory. We have implemented an interface prototype, NEVERMIND, that facilitates memory encoding, retrieval, and storage. We have shown experimentally that using our interface, memories become more durable and subjects claim that the process is more enjoyable and effortless. With this work, we hope to make the memory palace accessible to the general user. We have designed a new way to memorize, based on a symbiotic relationship with technology, that enables us to learn in line with how our brains store information.
References and additional readings
Brooks, Rodney. “Intelligence Without Reason.”
Brooks, Rodney. “Planning Is Just a Way of Avoiding Figuring out What to Do next.”
Brooks, Rodney Allen. 1999. Cambrian Intelligence: The Early History of the New AI. MIT Press.
Laird, John. 2012. The Soar Cognitive Architecture. Cambridge, MA: MIT Press.
Licklider, J. C. R. 1960. “Man-Computer Symbiosis.”
Markoff, John. 2011. “A Fight to Win the Future: Computers vs. Humans.” New York Times.
Marr, David, and Tomaso Poggio. 1976. “From Understanding Computation to Understanding Neural Circuitry.”
Schank, Roger C. 1990. Tell Me a Story: A New Look at Real and Artificial Memory. New York: Scribner
Barbey, Aron K., Antonio Belli, Ann Logan, Rachael Rubin, Marta Zamroziewicz, and Joachim T. Operskalski. 2015. “Network Topology and Dynamics in Traumatic Brain Injury.” Current Opinion in Behavioral Sciences 4.
Bird, Chris M., Dennis Chan, Tom Hartley, Yolande A. Pijnenburg, Martin N. Rossor, and Neil Burgess. 2009. “Topographical Short-Term Memory Differentiates Alzheimer’s Disease from Frontotemporal Lobar Degeneration.” Hippocampus 20 (10): 1154–69. doi:10.1002/hipo.20715.
Draaisma, Douwe. “Metaphors of Memory.”
Draschkow, D., J. M. Wolfe, and M. L.- H. Vo. 2014. “Seek and You Shall Remember: Scene Semantics Interact with Visual Search to Build Better Memories.” Journal of Vision 14 (8): 10–10. doi:10.1167/14.8.10.
Drew, Trafton, Sage E. P. Boettcher, and Jeremy M. Wolfe. 2016. “Searching While Loaded: Visual Working Memory Does Not Interfere with Hybrid Search Efficiency but Hybrid Search Uses Working Memory Capacity.” Psychonomic Bulletin & Review 23 (1): 201–12. doi:10.3758/s13423-015-0874-8.
Foster, David J., and Matthew A. Wilson. 2006. “Reverse Replay of Behavioural Sequences in Hippocampal Place Cells during the Awake State.” Nature 440 (7084): 680–83. doi:10.1038/nature04587.
Hassabis, Demis, Carlton Chu, Geraint Rees, Nikolaus Weiskopf, Peter D. Molyneux, and Eleanor A. Maguire. 2009. “Decoding Neuronal Ensembles in the Human Hippocampus.” Current Biology 19 (7): 546–54. doi:10.1016/j.cub.2009.02.033.
Ishai, A. 2002. “Visual Imagery of Famous Faces: Effects of Memory and Attention Revealed by fMRI.” NeuroImage 17 (4): 1729–41. doi:10.1006/nimg.2002.1330.
Klein, Stanley B., Leda Cosmides, John Tooby, and Sarah Chance. 2002. “Decisions and the Evolution of Memory: Multiple Systems, Multiple Functions.” Psychological Review 109 (2): 306–29. doi:10.1037//0033-295X.109.2.306.
Kondo, Yumiko, Maki Suzuki, Shunji Mugikura, Nobuhito Abe, Shoki Takahashi, Toshio Iijima, and Toshikatsu Fujii. 2005. “Changes in Brain Activation Associated with Use of a Memory Strategy: A Functional MRI Study.” NeuroImage 24 (4): 1154–63. doi:10.1016/j.neuroimage.2004.10.033.
Llewellyn, Sue. 2013. “Such Stuff as Dreams Are Made on? Elaborative Encoding, the Ancient Art of Memory, and the Hippocampus.” Behavioral and Brain Sciences 36 (06): 589–607. doi:10.1017/S0140525X12003135.
Madl, Tamas, Ke Chen, Daniela Montaldi, and Robert Trappl. 2015. “Computational Cognitive Models of Spatial Memory in Navigation Space: A Review.” Neural Networks 65 (May): 18–43. doi:10.1016/j.neunet.2015.01.002.
Magnussen, Svein. 2009. “Implicit Visual Working Memory.” Scandinavian Journal of Psychology 50 (6): 535–42. doi:10.1111/j.1467-9450.2009.00783.x.
Maguire, Eleanor A., Elizabeth R. Valentine, John M. Wilding, and Narinder Kapur. 2002. “Routes to Remembering: The Brains behind Superior Memory.” Nature Neuroscience 6 (1): 90–95. doi:10.1038/nn988.
Nairne, James S., and Josefa N. S. Pandeirada. 2016. “Adaptive Memory: The Evolutionary Significance of Survival Processing.” Accessed April 30. http://evo.psych.purdue.edu/downloads/2016_Nairne_Pandeirada.pdf.
Nairne, James S., Sarah R. Thompson, and Josefa N. S. Pandeirada. 2007. “Adaptive Memory: Survival Processing Enhances Retention.” Journal of Experimental Psychology: Learning, Memory, and Cognition 33 (2): 263–73. doi:10.1037/0278-7393.33.2.263.
Naya, Y., and W. A. Suzuki. 2011. “Integrating What and When Across the Primate Medial Temporal Lobe.” Science 333 (6043): 773–76. doi:10.1126/science.1206773.
O’Keefe, John, and Jonathan Dostrovsky. 1971. “The Hippocampus as a Spatial Map. Preliminary Evidence from Unit Activity in the Freely-Moving Rat.” Brain Research 34 (1): 171–75.
O’Keefe, John, and Lynn Nadel. 1978. The Hippocampus as a Cognitive Map. Oxford: Clarendon Press; New York: Oxford University Press.
Papassotiropoulos, Andreas, and Dominique J-F de Quervain. 2015. “Genetics of Human Memory Functions in Healthy Cohorts.” Current Opinion in Behavioral Sciences 4 (August): 73–80. doi:10.1016/j.cobeha.2015.04.004.
Peters, Marco, Mónica Muñoz-López, and Richard GM Morris. 2015. “Spatial Memory and Hippocampal Enhancement.” Current Opinion in Behavioral Sciences 4 (August): 81–91. doi:10.1016/j.cobeha.2015.03.005.
Schiller, Daniela. n.d. “Memory and Space: Towards an Understanding of the Cognitive Map.”
Sparrow, Betsy, Jenny Liu, and Daniel M. Wegner. 2011. “Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips.” Science 333.
Squire, Larry R., and Carolyn Backer Cave. 1991. “The Hippocampus, Memory, and Space.” Hippocampus 1 (3): 269–71.
Wixted, John T., and Ebbe B. Ebbesen. 1991. “On the Form of Forgetting.” Psychological Science 2 (6): 409–15.
Foer, Joshua. 2012. Moonwalking with Einstein: The Art and Science of Remembering Everything. London: Penguin Books.
Qureshi, A., F. Rizvi, A. Syed, A. Shahid, and H. Manzoor. 2014. “The Method of Loci as a Mnemonic Device to Facilitate Learning in Endocrinology Leads to Improvement in Student Performance as Measured by Assessments.” AJP: Advances in Physiology Education 38 (2): 140–44. doi:10.1152/advan.00092.2013.
Yates, Frances Amelia. 2002. The Art of Memory. Nachdr. Chicago, Ill.: Univ. of Chicago Press.
Memory Augmentation Interfaces
Brooks, B. M., and F. D. Rose. 2003. “The Use of Virtual Reality in Memory Rehabilitation: Current Findings and Future Directions.” NeuroRehabilitation 18 (2): 147–57.
Colley, Ashley, Jonna Häkkilä, and Juho Rantakari. 2014. “Augmenting the Home to Remember: Initial User Perceptions.” In UbiComp ’14 Adjunct, 1369–72. ACM Press. doi:10.1145/2638728.2641717.
DeVaul, Richard W., Vicka R. Corey, and others. 2003. “The Memory Glasses: Subliminal vs. Overt Memory Support with Imperfect Information.” In Proceedings of the 7th IEEE International Symposium on Wearable Computers (ISWC), 146. IEEE. http://www.computer.org/csdl/proceedings/iswc/2003/2034/00/20340146.pdf.
Feiner, Steven, ACM Digital Library, ACM Special Interest Group on Computer-Human Interaction, and ACM Special Interest Group on Computer Graphics and Interactive Techniques. 2008. Virtual Reality as a Tool for Assessing Episodic Memory. New York, NY: ACM. http://dl.acm.org/citation.cfm?id=1450579.
Feiner, Steven, Blair MacIntyre, Tobias Höllerer, and Anthony Webster. 1997. “A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment.” Personal Technologies 1 (4): 208–17.
Green, C. Shawn, and Daphne Bavelier. 2015. “Action Video Game Training for Cognitive Enhancement.” Current Opinion in Behavioral Sciences 4 (August): 103–8. doi:10.1016/j.cobeha.2015.04.012.
Harman, Joshua. 2001. “Creating a Memory Palace Using a Computer.” In CHI’01 Extended Abstracts on Human Factors in Computing Systems, 407–8. ACM. http://dl.acm.org/citation.cfm?id=634306.
Hou, Lei, Xiangyu Wang, Leonhard Bernold, and Peter E. D. Love. 2013. “Using Animated Augmented Reality to Cognitively Guide Assembly.” Journal of Computing in Civil Engineering 27 (5): 439–51. doi:10.1061/(ASCE)CP.1943-5487.0000184.
Ikei, Yasushi, and Hirofumi Ota. 2008. “Spatial Electronic Mnemonics for Augmentation of Human Memory.” In Virtual Reality Conference, 2008. VR’08. IEEE, 217–24. IEEE. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4480777.
Kawamura, Tatsuyuki, Tomohiro Fukuhara, Hideaki Takeda, Yasuyuki Kono, and Masatsugu Kidode. 2007. “Ubiquitous Memories: A Memory Externalization System Using Physical Objects.” Personal and Ubiquitous Computing 11 (4): 287–98. doi:10.1007/s00779-006-0085-4.
Legge, Eric L.G., Christopher R. Madan, Enoch T. Ng, and Jeremy B. Caplan. 2012. “Building a Memory Palace in Minutes: Equivalent Memory Performance Using Virtual versus Conventional Environments with the Method of Loci.” Acta Psychologica 141 (3): 380–90. doi:10.1016/j.actpsy.2012.09.002.
Quintana, Eduardo, and Jesus Favela. 2013. “Augmented Reality Annotations to Assist Persons with Alzheimers and Their Caregivers.” Personal and Ubiquitous Computing 17 (6): 1105–16. doi:10.1007/s00779-012-0558-6.
Ragan, Eric D., Doug A. Bowman, and Karl J. Huber. 2012. “Supporting Cognitive Processing with Spatial Information Presentations in Virtual Environments.” Virtual Reality 16 (4): 301–14. doi:10.1007/s10055-012-0211-8.
Rhodes, Bradley J. 1997. “The Wearable Remembrance Agent.” In Proceedings of the 1st International Symposium on Wearable Computers, ISWC ’97, 123–28.