  • Virtual Insanity

    November 2025

    I stumbled across some old thoughts of mine on the state of VR from 8(!) years ago. In many ways it feels like we are still stuck approximately where we were back then, but some of the new products coming out might change that. Without further ado…

    Predictions
    -No new screens: monitors, phones, smartwatches, TVs, and virtually any other display or light-emitting device in the real world can and will be replaced with an AR overlay. Traditional lighting might not even be necessary: if you map an area, you should be able to navigate it in complete darkness.
    -Scene hosting website
    -Smartears: audio needs the same mixing that AR vision systems promise. Earbuds will be replaced with bone-conduction headphones so that they do not compromise existing hearing.
    -Real-time scene capture at all times: not just video/audio, but entire 3D scenes with audio placed correctly in the environment.
    -Head-mounted, head-controlled camera
    -Haptic gloves with embedded fingertip cameras
    -Telepresence will be a huge boon for service-based industries. Instead of sending an experienced employee with an apprentice to every site, put a camera and earpiece on each apprentice and let them call on the “master” as needed.

    March 2017

    I can’t think of an emerging technology that is poised to change the world more than virtual reality. There are plenty of obstacles and a wide variety of problems that need to be solved, but if the ChiVR (pronounced “shiver”) meetup at Isobar taught me anything, it’s that there are lots of extremely talented people working on them. What kind of problems, you ask?

    1. The look. This one is tough. Head-mounted displays, or HMDs, are really dorky looking. And they *have* to go on your face, in front of your eyes. This definitely adds an interesting element to social interaction. Some people are fine with this, but the vast majority are not. Snapchat Spectacles seem to have approximately the right form factor, although even that is pushing it. Although it doesn’t do much for the way the headset looks in the real world, there is an app that replaces the image of the HMD with the user’s face. While I think it’s possible that people will eventually adopt the format, for now the look remains one of VR’s biggest barriers.
    2. The content. Mark Meeker spoke on this during his talk and I couldn’t agree more. Underwhelming content is very easy to make, and it gives everyone on the right half of the technology adoption curve the opportunity to write VR off prematurely.
    3. The demos. Until everyone has an HMD that can be cast to, VR demos will be underwhelming to some degree. The level of immersion is critical to practically any VR experience, so being forced back to two dimensions is a major obstacle to overcome. A few moments stuck out to me:
      1. Headset on or off? In order to navigate properly, the headset needs to be on. Delivering a speech while inside a VR scene, however, is challenging for both the presenter and the audience. At several points during one of the demos, the presenter said “imagine if …”, which left me thinking, “This is a demo. I don’t want to imagine it, show it to me.”
      2. Even if the audience has headsets with audio, does the presenter communicate via voiceover, or perhaps with some sort of digital avatar? How is the audience represented? Can they move around, or do they simply get a “ride-along” experience? I can foresee situations where opting for one over the other would be preferable.
      3. Look ma, no hands. At the beginning of the demo, an interesting moment occurred when the previous presenter, who had been using a handheld wireless microphone and a PowerPoint, handed the show over to a new presenter wearing a Vive. The Vive requires both hands to be occupied, which removes the ability to hold a microphone. A head-mounted mic would have solved the issue, but it was still interesting to watch the previous presenter decide whether or not to play the role of “microphone stand.” She ended up opting to simply speak louder. Still, since navigating the virtual world required her to face away from the crowd, some of her presentation was necessarily delivered in the exact opposite direction of the audience.
    4. The immersion. VR is a necessary precursor to, and a special case of, AR in which the experience is completely immersive. I believe the final form of this technology will have a “mixer” where you can adjust and toggle the types and levels of each experience. Conventional monitor/keyboard IO systems are still way ahead of VR in terms of ergonomics, and I think part of that is how non-invasive they are: we control what we look at and for how long by simply shifting our gaze. VR is oppressively, definitionally immersive. It is important here to draw a distinction between technical immersion and emotional immersion. Music, even without words, can create a profoundly immersive experience. The same could be said for text.
    5. Input. This is also big, and has a number of interesting solutions. I feel like we are still waiting for the equivalent of Douglas Engelbart’s “Mother of All Demos” moment to show us the path. Handheld, voice-controlled, and eye-tracking input devices are all problematic in different ways, but it seems likely that some combination of these will be present until we have the “neural lace” brain-computer interface referenced by Elon Musk and many others.

    Questions and Observations:

    1. Why aren’t there more apps using the Vive’s front-facing camera?
    2. Shouldn’t sufficiently advanced photogrammetry be enough to create an immersive environment? The brain’s visual system seems like an existence proof of this.
    3. Bone-conduction headphones (as opposed to traditional in-ear/over-ear headphones) seem like a requirement for true augmented reality.
    4. There seems to be a need to relocate some of the components of an AR system in order for it to fit the desired form factor for HMDs. Once again, Magic Leap seems to be addressing this with a fiber scanning projector/camera system.
    5. There is a need for both head-mounted and hand-controlled cameras. Fumbling for a phone to take a picture is going to seem hilariously antiquated once this technology is prevalent. We are moving toward a society where the default is that everything is being recorded unless recording is specifically prohibited. How will this be implemented?
    6. Sufficiently advanced technology is indistinguishable from magic. We’re there, and what an amazing magic show it is turning out to be. I’ve got to be a part of this.
    7. Volumetric video is the true future of VR. Capturing it requires multiple perspectives, however. I’m picturing a set of (perhaps lighter-than-air) camera drones.

    April 2017

    I was glad to hear acknowledgment of the challenges facing adoption of VR. It’s not about the general public being luddites – although some of them certainly are – and fearing the changes that new technology brings – although some of them certainly do. Strapping something to your face *is* a big ask, and understanding that is extremely important. Considering how many people still refuse to wear a bike helmet, VR (specifically HMD design) has a long way to go before it can be accepted as a norm.

    One of the panelists described an extremely clever prototyping method: prior to writing any code, he tested UX by sitting users down with a Google Cardboard (lenses removed) and holding up index cards with text on them to see how people interacted with the environment.

    Immersion of the “Haircut Illusion”

    360 Video Sucks

    Allowing users to choose what they’re looking at seems like it would be a boon, but it can actually lead to a worse experience than well-produced traditional video. The artistic choices that cinematographers make create immersive, cohesive experiences effortlessly. Watching a 360 video often feels like a chore. Where is the action? Am I missing something important because I’m looking in the wrong direction?

    May 2017

    I think perhaps these events are losing some of their novelty for me. Having the same panelists speak every time is great for newcomers, but I’m starting to hear the same ideas repeated by the founding members, when I’d really like to see discussion from people working on other projects. It was still a great time, but I think we can do better.

    Had a chance to speak with the ShareVR team a little more.

    July 2017

    Knuckles controllers open up many interesting possibilities. Could a typing experience be developed that competes with traditional keyboards? Perhaps something based on a chorded keyboard a la http://asetniop.com/, with predictive elements and/or eye tracking?
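
    To make the idea concrete, here is a minimal sketch of a chord decoder. The single-finger letters follow ASETNIOP’s home row (it’s in the name), but the multi-finger chords below are invented for illustration, as would be the controller event plumbing:

      # Minimal chorded-typing sketch. Single-finger presses follow ASETNIOP's
      # home row; the two-finger chords are hypothetical, for illustration only.
      CHORDS = {
          frozenset({"L4"}): "a", frozenset({"L3"}): "s",
          frozenset({"L2"}): "e", frozenset({"L1"}): "t",
          frozenset({"R1"}): "n", frozenset({"R2"}): "i",
          frozenset({"R3"}): "o", frozenset({"R4"}): "p",
          frozenset({"L1", "R1"}): "h",  # hypothetical chord
          frozenset({"L2", "R2"}): "d",  # hypothetical chord
      }

      def decode(chord_sequence):
          """Turn a sequence of simultaneously-pressed finger sets into text."""
          return "".join(CHORDS.get(frozenset(c), "?") for c in chord_sequence)

      # "s", "e", "t" one finger at a time, then the L1+R1 chord for "h"
      print(decode([{"L3"}, {"L2"}, {"L1"}, {"L1", "R1"}]))  # -> "seth"

    The predictive element would sit on top of this: when a chord is ambiguous, rank candidate words by a language model and let eye tracking or a thumb gesture pick among them.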

  • Ornaments

    Some holiday cheer!

    I can’t take credit for the design, but I added a twist: acetone-smoothed ABS.

    The Golden Coffee!

    Mixed media for this lightbox-inspired design: acrylic and laser-cut hardwood.

  • Escherius Acrylon

    This lizard tessellates…

    The leftover acrylic looks cool too!

  • Signage

    There’s something especially satisfying about producing physical representations of text. It just scratches an itch. I have vivid memories of playing with colorful fridge magnets as a child — trying to spell out funny words and sentences. Here are some attempts at giving form to text.

    Blackletter fridge magnets evoke movable-type printing presses.

    Basic extrusion on a vector image.

    Electroluminescent wire (EL wire) routed through 3D-printed conduit.

    Projection mapped with Lightform.

    Taking the concept up a level in the dynamism showroom:

  • Task-Quest

    Task-Quest is equal parts life-logger, productivity app, and old-school RPG. Earn XP and items by completing tasks, and test your character’s power in campaign mode. Multiplayer mode lets questmates see when shared tasks are completed or overdue.
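
    The core loop, as a sketch only (this is not Task-Quest’s actual code, and the XP values and level curve are invented): completing a task grants XP scaled by difficulty, and levels follow a simple threshold curve.

      # Illustrative sketch of the task -> XP loop; not Task-Quest's actual code.
      from dataclasses import dataclass, field

      @dataclass
      class Character:
          name: str
          xp: int = 0
          inventory: list = field(default_factory=list)

          @property
          def level(self):
              # Hypothetical curve: level N is held until 100 * N^2 total XP.
              n = 1
              while self.xp >= 100 * n * n:
                  n += 1
              return n

      def complete_task(character, difficulty, loot=None):
          """Grant XP scaled by difficulty (1-5), plus an optional item drop."""
          character.xp += 25 * difficulty
          if loot is not None:
              character.inventory.append(loot)

      hero = Character("Questmate")
      complete_task(hero, difficulty=3)                         # routine chore
      complete_task(hero, difficulty=5, loot="Sword of Focus")  # big project
      print(hero.level, hero.xp, hero.inventory)  # 2 200 ['Sword of Focus']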

    task-quest.com

  • Tralfee

    The creatures can see where each star has been and where it is going, so that the heavens are filled with rarefied, luminous spaghetti. And Tralfamadorians don’t see human beings as two-legged creatures, either. They see them as great millipedes—“with babies’ legs at one end and old people’s legs at the other,” says Billy Pilgrim.

    -Kurt Vonnegut, Slaughterhouse-Five, 1969

    I thought it might be cool to see my life like a Tralfamadorian.

    The goal is, given a point in time and using all available data, to consistently infer the following:

    -Position in 3D space
    -Orientation in 3D space
    -Time-specific 3D model for the character
    -Environment

    The following data is available:
    Personal:
    -Geolocation data exported from Google Maps “Timeline.” This forms the core dataset, but it cannot serve as the sole source of truth for position without additional processing.
    -Geotagged photos with timestamps
    -Financial transactions
    -Timestamped emails, texts, phone calls, WhatsApp messages, Google Drive files, and Google Chat messages
    -Browser history/search history

    Environment:
    -Almanac style data on weather, sunrise/sunset
    -Other news: sports scores, entertainment, technology, politics, stock market, Wikipedia “what happened on this day” style
    -Gas prices, CPI
    -Internet Archive Wayback Machine
    -Calendar Events
    -Average traffic data
    -https://worldview.earthdata.nasa.gov/
    -Google Earth’s temporal data (go back in time)
    -Timestamped media (blogs, podcasts, etc)
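
    As a first pass, position at an arbitrary time can be inferred by interpolating between the two nearest Timeline fixes. A minimal sketch, assuming the export has already been parsed into time-sorted (unix_timestamp, lat, lon) tuples (the parsing itself, and the cross-checking against the other sources above, is the “additional processing”):

      # Minimal sketch: infer (lat, lon) at time t by linearly interpolating
      # between the nearest Timeline fixes. Assumes "fixes" is sorted by time.
      import bisect

      def infer_position(fixes, t):
          times = [f[0] for f in fixes]
          i = bisect.bisect_left(times, t)
          if i == 0:                    # before the first fix
              return fixes[0][1:]
          if i == len(fixes):           # after the last fix
              return fixes[-1][1:]
          (t0, lat0, lon0), (t1, lat1, lon1) = fixes[i - 1], fixes[i]
          w = (t - t0) / (t1 - t0)      # fraction of the way between fixes
          return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))

      # Two fixes an hour apart; where was I at the half-hour mark?
      fixes = [(1000, 41.8781, -87.6298), (4600, 41.9484, -87.6553)]
      print(infer_position(fixes, 2800))  # midpoint of the two fixes

    Linear interpolation is only sensible between nearby fixes; gaps spanning hours would need to be filled in from the photo, transaction, and calendar data instead.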

  • 3D printing a brain

    A friend of mine works in a neuroscience lab and mentioned that she recently had an MRI as part of a study. I decided that it might be a fun exercise to convert this to a printable 3D model.

    The standard file format for MRI scans is DICOM. The first step was to isolate the brain from the rest of the bone and tissue that makes up the head. To perform this “skull-stripping” I used BrainSuite, a program developed at UCLA that automates the process.

    The next step was taking the skull-stripped brain and generating an STL file. I used InVesalius, a program developed by Brazil’s Ministry of Science and Technology, to perform the conversion.
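
    Under the hood, this conversion boils down to stacking the DICOM slices into a volume, extracting an isosurface, and writing the triangles to STL. A rough sketch of that pipeline using pydicom, scikit-image, and numpy-stl (my reconstruction of the idea, not InVesalius’s actual code; the file path and threshold level are placeholders):

      # Rough sketch of DICOM -> STL, the step InVesalius automates.
      # Assumes pydicom, scikit-image, numpy, and numpy-stl are installed.
      import glob
      import numpy as np
      import pydicom
      from skimage import measure
      from stl import mesh

      # Stack slices into a 3D volume, ordered by position along the scan axis.
      slices = [pydicom.dcmread(f) for f in glob.glob("scan/*.dcm")]
      slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
      volume = np.stack([s.pixel_array for s in slices])

      # Extract an isosurface. The level that separates brain from background
      # varies by scan, so expect some trial and error.
      verts, faces, _, _ = measure.marching_cubes(volume, level=300)

      # Pack the triangles into an STL mesh and save it.
      brain = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
      for i, f in enumerate(faces):
          brain.vectors[i] = verts[f]
      brain.save("brain.stl")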

    Next, I cleaned up the resulting STL to make it more printer-friendly. The mesh had many defects, which I repaired in Meshmixer. This took forever! Although I was able to automatically detect and repair many of the imperfections, there were still quite a few that I needed to fill manually. I felt like I was performing brain surgery!
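
    The automatic portion of that repair pass can also be approximated in code; a sketch with the trimesh library (a substitution on my part for what Meshmixer was doing; the stubborn holes still needed an interactive editor):

      # Approximating Meshmixer's automatic repair pass with trimesh.
      import trimesh

      brain = trimesh.load("brain.stl")
      print("watertight before:", brain.is_watertight)

      trimesh.repair.fix_normals(brain)   # make face windings consistent
      trimesh.repair.fill_holes(brain)    # patch small openings in the surface

      print("watertight after:", brain.is_watertight)
      brain.export("brain_repaired.stl")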

    Finally, the model was ready to be printed. I used my trusty Ultimaker 2 Go to print the finished product at approximately 1:8 scale.

    And voila! It’s not every day you get to hand someone a model of their own brain. What a world we live in!
