
Winter Hackathon 2018: Hacking History

A few years ago, I had the good fortune to travel to London to visit some friends. But before I talk about my trip, there is something you must know about me. I grew up surrounded by British influences—my mother loved the Beatles, I loved the Cure, I watched hours of Monty Python and The Young Ones, and of course there were the stories of King Arthur and his lot, the plays of Shakespeare, The Canterbury Tales—and that was just when I was younger. Suffice it to say that I was steeped in stories, songs, and lore that created for me a vivid image of the British Isles, and I really wanted to go there. So of course, this visit to England had me open-mouthed and speechless at every turn, because quite literally every turn held some road, some building, some *thing* that I had read about, heard about, or imagined, and, frankly, it was a bit overwhelming to say the least. (If you have the chance, visit the Tower of London. It is so worth it. And yes, do the tour with the guys in the funny hats and outfits!)

So there I was, wandering agog around London, curious about the history of practically every (old) building I passed. And of course, being the ~~nerd~~ engineer that I am, I found myself thinking “What this situation really needs is an Augmented Reality app, so I could just point my phone at a building and see its history!”

And so, our team’s hackathon project was born.

I was joined by two amazing women: Marri Gamard and Cat Janowitz (yes! this was an all-women team!), and we embarked on an adventure that seemed more than a little daunting: build an Augmented Reality (AR) app in Swift—having zero prior experience in AR and probably less than four hours of Swift experience between the lot of us—and use data that we weren’t sure existed.

What we ended up with was a working prototype that met all of our primary goals (if not quite our stretch goals), a deeper understanding of AR, more skills in writing apps in Swift, a terribly fun two days bringing my dream to life, and this video:

 

How we did it

We started off using a demo project that had been put together for the hackathon by our terrific fellow Toad, Phil Tseng. It contained many of the AR baseline bits and bobs we'd need to get this thing off the ground. Since that demo was already using the ARKit library, we decided to use ARKit + CoreLocation to handle the GPS information we were (hopefully) going to get (from somewhere?). From the ARKit+CoreLocation repo:

ARKit: Uses camera and motion data to map out the local world as you move around.

CoreLocation: Uses wifi and GPS data to determine your global location, with a low degree of accuracy.

ARKit + CoreLocation: Combines the high accuracy of AR with the scale of GPS data.
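
To give a flavor of what that combination looks like in practice, here's a minimal sketch along the lines of the example in the ARKit+CoreLocation README (the names `SceneLocationView` and `LocationAnnotationNode` come from that library, and the coordinates and image name here are placeholders; the exact API may have shifted since we used it):

```swift
import ARCL          // the ARKit+CoreLocation library
import CoreLocation
import UIKit

class BuildingHistoryViewController: UIViewController {
    let sceneLocationView = SceneLocationView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneLocationView.frame = view.bounds
        view.addSubview(sceneLocationView)

        // A GPS coordinate for a building we want to label
        // (a made-up spot in downtown Portland; altitude in meters).
        let coordinate = CLLocationCoordinate2D(latitude: 45.5190, longitude: -122.6794)
        let location = CLLocation(coordinate: coordinate, altitude: 50)

        // Pin an image-backed annotation node at that real-world location;
        // ARKit keeps it anchored as you move the phone around.
        let image = UIImage(named: "building-label")!
        let annotationNode = LocationAnnotationNode(location: location, image: image)
        sceneLocationView.addLocationNodeWithConfirmedLocation(locationNode: annotationNode)
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        sceneLocationView.run()    // start the AR session
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneLocationView.pause()
    }
}
```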

We also took a few pointers from this Canadian Augmented Yellow Pages proof of concept, which also implements ARKit + CoreLocation, specifically on how to work with SceneKit and CALayers, which are part of the geometry of the labels that we were trying to make appear on buildings. But the question still remained: Where do we get the data from?

For that, we're lucky to live in a city that takes sharing of its data so seriously. We headed straight to PortlandMaps.com, which led us in turn to Portland Maps' Open Data portal. Right there on the site, we could just download all the data we needed. But what if we wanted to connect to an API? We figured we'd download the data for now, since we assumed any request for an API key to access city data would take...a while.

It took five minutes to get an emailed response from an actual person with a fully functional API key. Maybe even less than five minutes. You go, city of Portland! You go on with your data-sharing self. :high-five:
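If you're wondering what the app side of that might look like, here's a hedged sketch of fetching building records with an API key. The endpoint and field names below are placeholders; the real Portland Maps Open Data API has its own URL scheme and data shape.

```swift
import Foundation
import CoreLocation

// Placeholder model: these field names are assumptions, not the city's actual schema.
struct HistoricBuilding: Codable {
    let name: String
    let yearBuilt: Int
    let latitude: Double
    let longitude: Double

    var location: CLLocation {
        CLLocation(latitude: latitude, longitude: longitude)
    }
}

// Fetch and decode building records; the URL below is a stand-in for the
// real Open Data endpoint.
func fetchBuildings(apiKey: String,
                    completion: @escaping ([HistoricBuilding]) -> Void) {
    let url = URL(string: "https://example.com/historic-buildings?key=\(apiKey)")!
    URLSession.shared.dataTask(with: url) { data, _, _ in
        guard let data = data,
              let buildings = try? JSONDecoder().decode([HistoricBuilding].self, from: data)
        else { return completion([]) }
        completion(buildings)
    }.resume()
}
```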

And so, all the pieces were acquired, all the guidance we needed was found, and we stitched it all together in two days, with time enough to go out into the world and make a video.

 

What we learned

The Wayback Hackers ran on coffee, tea, water, and La Croix.

Here's a summary of some of our bigger takeaways:

  1. Augmented Reality geometry can be quite difficult to wrap your head around, even for something that seems, on the surface, quite simple. Even if you can conceptually understand what needs to be done, the implementation can have you going around in circles. SceneKit has a large array of geometries to use, and finding the right mix of CALayers and CATextLayers, applying those layers to SCNPlanes—or should we use SCNBoxes?—and then attaching it all to a node...well, getting all of that straight when you've never used Swift before was a bit of a challenge. A challenge that we conquered. But then came...

  2. Styling a CATextLayer. Boy, is that ever fiddly. Especially if you thought you could apply different styles to every line, or center some lines and not others—nope. It's just one big dumb textbox with very little formatting, and what formatting it does have applies to all the text at once. Had we more time, we perhaps would have experimented with using several CATextLayers on a single CALayer? Who knows. We'll figure that out some day, I'm sure. (There's a sketch of that idea just after this list.)

  3. With a bit more time and practice, I think we all concluded that we could get the hang of Swift as a language. While it's not going to outshine Python as my first choice, building an app using Swift isn't the strange mysterious magical thing it used to be. Xcode, on the other hand...

  4. Xcode is like a giant bear with the appetite of a very finicky cat and the attitude of a chihuahua. And is sometimes about as clear as mud. I think we spent probably 30 minutes, all three of us, trying to find something as simple as, for instance, a 'Properties' window, only to find out that it was some inscrutable tiny little icon tucked away where you could hardly see it. There's a lot of Xcode that you just need to know.
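
For the curious, here's roughly what that several-CATextLayers idea from takeaway #2 could look like. This is a hypothetical sketch (the helper name, sizes, and colors are all made up): a parent CALayer holds two separately styled text layers and then becomes the texture of an SCNPlane.

```swift
import SceneKit
import UIKit

// Hypothetical helper: stack independently styled CATextLayers on one CALayer,
// then use that layer as the material of a flat SCNPlane "signboard".
func makeBuildingLabel(title: String, detail: String) -> SCNNode {
    let backing = CALayer()
    backing.frame = CGRect(x: 0, y: 0, width: 400, height: 200)
    backing.backgroundColor = UIColor(white: 0.15, alpha: 0.85).cgColor

    // Line one: large, white, centered.
    let titleLayer = CATextLayer()
    titleLayer.frame = CGRect(x: 10, y: 110, width: 380, height: 80)
    titleLayer.string = title
    titleLayer.fontSize = 42
    titleLayer.alignmentMode = .center
    titleLayer.foregroundColor = UIColor.white.cgColor
    titleLayer.contentsScale = UIScreen.main.scale

    // Line two: smaller, grey, left-aligned. This is the per-line
    // styling a single CATextLayer wouldn't give us.
    let detailLayer = CATextLayer()
    detailLayer.frame = CGRect(x: 10, y: 10, width: 380, height: 90)
    detailLayer.string = detail
    detailLayer.fontSize = 24
    detailLayer.alignmentMode = .left
    detailLayer.foregroundColor = UIColor.lightGray.cgColor
    detailLayer.isWrapped = true
    detailLayer.contentsScale = UIScreen.main.scale

    backing.addSublayer(titleLayer)
    backing.addSublayer(detailLayer)

    // An SCNPlane (rather than an SCNBox) is enough for a flat label;
    // attach the returned node wherever it should float in the scene.
    let plane = SCNPlane(width: 4, height: 2)
    plane.firstMaterial?.diffuse.contents = backing
    plane.firstMaterial?.isDoubleSided = true
    return SCNNode(geometry: plane)
}
```

Each sublayer gets its own font size, color, and alignment, which is exactly the per-line control we were missing.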

 

So what's next?

We'll definitely continue to refine this project during our R&D time. For starters, we ended up using a small sample of the data from the API instead of using the API itself, so hooking the API up to the app is at the top of the list. Real-time filtering of the API results down to just the buildings within walking distance is another improvement we're itching to make. A more refined textbox view with some styling applied would really make things pop; for one, I think the grey is not the best choice. (Even though I *think* it was my idea. Sometimes my ideas are not good ones.)

We also had the idea of making those textboxes clickable, so you could click into another ViewController, which would display more detailed information about that building, as well as photos from image archives and other relevant data. Maybe you could read about the architect who built it? Swipe through images of its construction and important historical moments? We're only restricted by the data we can get our grubby hands on! Another idea is to somehow overlay the building as it looked when it was first built onto the spot where it stands today. My first foray into AR geometry, however, tells me that while nothing is impossible when it comes to software, that last item would take a lot more time and brainpower than this project currently has at its disposal, now that, alas, the hackathon is over.

And once that’s done? Well, we’ll email whatever historical data archives there are in London and see if they’re as liberally free with their data as Portland is! And then I’ll have to go back to London to test it!  

 
