Aaron Abentheuer
ABOUT ME
I am a multidisciplinary designer & prototyper from Austria, currently studying Interaction Design at HfG Schwäbisch Gmünd in Germany. You will also find me working on various side-projects. If you have an interesting project I could help you with, feel free to reach out to me!
•••
MAY 2015 Received a WWDC Scholarship from Apple.
WORK & PROCESS
For me, code is a creative tool. I believe that what we consider prototyping today will just be design tomorrow. Prototyping is the fundamental part of my process, starting on paper, progressing into higher fidelity with tools like Keynote, Quartz Composer & Xcode.
•••
SINCE NOV 2015 Interaction Design Intern at IDEO, Munich
SINCE JUNE 2015 Product Designer at Gate, Berlin
JULY - SEPT 2014 Intern at Visuelle Gestaltung — Daniel Utz, Stuttgart
PRESS & IRL
Recently my work has been showcased by Dezeen, PSFK, Inhabitat, The Next Nature Network, Inverse, Futurescope, iOS Dev Weekly, Gizmodo, Deutsche Welle & the American Institute of Graphic Arts.
•••
OCTOBER 21, 2016 RE·WORK Future of Food, London
GET IN TOUCH
MOBILE + FACETIME +49 176 72668383
MAIL + iMESSAGE mail@aaronabentheuer.com
SKYPE aaronabentheuer
© 2016 ME + (IN SOME CASES) OTHERS · BE NICE OR MY DOG WILL HAUNT YOU
Browser
ALGORITHM + UI CONCEPTION
MID 2015

Smart Forward → A Big-Data Microinteraction.

It took a few iterations to come up with a surprisingly simple solution to help people think & learn better every day. Ultimately we redesigned the browser’s familiar forward button.

Different Topics.
PROJECT DEFINITION

Throughout the whole project we had one primary objective: finding a new way to organize the world’s knowledge so that it is highly contextual and connected, in order to improve learning and understanding. The implementation, however, changed dramatically from what we initially thought was the right approach.

Initially we thought about new ways of interactive learning that connect location to small bits of information. We wanted to bring the “method of loci” into the digital age, for example by using an Apple Watch.

RECONSIDERATION

Creating an app dedicated to learning turned out not to be the right approach. We wanted a way to get the masses to think in a more connected way, not only the people willing to invest time in it. That’s why we looked at how regular people gather information and turned our attention to the browser and search engines.

Thinking about the way search engines mostly work today, we felt that a lot of their information architecture is subpar.

IA OF SEARCH

We believe that searching the web should belong to the browser, not to a website like the ones you are searching for. Change is already happening, with more and more search moving into the operating system and the browser.

YOSSARIAN

After J. Paul Neely gave a talk at our school we looked at his creative search engine Yossarian. We really liked the idea, but implementing it on yet another website that only very few people know about won’t make the masses more creative.

First, we had to ask ourselves: “Where does the data come from and what does it look like?” Suffice it to say that we are designers, so we have not actually implemented what I’m about to show you; we drafted our ideas diagrammatically.

THE ALGORITHM

Based on your search history, our system could learn which topics you’re already an expert in or interested in. By analyzing the contents and tone of the webpages you visit, it finds like-minded people with similar interests who are experts in fields other than yours. When you arrive at a topic you are new to, it could look at these people’s favorite websites covering that topic. This way, hopefully, you get websites written in a way that makes you care about things you usually wouldn’t bother with.

Simplified visual representation of what we had in mind. Based on a Markov chain, we wanted to be able to predict, at every point in your web browsing, what the next most interesting website for you could be.
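
To make that idea concrete, here is a minimal sketch in Swift of how such a Markov-chain suggester could be structured. The MarkovSuggester type and its methods are hypothetical illustrations of the diagram, not an actual implementation.

```swift
import Foundation

// Hypothetical sketch: transition counts between pages are learned from the
// browsing histories of like-minded users, then used to rank the next most
// interesting page from wherever you currently are.
struct MarkovSuggester {
    // transitions[current][next] = number of observed current -> next visits
    private var transitions: [URL: [URL: Int]] = [:]

    // Record one observed step from a (similar) user's history.
    mutating func record(from current: URL, to next: URL) {
        transitions[current, default: [:]][next, default: 0] += 1
    }

    // The next most interesting page is the most frequent outgoing transition.
    func nextPage(after current: URL) -> URL? {
        transitions[current]?.max(by: { $0.value < $1.value })?.key
    }
}
```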

The algorithm provides us with a never-ending string of websites related to the current one. These relationships get more abstract the further you go down this string. Through content analysis we figure out when there’s a break in approach: for example, if you’re looking at a mathematical problem that at some point gets highlighted in a musical way, that’s a break. Each one of those breaks is recorded. This leaves us with two modes: one with tight relationships that helps you better understand what you’re currently reading, and one with loose relationships that helps you discover new things and think creatively.
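
A rough sketch of how those breaks could be recorded, assuming a hypothetical similarity score between neighboring pages in the string (the content analysis itself is out of scope here):

```swift
// Walking down the string of suggested pages, a break is recorded wherever the
// content similarity to the previous page drops below a threshold.
// similarities[i] compares page i with page i + 1 (0 = unrelated, 1 = identical).
func breakIndices(similarities: [Double], threshold: Double = 0.4) -> [Int] {
    similarities.enumerated()
        .filter { $0.element < threshold }
        .map { $0.offset + 1 } // the break happens at the following page
}

// Example: a run of math pages, then a jump to a musical treatment of the topic.
let breaks = breakIndices(similarities: [0.9, 0.8, 0.2, 0.7]) // -> [3]
```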

THE USER INTERFACE

Finding a great user interface was a challenging task. We tried many extensions to the existing user interface in browsers until we stumbled upon something surprisingly simple: the back and forward buttons.

These two buttons, hardly ever thought about, provide a really great UI for what we’re trying to accomplish. Especially the forward button, which lives mostly in its deactivated state and is often even hidden by default, bears a lot of potential.

At that point the basic interface was done. It works exactly the way you would expect: clicking the back button will always bring you back, and once you’ve gone back, the forward button will bring you forward again. If there’s nothing to go forward to, though, our algorithm kicks in and brings you to the next most interesting site.
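
In code, that behavior boils down to a forward action with a fallback. This is a minimal sketch under the same assumptions as above (the Suggester protocol is hypothetical; the earlier MarkovSuggester would satisfy it):

```swift
import Foundation

// Anything that can propose the next most interesting page.
protocol Suggester {
    func nextPage(after current: URL) -> URL?
}

struct BrowserHistory {
    var back: [URL] = []
    var forward: [URL] = []
    var current: URL
    let suggester: any Suggester

    mutating func goForward() -> URL? {
        if let next = forward.popLast() {   // normal forward navigation
            back.append(current)
            current = next
            return next
        }
        // Forward history is empty: the algorithm kicks in.
        guard let suggestion = suggester.nextPage(after: current) else { return nil }
        back.append(current)
        current = suggestion
        return suggestion
    }
}
```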

The only significant addition is the so-called “fast-forward” button, which brings you to the next break in tone or context. This lets you look at the topic from another angle and think more broadly and creatively.

While hovering over the items in the fast-forward button, you additionally get a preview of the preloaded pages and a flowchart that shows how the system made the connection to a certain topic.

I really want to stress how simple this solution is. It leverages and extends an interaction users know incredibly well while adding a lot of value. Introducing a service like this often fails because of a steep learning curve or bad discoverability.

CONCLUSION

We’re very satisfied with the basic implementation we demonstrate in our prototype. We received incredibly positive feedback, especially about the elegance and simplicity of our solution, and we talked to designers from major browser vendors who were very interested in the concept.

SPECULATIVE INTERFACE DESIGN
EARLY 2015

Cultivator → Bioprinting in the Kitchen of the Future.

We built a hardware + software prototype to paint a picture of a world in which 3D-printing of meat has become mainstream and how it could disrupt kitchen + food-culture.

PROJECT DEFINITION

When the project started out, we were looking at redesigning the grocery-shopping experience. We quickly realized that this industry is ripe for disruption that might come from 3D printing, so we decided to investigate what a world would look like if bioorganic printing were an everyday technology.

RESEARCH

On a conceptual level we were inspired by the UN study “The State of Food and Agriculture”, “2063 Dining” by Trendstop for Miele and “The Future of Kitchens” by IKEA; in terms of how the kitchen has developed in the past and how it might develop, by Otl Aicher’s “The Kitchen is for Cooking” (1982) and Ettore Sottsass’ “About Kitchens” (1992). Through these studies it became clear to us that (especially considering trends like entomophagy) the consumption of meat will be even more of a luxury than it already is, and that meat will be consumed in smaller doses. Therefore we decided that this should be a product people have in their homes.

We designed the user interface to stay out of the way of the discussion about bioprinting as much as possible, while still being provocative and forward-thinking enough to act as a conversation starter.

From the beginning it was clear that we wanted to create the illusion of the interface being part of the hardware, using a black background that blends in with it. At the same time the interface should act more like a physical tool, without a complicated menu structure. That’s how we ended up with a horizontally scrolling menu of so-called “compositions” that can be printed with a simple touch of a button.

COMPOSITIONS

These are cards with all the information you might want to know about a particular meat. They show who created it, together with a short description of how you might want to process it. Most importantly, they contain all the health information that might be interesting.
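
A composition card could be modeled as a simple value type; this is a hypothetical sketch of the data behind a card, with field names invented for illustration:

```swift
// Hypothetical model for one "composition" card.
struct Composition {
    let name: String                 // the meat's display name
    let creator: String              // who designed this composition
    let preparationNotes: String     // short description of how to process it
    let nutrients: [String: Double]  // e.g. ["protein": 22.0, "fat": 8.5] per 100 g
}
```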

HEALTH MAGIC WAND

If you have that health information and are technically able to alter the meat, you want a good UI for it. It wouldn’t make much sense, though, to let the user set each parameter individually. So we used a piece of UI that is commonplace in creative applications: the magic wand tool. If you press the “Adapt for my Health” button, Cultivator tries to adapt the meat’s composition as much as possible without noticeably compromising the taste.
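
One way to think about that behavior: nudge each nutrient toward a personal target, but cap every change at a tolerance beyond which the taste would noticeably shift. A hedged sketch, with the tolerance value purely illustrative:

```swift
// Hypothetical "Adapt for my Health" logic: move each nutrient toward the
// user's target, but never by more than a taste-preserving tolerance.
func adaptForHealth(nutrients: [String: Double],
                    targets: [String: Double],
                    tasteTolerance: Double = 0.15) -> [String: Double] {
    var adapted = nutrients
    for (nutrient, current) in nutrients {
        guard let target = targets[nutrient] else { continue }
        let maxShift = current * tasteTolerance   // stay within ±15 % of the original
        let shift = min(max(target - current, -maxShift), maxShift)
        adapted[nutrient] = current + shift
    }
    return adapted
}
```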

Interface
A CREATIVE TOOL

Although not implemented in the prototype, we also thought about how you could define shape and texture in a more creative and liberal way. We came up with an interface that lets you alter every characteristic you might want to change about a meat with a single sliding gesture.
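
As a rough illustration of the gesture mapping (the class and the travel distance are assumptions, since this part was never built):

```swift
import UIKit

// Hypothetical sketch: one vertical pan continuously morphs whichever
// characteristic is currently selected, normalized to a 0...1 value.
final class CharacteristicSlider {
    var value: CGFloat = 0.5   // e.g. texture firmness

    func handle(_ pan: UIPanGestureRecognizer) {
        let translation = pan.translation(in: pan.view)
        // Dragging up increases the value; 300 pt of travel covers the full range.
        value = min(max(value - translation.y / 300, 0), 1)
        pan.setTranslation(.zero, in: pan.view)
    }
}
```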

Early Iteration
A THOUSAND TIMES NO

We could of course have gone much further; we had endless ideas of where to take it. What we wanted, though, was a functioning prototype that could help people understand what’s coming with bioprinting technology, so we kept it as simple as possible.

The ultimate goal was to use the device at the exhibition to help people imagine a future with bioprinted meat in their kitchen, while also making the benefits very clear. We incorporated a range of subtle hints about the side effects such a disruption might have, like changes in energy consumption and general trends in the kitchen landscape such as self-cleaning surfaces.

self-cleaning mode

Whenever the user didn’t interact with Cultivator for a while, we showed the fingerprints they had left while using the system (at the actual locations of those touches) and digitally wiped them away, creating an incentive to talk about the self-cleaning abilities.
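
The mechanics of that idle mode are straightforward to sketch; this is an assumed reconstruction in UIKit, not the exhibit code:

```swift
import UIKit

// Touch locations are recorded during use; when idle mode starts, the collected
// "fingerprints" are revealed and then faded out as if wiped away.
final class FingerprintView: UIView {
    private var smudges: [UIView] = []

    func recordTouch(at point: CGPoint) {
        let smudge = UIView(frame: CGRect(x: 0, y: 0, width: 44, height: 44))
        smudge.center = point
        smudge.layer.cornerRadius = 22
        smudge.backgroundColor = UIColor.white.withAlphaComponent(0.08)
        smudge.alpha = 0                       // invisible until idle mode starts
        addSubview(smudge)
        smudges.append(smudge)
    }

    func runSelfCleaning() {
        smudges.forEach { $0.alpha = 1 }       // reveal the accumulated fingerprints
        UIView.animate(withDuration: 2.5, delay: 1.0, options: [], animations: {
            self.smudges.forEach { $0.alpha = 0 }   // digitally wipe them away
        }, completion: { _ in
            self.smudges.forEach { $0.removeFromSuperview() }
            self.smudges.removeAll()
        })
    }
}
```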

SHOW TIME

Ultimately the day of the presentation came, and we didn’t know what the reactions would be. As expected, people were initially sceptical but very curious. When they tried the prototype, questions came up that we could answer with confidence and demonstrate on Cultivator itself.

At the exhibition we even displayed actual salami to further create the illusion of a working prototype.

COVERAGE + CONCLUSION

After the project was exhibited it was featured in the press worldwide, which was incredibly interesting to watch. We received hundreds of comments, remarks and a fair amount of scepticism. The project sparked a conversation about a really niche but important topic, which was exactly what we wanted to achieve.

Open Source Projects → Giving back to the Community.

Since prototyping is a fundamental part of my process, I sometimes come up with things that might be valuable for fellow designers, so I open-source them.

This UIWindow subclass encapsulates several features that I came up with in different projects to enhance the accessibility and polish of applications.

CONTROL-CENTER DETECTION

When prototyping Cousteau, an AirPlay-Mirroring-based educational application, one of the biggest hurdles for users was setting up AirPlay Mirroring in the first place. That’s why I came up with a way to detect when the user opens Control Center and attempts to set up AirPlay Mirroring, so the application’s user interface in the background can provide further assistance.
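
The description above suggests a notification-based heuristic: opening Control Center resigns the app’s active state without sending it to the background. A minimal sketch of that idea, assuming this is roughly how AAWindow approaches it (the timing constant is illustrative):

```swift
import UIKit

// Heuristic: willResignActive without a subsequent didEnterBackground usually
// means Control Center (or another system overlay) was pulled up.
final class ControlCenterObserver {
    private var enteredBackground = false
    private var tokens: [NSObjectProtocol] = []
    var onControlCenterOpened: (() -> Void)?

    init() {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: UIApplication.willResignActiveNotification,
                                         object: nil, queue: .main, using: { [weak self] _ in
            self?.enteredBackground = false
            // If the app hasn't entered the background shortly after resigning
            // active, treat it as Control Center being opened.
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
                guard let self, !self.enteredBackground else { return }
                self.onControlCenterOpened?()
            }
        }))
        tokens.append(center.addObserver(forName: UIApplication.didEnterBackgroundNotification,
                                         object: nil, queue: .main, using: { [weak self] _ in
            self?.enteredBackground = true
        }))
    }
}
```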

ADAPTIVE ROUND CORNERS

While using the great Hyperlapse by Instagram/Facebook I noticed that they round the corners of the application’s window. Especially in an application like Hyperlapse, which is essentially one big camera view, this adds a nice touch and blends in very well, especially on a black iPhone. It starts to get messy, however, when you enter a different mode of iOS, like the iOS 7/8 multitasking switcher: since the effect is accomplished by a round-rect mask leaving black corners, these become unwanted, ugly artefacts. My implementation gently animates those corners out and in depending on the context.
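
The core of that behavior can be sketched as a UIWindow subclass that animates its layer’s corner radius with the app’s active state (an assumed reconstruction, not AAWindow’s actual source):

```swift
import UIKit

// Rounded corners while active; animated back to square whenever the app
// resigns active (e.g. for the multitasking switcher).
final class RoundedWindow: UIWindow {
    private let activeRadius: CGFloat = 8

    override init(frame: CGRect) {
        super.init(frame: frame)
        layer.masksToBounds = true
        layer.cornerRadius = activeRadius
        let center = NotificationCenter.default
        center.addObserver(self, selector: #selector(cornersOut),
                           name: UIApplication.willResignActiveNotification, object: nil)
        center.addObserver(self, selector: #selector(cornersIn),
                           name: UIApplication.didBecomeActiveNotification, object: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

    @objc private func cornersOut() { animateCorners(to: 0) }
    @objc private func cornersIn() { animateCorners(to: activeRadius) }

    private func animateCorners(to radius: CGFloat) {
        let animation = CABasicAnimation(keyPath: "cornerRadius")
        animation.fromValue = layer.cornerRadius
        animation.toValue = radius
        animation.duration = 0.25
        layer.add(animation, forKey: "cornerRadius")
        layer.cornerRadius = radius   // commit the final value
    }
}
```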

MORE & LICENSE

If you have any questions or contributions, feel free to reach out or submit a pull request. I’ll expand this UIWindow subclass over time with little enhancements. AAWindow is licensed under the MIT license.

Emotion and facial expression can be used in a variety of ways, especially in installations or museum contexts. I wanted a tool to quickly prototype with facial expression in Swift.

CIDETECTOR + NSNOTIFICATION

The library is based on Core Image’s CIDetector and can be accessed through simple NSNotifications. Talking to other designers learning Swift to prototype, I realized that NSNotification is the easiest-to-grasp way to implement an event-based prototyping tool like AAFaceDetection.
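
The pattern itself is simple: run CIDetector over camera frames and post notifications for the features it finds. The notification names below are assumptions for illustration, not necessarily AAFaceDetection’s actual API:

```swift
import CoreImage
import Foundation

// Hypothetical notification names for illustration.
extension Notification.Name {
    static let faceDetected = Notification.Name("faceDetected")
    static let smileDetected = Notification.Name("smileDetected")
}

final class FaceObserver {
    private let detector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                                      options: [CIDetectorAccuracy: CIDetectorAccuracyLow])

    // Call this with each camera frame; listeners just observe the notifications.
    func process(_ image: CIImage) {
        let features = detector?.features(in: image, options: [CIDetectorSmile: true]) ?? []
        for case let face as CIFaceFeature in features {
            NotificationCenter.default.post(name: .faceDetected, object: face)
            if face.hasSmile {
                NotificationCenter.default.post(name: .smileDetected, object: face)
            }
        }
    }
}
```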

PROTOTYPING WITH FACIAL EXPRESSION

I’ve used this library in a variety of ways, but actually never to react to the user’s emotion. I mostly use it to detect presence or alertness, but there are a lot of other interesting fields of application.
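
Presence detection, for example, reduces to observing the face notification (using the hypothetical name from the sketch above):

```swift
import Foundation

// Any face event counts as "someone is here"; keep the token to stay subscribed.
let token = NotificationCenter.default.addObserver(forName: .faceDetected,
                                                   object: nil, queue: .main) { _ in
    print("Visitor present, wake the installation up.")
}
```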

MORE & LICENSE

AAFaceDetection has been used in several design projects in schools around the world. If you have any questions or contributions feel free to reach out or submit a pull request. AAFaceDetection is licensed under the MIT license.

Easy setup for projects using AirPlay Mirroring as well as some assistive UI.

AIRPLAY MIRRORING

AirPlay Mirroring is an interesting but barely used technology to project an interface onto an Apple TV. I first experimented with it in an educational application that is controlled through an iPad. For a beginner there are some things to watch out for, though, which is why AASecondaryScreen provides a simple implementation for you to work with.
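
The basic move is to listen for an external UIScreen connecting and attach a dedicated window to it, so the TV shows its own interface instead of a mirror of the iPad. A minimal sketch of that setup (the manager class is an assumption, not AASecondaryScreen’s exact API):

```swift
import UIKit

final class SecondaryScreenManager {
    private var externalWindow: UIWindow?
    private var tokens: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(forName: UIScreen.didConnectNotification,
                                         object: nil, queue: .main, using: { [weak self] note in
            guard let screen = note.object as? UIScreen else { return }
            let window = UIWindow(frame: screen.bounds)
            window.screen = screen                          // move the window to the TV
            window.rootViewController = UIViewController()  // your presentation UI here
            window.isHidden = false
            self?.externalWindow = window
        }))
        tokens.append(center.addObserver(forName: UIScreen.didDisconnectNotification,
                                         object: nil, queue: .main, using: { [weak self] _ in
            self?.externalWindow = nil                      // tear down when the TV goes away
        }))
    }
}
```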

ASSISTIVE UI

Setting up AirPlay Mirroring through Control Center can be hard for some users, as we found out in user testing. That’s why AASecondaryScreen provides hooks for you to guide the user through the setup process.

MORE & LICENSE

If you have any questions or contributions feel free to reach out or submit a pull request. AASecondaryScreen is licensed under the MIT license.

LEAP MOTION EXPERIMENT
MID 2014

Loom → Sound Toy based on Gravity.

We used this installation to introduce people to the device and test how feasible different familiar patterns are for touch-less interaction in terms of accuracy and confidence.

Even though that’s not the best way a design process can go, in this project we were tasked with simply doing something interesting with a given technology: the Leap Motion controller. Right from the get-go we wanted to do something that’s true to the technology itself. We were really curious what the boundaries of this type of interaction were in terms of accuracy and longevity, which is why we built in some hooks for user testing.

After some experiments with physics libraries and particle systems, we got really interested in using gravity not to mimic nature but as a tool, in our case to make some noise. Gravity has some very unique features, for example an inherent lag once you change it.

Cascade

Within two days we built a prototype using pBox2D and minim. You could create green elastic bridges and fix them into the grid system. Every time a particle hits a bridge it creates a sound, and by performing a “circle” gesture you could change the direction of gravity and free particles that got stuck in the web of green bridges.
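
The original prototype was written in Processing with pBox2D and minim; to keep the code examples here in one language, this is a rough SpriteKit re-sketch of the two core mechanics, collisions triggering sound and a gesture rotating gravity:

```swift
import SpriteKit

final class LoomScene: SKScene, SKPhysicsContactDelegate {
    override func didMove(to view: SKView) {
        physicsWorld.gravity = CGVector(dx: 0, dy: -9.8)
        physicsWorld.contactDelegate = self
    }

    // Called for every particle/bridge collision (bodies configured elsewhere);
    // impact strength could map to volume or pitch.
    func didBegin(_ contact: SKPhysicsContact) {
        run(SKAction.playSoundFileNamed("pluck.wav", waitForCompletion: false))
    }

    // Hook this up to the "circle" gesture to free stuck particles.
    func rotateGravity() {
        let g = physicsWorld.gravity
        physicsWorld.gravity = CGVector(dx: -g.dy, dy: g.dx)   // 90° rotation
    }
}
```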

After we posted a video of our prototype online, we were invited to exhibit at PLATINE, a festival for media arts and alternative game culture held alongside Gamescom in Cologne.

We gladly accepted and saw this as an excellent opportunity to do the user testing we had initially wanted to do. It’s been incredibly valuable to see how hundreds of users interact with the system and which problems they face.

During the festival we also collected a lot of data about the accuracy of interactions, especially in relation to interaction duration. We’re currently analyzing this data and exploring the creation of a library of user-interface elements based on it.