Fashion4two

Currently, when you want to ‘wear’ a piece of #digitalfashion outside of the #metaverse, you’ll find yourself pointing your phone with an AR app at someone else wearing a virtual creation. Or you need a very long selfie stick so you can view yourself. But the screen is a bit too small to see yourself properly, especially when two people need to fit in that image. As is the case with this new creation I’ve made.

It’s a peek into the future of digital fashion, as it will appear in an era when it’s no longer just about solo try-on experiences. Soon you’ll be amongst other people wearing digital fashion too. That triggered me to start thinking about digital fashion that connects to other people nearby. As a basic proof-of-concept test, I’ve created this “Fashion4two” wearable, which includes a script that changes the color from blue to red when you’re close together.
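For the curious: the proximity logic behind such a script is tiny. Below is a minimal sketch in TypeScript of how a distance-to-color mapping could work, independent of any specific AR platform. The thresholds and names are hypothetical choices of mine, not the actual script.

```typescript
// A minimal sketch of distance-based color blending, assuming the AR
// platform can give us the distance (in meters) between the two wearers.
// All names and thresholds here are hypothetical illustrations.
type RGB = { r: number; g: number; b: number };

const RED: RGB = { r: 255, g: 0, b: 0 };  // close together
const BLUE: RGB = { r: 0, g: 0, b: 255 }; // far apart

const NEAR = 0.5; // fully red at this distance or closer
const FAR = 3.0;  // fully blue at this distance or further

function garmentColor(distance: number): RGB {
  // Clamp the distance into the NEAR..FAR range and normalize to a 0..1
  // blend factor: 0 means right next to each other, 1 means far apart.
  const t = Math.min(Math.max((distance - NEAR) / (FAR - NEAR), 0), 1);
  return {
    r: Math.round(RED.r * (1 - t) + BLUE.r * t),
    g: Math.round(RED.g * (1 - t) + BLUE.g * t),
    b: Math.round(RED.b * (1 - t) + BLUE.b * t),
  };
}

// Example: two wearers 1.2 m apart get an in-between purple shade.
console.log(garmentColor(1.2));
```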

AR duo outfit

What’s going to be the main challenge for #digitalfashion? Mimicking physical fabrics in the most realistic way? Replacing some aspects of the current fashion production cycle in an environmentally friendly way?

Or will it be about exploring a new freedom, working with virtual materials and creating things that are not possible (or comfortable) in the ‘real’ world?

I’m in favour of the latter. Here’s a virtual garment you wouldn’t want to wear for real. But in #AR you can! This stretchy “duo outfit” changes its color from blue to red as you get closer together. Anyone, except Mia and Vincent?

The (?) Metaverse

Today I got a LinkedIn message asking: “Hello! Could you contribute your knowledge about the metaverse to our Q&A database?”

I replied:

Thanks for the invite. I must confess that I’m rather puzzled these days about the whole metaverse topic, so I doubt if I have anything relevant to contribute. For me the journey into a metaverse world started in Second Life, but it also ended there: https://sndrv.com/secondlife

Currently I’m foremost creating AR. Personally, I’m doubtful about the whole idea of a shippable identity that travels from world to world. To me, it seems a benefit to be one person in one domain, and a totally disconnected being in another space. But that seems to be the opposite of what the tech world wants to hear these days.

I even think that we should differentiate our own identities in the real world, and be able to switch from one mode to the other. That will become a relevant topic when we enter the age of AR, which will co-exist with or be part of the metaverse. Conclusion: it’s very hard to say anything meaningful at this time, so let’s see how this evolves organically. And let’s hope that happens with the right dynamics between Silicon Valley and the rest of the world, which will be impacted by choices and developments made collectively and individually.

Second Life walkie talkie walks

New AR hardware

I’ve been working in the AR domain since 2010, and most of my contacts know that. So when there’s something new, like yet another announcement of new #augmentedreality hardware, I sometimes get multiple messages from contacts to inform me about it. “Have you seen this?!” Yes, I have. That’s also my filter bubble.

To avoid getting those messages, I started photoshopping newly announced hardware into my profile images on social media. For the 2021 Chinese Oppo #AR wearable, I pre-emptively recorded myself testing the device. A video that also expresses my foremost curiosity: we keep reading about the hardware specs of the device, but isn’t it the OS software that should interest us most?

You can give the device a virtual try by activating this #facefilter in Snapchat (sorry, that’s the only platform that accepts these sketchy, minimalistic AR filters):

https://www.snapchat.com/unlock/?type=SNAPCODE&uuid=142552586748453981d27a277c1c37b3&metadata=01

AR wearables

AR glasses come and go. But they’re probably here to stay. It’s about time for an update of my old augmented reality wearable overview graphic (see below). A new competitor is on its way! Oppo AR Glass: https://www.roadtovr.com/oppo-ar-glasses-2021-release/

Will people be tempted to buy this gadget, judging it on its price? Or on its weight and comfort when wearing it 24/7? Or should we be uncomfortable, and suspicious, wearing Chinese hardware and having Chinese software (軟件更新,謝謝 – “software update, thanks”) running on this device, peeking into our lives wherever we go?

No worries. Let’s all have some trust in the “Smartglass Ja1lbreAking Community sub-reddit” and split up the hardware, software and cloud connection so we can safely enter the AR/MR universe of our choice!

Online/offline event at DDW20

It’s going to be a weird Dutch Design Week this year with no live events and gatherings in Eindhoven. But it makes me extra curious about the outcomes of the project I submitted for the virtual program. Despite being in that program, “Be Your Own Robot” is not an online-only event. It’s a hybrid event, mixing online connectivity with offline activity.
It’s about physical participation and being part of a collective experience as a distributed audience. Don’t worry, this is not going to be another Zoom session. There won’t be any streaming or fancy 3D rendering. This is about holding your phone in your hand, programming yourself, reacting to other participants elsewhere and experiencing their real-time presence as the #DDW20 audience in an abstract way. Will you join on October 17th?

https://ddw.nl/en/programme/3600/be-your-own-robot

AR screensavers

It has become a tradition. For every new augmented reality medium or platform, I’ve been making screensavers. Each time with an accompanying story about its relevance for the specific AR context.

2020 Screensaver for Zoom: with additional features, telling your fellow chat partners why they’re looking at an empty space – filled with a screensaver.

2019 Mobile AR: with the spatial features of ARCore, it’s now possible to experience the screensaver as an immersive manifestation, happening in the space around you.

2013 Google Glass screensaver: when the whole world got very excited about Google Glass, this screensaver addressed the issue that, when peeking into your AR wearable, there’s often nothing relevant to display at your current whereabouts and situation.

2011 My first AR screensaver: a recreation of classic screensavers as XXL-scaled appearances. A screensaver for the screen of the future, meaning the world around you, seen through an AR device.

Interview: what to expect from AR in the near future?

In recent months I received a few questionnaires asking me to reflect on the future of AR. The replies I gave are collected in this blog post.

What excites you the most about augmented reality?

Thanks to augmented reality our world has turned into a programmable universe. That means a lot of new opportunities, because any imaginable thing can now be realized, for real, just by programming. All that’s needed are technical skills and time; there’s no need to invest in physical materials. And that makes it possible to work at an unparalleled scale. The AR domain is without limits.

What do you recommend for people who want to get into the AR field?

People starting with AR development might experience a gap between the fabulous possibilities depicted in conceptual (fake) After Effects movies and the tough reality of creating successful AR experiences in practice. AR is not an easy medium. It’s not just a matter of getting the technology right; it’s the tight relation to physical reality that’s difficult.

Although the basic AR infrastructure is improving, with better spatial tracking and object recognition services, there are a lot of other factors influencing the user experience. The physical situation and the user are two highly unstable factors. A dynamically changing environment requires a very thorough, flexible and smart understanding of the space. But people standing in the wrong spot, pointing in the wrong direction or doing the wrong things will lead to a failed AR experience too. Designing interactions on a screen is easy; it’s easy to detect swipes and clicks. But handling interactions in the real world (without standing next to the person to assist) is a challenge of a different kind.

First-time AR creators should be ready to experience some setbacks. But don’t give up and turn to VR, just because you’re in full control there. Go for the challenge to make it happen in AR!

What are your predictions for the AR industry and technology in the next 2, 10, and 25 years?

2 years

In the next few years we’ll get a better understanding of the types of AR wearables on the market, and which one to wear in which situation and for which purpose. The heavy-duty glasses equipped with 3D tracking sensors will be useful when we need virtual items positioned at specific spots in our physical surroundings. But lightweight AR glasses will be sufficient for most occasions. They’ll show notifications originating from the cloud, or react when something is detected in the camera feed. Sometimes this leads to an instant reaction, but the parsing of everything in your vicinity happens for another reason too. The “lost keys app”, for example, uses the data gathered from the ongoing analysis of everything within your field of view. But that information gathered throughout the day is also used to train our personalized AI cloud.

10 years

At first, some people will wear their AR glasses only once in a while. But they’ll experience that the usefulness of their device increases rapidly the more they use it. AR wearables will learn a lot about their users by analyzing their environment and their behavior. As a result, the recommendations will become better, more relevant and more valuable. And that will be an incentive to wear the device more often, so this process will accelerate, ending in a situation in which we wear our glasses most of the time.

25 years

25 years from now, the last hardware hurdles will be solved. People who objected to wearing AR glasses will eventually be equipped with built-in displays in their retinas. Living in an augmented reality is going to be the default for those who can afford it. The type of software and the clouds you’re in will define your quality of life. There will be a micropayment mechanism for the moments when AR intervenes in your life. And hopefully there will also be a public domain zone, where both creators and users can use AR with all of its powerful features without the constraints imposed by the commercial Big Tech companies. Their role as patent-fighting entities and gatekeepers of our augmented world is one to be wary of.

What do you think will be the positive and negative consequences of living in an augmented reality future?

Augmented reality will mean the whole world gets an update, with interfaces popping up whenever there seems to be a need for one. Actually, it’s us humans who will get an upgrade, empowered with an additional intelligence that pro-actively guides us and advises us on what needs to be done and what not. It will radically change the relation between us and the world around us.

But with all that technology in our lives, it will be a challenge to be your own robot. We will be controlled by messages, notifications and instructions showing up in our heads-up displays, but will we still be in control of what’s controlling us? A negative consequence of living in a customizable AR future is that it’s a lot of work to configure yourself. And that means there’s a risk that for some people it will be too much ICT, causing them to switch to auto-pilot mode and accept all default suggestions and instructions. AR will play a major role in the increasing influence of AI on our everyday life. And although we might experience a perfect life in the future, that’s only valid from an efficiency perspective, judged by an algorithm whose core values and inner workings we can’t oversee.

How to make sure that our future AR world will be valuable to us humans, instead of the other way around?

The commercial potential of AR is huge. For Big Tech it’s going to be very attractive to take control of the ‘eyeballs’ of the whole world population. But it would be a sad outcome if such a marvellous medium became the domain of business only. Therefore it’s important that we keep an eye open for AR apps and experiences that might not have a proper business model, and that therefore might not be as shiny as the high-investment apps. To keep the AR universe an open playground for everyone, it’s important that we stimulate a variety of creators by being an audience for them too. Besides a lot of functional purposes and meaningful use cases, there are a lot of opportunities for AR in the world of the arts too.

TEDx: How to be your own robot?

When I was a child I enjoyed programming my Commodore 64. I felt powerful, being able to create anything and letting the machine do what I wanted, just by summarizing an idea into clearly defined instructions and rules. If this, then that. Some people think of programming as something obscure or complicated, but it’s as simple as: if it rains, I take an umbrella with me. When a player presses the left key, the character on the screen moves left. The basic principles are still the same, but the way computer creations manifest themselves has changed. And their impact too.
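To show how little magic there is to it, here’s that left-key rule as a minimal TypeScript sketch; the function and key names are hypothetical illustrations:

```typescript
// "If this, then that": the player presses a key, the character moves.
function onKeyPress(key: string, characterX: number): number {
  if (key === "ArrowLeft") return characterX - 1;  // move left
  if (key === "ArrowRight") return characterX + 1; // move right
  return characterX; // any other key: stay put
}

console.log(onKeyPress("ArrowLeft", 10)); // prints 9
```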

If we look at the developments of the computer industry, we actually see the changing role and position of computers in our society. With the capacity and power of computers growing, and their size and price shrinking, we see a growth in the number of tasks for which they can help and assist us. They started having an opinion about the world when they got connected to the internet. And with the mobile revolution, and the upcoming augmented reality revolution, a computerized vision of the world will be fully merged into day-to-day reality.

But obviously we’re worried about this. We collectively complain about the side-effects of being so utterly connected to the whole world while no longer having an eye for our surroundings and the people in our vicinity. And there’s discussion about the influence of social media companies. Their scripts define what we see and encounter, mixing ads and fake news into the flow of content we absorb each day. Currently, this only applies to what we see on our computers and mobiles. But once the age of wearables takes off, transparent screens in front of our eyes will have a continuous impact on what we see, and the impact will extend to much more of our behaviour and what we do.

Our complaints have been heard. Facebook, for example, has announced changes to its algorithms to let us see more of the activities of our friends and family. But why should one company decide and define this focus for the whole world’s population? Their algorithm is a black box. The only adjustment we can make is to switch groups of people and their content on or off. When switching everything off, an empty feed is the result.

There’s a growing community of people who are worried about the power of Facebook, which is based on all the data it feeds to us and grabs from us. Its algorithms study what we see, like and do. There are suggestions to urge these companies to let users own their own data, but that’s only half a solution. We want to own our data, but we also want to know which insights are derived and which metadata is collected, and how that influences what we see in our feed.

So the split-up of these social media companies needs to include their algorithms too. Just as IBM was once pushed to unbundle its software from its hardware, and Microsoft was pressured to decouple its Internet Explorer browser. As users of social media, we want transparency. We want to be able to decide which algorithm runs our feed. Some people will want ease of use. Some will want a fully customizable mechanism. If it rains, I take an umbrella. If it rains, I take an umbrella, except when it storms and I only need to walk a little distance, but not when .. etcetera.
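As a sketch of what such user-authored rules could look like, here’s the umbrella example in TypeScript, exception included. The rule format and context fields are hypothetical illustrations, not a proposal for a standard:

```typescript
// User-authored "if this, then that" rules, with exceptions.
// The Context fields and rule format are hypothetical illustrations.
type Context = { raining: boolean; storming: boolean; walkDistanceKm: number };

type Rule = {
  description: string;
  when: (ctx: Context) => boolean;
  then: () => void;
};

const myRules: Rule[] = [
  {
    description: "In a storm over a short distance, skip the umbrella",
    when: (ctx) => ctx.raining && ctx.storming && ctx.walkDistanceKm < 0.5,
    then: () => console.log("Leave the umbrella at home"),
  },
  {
    description: "If it rains, take an umbrella",
    when: (ctx) => ctx.raining,
    then: () => console.log("Take an umbrella"),
  },
];

// The first matching rule wins, so exceptions are listed before the
// general rule they override.
function run(ctx: Context): void {
  myRules.find((rule) => rule.when(ctx))?.then();
}

run({ raining: true, storming: false, walkDistanceKm: 2 }); // "Take an umbrella"
```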

In the case of a news feed, we can always quit a system if we don’t agree with the lack of freedom to choose the algorithm. But the discussion about deciding which algorithm runs your feed is going to be more fundamental once the social media companies are responsible for guiding us throughout our day-to-day life, with instructive content about where and when we need to be. The mobile phone is already our personal assistant. But the assistant is becoming more than just an assistant. It’s going to be our coach, doctor and trainer, accessing stored data and live data gathered from sensors and our wearables. Instead of just matching dates and guiding us towards our appointments and business meetings, it will keep guiding us throughout our activities.

Currently, the devices don’t have the right shape yet. Most of them aren’t unobtrusive and subtle yet, but progress is being made rapidly.

Looking at the scheme of things and the flow of data, it’s clear that a black box should not be at the centre of everything, especially when the processing in the black box is increasingly based on artificial intelligence and untraceable neural networks. Instead of a black box at the centre, we should be at the centre. But for that to happen we need a new design for humans, once we really enter the cyborg era. We need an API, an application programming interface. A plug-in structure, so developers or fellow users can create scripts or services, and we are free to decide which specific components make up our day-to-day routine. We can shop around to include components by other companies, governments or open source communities. Or we can write our own scripts and logic. And we’d better start doing that now, because when looking into the future by studying the patents that are already being filed today, we can see how the tech industry is getting ready to deploy us as robots. With the tech companies being too big to beat, a battle between the big five seems unavoidable. Will the only freedom of choice be to choose a flavor? Will you end up in a Google reality, an Apple universe, or will Amazon augment your reality?
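To make the plug-in idea concrete, here’s a minimal sketch in TypeScript of what such a personal API could look like. Every interface and name below is a hypothetical illustration, not an existing specification:

```typescript
// A sketch of a "human API": a plug-in structure where the user, not a
// black box, decides which components make up their day-to-day routine.
// All interfaces and names are hypothetical illustrations.
interface Suggestion { text: string; source: string }

interface RoutineComponent {
  name: string;
  provider: "self" | "open-source" | "company" | "government";
  suggest(hourOfDay: number): Suggestion | null;
}

class PersonalAPI {
  private components: RoutineComponent[] = [];

  // The user decides what gets plugged in...
  install(component: RoutineComponent): void {
    this.components.push(component);
  }

  // ...and what gets thrown out again.
  uninstall(name: string): void {
    this.components = this.components.filter((c) => c.name !== name);
  }

  // Collect suggestions from all installed components; nothing is hidden
  // in an unreadable feed-ranking black box.
  suggestions(hourOfDay: number): Suggestion[] {
    return this.components
      .map((c) => c.suggest(hourOfDay))
      .filter((s): s is Suggestion => s !== null);
  }
}

// Example: a self-written routine running alongside whatever else is installed.
const me = new PersonalAPI();
me.install({
  name: "morning-walk",
  provider: "self",
  suggest: (h) => (h < 9 ? { text: "Go for a walk first", source: "morning-walk" } : null),
});
console.log(me.suggestions(8)); // [ { text: 'Go for a walk first', source: 'morning-walk' } ]
```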

Assisted by devices, it’s going to be a fantastic DIY reality, we’re told. But are you really doing things by yourself if you’re following the strict guidance of a device that controls you? Is it going to be a challenge to be your own robot? It will be, if we don’t change things radically right now. And that will be the real challenge.

For some people, it will be enough to at least know which scripts are running their life. For others, it will require a bit of programming. But that doesn’t mean it’s going to be complicated. It will be a matter of translating your preferences, your interests, your taste and your default behaviour into rules. Rules that will apply to you. We’ve already slowly grown accustomed to the influence of software and devices on our life, and we react based on what they tell us. So nothing is new. A device will tell us what to do, what to say and what to feel, but at least we’ll know why.

Initially, these scripts will be basic. Definitely not comparable to the fine-tuned control the big companies will be able to give us, based on their years and years of monitoring us. But if we start building, shaping and creating our own behavioural scripts now, and perhaps even share some of our routines in an open source community, we can collaboratively create a hive mind that’s really us. Operating free of commercial purposes or shareholder interests. Start programming yourself today and escape the inevitable future! Be your own robot!