
Augmented Reality definition


Industry 4.0, augmented reality and smart logistics concept (© zapp2photo / fotolia.com)


Augmented reality is what happens when you mix the technology behind VR with the real world – and the possibilities are incredible. Learn where the technology is right now and how it is set to transform our lives in the coming years.

Right now, there is a huge amount of buzz and excitement surrounding virtual reality. Ever since the unveiling of the Oculus Rift several CES expos ago, the world has been readying itself for a future where VR is finally part of our daily lives. But the technology that powers VR is capable of more than just gaming. In fact, it can also lead to some entirely different experiences such as augmented reality (AR) and ‘mixed reality’. What do these terms mean? How are they different? And why should you be excited? Let’s take a closer look.

What is AR?

Augmented reality is effectively the world you see around you augmented with digital elements. So that might mean looking down a high street through a pair of goggles and seeing it labelled with discounts, deals and directions. Or how about if you could lift your phone up, point it at someone in front of you and then be told whether they are single or not?

Augmented reality can also be used for gaming: whether that means driving a remote-controlled car around your living room floor and along the ceiling, or playing Pokémon Go and capturing monsters out in the wild.

And in other cases, it can even be a powerful productivity tool. For instance, how about being able to take virtual monitors and pin them to the walls of your room? This is precisely the idea behind the exciting HoloLens from Microsoft – an augmented reality device (which Microsoft describes as ‘mixed reality’) that is currently in development. Demonstrations of the HoloLens show how it can be used to open up multiple windows for productivity purposes and then display them on the walls of your home. Now you can have a Skype chat by talking at your fridge door, or use your living room wall as your main PC monitor.

openPR tip: Microsoft also envisages other uses for the HoloLens, including some exciting social applications. In one demo, an electrician walked a customer through the process of rewiring a fuse box by annotating directly onto the customer’s field of vision!

How it Works

In order to work, AR requires many of the same technologies as VR. These include head tracking, which allows the images on the screen to be updated in real time to maintain 1-to-1 parity with the movements of the user. Likewise, some form of input is usually required, and the screen needs to have an appropriate resolution and refresh rate to avoid motion sickness.
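To make that loop concrete, here is a minimal Python sketch of a per-frame head-tracking update. It is an illustration only: the `read_imu_orientation` and `draw_scene` callbacks are hypothetical stand-ins for whatever sensor and rendering APIs a real headset exposes.

```python
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def render_frame(read_imu_orientation, draw_scene):
    # Each frame: read the headset's orientation and rotate the virtual
    # camera to match, keeping the rendered world 1-to-1 with head motion.
    q = read_imu_orientation()   # hypothetical IMU read (w, x, y, z)
    view = quat_to_matrix(q).T   # inverse head rotation = camera view
    draw_scene(view)             # hypothetical renderer call

# With the identity orientation, the view matrix is the identity too:
print(quat_to_matrix((1.0, 0.0, 0.0, 0.0)))
```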

On top of this, AR also requires another form of technology: computer vision. Computer vision describes the ability of an algorithm to view the world around it and extract spatial information: to know if your path is blocked, where objects are in relation to one another, how far away surfaces are, and so on.
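As a rough illustration of what ‘spatial information’ means at the lowest level, here is a short Python sketch using the OpenCV library (the `opencv-python` package) to pull edge and outline cues out of a single webcam frame. The webcam index and thresholds are arbitrary example choices, not recommendations.

```python
import cv2

# Grab one frame from a webcam (assumes a camera at index 0).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Sharp intensity changes are a basic spatial cue: object boundaries.
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print(f"Found {len(contours)} candidate object outlines")
```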

Computer vision is itself one of the key applications of another technology: machine learning. Machine learning is a subcategory of artificial intelligence – not the part of AI responsible for how a program responds or behaves, but the part that handles pattern recognition and learning. Machine learning algorithms use large amounts of data collected from apps in order to spot patterns – in this case, patterns in images that give clues as to depth, sharp edges and so on.
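Here is a deliberately tiny, toy illustration of that idea in Python with scikit-learn: a classifier learns to tell ‘edge’ image patches from ‘flat’ ones purely from labelled examples. Real vision systems use far larger datasets and deep neural networks, but the principle – spotting patterns from data rather than hand-coded rules – is the same.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic 8x8 grayscale patches: "flat" ones are near-uniform noise,
# "edge" ones have a bright/dark step down the middle.
flat = rng.normal(0.5, 0.05, (200, 8, 8))
edge = rng.normal(0.5, 0.05, (200, 8, 8))
edge[:, :, 4:] += 0.4  # a sharp intensity step = an edge

X = np.vstack([flat, edge]).reshape(400, -1)
y = np.array([0] * 200 + [1] * 200)  # 0 = flat, 1 = edge

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# The model has never seen this patch, but recognizes the pattern:
test = np.full((8, 8), 0.5)
test[:, 4:] += 0.4
print("edge" if clf.predict(test.reshape(1, -1))[0] else "flat")
```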

Almost every modern phone is capable of basic computer vision and spatial awareness. If you play a game like Pokémon Go, then basic computer vision, the camera and the gyroscopic sensors all allow you to move the phone around and see the position of the character on the screen change as though it were really inserted into the world around you.
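Conceptually, what an app like this does every frame is project a world-anchored 3D point into the phone’s camera image. Here is a minimal Python sketch using a standard pinhole camera model; the focal lengths and image centre are made-up example values, not real device parameters.

```python
import numpy as np

def project_to_screen(point_world, R, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a world-anchored 3D point into pixel coordinates.

    R is the phone's current rotation matrix (e.g. derived from the
    gyroscope); fx/fy/cx/cy are illustrative camera intrinsics.
    """
    p = R @ point_world          # world -> camera coordinates
    if p[2] <= 0:
        return None              # point is behind the camera
    return fx * p[0] / p[2] + cx, fy * p[1] / p[2] + cy

# A virtual creature anchored 2 m in front of where the phone first
# pointed lands at the image centre while the phone hasn't rotated:
print(project_to_screen(np.array([0.0, 0.0, 2.0]), np.eye(3)))
```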

But for more high-tech applications, such as those seen with the HoloLens, we need much more advanced algorithms and sensors. For instance, more advanced AR hardware will use a dual-lens set-up in order to gauge depth. This works just like human vision: comparing the differences between two slightly offset images in order to ascertain information such as distance and size.
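OpenCV ships a classic block-matching implementation of exactly this stereo principle. The sketch below assumes two hypothetical, already-rectified grayscale captures, `left.png` and `right.png`; the disparity it computes is inversely proportional to distance.

```python
import cv2

# Hypothetical rectified captures from a dual-lens camera rig.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each patch in the left image, how far it
# shifted in the right image; bigger shift (disparity) = closer object.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Per pixel: depth = focal_length * baseline / disparity.
print(disparity.min(), disparity.max())
```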

AR Devices and Apps

As mentioned, any regular smartphone should be able to handle very basic computer vision and therefore very basic AR applications. There are many examples of apps that already do this:

Pokémon Go: As mentioned, Pokémon Go is an example of an AR app. It uses computer vision in order to place Pokémon into the world around the player and then lets them capture those creatures via a short minigame.

Wikitude: Wikitude is essentially ‘Wikipedia for the real world’. It allows you to hold your phone up to the world around you and get useful contextual information about the things in it – commercial sites, landmarks, people and so on. Another similar app is Layar, which does effectively the same thing. Neither has quite caught on in a big way yet, but both are interesting glimpses of what’s possible.

Snapchat: Snapchat is the social app that allows users to take photos and then send them to friends and family for a limited amount of time. The point is not storing photos permanently, but rather sharing a moment temporarily. Snapchat had somewhat lost its luster, and many commentators were shocked at its decision to turn down an acquisition offer (most famously from Facebook). However, a new feature – filters – allowed the app to get back on top. These let users turn the front-facing camera on themselves and change their face in a number of amusing, attractive and sometimes horrific ways. Whether you want cat ears, an aged face or something else, Snapchat can do it. This requires impressive computer vision in order to identify facial features and follow them as the subject moves (see the sketch after this list). Another similar app is FaceApp.

RjDj: RjDj is a fascinating app for iOS that has been around for a long time. It is an augmented reality app, but unlike other AR apps it focuses on sound rather than vision. Yes: AR is any form of reality that has been digitally enhanced. In this case, users listen to ambient noise picked up by the microphone and altered in real time on its way to their headphones. The result is that the world starts to sound trippy, echoey, magical, surreal… all depending on the setting you choose.
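For a taste of the face tracking behind Snapchat-style filters, here is a minimal Python sketch using OpenCV’s bundled Haar-cascade face detector. It only finds a bounding box per face; real filters add detailed facial-landmark models on top so that ears and masks stay pinned as you move. The input file name is a hypothetical example.

```python
import cv2

# OpenCV ships pre-trained Haar cascades for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("selfie.jpg")  # hypothetical input photo
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    # A filter would render its overlay relative to this rectangle,
    # updated every frame as the face moves.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", frame)
```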

openPR tip: In order to access more advanced AR applications, however, more powerful forms of this technology – and specialist hardware – are needed.

One excellent example of this is Project Tango, an initiative from Google focused on bringing additional computer vision capabilities to a vast range of handsets by introducing a standardized sensor array.

Moreover, we’re beginning to see many VR and AR headsets that use inside-out positional tracking. Unlike the HTC Vive and Oculus Rift, these solutions do not rely on external arrays of sensors to track movement but instead on technology built into the unit itself. Potentially, this could allow full freedom of movement in a virtual setting with no boundaries and no wires. This is the same technology used in the HoloLens.

The Future of AR

The future of AR is very bright. Right now, Microsoft is pushing for more mixed reality in Windows 10 and is set to release a slew of devices. Lenovo and HTC are both poised to release standalone, inside-out-tracking headsets which, in theory, will also be capable of mixed reality and AR.

These will create a whole slew of new potential applications which will no doubt drive demand. For instance, imagine being able to visualize what a new item of furniture will look like in your home before you purchase it: being able to see a new table, sofa or entire new room layout suggested by an interior designer.

Some of this technology already exists: the Dulux app, for instance, lets you see what your walls would look like painted in specific colors. The problem is that, as of now, the app doesn’t work all that well and the preview is a far cry from the reality. Imagine if it worked well. Similarly, you could virtually try on an outfit before buying it.

How about being able to sit in your living room and change the color of the walls at will?

Or what about being able to head outside to train at your local park by diving over obstacles and fighting bad guys? What if ‘training’ became as exciting as playing a computer game? Of course, headsets will need to get lighter and less sweaty and limb tracking will need to improve… but it’s possible!

But what about the more data-oriented options? It is very possible that in the future, we might see a return of Google Glass.

Google Glass

Google Glass is an incredible concept – the promise of true augmented reality that can be worn seamlessly on our faces and that will provide us with live, updated information about the world around us as and when we need it. The way the device works is simple – it picks up information about the world around you using a built-in camera, and then displays this information on a tiny screen suspended just in the periphery so that you can see it when necessary. At the same time, the device listens for voice commands as well as gesture controls on the side of the frame. This way it can display directions to help you get wherever you need to be, it can bring up the results of a quick Google search, and it can record footage of whatever the wearer is seeing.

The possible applications are almost endless, and the implications are truly exciting. Unfortunately, though, the first attempt to bring Glass to market went badly and the project was cancelled. Word has it that the entire project has not been abandoned though…

So the question is, how can Google make sure that the Glass project becomes a hit in the vein of the Oculus rather than a disappointment?

Get the Developers on Board

openPR tip: For starters, Google need to make sure that developers are behind their product from day one. The smartphone industry has shown us just what a huge difference this can make – with mobile ecosystems living and dying on the number of apps they have.

There's no way that Google can envisage every possible use for their device, but put it in the hands of developers and pretty soon you will begin to see creative uses springing up and that all-important 'killer app' will be only a matter of time.

How do Google ensure developer support? First, it means supporting those developers – by providing the best possible SDK and documentation, with regular updates and lots of communication. At the same time, it means rewarding development – currently all apps on Glass are free downloads, which does little to entice creatives who want to earn a little money. It also means making sure not to implement too many restrictions – allowing developers to access as much of the device as possible to really get every possible use out of it.

Have a Smart Strategy

For many, the idea of Google Glass is going to be a hard sell. Glass has obvious appeal to early adopters, but the average Joe is hardly going to want to wear something so Borg-looking in public and risk being the subject of ridicule.

Thus, Google are going to need to have a smart marketing campaign and strategy in place in order to see mass adoption. This might mean targeting those early adopter types and those developers in a very direct way to make sure that there's an elite 'core' of users with the devices. This will instantly make Glass more desirable and fashionable, and you will start to see more people wearing them as a result until they become commonplace.

At the end of the day, the technology has improved drastically to make such a thing possible, and people are likely to become increasingly accepting as they grow more accustomed to experiencing AR in other walks of life.

Google’s Vision for the Future: AR Everywhere

Which brings us to Google’s grand vision for the future: a world of AI, AR and the entire world indexed and searchable.

Many people are bemused by the current direction Google is taking. This is a company that has always been about indexing the world’s information and making it searchable. AI is the latest tool it is using to do this, which is why Google Assistant is front and center in its plans. Google wants everyone to use Google Assistant.

For all this to happen though, Google has needed to tighten the integration between its hardware and software. It has had to take back control over the direction of Android and lead the charge with its own hardware offerings.

Sounds a bit like someone we know. In fact, if you look at Google’s line-up of devices, there’s now a Google alternative for nearly every major Apple product. There’s no Google Watch yet but I’m sure the prospect of a watch with Google Assistant is a tempting one. So, was Apple right in its approach all along? And what does this mean for OEMs?

If Google’s focus is now on providing us with a game-changing AI service that benefits from tight integration with hardware across multiple devices, why would you choose to use a device from another OEM that won’t have those same brains? Especially when Google’s devices will also benefit from buttery smooth performance thanks to that tight integration and faster updates?

But whatever way you slice it, it still means more competition for Samsung and co., and it’s not impossible to imagine that this could eventually result in some companies migrating away from the Android platform.

What’s more likely in the short term is that we’ll see companies offering their own AI solutions and this is precisely what has been happening. The obvious contender is Samsung’s Bixby, which likewise is all about delivering AI through a tighter integration with the phone’s hardware. Bixby even has its own button!

Meanwhile, the Kirin 970 from Huawei promises enough power to handle AI on-board, meaning that commands won’t need to be sent to the cloud for processing. This theoretically gives it an advantage over Google Assistant, making responses faster and more secure. I’m surprised that Google didn’t pack an equivalent chip into its own hardware this time around, but it’s probably a safe bet for the Pixel 3 – especially seeing as Apple is also in on the act with the Neural Engine in its A11 Bionic chip. (Note that Clips handles all of its processing on-board.)

This should mean more options for consumers, but it could also lead to increased fragmentation. It's already annoying having two separate assistants on the S8+, but such is the price we pay.

The end result is that companies will either a) adopt Google Assistant or b) add their own AI offerings.

And when they do this, they’ll also be creating a whole host of new AR possibilities.

openPR tip: Google Assistant has Google Lens: the ability for users to point their phone at a product and then search for that item across stores. Bixby offers similar functionality through Bixby Vision.
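To show how little is conceptually involved, here is a rough Python sketch of the Lens-style idea using a pre-trained classifier from torchvision: recognize what the camera sees, then turn the label into a shopping search. A real system matches against product databases rather than the 1,000 generic ImageNet classes used here, and `product.jpg` is a hypothetical input.

```python
import torch
from PIL import Image
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

# Load a small pre-trained image classifier and its preprocessing.
weights = MobileNet_V3_Small_Weights.DEFAULT
model = mobilenet_v3_small(weights=weights).eval()

img = weights.transforms()(Image.open("product.jpg")).unsqueeze(0)
with torch.no_grad():
    label = weights.meta["categories"][model(img).argmax().item()]

# Turn the recognized label into a shopping search query.
print(f"https://www.google.com/search?tbm=shop&q={label.replace(' ', '+')}")
```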

By understanding the world around them, virtual assistants will be able to do so much more. How about face recognition and then pulling up someone’s Facebook page?

What’s more, the same technology that powers these AI assistants also powers computer vision: the ‘neural chips’ that run machine learning algorithms for recognizing human language are equally suited to the machine learning algorithms used for vision. This is demonstrated by the Pixel Visual Core in the new Pixel phones. When phones can handle all of this on board (which some already can), we’ll start seeing incredibly powerful and life-changing AR applications. And the push for mobile VR thanks to the Gear VR and Google Daydream will only accelerate this, while also ensuring all our phones have dual lenses and 4K+ displays.

The future is incredibly exciting.


Augmented Reality Market analyses the report based on customer demand, supply and market size, current trends, issues, challenges, Forecasts, competition analysis. The report monitors the key trends and market drivers in the current scenario and offers on-the-ground insights and Futuristic Market Trends. Ask for Sample Copy of This Report @ https://www.orianresearch.com/request-sample/1147143