Showing posts with the label Hardware.

Metaio Opens New Hardware Research and Development Office in Dallas

Metaio announced today the opening of its brand-new office in Dallas, Texas, as part of an internal company initiative to invest resources in the research and development of Augmented Reality technologies.

At this year’s Mobile World Congress, Metaio presented the AREngine, which we have already covered on this blog: the first hardware IP expressly designed to accelerate new Augmented Reality interfaces such as Google Glass.
Here is the rest of the article, still pending translation:

Original article: Metaio Opens New Office to Ramp Up Augmented Reality Hardware R&D

Dallas, TX – 17 April 2013

Metaio and its network of clients and developers already benefit from an award-winning R&D team in its headquarters in Munich, Germany. As an expansion to the existing research efforts, the opening of the brand-new office in Dallas will focus entirely on advancements and platform research in order to engineer the next generation of Augmented Reality software and devices.

Metaio has long been involved in collaborative research projects with partners in the mobile handset industry and will continue these strategic developments in designing hardware that will bring Augmented Reality into the future.

"Optimizing the software helps, but it’s not enough," said Metaio CTO Peter Meier. "In order to enable the kind of 'Always On, Aways Augmented" future that every popular representation of Augmented Reality from Iron Man to Google Glass promises, the devices themselves need to be optimized at the chipset level."

Augmented Reality experiences running on standard mobile devices tax the battery and the CPU heavily. Early research by Metaio already shows a drastic reduction in power consumption and increases in initialization speed by up to a factor of 60 when running on the AREngine. These types of performance improvements are absolutely necessary to enable everyday use of this incredible technology.

Metaio has grown substantially since expanding to the US and is hiring for multiple positions, including chipset R&D. Learn more at www.metaio.com/career

###

About Metaio

The worldwide leader in Augmented Reality (AR) research and technology, Metaio develops software products for visually interactive solutions between the real and the virtual world. Based on the Metaio Augmented Reality platform, digital and 3-D content can be integrated seamlessly into the user’s camera view of the real world. Powering over 1,000 apps for enterprise, marketing, retail, publishing, and industrial use cases, Metaio’s AR software reaches over 30 million consumers. Learn more at www.metaio.com
Continue reading

First Look: How The Google Glass UI Really Works (Co.Design)


Google Glass has arrived like a piece of sci-fi memorabilia sent from the future. But with all the talk about wearing the Internet on your face and whether or not these glasses can ever be fashionable, the most obvious and important story has gone untold: What is Google Glass actually like to use?


In a 50-minute presentation at SXSW, Google’s Timothy Jordan walked developers through what the Google Glass "Mirror" API and interface look like today. And if we were to sum it up in just a few words, it’s a multimedia Twitter feed for your eyeballs. Here’s how that actually plays out:




THE BASIC CONTROLS


The Glass screen sits out of view, and it’s usually off, just like your cellphone screen. Its frame is essentially a trackpad with three main gestures. Tap once with a finger to select. Slide your finger along the temples to scroll. Swipe down to dismiss a screen.


To start things off, tap the frame with a finger (or nod your head), and you end up at Glass’s home screen. From the home screen, you can do a few things:
Swipe your finger down on the frame to dismiss the screen and go about your day--it’s basically the same thing as Android’s back button.
Tap again and say “OK Glass” to issue a command, like “take a picture” or “Google how to use Glass.”
Slide your finger back along the frame to view a few Google Now-esque “cards”--like the weather report.
Slide your finger forward along the frame--and this is the heart of the experience--to move through a “timeline” of everything, from the photo you just took, to a search you just made, to a video you were sent, to a notification you received earlier from the New York Times. This is how Glass is much like Twitter, or may be, assuming you subscribe to several services for updates (see the sketch just below).
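
For developers, the plumbing behind that timeline is the Mirror API: a REST service you POST “timeline items” to, which Glass then renders as cards. The API hadn’t gone public at the time of the talk, so here is only a rough Python sketch of that shape, assuming you already hold a valid OAuth 2.0 access token (the token value and card text are placeholders):

    import json
    import urllib.request

    # Placeholder OAuth 2.0 bearer token; obtaining one is out of scope here.
    ACCESS_TOKEN = "ya29.your-token-here"

    def insert_timeline_item(body):
        """POST one timeline item (a card) to the Mirror API."""
        req = urllib.request.Request(
            "https://www.googleapis.com/mirror/v1/timeline",
            data=json.dumps(body).encode("utf-8"),
            headers={
                "Authorization": "Bearer " + ACCESS_TOKEN,
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # the created item, its id included

    # The simplest possible card: plain text on the wearer's timeline.
    card = insert_timeline_item({"text": "Hello from the Mirror API"})
    print(card["id"])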



SOME CARDS ARE REALLY BUNDLES



So, you have cards--essentially your simplified app experiences. Now say you subscribe to the Times. They might send you a card (a headline and a photo) every hour.


Now you can ignore it, or you can tap Glass to explore more.


Because some cards aren’t just cards. They’re bundles. Any card with a pagefold in the upper right-hand corner is hiding its own timeline. So to cycle through stories within that Times bundle, you swipe through just as you would your main Glass timeline.
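
In Mirror API terms (again, a sketch ahead of the public documentation), a bundle is just a set of timeline items that share a bundleId, with one item flagged as the cover card. Reusing the hypothetical insert_timeline_item helper from the earlier sketch:

    # Hypothetical hourly Times bundle: the cover card shows up in the main
    # timeline, while the remaining stories live in the bundle's sub-timeline.
    stories = ["Headline one", "Headline two", "Headline three"]

    insert_timeline_item({
        "text": stories[0],
        "bundleId": "nyt-hourly",  # hypothetical bundle identifier
        "isBundleCover": True,     # this card fronts the bundle
    })
    for story in stories[1:]:
        insert_timeline_item({"text": story, "bundleId": "nyt-hourly"})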


MORE OPTIONS



So here you are, sliding through a bundle. You reach a card--maybe it’s a news article--and you wish you could hear more. Well, you can. You tap the card, and you’ll be presented with a timeline of options, like “read aloud” or, theoretically, “share.”


If you were in email instead, tapping on that card would bring up “reply.”


If you were in Path, tapping a card might allow you to swipe through your reaction choices: smiling or frowning.
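
In the same API sketch, those per-card options map onto a menuItems list attached to each timeline item, drawing on built-in actions such as READ_ALOUD, SHARE, and REPLY (the card texts below are made up):

    # A news card offering the two options described above; tapping the card
    # on Glass opens this list as its own little timeline of options.
    insert_timeline_item({
        "text": "Your hourly headline goes here",
        "menuItems": [
            {"action": "READ_ALOUD"},
            {"action": "SHARE"},
        ],
    })

    # An email-style card would carry REPLY instead.
    insert_timeline_item({
        "text": "Re: lunch?",
        "menuItems": [{"action": "REPLY"}],
    })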



SO IT’S ALL ABOUT THE TIMELINE!




Glass is essentially several layers of timeline.

You have your main timeline. Kinda like your Twitter feed of things you’ve done and received.
You have a timeline with information (or cards) hiding in bundles.
You have a timeline of options (share, reply, etc.) hiding in cards.


Yes, it’s still a little bit complicated to understand, because surprisingly there’s still no single video from Google that drives the idea forward. But most workflows will involve tapping something you’re interested in, then sliding through cards, then tapping a card, then sliding through options.


Lost? Just flick downward to get the heck out of there, like on an Android phone.

IT’S ALL ABOUT LIMITING INFORMATION AND OPTIONS



Google understands Glass’s biggest potential downfall: that a world of messaging and media distracts us from the world in front of our face. A deeper look into their interface logic reveals how they’re encouraging developers to deal with it: push the simplest of updates to a user, and give them either one piece of information or one option of what to do with that information, per screen.


So far, so good. Assuming your finger can buff up enough to slide around on Glass’s frame fairly often, you’ll be able to do a lot quickly on the platform. But at the same time, Google is admitting that Glass won’t do everything. It’s not providing the info-dense content we have on laptops, or even smartphones. And at least according to this presentation, the interface isn’t really geared at all toward creating immersive, augmented-reality applications.


Truth be told, Glass may seem a bit less ambitious, once you break down its UI architecture. But often, restraint takes as much ambition as anything else. Glass needs to aspire to be usable, first and foremost. And that’s exactly what it’s going for.


See the video here.


[Hat tip: The Verge]


Note: I did my best to accurately summarize and present the workflows in the video presentation. It’s possible, probable even, that I’m using some terms a bit differently than Google at times, or that I even screwed something up. We’ll all know more once the Glass Mirror API goes public.
Continue reading

Metaio Presents the First Mobile Augmented Reality Chipset

The first Augmented Reality chipset for mobile devices, presented at Mobile World Congress 2013 together with ST-Ericsson.

Augmented Reality is one of the ten technologies of our decade that will do the most to change the world, and it is expected to become a standard feature of the next generation of smart devices. Metaio, as always at the forefront of this technology, uses this chipset to enable the insertion of almost any 3-D content and virtual animations into the real world through the recognition of images, objects, and entire environments.

In a mobile future that clearly demands smart devices that are “always on” and connected, Metaio’s hardware Augmented Reality IP, the so-called ‘AREngine’, drastically reduces power consumption, making AR experiences possible and ever closer to everyday use.
Continue reading