Infinity AR

InfinityAR’s technology can turn any device into a powerful content augmentation platform.
End Market(s): Media & Entertainment, General Business Services and Logistics
URL: http://www.infinityar.com/

Company Type: Software Developer and SDK Provider

Summary: Infinity AR’s vision is to create a new digital environment in which people interact naturally with augmented content in their physical surroundings. InfinityAR’s technology turns any device into a powerful content augmentation platform using basic, affordable hardware – simple stereoscopic cameras. Its advanced AR development engine builds an accurate digital 3D representation of the user’s current physical environment, letting users interact intuitively with augmented content through natural hand movements.

Developers of wearable and mobile device apps can use the InfinityAR engine to easily and efficiently develop advanced AR applications and provide a rich AR experience to users, all with an extremely natural and intuitive user interface.

ODMs and manufacturers of mobile-device hardware and wearables can use the InfinityAR engine’s comprehensive capabilities to give developers an easy-to-use yet sophisticated and fully compatible AR development platform. This facilitates the creation of a rich AR app portfolio for wearable or mobile devices, making them more appealing to the mass market. Finally, the InfinityAR engine works with affordable hardware, which directly benefits BOM cost, competitiveness and performance.

Infinity Augmented Reality Inc. is publicly traded on the OTCQB marketplace (OTCQB: ALSO).

Leadership:

Investors:

  • Infinity AR is publicly traded on the OTCQB marketplace under the ticker symbol ALSO

Product / Service Offerings:

The Infinity AR Engine: The Infinity AR engine uses very basic and affordable hardware – 2D stereoscopic cameras – to provide:

1. An accurate digital 3D scene representation of one’s current physical environment

  • Enabling an intelligent understanding of the mapped 3D scene by creating a depth map and 3D reconstruction (a depth-from-stereo sketch follows this list)

2. Information about a broad set of essential environmental factors that are crucial for building a high-quality, true-to-life AR experience, such as:

  • Light sources, reflections, transparency, shadows, etc.
  • Recognition of real world objects, their physical characteristics and how they affect the scene

3. Ongoing analysis of user orientation and position in the environment

  • Sensing and presenting the environment from the user’s point of view, which changes continuously as the user moves.
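
For illustration, here is a minimal depth-from-stereo sketch in Python with OpenCV, showing how a disparity map computed from a stereoscopic pair converts into per-pixel depth. It demonstrates the general technique only; the file names and calibration values are hypothetical placeholders, not InfinityAR’s actual pipeline.

```python
# Minimal depth-from-stereo sketch. Assumes a rectified stereo pair;
# file names and calibration numbers below are hypothetical.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching yields a disparity map from the pair.
matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=128,  # must be divisible by 16
                                blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Depth follows from similar triangles: Z = f * B / d, where f is the
# focal length in pixels and B the camera baseline in meters (assumed).
f, baseline = 700.0, 0.06
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * baseline / disparity[valid]
```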

How is the engine different?

  • It provides more than just 3D mapping – a detailed and accurate 3D scene representation that extracts data from the physical 3D world in real time
  • Advanced natural user interface – natural hand movements to control augmented objects
  • Face-to-face operability – two users can “look” at the same object from their different positions, even if both wear glasses
  • Works both indoors and outdoors, even in direct sunlight
  • The only AR engine using stereoscopic cameras – affordable hardware that also eliminates the need for a dedicated depth sensor
  • Efficient and smart computer vision algorithms – supporting a variety of mobile and wearable devices
  • Low power consumption

The technology behind the engine

Image matching

This component fills the most basic need of all the others: the ability to accurately and robustly identify the same points across different image frames (taken from different locations and/or at different times). It involves smart, diverse feature extraction (with sub-pixel refinement), a descriptor for each feature, matching mechanisms to pair features between frames, and robust techniques such as RANSAC to remove outliers.
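
A minimal sketch of this kind of matching stage follows, using OpenCV’s ORB features, a descriptor ratio test, and RANSAC on the epipolar geometry. It illustrates the general technique only; InfinityAR’s own extractors and descriptors are proprietary, and the frame file names are placeholders.

```python
# Minimal feature-matching sketch: ORB features, a ratio test on
# descriptor matches, then RANSAC to discard outliers. Frame file
# names are hypothetical placeholders.
import cv2
import numpy as np

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Lowe's ratio test keeps matches clearly better than the runner-up.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

# RANSAC on the fundamental matrix removes the remaining outliers.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
```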

Position and orientation

This module determines the exact viewpoint of the device, which is needed both for correct rendering of the scene and for interpreting hand movements (see below) in accurate 3D space. The process fuses data sources (the IMU and the cameras) with techniques such as the Kalman filter, and also obtains high-accuracy position and orientation from visual information alone, using highly efficient variants of optimization techniques such as single-photo resection.
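
A hedged sketch of the resection step: OpenCV’s RANSAC-wrapped PnP solver recovers the camera pose from known 3D landmarks and their 2D projections. The landmark coordinates and intrinsics below are made-up placeholders, and the IMU/Kalman fusion described above is omitted.

```python
# Minimal photo-resection sketch: recover the device pose from known
# 3D landmarks and their observed 2D projections. Landmarks and the
# intrinsic matrix K are hypothetical example values.
import cv2
import numpy as np

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                          [0, 1, 0], [0.5, 0.5, 1]], dtype=np.float32)
image_points = np.array([[320, 240], [460, 240], [460, 380],
                         [320, 380], [378.3, 298.3]], dtype=np.float32)

K = np.array([[700, 0, 320],
              [0, 700, 240],
              [0, 0, 1]], dtype=np.float32)  # assumed intrinsics

# RANSAC-wrapped PnP tolerates outlier correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 rotation matrix
camera_position = -R.T @ tvec   # camera centre in world coordinates
```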

Physical world digitization

This module creates a digital replication of the physical world: where things are, what they look like, whether they are reflective or transparent, and what light sources exist in the scene. The data is initially produced by dense stereo matching, then refined with techniques such as Structure-from-Motion, bundle adjustment, SLAM and PTAM, as well as shape analysis and ray tracing.
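
As a sketch of the core reconstruction primitive behind Structure-from-Motion, the snippet below triangulates matched pixels from two views with known poses into 3D points. The poses, intrinsics and pixel coordinates are assumed values chosen for illustration.

```python
# Minimal triangulation sketch: matched pixels from two calibrated
# views with known (assumed) poses become sparse 3D points.
import cv2
import numpy as np

K = np.array([[700, 0, 320], [0, 700, 240], [0, 0, 1]], dtype=np.float64)

# First camera at the world origin; second offset along the baseline.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
R2, t2 = np.eye(3), np.array([[0.06], [0.0], [0.0]])
P2 = K @ np.hstack([R2, t2])

pts1 = np.array([[300.0, 200.0], [350.0, 260.0]]).T  # 2xN pixels in view 1
pts2 = np.array([[310.0, 200.0], [362.0, 260.0]]).T  # their matches in view 2

# Triangulate to homogeneous 4-vectors, then normalize to 3D.
points_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
points_3d = (points_h[:3] / points_h[3]).T  # N x 3, depths ~4.2 m and ~3.5 m
```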

Control and gesture NUI

This module enables control through direct “contact” with virtual objects as well as through gestures. It requires learning typical hand poses and gestures with tools such as neural networks (both for locating the hands and for interpreting gestures). Tracking from a moving camera (via background subtraction) is used to keep latency low. Finally, predictive analysis, based on machine learning of user behavior as well as on the current scene structure, further reduces latency.
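
The sketch below shows background subtraction for cheaply localizing a moving hand, simplified to a static camera (handling a moving camera, as described above, additionally requires motion compensation, which is not shown). The webcam source and largest-blob heuristic are illustrative assumptions, not InfinityAR’s method.

```python
# Minimal background-subtraction sketch for low-latency hand
# localization. Assumes a static camera and a webcam at index 0.
import cv2

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels differing from the learned background model are foreground,
    # a cheap first pass for finding the moving hand region.
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)  # largest blob as hand candidate
        x, y, w, h = cv2.boundingRect(hand)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("hands", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```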

Customers: 

Competitors: WorldViz, Leap Motion, Augmensys, Augmate, Ubimax, IrisVR, Scope AR, Augmenta