Virtual Reality: Next-Level Job Training

There’s no question that “job creation” is a hot topic in America today. Certain industries are slowing down (e.g., manufacturing, oil) or even dying out (e.g., coal), while new ones like clean energy and energy storage are growing to meet new needs and demands. In fact, more U.S. workers today are installing solar panels on rooftops than mining coal or extracting oil and gas. Automation isn’t solely to blame, nor are jobs simply being eliminated or moved overseas: our resources and needs are changing, forcing manufacturing and oil jobs to evolve with the times and creating new jobs that require new skills. Nevertheless, companies are struggling to fill their ranks, especially as the baby boomer generation hits retirement age. American industry faces a shortage of both skilled and unskilled labor, and it needs a solution.

To enterprises struggling to maintain or grow their workforces, Virtual Reality offers a powerful new paradigm for learning and capturing knowledge. Businesses are leveraging VR to train workers of all skill levels, ranks, backgrounds and work environments, from restaurant servers to astronauts, pilots and surgeons. With immersive VR headsets and customized software, job training can be more effective, less expensive and safer than traditional methods. Read how organizations both small and large are training employees in Virtual Reality:

Walmart

By the end of 2017, the world’s largest retailer plans to provide VR instruction in every one of its 200 U.S. “Walmart Academy” training centers, making Virtual Reality an integral part of training 140,000 Walmart employees annually.

With startup STRIVR Labs, Walmart has developed a collection of virtual training experiences on topics like management and customer service to supplement traditional training methods. Each Walmart Academy will be outfitted with an Oculus Rift headset and gaming PC system. The VR content will consist of scenarios up to five minutes long, with interactive on-screen cues prompting trainees to make decisions in situations they might encounter in real life.

In one scenario, the user gets to virtually experience the Black Friday rush, while in another he or she scans the produce and deli sections of a store, learning to spot problems like missing prices and how to help customers. Walmart’s VR training program began as a pilot in thirty of its training centers.

*Walmart’s Brock McKeel will be speaking at the Fall 2017 Enterprise Wearable Technology Summit, along with innovators from Coca-Cola, Audi, Staples, Gulfstream, and more.

United Rentals

United Rentals is the largest equipment rental company in North America, providing thousands of pieces of equipment and tools for industrial and construction sites. To train its sales staff, United Rentals takes new hires through a weeklong training program, in which they’re given lectures and shown pictures of worksites. But the company has recently been testing Virtual Reality to complete the training in half that time and make it more memorable.

In United Rentals’ VR training scenario, new employees spend two minutes at the edge of a virtual construction site observing and determining what equipment is missing; as soon as the site manager (an avatar) approaches, they have to make their sales pitch. For example, if the user were looking at an excavation filled with water, he or she would learn to recognize the opportunity to rent a pump to that customer.

United Rentals plans to train more seasoned employees in addition to new hires using VR technology.

JLG

This Oshkosh Corporation company designs, manufactures and markets lift equipment for use in all industries. As one might imagine, job training at JLG can be dangerous. For instance, workers have to learn how to operate boom lifts from platforms that can be up to 185 feet above the ground. Virtual Reality presents a much safer and even more efficient way to train multiple operators at once.

With ForgeFx Simulations, JLG developed a networked training simulator, allowing trainees from all over the world to operate machines in the same virtual construction site at the same time. This style of virtual group learning is far safer than training on real machines. Already, 50 of JLG’s customers have expressed interest in the program.

Honeygrow

Honeygrow is a Philadelphia-based upscale fast-food chain serving farm-to-fork stir-fry and salads. Before the privately-owned restaurant expanded, its owner would personally welcome all new hires. Today, there are 17 Honeygrow locations from Washington to Brooklyn, and still more in the process of opening. New workers are given a written manual, and initial training is largely left up to local managers.

Seeking a better way to introduce new employees to the corporate culture and teach them best practices, Honeygrow partnered with Klip Collective to create a unique virtual reality onboarding program that could be used at all of its locations. Wearing VR headsets, trainees are greeted by Honeygrow’s owner in a virtual restaurant; they hear the company philosophy, go on an interactive tour of the restaurant, and play a game to learn food-prep techniques and important health and safety information.

Honeygrow has found that learning-by-doing in a virtual environment helps new workers grasp and retain their training. In the future, the restaurant may explore the use of Augmented Reality in addition to VR, which would allow trainees to do hands-on food prep with superimposed directions and a timer.

 

Augmented and Virtual Reality may be the answer to the current and impending labor shortage. Immersive technologies are useful for quickly and effectively training new workers (as well as recruiting them), and even a company’s most experienced employees require training at various points in their careers. While VR allows workers to train in a virtual simulation of the workplace, AR (and also Assisted Reality) enables on-the-job and just-in-time training, raising the question: Will the future connected worker even need training?

 

About EWTS Fall 2017:

The Fall 2017 Enterprise Wearable Technology Summit, taking place October 18-19, 2017 in Boston, MA, is the leading event for wearable technology in enterprise. It is also the only true enterprise event in the wearables space, with speakers and audience members hailing from top enterprise organizations across the industry spectrum. Consisting of real-world case studies, engaging workshops, and expert-led panel discussions on topics such as enterprise applications for Augmented and Virtual Reality, head-mounted displays, and body-worn devices, plus key challenges, best practices, and more, EWTS is the best opportunity to hear and learn from the organizations that have successfully utilized wearables in their operations.

3 Great Use Cases of Wearable Tech for EHS

According to the most recent data from the International Labour Organization, every 15 seconds a worker dies from a work-related accident or disease. On top of 2.3 million deaths per year from occupational accidents and diseases, over 313 million workers suffer non-fatal work injuries. The great human cost also has an economic impact: For employers, on-the-job accidents cost billions of dollars annually due to production downtime and workers’ compensation costs.

Can technology help prevent work-related accidents and diseases? Many workplace injuries could be prevented through real-time monitoring of workers. After all, connected workers – aware of (and sensed by) their environment through IoT technologies – are inherently safer.

Wearable technology can greatly improve workplace safety. For example,

  • Smart bands and sensors embedded in clothing and gear can be used to monitor workers’ health and wellbeing by tracking such factors as heart rate, respiration, heat stress, fatigue and exposure. Notifications can be sent to workers’ wearable devices when critical levels are reached (a minimal sketch of this kind of threshold alerting appears after this list).
  • Machine and environmental sensors can provide contextual information to field workers to help keep them informed and aware of their surroundings; and wearable GPS tracking can ensure they keep out of hazardous areas.
  • Smart glasses and other HUDs allow employees to access work instructions and manuals in the field, in addition to enabling remote guidance. This aids their productivity and makes them safer, since accuracy (doing a job correctly) and safety go hand-in-hand.
  • Camera-equipped wearables can also be used to document a job or incident for later review. Such data can be utilized for safety training and to identify safety issues in the work environment.
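
To make the first bullet concrete, here is a minimal, hypothetical sketch (in Python) of how a threshold check on wearable vitals might trigger a notification to a worker’s device. The metric names, limits, and notify_device helper are illustrative assumptions, not any particular vendor’s API.

    # Hypothetical sketch: per-metric threshold alerting for wearable vitals.
    # Metric names, limits, and notify_device() are assumptions for illustration.
    SAFE_LIMITS = {
        "heart_rate_bpm": 140,     # sustained heart rate ceiling
        "respiration_rate": 30,    # breaths per minute
        "heat_stress_index": 0.8,  # normalized 0-1 heat stress score
    }

    def notify_device(worker_id, message):
        # Stand-in for a push notification to the worker's smart band or headset.
        print(f"[ALERT -> {worker_id}] {message}")

    def check_vitals(worker_id, readings):
        """Compare the latest sensor readings against the safe limits."""
        for metric, limit in SAFE_LIMITS.items():
            value = readings.get(metric)
            if value is not None and value > limit:
                notify_device(worker_id, f"{metric} at {value} exceeds limit of {limit}")

    # Example: a reading streamed from a smart band.
    check_vitals("worker-042", {"heart_rate_bpm": 152, "respiration_rate": 22})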

In addition to providing real-time safety information and alerts to workers, wearable devices make for a safer workplace simply by the way in which they are used, i.e. hands-free. There are some great real-world use cases of wearable technology for environmental health and safety. Read on to learn how three major enterprises are using wearables of different form factors to augment their safety efforts:

North Star BlueScope Steel

This steel producer is working with IBM on developing a cognitive platform that taps into IBM Watson Internet of Things technology to keep employees safe in dangerous environments.

The IBM Employee Wellness and Safety Solution gathers and analyzes sensor data collected from smart helmets and wristbands to provide real-time alerts to workers and their managers. If a worker’s physical wellbeing is compromised or safety procedures aren’t being followed, preventative measures can be taken.

North Star is using the solution to combat heat stress, collecting data from a variety of sensors installed to continuously monitor a worker’s skin temperature, heart rate, galvanic skin response and activity level, along with the temperature and humidity of the work environment. If temperatures rise to unsafe levels, the technology provides safety guidelines to each employee based upon his or her individual metrics. For instance, the solution might advise an at-risk worker to take a 10-minute break in the shade.

With the IBM Employee Wellness and Safety Solution, data flows from the worker to the IBM Watson IoT platform and then to a supervisor for intervention/prevention. Watson can detect hazardous combinations from the wearable sensor data, like high skin temperature plus a raised heart rate and lack of movement (indicating heat stress), and notify the appropriate person to take action. This same platform could be used to prevent excessive exposure to radiation, noise, toxic gases and more.
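
As a rough illustration of the kind of combination rule described above (high skin temperature plus a raised heart rate and little movement), here is a short, hedged Python sketch. The thresholds, field names, and alert routing are assumptions for illustration only, not IBM’s actual Watson IoT rules.

    # Illustrative sketch only: fusing wearable signals into a heat-stress flag.
    # Thresholds, field names, and alert routing are assumptions, not IBM's rules.
    def heat_stress_suspected(sample):
        """Flag the combination of high skin temperature, raised heart rate,
        and near-zero movement described in the article."""
        return (
            sample["skin_temp_c"] > 38.0 and
            sample["heart_rate_bpm"] > 120 and
            sample["activity_level"] < 0.1   # worker is nearly stationary
        )

    def route_alert(sample):
        # In a deployment this would notify both the worker and the supervisor.
        if heat_stress_suspected(sample):
            return f"Possible heat stress for {sample['worker_id']}: advise a 10-minute break in the shade."
        return None

    print(route_alert({
        "worker_id": "furnace-07",
        "skin_temp_c": 38.6,
        "heart_rate_bpm": 131,
        "activity_level": 0.05,
    }))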

John Deere

John Deere, best known as a manufacturer of agricultural equipment and machinery, is using Virtual Reality headsets to evaluate and assess the “assembly feasibility” of new machine designs. Performing ergonomic evaluations in VR improves the safety of production employees by revealing the biomechanics of putting a proposed machine together. High risk processes can be identified and corrected before they pose a problem for the assembler on the shop floor.

In one of these VR reviews at John Deere, an operator puts on a headset and becomes completely immersed in a virtual production environment. Reviewers can see what the operator sees, and determine whether a potential design is safe to manufacture. They can see all the safety aspects that would go into assembling the product, including how the worker’s posture would be affected, whether there is a chance of physical injury, what kinds of tools would be required, etc.

John Deere believes VR-aided design evaluations can result in less fatigue, fewer accidents, and greater productivity for its manufacturing team, and the method has already proven effective in reducing injuries at the company. Learn more about this use case at EWTS 2017, where Janelle Haines, Ergonomic Analyst and Biomedical Engineer at John Deere, will participate in an interactive workshop on “Leveraging Virtual Reality in the Enterprise.”

National Grid

The electricity and gas utility company is exploring wearable tech for lone worker health and safety. National Grid believes wearables can have multiple advantages in the workplace, including improving safety as well as speeding up the process of repairs and reducing costs. The ngLabs team is responsible for looking at the latest technologies; in one of its first projects, the team is focusing on the critical worker:

The project uses interactive wristbands developed by Microsoft to monitor the health, safety and wellbeing of workers who operate alone or remotely. The smart bands track location, measure vital statistics like heart rate, and enable remote/lone workers to send a signal to colleagues when they’ve arrived on site or checked out without having to make a call or fill out paperwork. Information is captured quickly, making it easier to spot problems and send alerts if something goes wrong.

Hear more about this use case in San Diego this May—David Goldsby, Technology Innovation Manager at National Grid, will present a case study on “Digital Disruption and Consumerization in Utilities” at EWTS ’17.

 

About EWTS 2017:

The 3rd annual Enterprise Wearable Technology Summit 2017, taking place May 10-12, 2017 in San Diego, California, is the leading event for wearable technology in enterprise. It is also the only true enterprise event in the wearables space, with speakers and audience members hailing from top enterprise organizations across the industry spectrum. Consisting of real-world case studies, engaging workshops, and expert-led panel discussions on topics such as enterprise applications for Augmented and Virtual Reality, head-mounted displays, and body-worn devices, plus key challenges, best practices, and more, EWTS is the best opportunity to hear and learn from the organizations that have successfully utilized wearables in their operations.

Defining New Realities: Augmented, Virtual and Mixed

It’s a new year, so let’s start it off on the right foot or, rather, in the right reality. First item on the agenda: Clarifying our use of the terms AR, VR and MR.

As a larger community, enterprise wearable tech users, solution providers, experts and enthusiasts need to get on the same page in 2017. For one, they need to see “eye to eye” when it comes to distinguishing between Augmented Reality, Virtual Reality and Mixed Reality, for there are too many conflicting definitions out there. We cannot communicate and problem solve across industries without common understanding or a common framework.

Differing classifications for AR, VR and MR make clear communication between solution providers and end users problematic. Solution providers seem to have their own unique ways not only of describing the different realities but also of categorizing their own solutions, while end users often don’t fully understand the current capabilities and limitations of these technologies, or appreciate which “reality” would best serve their business needs.

Sibling technologies? Kissing cousins? Competing realities? And is MR truly a combination of both AR and VR?

It seems most people get the concept of Virtual Reality; it’s the differences between Augmented Reality and Mixed Reality that are less clear. End users and experts don’t seem to be on the same page, with everyone describing these new realities differently and some even throwing the term “Assisted Reality” into the mix. Let’s consider how several insiders are explaining AR, VR and MR; and then we will offer our own set of descriptions as a unifying framework for ongoing discussion.

J.P. Gownder (VP and Principal Analyst, Forrester Research) laid the academic groundwork for us during his presentation at EWTS ’16: According to this expert, Augmented Reality, Virtual Reality and Mixed Reality are a set of experiences that lie upon a continuum known as the Virtuality Continuum between the Real World and the Digital World (composed entirely of pixels). These experiences are created using “fictitious or recorded content that [was once] in the real world but is now pixelated.” Main takeaway: AR, VR and MR are different experiences that extend upon the real world—all part of what J.P. called the “Extended Reality revolution.”

So how does an expert like Gownder define the three experiences?

  • AR: One possible experience is to augment what you see by superimposing information either off to the side or on top of your field of view. (Some people distinguish between AR, in which digital info appears over your field of view, and Assisted Reality, in which information appears in a corner of your vision.)
  • VR: You can also “augment the virtual world” in what J.P. terms “Augmented Virtuality.” Good VR is achieved through 3D imagery, 360-degree viewpoints, and 3D sound—all contributing to a highly immersive experience.
  • MR: J.P. approached MR as “a special case of AR with some VR characteristics.” Instead of mere superimposed information, MR features interactive holograms integrated into the user’s real world.

On the solution side, Atheer’s Christian Prusia had a slightly different take on AR, describing it as an experience in which you see the natural world but there is a “computer overlay” that follows you, remaining in your field of view even when you turn your head. “AR is aware of the real world but the UI is floating, not fixed.” MR, on the other hand, involves mapping the real world and tying a computer image to a fixed (anchor) point in real space. Finally, in VR, “everything is fake.”

So, again, AR involves a computer overlay of information in your field of view. This information can be contextual but the display is not anchored in the real world; it moves with you. VR is an entirely generated digital experience in a virtual space; and MR consists of computer images that appear to exist within and relate to the user’s real environment.

Joakim Elvander of Sony focused more closely on the nuances among and different uses for AR, MR and what some call Assisted Reality:

  • AR involves “in-field-of-view graphics,” and is most appropriate in those cases where there is a need for superimposed information yet it is still important to see the real world. (Your FOV remains largely unobstructed.)
  • MR features “3D models [attached] to an anchor in the real-world environment,” and is great for visualization. Reality is “just a backdrop” in this experience; the user is viewing and interacting with the computer-generated model, making for a potentially obstructive experience (because MR is more immersive than AR).
  • Joakim also used the term “side-screen” in describing an experience like Assisted Reality, or what one might see through a pair of Google Glass. Assisted Reality involves purely textual or basic visual information that is not necessarily tied to the real world.

Confused yet? Some clarification is in order. Part of the problem lies in how solution providers like Christian and Joakim self-categorize or refer to their own technologies. Both used rather unique verbiage or phrasing above, while Gownder – representing Academia – drew upon the long history of these technologies. End users, for their part, seem to seek to define AR, VR and MR in terms of how they are applying them. Below we offer our own “definitive guide” to the differences among the new realities:

The EnterpriseWear Definitive Guide to AR, VR and MR

AR, VR and MR are three technologies that all create a computer-generated reality for the user to participate in, optimally through some kind of head-mounted display. Each one, however, presents its version of reality in a unique way, with computer-generated objects and images ranging from basic text and visuals to convincing holograms to lifelike simulations. What sets the three apart from one another is how those objects interact with the user and his or her environment.

Augmented Reality involves overlaying digital content onto the real world. In this experience, the user is still very aware of and can interact with his environment. For the sake of simplicity, I would argue that Assisted Reality is Augmented Reality, whether the computer-generated overlay appears in front of both eyes or just in the corner of one. The digital content can be quite basic (e.g. arrows and other universal symbols, simple text or drawn lines, perhaps triggered by your location or a verbal command or put there by a remote expert) or it might be more elaborate (a building plan, for example); but the information cannot be manipulated in a dynamic way and will remain in your field of view as you turn your head with your heads-up display on.

Mixed Reality is like the wild card in the discussion, often used interchangeably with AR though they are not the same. MR is more immersive than AR but less so than VR, blurring the line between the digital and real worlds more than AR but not replacing the real world with an entirely virtual experience as VR does.

MR is capable of 3D mapping the real world and superimposing convincing holographic images onto reality; the holograms are responsive to the real world because they are integrated into the user’s environment. Think of it this way: In AR, digital content appears on top of your view of the real world, but in MR holograms and other 3D content appear to share the user’s space and are receptive to both the user’s interaction and changes in the real-world environment.

Whereas AR and MR are additive experiences, Virtual Reality is immersive, creating a computer-generated environment that replaces the real world. The user interacts solely within this virtual world. So, in VR, your view of the real world – the real room you are standing in – disappears, being replaced with a virtual space filled with virtual objects and moving elements with which you can interact.

  • How do you distinguish among AR, VR and MR? Agree or disagree with our descriptions? Care to make your own suggestions or further clarifications? Let us know in the comments below. Let us, as a community, come up with one universal set of definitions. 

 

About EWTS 2017:

The 3rd annual Enterprise Wearable Technology Summit 2017, taking place May 10-12, 2017 in San Diego, California, is the leading event for wearable technology in enterprise. It is also the only true enterprise event in the wearables space, with speakers and audience members hailing from top enterprise organizations across the industry spectrum. Consisting of real-world case studies, engaging workshops, and expert-led panel discussions on topics such as enterprise applications for Augmented and Virtual Reality, head-mounted displays, and body-worn devices, plus key challenges, best practices, and more, EWTS is the best opportunity to hear and learn from the organizations that have successfully utilized wearables in their operations.

Top Use Cases of Augmented and Virtual Reality in Architecture, Engineering and Design

In our last post, we talked about some of the opportunities that new realities – Augmented Reality, Virtual Reality, and also Mixed Reality – present to the AEC industry, specifically to the design side of the industry (as opposed to construction). With these technologies, one can view and manipulate virtual elements in real space or immerse him/herself in a digital recreation of a real-world environment. This can help both in designing a building and in enabling others to understand the design.

The user could be the architect in the initial design stage, or the owner/customer in the pitch or approval stage; a group of architects, engineers and designers working together on a single project, or the construction team responsible for turning a building plan into actual architecture—all putting on a headset or heads-up display to visualize and create. The three use cases that follow are prime examples of this, of AR, VR and MR being used to envision new buildings and streamline the design process.

 

TreeHouse

TreeHouse is a Texas-based home improvement startup offering eco-friendly, smart home solutions. For its second massive retail location in Dallas, TreeHouse CEO Jason Ballard wanted a store with zero annual energy costs—a completely sustainable store where consumers could buy sustainability solutions for their own homes. The sustainability initiatives achieved at this second big box store are impressive. Just as impressive, however, is the use of Virtual Reality in the architectural design process that led to the groundbreaking for the store. In fact, VR was so valuable in this use case that TreeHouse intends to employ the technology in every step of future store planning. Let’s see why:

First thing, Ballard hired architecture firm Lake Flato along with a team of designers and technologists charged with the task of creating a virtual reality model of the new store based upon Lake Flato’s design. The team “hacked together” a system combining Unity (a video game development platform), SketchUp (a design program), and an Oculus Rift VR headset. This system would allow Ballard and the Lake Flato architects to virtually walk through the store, trying different design options and configurations and spotting problems before they became real-life headaches.

Big box retail stores typically use a lot of energy; in this project, the design team started with the idea of zero energy and then worked to reverse engineer the means to achieve it. Sustainable building requires conserving as much energy as possible and using renewable energy for any remaining power needs. Since the greatest “energy hogs” in a large store are usually lighting and air conditioning, the team knew they had to create something “bright and cool.” They used virtual reality to realize the key design decision, which was a saw-tooth roof line. Without going into too much detail (you can read more about the design specifics here), the roof design had three major effects: 1) lighting the space, 2) reducing solar heat gain, and 3) maximizing solar energy production; with the added bonus of looking really cool.

Using virtual reality technology in the design process on this project turned out to be a critical energy- and cost-saving innovation. In one example, VR saved TreeHouse $50,000 by allowing the design team to try out an element of the original design – an elaborate staircase meant to evoke a tree trunk – and see that it would be a “great eyesore.” The staircase was subsequently scrapped for something simpler and cheaper before it was ever built.

As TreeHouse plans its third store, Ballard wants to create a full-time VR position at the company. Not only does he want to keep using virtual reality to design new stores but he also sees potential for the technology to help scout out store locations and design product displays in existing stores. He wants live models of all TreeHouse locations so that he can try out displays without traveling or resorting to trial-and-error, both of which are costly and environmentally-unfriendly. And, of course, Ballard wants to pass the benefits of VR along to TreeHouse’s customers, helping them to design sustainable homes in virtual reality for free.

 

TEG Architects and Thorntons

TEG is an Indiana-based architecture firm, and its client Thorntons is a Kentucky gas station and convenience store chain. TEG has been using Virtual Reality – specifically Samsung Gear VR headsets – to help bring clients like Thorntons into the design process by putting them into buildings that haven’t been built yet, or rather into 360-degree renderings of those buildings.

Before VR and the like, architects could show clients 2D drawings and 3D printouts to help them understand a space and give input on different design elements. In the end, however, a lot was left up to the customer’s imagination. Well, it’s pretty hard to imagine a 100,000-square-foot space (like the one in this use case), but having a 360-degree view of the architect’s design in virtual reality makes communication between architect and client easier.

Not only did VR give Thorntons better insight into and confidence in TEG’s design, but the technology actually revealed quirks of the design that might have been lost on paper, even to the design team. One aspect of a building that can get lost between 2D and 3D is sight lines. For example, in the Thorntons project, a monumental staircase – as originally designed – would have disrupted the sight line from the front to the back of the building; the architects were able to catch this blunder in virtual reality, and make the decision to narrow the staircase.

While the customer was thoroughly impressed by VR (so much so that Thorntons invested in four Samsung Gear VR headsets to use and experiment with internally), TEG found that incorporating the new technology required reformulating existing workflows. So in addition to the obvious technical challenges, adopting VR as a decision-making tool in architecture can necessitate accelerating the design process, or making certain design decisions much earlier on in the project lifecycle in order to create an effective 360-degree view. Another challenge – but one that TEG believes will be resolved as VR becomes more widely used in the industry – was to keep careful track of all design changes, ensuring that any alterations made in the design software appeared in the VR model and vice versa.

The TEG/Thorntons case goes to show that new realities can be a great tool for both architects and clients to make better-informed decisions, making for fewer unhappy and costly surprises all around.

 

Architect Greg Lynn and the Packard Plant Project

Greg Lynn is the owner of Greg Lynn FORM, a professor at the UCLA School of the Arts and Architecture, and the architect chosen to represent the U.S. at the 2016 Venice Biennale. For that event, Lynn was assigned the project of revitalizing the Packard Plant, an abandoned car factory in Detroit occupying 3.5 million square feet of space. Greg knew the project would stretch the imagination and that he had to be forward-thinking in his commission; so in perhaps the best-known use case of AR/MR in architecture, he decided to use the Microsoft HoloLens mixed reality headset along with Trimble’s building information modeling (BIM) software to develop his design for a new plant.

To begin, Lynn created a standard design model of the abandoned Packard Plant using the Trimble platform. He then used HoloLens to immerse himself in a holographic representation of the factory at scale. With the technology, Lynn was able to virtually navigate the space at all stages of the design – from the initial state in which he found the plant through each proposed design change – visualizing the project as he was designing it and without having to leave his Venice Beach office.

From the get-go, Lynn understood the scale of the space he was working with because he could enter the Packard Plant in augmented reality and look around. He could also put on the HoloLens to compare the sizes of various structures and get a clear sense of how much space a given structure would take up, which enabled him to develop and perfect the proportions and individual features of his design without trying out different configurations in multiple 2D or 3D (scale) drawings and models. What’s more, the technology allowed Lynn to model dynamic components of his design, like vehicles and human beings, and make adjustments accounting for how traffic would flow in and around the factory.

Overall, AR/MR provided Lynn with the “perspective and foresight” to make design decisions months earlier in the design process than is normally possible, saving him in time and stress in addition to money and rework. We’ve also mentioned that new realities present an effective means of communicating one’s design to others via a shared experience, and Lynn did in fact incorporate the Microsoft HoloLens into his presentation for the Biennale.

Greg Lynn has been pretty vocal about AR, VR and MR in terms of the future of designing buildings as well as the future of building things. HoloLens personally helped him to both conceptualize and showcase his work, but the architect also sees potential benefits in construction project delivery and communication. In his opinion, new realities solve the greatest problem all architects encounter, which is getting a project from the screen to physical space.

“I’ve spent my whole life trying to get things from geometry into the physical world. HoloLens is going to bridge that gap between two-dimensional and three-dimensional and physical space—and that’s architecture.”

Augmented and Virtual Reality for Architecture, Engineering and Design

 

What is the potential for Augmented Reality and Virtual Reality in the AEC industry? How might viewing virtual objects integrated into one’s physical environment or immersing oneself into a virtual world benefit the AEC sector? In this article, we will focus specifically on the use of augmented and virtual reality technology on head-mounted displays by architects, engineers and designers in the building design process.

There is potential for both AR and VR in all stages of bringing a construction or engineering project to life; pretty much every enterprise involved in a large project could utilize these technologies to improve their own working methods and also in communicating with one another. In the actual building phase, for example, AR could be used to help project managers view plans and schematics overlaid on top of real structures, to allow workers to view step-by-step instructions for how to install something, and even to train future operators of a building. But before construction even gets going – in the initial design process – AR and VR could change the way architects, engineers and designers conceive of, collaborate on, and revise designs.

First thing, let’s talk about what goes into designing a building, including current visualization tools; and let’s define the different “realities,” i.e. AR, VR and also MR (Mixed Reality). Architects and engineers “trade in” the creation and manipulation of the real world, of real environments, real structures; so, essentially, they have to dream in three dimensions and then translate that vision into a two-dimensional representation which is then translated again into a real, three-dimensional space.

The process really begins, however, with collecting information, with visiting a site (or multiple sites) and documenting existing conditions. This information, as well as client demands and requirements, is put into consideration as the architect or engineer begins to brainstorm and develop a preliminary design. In addition to the property or site itself, the designer must think about how the final building will be used and experienced, including how people and objects will move through the space and what materials it will be composed of. Next come graphics, illustrations, plans, diagrams, elevations, even small scale models—lots of paper and lots of time using complex software go into getting the designer’s ideas into a format that can be shared and presented for input and feedback.

For a typical medium-sized or major commercial commission, the design is rarely a one-architect deal. Designing a building is an increasingly collaborative process, so while the initial creative idea usually belongs to one designer, the final design is a team effort (“it takes a village”). The core design team might consist of an architect, a few engineers (structural, mechanical, services, fire), and specialist designers (landscape, interior, acoustic); and they might be supported by various experts and advisors like an urban planner, a sustainability consultant, and an expert in health and safety. Contributions may also be made by contractors and suppliers. These individuals come together, discuss options and constraints, and revise the original design until a final one is agreed upon. Much of the work undertaken – the multiple design possibilities expressed in a sequence of technical drawings and models – is rejected or aborted in the process.

Visualization technologies like computer-aided design (CAD) and building information modeling (BIM) help architects to plan projects and communicate their ideas; but they’re not always successful in doing so. For one, the software is highly complex; and the digital drawings and models produced are still confined to a two-dimensional screen, which makes it hard – for collaborators and clients as well as the architects themselves – to get a real, accurate sense of how a design will look, function, and take up space in the real world. CAD and BIM have certainly technologically enabled architects and engineers but the reality is that designers are still viewing blueprints on computer screens as well as paper; they’re still using pictures and drawings and plans, and it’s difficult to conceive, revise and execute a project based upon static renderings.

The drawbacks of those current technologies make the process of design reviews rather lengthy and expensive; and inevitably lead to issues down the road, during construction, that cost dearly. The problem lies in using 2D documents and 3D models – both digital and physical scale models – to simulate form and space, understand spatial relationships, and capture the experiential qualities of a building, which must impact the design. The existing tools just aren’t optimal for expressing the architect’s vision or tweaking a design. There’s a lot of room for unanticipated design flaws that will have to be corrected once building has already commenced—errors arising from the architect’s own oversight as well as the client and construction team not being able to clearly imagine the design. On the bright side, new technologies – AR, VR and MR – can help at all stages of a building project, from conception to revision to execution, improving both the individual and group design processes.

Wearable tech, AR/VR, IoT—they’re all about eliminating inefficiencies, bridging knowledge gaps, and streamlining processes in a business. An architect’s work is no different. There are inefficiencies in the design process: Multiple iterations of a design, miscommunications between architect and client or between design team and contractor, trial and error = inefficient. But new realities enable architects, engineers and designers to better and more easily visualize ideas and make quicker, more informed decisions, avoiding costly scenarios like customer dissatisfaction and tear downs/rework during construction.

AR, VR and MR are all related:

  • Augmented reality is additive, overlaying digital content onto the real world. The user is aware of and can still interact with his environment. Devices include the Sony SmartEyeglass, Recon Jet, Epson Moverio BT-300, Vuzix M300 and, of course, Google Glass.
  • Virtual reality is immersive, creating a computer-generated environment that replaces the real world. The user interacts solely within this virtual world. Devices include the Oculus Rift, Samsung Gear VR, and HTC Vive.
  • AR and MR are used somewhat interchangeably, but they are different. Some describe mixed reality as a kind of hybrid between the other two technologies. MR superimposes convincing holographic images onto reality; the holograms are integrated into the user’s environment, can be manipulated, and are even responsive to the real world. Devices: Microsoft HoloLens and Magic Leap.

So what do these technologies allow designers to do? Architects can actually use AR/VR to design buildings, not just to better convey their ideas or collaborate with others but to create and make design decisions. Virtual reality allows a user to virtually inhabit a space in three dimensions; this virtual space can be based upon – even identical to – a real-world environment. So, for instance, at the outset of the design process an architect could go out and capture a physical environment (the property he is designing for) and recreate it in VR, and then design within that virtual space with infinite “room” to experiment and test out different design concepts—all from the comfort of his or her office or studio. The architect could also take a design developed on an architectural platform like Autodesk and produce it in VR, allowing him – and other members of the design team – to virtually inhabit and manipulate a building that does not yet exist.

That is a much more powerful (and effective) means of visualizing one’s ideas and evaluating design possibilities. An architect can translate a CAD model into an interactive walkthrough: Instead of viewing blueprints or 3D models on a 2D screen, he can put on, say, an Oculus Rift to virtually experience an architectural plan, achieving a far better sense of scale, form and space, of the physical limitations of a space, of how someone will move through a building, and of how a particular design component will look or function, and catching flaws in a design that might not otherwise be realized without multiple scale models, or even until construction had gotten underway. In addition, the owner/customer and the construction team will be able to more comprehensively understand the building design before it is executed, thanks to virtual reality technology.

Augmented reality and mixed reality will also help enhance and speed up the design process at the outset of a project, with the trickle-down effect of minimizing delays at the construction stage due to design errors and changes. AR and MR present their own unique opportunities to view and manipulate digital representations or facsimiles of physical realities, but these representations are not immersive. Apple CEO Tim Cook recently compared AR and VR, saying that augmented reality “gives the capability for both of us to sit and be very present talking to each other, but also have other things visually for both of us to see;” virtual reality, on the other hand, “sort of encloses and immerses the person into an experience.” So while VR is great for solo design, AR and MR are perhaps better for group design and collaboration.

So, for instance, mixed reality can be used to simulate a meeting or collaborative space, allowing the various professionals who make up the design team on a complex building project to work together in real time, viewing and interacting with the same virtual model – or rather with holograms superimposed on a physical model or integrated into a physical space (like the building site) – via a HoloLens headset; and they don’t have to be in the same room to do so. Essentially, with AR/VR/MR, the process of design reviews can be completely virtualized–all existing visualization tools and the 3D models they create can be “pushed” beyond the 2D screen into virtual environments or projections on physical models that designers can interact with(in) in real time.

In conclusion, the ability to immerse people into virtual worlds – specifically into digital simulations of proposed buildings – will be a game changer for the AEC industry. New realities – augmented, virtual, mixed – offer a new, far superior level of real-world scale, proportion and perspective over current tools. These technologies will empower architects, engineers and designers to become more innovative by freeing them from the limitations of 3D models in 2D formats and bringing their 3D dreams to life.

Stay tuned for our next post, in which we will share some of the top use cases of Augmented and Virtual Reality for architecture, engineering, and design.

 

About EWTS 2017:

The 3rd annual Enterprise Wearable Technology Summit 2017, taking place May 10-12, 2017 in San Diego, California, is the leading event for wearable technology in enterprise. It is also the only true enterprise event in the wearables space, with speakers and audience members hailing from top enterprise organizations across the industry spectrum. Consisting of real-world case studies, engaging workshops, and expert-led panel discussions on topics such as enterprise applications for Augmented and Virtual Reality, head-mounted displays, and body-worn devices, plus key challenges, best practices, and more, EWTS is the best opportunity to hear and learn from the organizations that have successfully utilized wearables in their operations.

Join the Enterprise Wearable Technology Community (LinkedIn Group)