Bayer Accelerates Crop Science Data Collection with Smart Glasses and Voice

Written by

Emily Friedman

February 19, 2020

Interview with Michael Calvillo and Carrie Roy

I recently had the privilege of interviewing Michael Calvillo and Carrie Roy, who are spearheading the use of voice and smart glasses to collect data at Bayer. Check out their answers below:

To begin, could you provide a little background on yourselves and what you do at Bayer?

Carrie: I’ve been consulting at Bayer for two years. My background is a bit odd. I was planning on going into academia, so I have a PhD in ethnography and ended up doing a two-year postdoc in digital humanities – computational approaches to analyzing screenplays, literature, poetry, images, music, and that type of stuff. Following that, I ended up working with data-inspired art – I’m really interested in how we engage with data and information – and then started working for a large consulting firm doing some work on large, high-profile AI projects.

After moving to St. Louis with my family, I was interested in getting back into agriculture – I actually grew up on a cattle ranch in North Dakota and a lot of my extended family farms – and I’d heard good things about Bayer. During my postdoc I partnered with a lab working on VR; we had immersive 3D viewing environments and I was able to design software tools for researchers to visualize certain types of data – for example, creating a 3D tunnel from rings of high-frequency words from State of the Union speeches. I’ve always been interested in working with researchers, and with the challenges we heard in our interviews with Bayer researchers a little over a year ago, I realized that whatever changes we made to the existing mobile data collection app weren’t going to address the issue of wanting to work hands-free. That’s when I began thinking about leveraging some of the other emerging tech I’d been involved in.

Michael: My academic training is in human factors psychology. I received my PhD at the University of South Dakota and have about 30 years of experience in human-centered design. I’ve worked with large companies – a lot of the early work I did was in usability laboratories (working with one of the first companies to color-code device connection ports and their respective cables for an easier out-of-box experience), and my passion for human-centered design has been fueled ever since. I’ve also worked for software companies, training teams on conducting user-centered research, design, and usability testing, and for a company focused on back-end processing of transaction data in the financial record-keeping industry.

I got into crop science back in 2012 when I started at Monsanto and have been working in the R&D seed pipeline ever since. The area we work in is Product Design, but we’re housed in the R&D IT pipeline, so the products we create support research scientists from early gene identification and editing all the way through larger field trials. It takes 8-10 years for one type of seed to go through our research pipeline before entering the commercial market. Carrie’s and my passions cross over around human-centered design – mine from a user-experience perspective and hers from the impact of emerging technologies – which makes for an interesting foundation for collaboration.

Can you speak a little more about this side of Bayer’s business and some of the pain points for the workers there?

Michael: We’re focused on those tasked with collecting scientific data. Businesses today are all looking to construct prediction models, and in that way we’re in lockstep with the Amazons of the world. A lot of what we do, however, involves manual data collection outdoors, where I’m going out and gathering performance information about a specific crop. So, we have people out in the field using mobile data collection devices. What we learned is that while the software generally delivers the functionality they need, it’s the experience they’re struggling with: entering the raw data by hand, environmental factors (rain, heat), and needing to use one’s hands. There’s also the problem of setting the device down to manipulate a plant and picking it back up, combined with screen glare and small touch targets. It was a use case born out of a classic human-computer interaction problem.

Just hearing your background, is there one key piece you learned in your grad studies that carries over to these new technologies like smart glasses and AR?

Carrie: There’s nothing like using a technology and testing it to gain valuable learnings. You can talk about it in theory all you want, but until you get a chance to experience it…

Michael: The areas of sensation and perception, which are classic areas within psychology. One of the things that makes us unique is the way we’re exploiting speech-to-text (STT) capabilities. Relating it back to human physiology along sensory channels, we’re deliberately using the auditory channel to communicate – saying words like the number “10”, or this rating is “high” or “low” or “medium” – and then providing immediate feedback through the visual channel: I say “10” and I see the number 10. We’re looking at high-volume data collection use cases, and so if I only use the auditory channel (I say “10” and I hear “10”), that presents a lot of potential interruption. We’re using the auditory channel to input the data and the visual channel to quality check it; these work in a complementary way to increase accuracy and speed in the workflow.
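As a purely illustrative sketch of that speak-then-confirm loop (none of this is Bayer’s or Iristick’s actual software; the rating vocabulary, function names, and the simulated token stream are all hypothetical), the interaction Michael describes might look something like this in Python:

```python
# Hypothetical sketch of the "say it, see it, confirm it" loop described above.
# Names are illustrative only; recognized speech is simulated with a token list.

VALID_RATINGS = {"low", "medium", "high"} | {str(n) for n in range(0, 11)}

def display(text: str) -> None:
    """Stand-in for rendering feedback on the heads-up display."""
    print(f"[HUD] {text}")

def collect_ratings(recognized_tokens):
    """Auditory channel in, visual channel for the quality check."""
    records, pending = [], None
    for token in recognized_tokens:
        word = token.lower().strip()
        if word == "confirm" and pending is not None:
            records.append(pending)          # commit the visually checked value
            display(f"saved {pending}")
            pending = None
        elif word == "undo":
            display(f"discarded {pending}")  # user spotted an error on the HUD
            pending = None
        elif word in VALID_RATINGS:
            pending = word
            display(word)                    # immediate visual feedback
        else:
            display(f"'{word}' not recognized")
    return records

# Simulated field session: say "10", see it, confirm; a misheard value is undone.
print(collect_ratings(["10", "confirm", "high", "undo", "medium", "confirm"]))
```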


Carrie: The scale and scope are key, too. We have thousands of people all over the world taking, say, 2,000 entries or visiting 2,000 plots in one day. With all that repetition, we need to pay attention to whether this is a good experience over time.

What was the first step? I know you did a $0-budget experiment to prove the use case.

Michael: Carrie really took the first step.

Carrie: I did a bit of research on the hardware out there – the kinds of screens, microphones, and audio feedback available – and started thinking about how this workflow could work by leveraging the capabilities of AR headsets. I created a storyboard of a person out in the field – they observe a certain rating, they say it, they see it for visual confirmation – and began drafting from there. We were essentially storyboarding the potential experience, talking to users to refine that, and looking at our hardware options.

Michael: The interviews were a launchpad for identifying the problem, setting the stage for us to move forward with a technology-based solution. We used the interview process for solution iteration and to validate our idea in a zero-budget scenario. 

How did you choose a device in a saturated market? There are a lot of smart glasses out there.

Michael: Part of it was the ability for us to connect with vendors. Not all are accessible; some are more responsive than others. We were in an early stage of market definition and validation, so we didn’t necessarily have real dollars to spend on evaluation units. We were trying to see if anyone would send us evaluation units free of charge. We ultimately went with Iristick (www.iristick.com), which was willing to work with us and put a little of their own skin in the game.

Carrie: The ergonomics of the glasses and the security of a tethered solution also put them at the top of the list. We didn’t want to add another device that needed to get security approval. The processing, heat and weight are all still carried on the mobile unit itself, which made for a lightweight, comfortable pair of AR glasses. They’re safety-rated, too, and it just so happens that the side coverings (there’s a bit of a touch pad and camera covering the sides of the eyes) matched up with protecting our workers’ eyes from the corn leaves. So, that was a happy accident.

Do you have a budget yet?

Michael: We’ve been able to secure over $200K in innovation funding, but we are still working with a number of business units on launch strategies. It’s probably going to be a multi-business funding situation. That’s part of our current challenge – how do we integrate this solution with our existing products? Uptake is a little slower because we already have a product platform in place. We’re working with our leadership to figure out the right roadmap, but we are bridging the product discovery phase and moving closer to launch. Our goal in the next six to eight months is to figure out where the sweet spot is for implementation.

Do you have any advice for working with vendors or getting employees to adopt?

Carrie: Think carefully about whether and how AR would really benefit the work that your company or users are doing. You don’t want to get technology that’s more than what you need. For the kind of field data collection we’re doing, informed reality (a smaller heads-up display that mirrors what appears on the device screen) is really a better fit than a HoloLens-style solution (transparent holographic data displays). It’s just what we need, not more. These are very specific use cases that benefit from hands-free verbal data collection. As we’ve been developing around that, we’re seeing additional advantages like the ease of QR scanning and photo capture and tagging. Ensuring that the initial use case is a really good fit for the technology is a good first step.

Michael: I think we’ve done a good job of evangelizing, too. We brought hardware to our developers to get the technical community interested in AR/IR right away, and then established a good cross-section of testing and exposure with end users – people in greenhouses, warehouses, outdoors in the field, etc. Getting them involved at this level and incorporating their feedback into the iterations of our design helped gain widespread traction.

Carrie: To give you an example: When we were in Hawaii, we created some small screens to display as people were collecting data and we noticed there was one screen that they never struggled with. So, we looked at the design of that screen and in our next iteration adopted that approach. I went to the greenhouse this morning and tested with three people and I barely had to do any training; the whole workflow was really quite intuitive to them.

Michael: From a human factors psychology point of view, it’s also about the correct allocation of function. The login or setup is on the phone, but once I’m ready to collect data, I put the phone away and rely on the headset; when I’m done, I can quality check and view the records I captured through the phone interface again. It’s a matter of when to do what on the phone, and when/what to do on the headset.

What did you learn working with today’s voice recognition tech? How did you determine/test the voice commands?

Carrie: We’re using a hardcoded list of scientific words and phrases. I think Siri would struggle with them, but we benefit from having a restricted, coded list of acceptable IDs or tags. We’re also noticing that really strong winds blowing directly at the user’s face are a potential issue, so there might be a need to shield yourself. Hard consonants seem to register better, so at the end of the workflow we say “Confirm”, which comes through really well, whereas “Undo” – which starts with a softer vowel sound and is used in the case of an error – doesn’t always register. I’m sure the technology is going to improve. As a global company, we also need to think about different languages and different dialects in different regions. Is there a way to custom-train for the individual user so we can get that 99.9% correct translation?

Michael: The technology we selected incorporates industry-leading voice recognition software that allows for speaker-independent operation; the system doesn’t need to be trained to work effectively right away. There’s effective noise-cancelling capability there, too. Our ability to customize a list of keywords was also key, and some of the phonetic things Carrie has mentioned really informed our decisions.
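To make the restricted-vocabulary idea concrete, here is a hypothetical Python sketch of mapping raw transcriptions onto a hardcoded list of acceptable tags. The word lists are invented for illustration, and difflib’s close matching merely stands in for whatever keyword matching the vendor’s recognition engine actually performs:

```python
# Illustrative only: constrain speech-to-text output to a coded vocabulary.
# The categories, terms, and cutoff below are hypothetical examples.
from difflib import get_close_matches

FIELD_VOCABULARY = {
    "insects": ["aphid", "corn borer", "rootworm"],
    "deficiencies": ["nitrogen", "phosphorus", "potassium"],
    "weeds": ["pigweed", "foxtail", "waterhemp"],
}
ALL_TERMS = [term for terms in FIELD_VOCABULARY.values() for term in terms]

def resolve_term(heard: str, cutoff: float = 0.75):
    """Return the closest acceptable tag for a transcription, or None."""
    matches = get_close_matches(heard.lower(), ALL_TERMS, n=1, cutoff=cutoff)
    return matches[0] if matches else None

# A slightly garbled transcription still lands on an acceptable tag,
# while an out-of-vocabulary word is rejected rather than guessed at.
print(resolve_term("root worm"))  # -> "rootworm"
print(resolve_term("tractor"))    # -> None
```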

Have there been any unexpected benefits of switching from manual data entry to voice?

Carrie: When you have a large set of words, as in field scouting – let’s say there’s a category of words related to insects, another for deficiencies, and another for weeds – other types of software might make you go to that category and then to a large drop-down list, which is a lot of time spent navigating. With a hardcoded list of acceptable words, as long as your user is familiar with those, it’s a lot easier and less time-consuming.

Michael: Another key finding was how important it is to develop a robust workflow. In our context of seed science, it doesn’t take very long for the nuances of workflows to reveal themselves, and then you have many people you’re trying to satisfy. We kept our workflow intentionally robust so we were able to test in areas we hadn’t planned to test in ahead of time. For example, we did pollination workflows in a greenhouse; today, Carrie did some testing with plant height; we did outdoor scouting workflows in Hawaii… This is all data that’s more similar than different in terms of being able to use our base workflow.

What’s next?

Carrie: Now we’re moving towards identifying groups interested in implementing and talking about funding, and Mike and I are also exploring more true augmented reality hardware. We started to sense there were two tracks here: one for extremely high-volume data entry that needs something like informed reality (visual confirmation of all the values you’re verbally entering), and one for more complex workflows that would benefit from being able to pull up reference images, training videos, location information, etc. – much more complex than what we can fit on a small screen. We put that to the side, but now we’d like to explore bringing together the physical world of crop science and all the accompanying data from our enterprise platform at your fingertips.

Michael: We found out early in our research that we had two parallel pathways, one along informed reality design and one involving deeper AR use cases. During the early going, we formed a relationship with the Microsoft Technology Center (MTC) here in St. Louis and actually built out some prototypes overlaying holographic imagery for locating a certain plant or showing how gestures could be used in a workflow. Although we felt we should focus on the field data collection workflow first, we are now looking forward to broadening our agenda by leveraging Wi-Fi-enabled environments.

Carrie: In any research environment, there are reasons we have limited ability to work with certain types of physical objects – maybe it’s contamination or logistics – but the ability to create virtual signs that are anchored in physical spaces is something I’m really interested in exploring more.

Michael Calvillo, PhD, is a Senior Product Designer at Bayer Crop Science, US. He is a member of a large User-Centered Design team that focuses on delivering highly usable hardware and software solutions to R&D pipeline end users. With over 25 years of experience as a UX practitioner with companies like Gateway Computers, Human Factors International, Inc., and DST Systems, Michael focuses his efforts primarily on business strategy and alignment research along with more traditional product-centric UX research methods. He received his doctoral degree in Human Factors Psychology from the University of South Dakota in 2003. He also holds a Master’s degree in Industrial/Organizational Psychology from Lamar University and a Bachelor’s degree in Industrial Psychology from Morningside College.

Carrie Roy, PhD, is a Senior Product Designer with a focus on emerging technology. In her TEDx talk “When Art Collides with Data,” she explains how her post-doctoral work in a VR lab and her data-inspired art exhibitions offer a humanist’s perspective on how we engage with data and information. In the corporate world, she has focused on leveraging emerging technologies for gathering data (from gamification to AR) and visualizing data and information (VR/MR) to reveal new insights. Dr. Roy holds a PhD from the University of Wisconsin–Madison, an M.A. from the University of Iceland, and a B.A. in Visual and Environmental Studies from Harvard.

If you'd like to reach out to either Michael or Carrie, please email them: michael.calvillo@bayer.com | carrie.roy.ext@bayer.com

 
